--- /dev/null
+
+-------------------------------------------------------------------------
+Release Notes for Linux on Intel's IXP4xx Network Processor
+
+Maintained by Deepak Saxena <dsaxena@plexity.net>
+-------------------------------------------------------------------------
+
+1. Overview
+
+Intel's IXP4xx network processor is a highly integrated SOC that
+is targeted at network applications, though it has become popular
+in industrial control and other areas due to its low cost and low power
+consumption. The IXP4xx family currently consists of several processors
+that support different network offload functions such as encryption,
+routing, firewalling, etc. For more information on the various
+versions of the CPU, see:
+
+ http://developer.intel.com/design/network/products/npfamily/ixp4xx.htm
+
+Intel also made the IXCP1100 CPU for some time, which is an IXP4xx
+stripped of much of the network intelligence.
+
+2. Linux Support
+
+Linux currently supports the following features on the IXP4xx chips:
+
+- Dual serial ports
+- PCI interface
+- Flash access (MTD/JFFS)
+- I2C through GPIO
+- GPIO for input/output/interrupts
+ See include/asm-arm/arch-ixp4xx/platform.h for access functions.
+- Timers (watchdog, OS)
+
+The following components of the chips are not supported by Linux and
+require the use of Intel's proprietary CSR software:
+
+- USB device interface
+- Network interfaces (HSS, Utopia, NPEs, etc)
+- Network offload functionality
+
+If you need to use any of the above, you need to download Intel's
+software from:
+
+ http://developer.intel.com/design/network/products/npfamily/ixp425swr1.htm
+
+DO NOT POST QUESTIONS TO THE LINUX MAILING LISTS REGARDING THE PROPRIETARY
+SOFTWARE.
+
+There are several websites that provide directions/pointers on using
+Intel's software:
+
+http://ixp4xx-osdg.sourceforge.net/
+ Open Source Developer's Guide for using uClinux and the Intel libraries
+
+http://gatewaymaker.sourceforge.net/
+ Simple one page summary of building a gateway using an IXP425 and Linux
+
+http://ixp425.sourceforge.net/
+ ATM device driver for IXP425 that relies on Intel's libraries
+
+3. Known Issues/Limitations
+
+3a. Limited inbound PCI window
+
+The IXP4xx family allows for up to 256MB of memory but the PCI interface
+can only expose 64MB of that memory to the PCI bus. This means that if
+you are running with > 64MB, all PCI buffers outside of the accessible
+range will be bounced using the routines in arch/arm/common/dmabounce.c.
+
+3b. Limited outbound PCI window
+
+IXP4xx provides two methods of accessing PCI memory space:
+
+1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
+ To access PCI via this space, we simply ioremap() the BAR
+ into the kernel and we can use the standard read[bwl]/write[bwl]
+   macros (see the sketch below). This is the preferred method
+   due to speed but it limits the system to just 64MB of PCI
+   memory. This can be problematic if using video cards and other
+   memory-heavy devices.
+
+2) If > 64MB of memory space is required, the IXP4xx can be
+   configured to use indirect registers to access PCI. This allows
+ for up to 128MB (0x48000000 to 0x4fffffff) of memory on the bus.
+   The disadvantage of this is that every PCI access requires
+ three local register accesses plus a spinlock, but in some
+ cases the performance hit is acceptable. In addition, you cannot
+ mmap() PCI devices in this case due to the indirect nature
+ of the PCI window.
+
+By default, the direct method is used for performance reasons. If
+you need more PCI memory, enable the IXP4XX_INDIRECT_PCI config option.
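+
+As an illustration of the direct method, a driver might map a device's BAR
+through the direct window and access registers with the standard macros, as
+in this sketch (the BAR index and register offsets are hypothetical):
+
+	static int example_probe(struct pci_dev *dev)
+	{
+		void __iomem *regs;
+
+		regs = ioremap(pci_resource_start(dev, 0),
+			       pci_resource_len(dev, 0));
+		if (!regs)
+			return -ENOMEM;
+
+		writel(0x1, regs + 0x14);	/* write a device register */
+		(void) readl(regs + 0x10);	/* read a device register */
+
+		iounmap(regs);
+		return 0;
+	}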
+
+3c. GPIO as Interrupts
+
+Currently the code only handles level-sensitive GPIO interrupts.
+
+4. Supported platforms
+
+ADI Engineering Coyote Gateway Reference Platform
+http://www.adiengineering.com/productsCoyote.html
+
+   The ADI Coyote platform is a reference design for those building
+   small residential/office gateways. One NPE is connected to a 10/100
+   interface, one to a 4-port 10/100 switch, and the third to an ADSL
+   interface. In addition, it also supports two POTS interfaces connected
+   via SLICs. Note that those are not supported by Linux at the moment.
+   The platform also has two mini-PCI slots used for 802.11[bga] cards,
+   and there is an IDE port hanging off the expansion bus.
+
+Gateworks Avila Network Platform
+http://www.gateworks.com/avila_sbc.htm
+
+   The Avila platform is basically an IXDP425 with the 4 PCI slots
+ replaced with mini-PCI slots and a CF IDE interface hanging off
+ the expansion bus.
+
+Intel IXDP425 Development Platform
+http://developer.intel.com/design/network/products/npfamily/ixdp425.htm
+
+   This is Intel's standard reference platform for the IXP425 and is
+ also known as the Richfield board. It contains 4 PCI slots, 16MB
+ of flash, two 10/100 ports and one ADSL port.
+
+Intel IXDPG425 Development Platform
+
+   This is basically an ADI Coyote board with an NEC EHCI controller
+ added. One issue with this board is that the mini-PCI slots only
+ have the 3.3v line connected, so you can't use a PCI to mini-PCI
+   adapter with an E100 card. To NFS root, therefore, you need to use
+   either the CSR or a WiFi card and a ramdisk that BOOTPs and then does
+   a pivot_root to NFS.
+
+Motorola PrPMC1100 Processor Mezzanine Card
+http://www.fountainsys.com/datasheet/PrPMC1100.pdf
+
+   The PrPMC1100 is based on the IXCP1100 and is meant to plug into
+   an IXP2400/2800 system to act as the system controller. It simply
+ contains a CPU and 16MB of flash on the board and needs to be
+ plugged into a carrier board to function. Currently Linux only
+ supports the Motorola PrPMC carrier board for this platform.
+ See https://mcg.motorola.com/us/ds/pdf/ds0144.pdf for info
+ on the carrier board.
+
+5. TODO LIST
+
+- Add support for Coyote IDE
+- Add support for edge-based GPIO interrupts
+- Add support for CF IDE on expansion bus
+
+6. Thanks
+
+The IXP4xx work has been funded by Intel Corp. and MontaVista Software, Inc.
+
+The following people have contributed patches/comments/etc:
+
+Lutz Jaenicke
+Justin Mayfield
+Robert E. Ranslam
+[I know I've forgotten others, please email me to be added]
+
+-------------------------------------------------------------------------
+
+Last Update: 11/16/2004
--- /dev/null
+Anticipatory IO scheduler
+-------------------------
+Nick Piggin <piggin@cyberone.com.au> 13 Sep 2003
+
+Attention! Database servers, especially those using "TCQ" disks, should
+investigate performance with the 'deadline' IO scheduler. Any system with high
+disk performance requirements should do so, in fact.
+
+If you see unusual performance characteristics of your disk systems, or you
+see big performance regressions versus the deadline scheduler, please email
+me. Database users needn't bother unless you're willing to test a lot of
+patches from me ;) it's a known issue.
+
+Also, users with hardware RAID controllers, doing striping, may find
+highly variable performance results when using the as-iosched. The
+as-iosched anticipatory implementation is based on the notion that a disk
+device has only one physical seeking head. A striped RAID controller
+actually has a head for each physical device in the logical RAID device.
+
+However, setting antic_expire (see tunable parameters below) to zero
+produces behavior very similar to the deadline IO scheduler.
+
+
+Selecting IO schedulers
+-----------------------
+To choose IO schedulers at boot time, use the argument 'elevator=deadline'.
+'noop' and 'as' (the default) are also available. Presently, IO schedulers
+are assigned globally at boot time only.
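+
+For example, with GRUB the option is appended to the kernel line of the
+boot stanza (the kernel path and root device here are illustrative):
+
+	kernel /boot/vmlinuz root=/dev/hda1 elevator=deadline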
+
+
+Anticipatory IO scheduler Policies
+----------------------------------
+The as-iosched implementation applies several layers of policies
+to determine when an IO request is dispatched to the disk controller.
+Here are the policies outlined, in order of application.
+
+1. one-way Elevator algorithm.
+
+The elevator algorithm is similar to that used in the deadline scheduler, with
+the addition that it allows limited backward movement of the elevator
+(i.e. seeks backwards). A seek backwards can occur when choosing between
+two IO requests where one is behind the elevator's current position, and
+the other is in front of the elevator's position. If the seek distance to
+the request in back of the elevator is less than half the seek distance to
+the request in front of the elevator, then the request in back can be chosen.
+Backward seeks are also limited to a maximum of MAXBACK (1024*1024) sectors.
+This favors forward movement of the elevator, while allowing opportunistic
+"short" backward seeks.
+
+2. FIFO expiration times for reads and for writes.
+
+This is again very similar to the deadline IO scheduler. The expiration
+times for requests on these lists is tunable using the parameters read_expire
+and write_expire discussed below. When a read or a write expires in this way,
+the IO scheduler will interrupt its current elevator sweep or read anticipation
+to service the expired request.
+
+3. Read and write request batching
+
+A batch is a collection of read requests or a collection of write
+requests. The as scheduler alternates dispatching read and write batches
+to the driver. In the case of a read batch, the scheduler submits read
+requests to the driver as long as there are read requests to submit, and
+the read batch time limit has not been exceeded (read_batch_expire).
+The read batch time limit begins counting down only when there are
+competing write requests pending.
+
+In the case of a write batch, the scheduler submits write requests to
+the driver as long as there are write requests available, and the
+write batch time limit has not been exceeded (write_batch_expire).
+However, the length of write batches will be gradually shortened
+when read batches frequently exceed their time limit.
+
+When changing between batch types, the scheduler waits for all requests
+from the previous batch to complete before scheduling requests for the
+next batch.
+
+The read and write fifo expiration times described in policy 2 above
+are checked only when scheduling IO of a batch for the corresponding
+(read/write) type. So for example, the read FIFO timeout values are
+tested only during read batches. Likewise, the write FIFO timeout
+values are tested only during write batches. For this reason,
+it is generally not recommended for the read batch time
+to be longer than the write expiration time, nor for the write batch
+time to exceed the read expiration time (see tunable parameters below).
+
+When the IO scheduler changes from a read to a write batch,
+it begins the elevator from the request that is on the head of the
+write expiration FIFO. Likewise, when changing from a write batch to
+a read batch, the scheduler begins the elevator from the first entry
+on the read expiration FIFO.
+
+4. Read anticipation.
+
+Read anticipation occurs only when scheduling a read batch.
+This implementation of read anticipation allows only one read request
+to be dispatched to the disk controller at a time. In
+contrast, many write requests may be dispatched to the disk controller
+at a time during a write batch. It is this characteristic that can make
+the anticipatory scheduler perform anomalously with controllers supporting
+TCQ, or with hardware striped RAID devices. Setting the antic_expire
+queue parameter (see below) to zero disables this behavior, and the anticipatory
+scheduler behaves essentially like the deadline scheduler.
+
+When read anticipation is enabled (antic_expire is not zero), reads
+are dispatched to the disk controller one at a time.
+At the end of each read request, the IO scheduler examines its next
+candidate read request from its sorted read list. If that next request
+is from the same process as the request that just completed,
+or if the next request in the queue is "very close" to the
+just completed request, it is dispatched immediately. Otherwise,
+statistics (average think time, average seek distance) on the process
+that submitted the just completed request are examined. If it seems
+likely that that process will submit another request soon, and that
+request is likely to be near the just completed request, then the IO
+scheduler will stop dispatching more read requests for up to antic_expire
+milliseconds, hoping that process will submit a new request near the one
+that just completed. If such a request is made, then it is dispatched
+immediately. If the antic_expire wait time expires, then the IO scheduler
+will dispatch the next read request from the sorted read queue.
+
+To decide whether an anticipatory wait is worthwhile, the scheduler
+maintains statistics for each process that can be used to compute
+mean "think time" (the time between read requests), and mean seek
+distance for that process. One observation is that these statistics
+are associated with each process, but those statistics are not associated
+with a specific IO device. So for example, if a process is doing IO
+on several file systems on separate devices, the statistics will be
+a combination of IO behavior from all those devices.
+
+
+Tuning the anticipatory IO scheduler
+------------------------------------
+When using the anticipatory IO scheduler ('as'), there are 5 parameters under
+/sys/block/*/queue/iosched/. All are in units of milliseconds.
+
+The parameters are:
+* read_expire
+ Controls how long until a read request becomes "expired". It also controls the
+ interval between which expired requests are served; so if set to 50, a request
+ might take anywhere up to 100ms to be serviced _if_ it is the next on the
+ expired list. Obviously request expiration strategies won't make the disk
+ go faster. The result basically equates to the timeslice a single reader
+ gets in the presence of other IO. 100*((seek time / read_expire) + 1) is
+ very roughly the % streaming read efficiency your disk should get with
+ multiple readers.
+
+* read_batch_expire
+ Controls how much time a batch of reads is given before pending writes are
+ served. A higher value is more efficient. This might be set below read_expire
+ if writes are to be given higher priority than reads, but reads are to be
+ as efficient as possible when there are no writes. Generally though, it
+ should be some multiple of read_expire.
+
+* write_expire, and
+* write_batch_expire are equivalent to the above, for writes.
+
+* antic_expire
+ Controls the maximum amount of time we can anticipate a good read (one
+ with a short seek distance from the most recently completed request) before
+ giving up. Many other factors may cause anticipation to be stopped early,
+ or some processes will not be "anticipated" at all. Should be a bit higher
+ for big seek time devices though not a linear correspondence - most
+ processes have only a few ms thinktime.
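+
+As an example, read anticipation can be disabled at run time by writing a
+zero to antic_expire (a sketch; the device name hda and the value shown
+are illustrative):
+
+	# cat /sys/block/hda/queue/iosched/antic_expire
+	6
+	# echo 0 > /sys/block/hda/queue/iosched/antic_expire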
+
--- /dev/null
+Deadline IO scheduler tunables
+==============================
+
+This little file attempts to document how the deadline io scheduler works.
+In particular, it will clarify the meaning of the exposed tunables that may be
+of interest to power users.
+
+Each io queue has a set of io scheduler tunables associated with it. These
+tunables control how the io scheduler works. You can find these entries
+in:
+
+/sys/block/<device>/queue/iosched
+
+assuming that you have sysfs mounted on /sys. If you don't have sysfs mounted,
+you can do so by typing:
+
+# mount none /sys -t sysfs
+
+
+********************************************************************************
+
+
+read_expire (in ms)
+-----------
+
+The goal of the deadline io scheduler is to attempt to guarantee a start
+service time for a request. As we focus mainly on read latencies, this is
+tunable. When a read request first enters the io scheduler, it is assigned
+a deadline that is the current time + the read_expire value in units of
+milliseconds.
+
+
+write_expire (in ms)
+-----------
+
+Similar to read_expire mentioned above, but for writes.
+
+
+fifo_batch
+----------
+
+When a read request expires its deadline, we must move some requests from
+the sorted io scheduler list to the block device dispatch queue. fifo_batch
+controls how many requests we move, based on the cost of each request. A
+request is either qualified as a seek or a stream. The io scheduler knows
+the last request that was serviced by the drive (or will be serviced right
+before this one). See seek_cost and stream_unit.
+
+
+writes_starved (number of dispatches)
+-------------
+
+When we have to move requests from the io scheduler queue to the block
+device dispatch queue, we always give a preference to reads. However, we
+don't want to starve writes indefinitely either. So writes_starved controls
+how many times we give preference to reads over writes. When that has been
+done writes_starved number of times, we dispatch some writes based on the
+same criteria as reads.
+
+
+front_merges (bool)
+------------
+
+Sometimes it happens that a request enters the io scheduler that is contiguous
+with a request that is already on the queue. Either it fits in the back of that
+request, or it fits at the front. That is called either a back merge candidate
+or a front merge candidate. Due to the way files are typically laid out,
+back merges are much more common than front merges. For some work loads, you
+may even know that it is a waste of time to spend any time attempting to
+front merge requests. Setting front_merges to 0 disables this functionality.
+Front merges may still occur due to the cached last_merge hint, but since
+that comes at basically 0 cost we leave that on. We simply disable the
+rbtree front sector lookup when the io scheduler merge function is called.
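+
+As an example, front merging can be disabled on a given queue as follows
+(the device name hda is illustrative):
+
+# cat /sys/block/hda/queue/iosched/front_merges
+1
+# echo 0 > /sys/block/hda/queue/iosched/front_merges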
+
+
+Nov 11 2002, Jens Axboe <axboe@suse.de>
+
+
--- /dev/null
+
+Intel PRO/Wireless 2100 802.11b Driver for Linux
+README.ipw2100
+
+October 13, 2004
+
+
+Release 0.56 Current Features
+------------ ----- ----- ---- --- -- -
+
+- IBSS and BSS modes
+- 802.11 fragmentation
+- WEP (shared key and open)
+- wireless extension support
+- 802.1x EAP via xsupplicant
+- Monitor/RFMon mode
+- transmit power control
+- long/short preamble support
+- power states support (ACPI)
+
+TODO
+------------ ----- ----- ---- --- -- -
+- Fix bugs... The biggies:
+ C3 corruption
+ Fragmentation
+
+
+Command Line Parameters
+------------ ----- ----- ---- --- -- -
+
+If the driver is built as a module, the following optional parameters can be
+set by supplying them on the command line with the modprobe command using
+this syntax:
+
+ modprobe ipw2100 [<option>=<VAL1><,VAL2>...]
+
+For example, to set the interface name for the driver, entering:
+
+ modprobe ipw2100 if_name=wlan%d
+
+results in the ipw2100 driver defaulting to the wlan prefix, with the system
+assigning a unique number in place of %d. The default interface name is eth%d.
+
+The ipw2100 driver supports the following module parameters:
+
+Name Value Example:
+debug 0x0-0xffffffff debug=1024
+if_name string if_name=wlan%d
+mode 0,1,2 mode=1 /* AdHoc */
+channel int channel=3 /* Only valid in AdHoc or Monitor */
+associate boolean associate=0 /* Do NOT auto associate */
+disable boolean disable=1 /* Do not power the HW */
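+
+As a further example, to bring the card up in Ad-Hoc mode on channel 3
+without auto-associating (the parameter values are illustrative):
+
+ modprobe ipw2100 mode=1 channel=3 associate=0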
+
+
+Radio Kill Switch
+------------ ----- ----- ---- --- -- -
+Most laptops provide the ability for the user to physically disable the radio.
+Some vendors have implemented this as a physical switch that requires no
+software to turn the radio off and on. On other laptops, however, the switch
+is controlled through a button being pressed and a software driver then making
+calls to turn the radio off and on. This is referred to as a "software based
+RF kill switch".
+
+To determine if you have such a switch, you can check the contents of:
+
+ /sys/bus/pci/drivers/ipw2100/*/rf_kill
+
+A value of:
+
+ Radio is {en,dis}abled by RF switch
+
+means that you have an RF switch and the radio is in the state
+described.
+
+A value of:
+
+ Your hardware does not have an RF switch
+
+is self explanatory. In this case you should not need to worry about
+enabling the radio.
+
+
+Dynamic Firmware
+------------ ----- ----- ---- --- -- -
+As the firmware is licensed under a restricted use license, it cannot be
+included within the kernel sources. To enable the IPW2100 you will need a
+firmware image to load into the wireless NIC's processors.
+
+You can obtain these images from <http://ipw2100.sf.net/firmware.php>.
+
+See INSTALL for instructions on installing the firmware.
+
+
+Power Management
+------------ ----- ----- ---- --- -- -
+The IPW2100 supports the configuration of the Power Save Protocol
+through a private wireless extension interface. The IPW2100 supports
+the following different modes:
+
+ off No power management. Radio is always on.
+ on Automatic power management
+ 1-5 Different levels of power management. The higher the
+ number the greater the power savings, but with an impact to
+ packet latencies.
+
+Power management works by powering down the radio after a certain
+interval of time has passed where no packets are passed through the
+radio. Once powered down, the radio remains in that state for a given
+period of time. For higher power savings, the interval between last
+packet processed to sleep is shorter and the sleep period is longer.
+
+When the radio is asleep, the access point sending data to the station
+must buffer packets at the AP until the station wakes up and requests
+any buffered packets. If you have an AP that does not correctly support
+the PSP protocol you may experience packet loss or very poor performance
+while power management is enabled. If this is the case, you will need
+to try and find a firmware update for your AP, or disable power
+management (via `iwconfig eth1 power off`).
+
+To configure the power level on the IPW2100 you use a combination of
+iwconfig and iwpriv. iwconfig is used to turn power management on, off,
+and set it to auto.
+
+ iwconfig eth1 power off Disables radio power down
+ iwconfig eth1 power on Enables radio power management to
+ last set level (defaults to AUTO)
+ iwpriv eth1 set_power 0 Sets power level to AUTO and enables
+ power management if not previously
+ enabled.
+ iwpriv eth1 set_power 1-5 Set the power level as specified,
+ enabling power management if not
+ previously enabled.
+
+You can view the current power level setting via:
+
+ iwpriv eth1 get_power
+
+It will return the current period or timeout that is configured as a string
+in the form of xxxx/yyyy (z) where xxxx is the timeout interval (amount of
+time after packet processing), yyyy is the period to sleep (amount of time to
+wait before powering the radio and querying the access point for buffered
+packets), and z is the 'power level'. If power management is turned off the
+xxxx/yyyy will be replaced with 'off' -- the level reported will be the active
+level if `iwconfig eth1 power on` is invoked.
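+
+For example, a hypothetical reading (the numbers shown are illustrative,
+in the xxxx/yyyy (z) format described above):
+
+ iwpriv eth1 get_power
+ eth1 get_power:704/4000 (1)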
+
+
+Support
+------------ ----- ----- ---- --- -- -
+
+For general information and support, go to:
+
+ http://ipw2100.sf.net/
+
+License
+------------ ----- ----- ---- --- -- -
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the Free
+ Software Foundation; either version 2 of the License, or (at your option)
+ any later version.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
--- /dev/null
+
+Intel PRO/Wireless 2200 802.11bg Driver for Linux
+README.ipw2200
+
+October 13, 2004
+
+Release 0.12 Current Features
+------------ ----- ----- ---- --- -- -
+- BSS mode (Infrastructure, Managed)
+- IBSS mode (Ad-Hoc)
+- WEP (OPEN and SHARED KEY mode)
+- 802.1x EAP via xsupplicant
+- Wireless Extension support
+- long/short preamble support
+- Full B and G rate support (2200 and 2915)
+- Full A rate support (2915 only)
+- Transmit power control
+- S state support (ACPI suspend/resume)
+
+TODO
+------------ ----- ----- ---- --- -- -
+- Fix statistics returned by iwconfig and /proc/net/wireless
+- Add firmware restart backoff algorithm (see ipw2100 project)
+- Look into (and hopefully enable) Monitor/RFMon mode
+- Add WPA support
+
+
+Command Line Parameters
+------------ ----- ----- ---- --- -- -
+ associate
+ Set to 0 to disable the auto scan-and-associate functionality of the
+	driver. Default is 1 (auto-associate).
+
+ auto_create
+ Set to 0 to disable the auto creation of an Ad-Hoc network
+ matching the channel and network name parameters provided.
+ Default is 1.
+
+ channel
+	Channel number for association. The normal method for setting
+	the channel would be to use the standard wireless tools
+	(i.e. `iwconfig eth1 channel 10`), but it is useful sometimes
+	to set this while debugging. Channel 0 means 'ANY'.
+
+ debug
+	If using a debug build, this is used to control the amount of debug
+	info that is logged. See the 'dval' and 'load' scripts for more info
+	on how to use this.
+
+ ifname
+	Can be used to override the default interface name of eth%d. For
+ example:
+
+ modprobe ipw2200 ifname=wlan%d
+
+ You can also specify a specific interface number -- be warned
+ that if that number conflicts with an already assigned interface
+ the driver will not load correctly.
+
+ mode
+ Can be used to set the default mode of the adapter.
+ 0 = Managed, 1 = Ad-Hoc
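+
+	For example, to load the driver in Ad-Hoc mode on channel 10 with
+	auto-association disabled (the parameter values are illustrative):
+
+	modprobe ipw2200 mode=1 channel=10 associate=0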
+
+Wireless Extension Private Methods
+------------ ----- ----- ---- --- -- -
+ get_mode
+	Can be used to report which IEEE mode the driver is
+ configured to support. Example:
+
+ % iwpriv eth1 get_mode
+ eth1 get_mode:802.11bg (6)
+
+ set_mode
+ Can be used to configure which IEEE mode the driver will
+ support.
+
+ Usage:
+ % iwpriv eth1 set_mode {mode}
+ Where {mode} is a number in the range 1-7:
+ 1 802.11a (2915 only)
+ 2 802.11b
+ 3 802.11ab (2915 only)
+ 4 802.11g
+ 5 802.11ag (2915 only)
+ 6 802.11bg
+ 7 802.11abg (2915 only)
+
+
+Sysfs Helper Files: (NOTE: All of these are only useful for developers)
+------------ ----- ----- ---- --- -- -
+
+----- Driver Level ------
+For the driver level files, look in /sys/bus/pci/drivers/ipw2200/
+
+ debug_level
+
+	This controls the same global as the 'debug' module parameter.
+
+----- Device Level ------
+For the device level files, look in
+
+ /sys/bus/pci/drivers/ipw2200/{PCI-ID}/
+
+For example:
+ /sys/bus/pci/drivers/ipw2200/0000:02:01.0
+
+The following device level files are available:
+
+ command_event_reg
+	read access to the Command Event register
+
+ eeprom
+	reading from this file will cause our private copy of the
+ contents of the EEPROM to be flushed to the log
+
+ eeprom_sram
+ reading this file will behave like the 'eeprom' file, except
+ that instead of pulling from the device's cached copy of the
+ eeprom data, the region of the device's sram that should
+ hold eeprom data is dumped.
+
+ eeprom_clear
+ reading from this file will cause the eeprom info in sram to be
+ cleared.
+
+ error_log
+ reading this file will cause the contents of the device's error
+	log to be flushed to our log. Normally this log is empty, but
+	if the device's fw gets into an odd state, it will contain some
+	hints.
+
+ fw_date
+ read-only access to the firmware release date
+
+ fw_version
+ read-only access to the firmware release version
+
+ rf_kill
+ read -
+ 0 = RF kill not enabled (radio on)
+ 1 = HW based RF kill active (radio off)
+ 2 = SW based RF kill active (radio off)
+ write -
+ 0 = If SW based RF kill active, turn the radio back on
+ 1 = If radio is on, activate SW based RF kill
+
+ NOTE: If you enable the SW based RF kill and then toggle the HW
+ based RF kill from ON -> OFF -> ON, the radio will come back on
+ (resetting the SW based RF kill to the 'radio on' state)
+
+ ucode
+ read-only access to the ucode version number
+
+ rtc
+	read-only access to the device's real-time clock
+
+ [in]direct_byte
+ [in]direct_word
+	enable read-only access to the device's sram. First write the
+	address of the data to read into the file; a subsequent read of
+	the file returns the word/byte that the address points to.
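+
+	A hypothetical session using indirect_word (the address shown is
+	illustrative):
+
+	% echo 0x3000 > indirect_word
+	% cat indirect_word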
+
+Support
+------------ ----- ----- ---- --- -- -
+
+For general information and support, go to:
+
+ http://ipw2200.sf.net/
+
+License
+------------ ----- ----- ---- --- -- -
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of the GNU General Public License as published by the Free
+ Software Foundation; either version 2 of the License.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
--- /dev/null
+===========================================================================
+ HVCS
+ IBM "Hypervisor Virtual Console Server" Installation Guide
+ for Linux Kernel 2.6.4+
+ Copyright (C) 2004 IBM Corporation
+
+===========================================================================
+NOTE: Eight-space tabs are the optimum editor setting for reading this file.
+===========================================================================
+
+ Author(s) : Ryan S. Arnold <rsa@us.ibm.com>
+ Date Created: March, 02, 2004
+ Last Changed: August, 24, 2004
+
+---------------------------------------------------------------------------
+Table of contents:
+
+ 1. Driver Introduction:
+ 2. System Requirements
+ 3. Build Options:
+ 3.1 Built-in:
+ 3.2 Module:
+ 4. Installation:
+ 5. Connection:
+ 6. Disconnection:
+ 7. Configuration:
+ 8. Questions & Answers:
+ 9. Reporting Bugs:
+
+---------------------------------------------------------------------------
+1. Driver Introduction:
+
+This is the device driver for the IBM Hypervisor Virtual Console Server,
+"hvcs". The IBM hvcs provides a tty driver interface to allow Linux user
+space applications access to the system consoles of logically partitioned
+operating systems (Linux and AIX) running on the same partitioned Power5
+ppc64 system. Physical hardware consoles per partition are not practical
+on this hardware so system consoles are accessed by this driver using
+firmware interfaces to virtual terminal devices.
+
+---------------------------------------------------------------------------
+2. System Requirements:
+
+This device driver was written using 2.6.4 Linux kernel APIs and will only
+build and run on kernels of this version or later.
+
+This driver was written to operate solely on IBM Power5 ppc64 hardware
+though some care was taken to abstract the architecture dependent firmware
+calls from the driver code.
+
+Sysfs must be mounted on the system so that the user can determine which
+major and minor numbers are associated with each vty-server. Directions
+for sysfs mounting are outside the scope of this document.
+
+---------------------------------------------------------------------------
+3. Build Options:
+
+The hvcs driver registers itself as a tty driver. The tty layer
+dynamically allocates a block of major and minor numbers in a quantity
+requested by the registering driver. The hvcs driver asks the tty layer
+for 64 of these major/minor numbers by default to use for hvcs device node
+entries.
+
+If the default number of device entries is adequate then this driver can be
+built into the kernel. If not, the default can be over-ridden by inserting
+the driver as a module with insmod parameters.
+
+---------------------------------------------------------------------------
+3.1 Built-in:
+
+The following menuconfig example demonstrates selecting to build this
+driver into the kernel.
+
+ Device Drivers --->
+ Character devices --->
+ <*> IBM Hypervisor Virtual Console Server Support
+
+Begin the kernel make process.
+
+---------------------------------------------------------------------------
+3.2 Module:
+
+The following menuconfig example demonstrates selecting to build this
+driver as a kernel module.
+
+ Device Drivers --->
+ Character devices --->
+ <M> IBM Hypervisor Virtual Console Server Support
+
+The make process will build the following kernel modules:
+
+ hvcs.ko
+ hvcserver.ko
+
+To insert the module with the default allocation execute the following
+commands in the order they appear:
+
+ insmod hvcserver.ko
+ insmod hvcs.ko
+
+The hvcserver module contains architecture specific firmware calls and must
+be inserted first, otherwise the hvcs module will not find some of the
+symbols it expects.
+
+To override the default use an insmod parameter as follows (requesting 4
+tty devices as an example):
+
+ insmod hvcs.ko hvcs_parm_num_devs=4
+
+There is a maximum number of dev entries that can be specified on insmod.
+We think that 1024 is currently a decent maximum number of server adapters
+to allow. This can always be changed by modifying the constant in the
+source file before building.
+
+NOTE: The length of time it takes to insmod the driver seems to be related
+to the number of tty interfaces the registering driver requests.
+
+In order to remove the driver module execute the following command:
+
+ rmmod hvcs.ko
+
+The recommended method for installing hvcs as a module is to use depmod to
+build a current modules.dep file in /lib/modules/`uname -r` and then
+execute:
+
+modprobe hvcs hvcs_parm_num_devs=4
+
+The modules.dep file indicates that hvcserver.ko needs to be inserted
+before hvcs.ko and modprobe uses this file to smartly insert the modules in
+the proper order.
+
+The following modprobe command is used to remove hvcs and hvcserver in the
+proper order:
+
+modprobe -r hvcs
+
+---------------------------------------------------------------------------
+4. Installation:
+
+The tty layer creates sysfs entries which contain the major and minor
+numbers allocated for the hvcs driver. The following snippet of "tree"
+output of the sysfs directory shows where these numbers are presented:
+
+ sys/
+ |-- *other sysfs base dirs*
+ |
+ |-- class
+ | |-- *other classes of devices*
+ | |
+ | `-- tty
+ | |-- *other tty devices*
+ | |
+ | |-- hvcs0
+ | | `-- dev
+ | |-- hvcs1
+ | | `-- dev
+ | |-- hvcs2
+ | | `-- dev
+ | |-- hvcs3
+ | | `-- dev
+ | |
+ | |-- *other tty devices*
+ |
+ |-- *other sysfs base dirs*
+
+For the above examples the following output is a result of cat'ing the
+"dev" entry in the hvcs directory:
+
+ Pow5:/sys/class/tty/hvcs0/ # cat dev
+ 254:0
+
+ Pow5:/sys/class/tty/hvcs1/ # cat dev
+ 254:1
+
+ Pow5:/sys/class/tty/hvcs2/ # cat dev
+ 254:2
+
+ Pow5:/sys/class/tty/hvcs3/ # cat dev
+ 254:3
+
+The output from reading the "dev" attribute is the char device major and
+minor numbers that the tty layer has allocated for this driver's use. Most
+systems running hvcs will already have the device entries created or udev
+will do it automatically.
+
+Given the example output above, to manually create a /dev/hvcs* node entry
+mknod can be used as follows:
+
+ mknod /dev/hvcs0 c 254 0
+ mknod /dev/hvcs1 c 254 1
+ mknod /dev/hvcs2 c 254 2
+ mknod /dev/hvcs3 c 254 3
+
+Using mknod to manually create the device entries makes these device nodes
+persistent. Once created they will exist prior to the driver insmod.
+
+Attempting to connect an application to /dev/hvcs* prior to insertion of
+the hvcs module will result in an error message similar to the following:
+
+ "/dev/hvcs*: No such device".
+
+NOTE: Just because there is a device node present doesn't mean that there
+is a vty-server device configured for that node.
+
+---------------------------------------------------------------------------
+5. Connection
+
+Since this driver controls devices that provide a tty interface, a user can
+interact with the device node entries using any standard tty-interactive
+method (e.g. "cat", "dd", "echo"). The intent of this driver however, is
+to provide real time console interaction with a Linux partition's console,
+which requires the use of applications that provide bi-directional,
+interactive I/O with a tty device.
+
+Applications (e.g. "minicom" and "screen") that act as terminal emulators
+or perform terminal type control sequence conversion on the data being
+passed through them are NOT acceptable for providing interactive console
+I/O. These programs often emulate antiquated terminal types (vt100 and
+ANSI) and expect inbound data to take the form of one of these supported
+terminal types but they either do not convert, or do not _adequately_
+convert, outbound data into the terminal type of the terminal which invoked
+them (though screen makes an attempt and can apparently be configured with
+much termcap wrestling.)
+
+For this reason kermit and cu are two of the recommended applications for
+interacting with a Linux console via an hvcs device. These programs simply
+act as a conduit for data transfer to and from the tty device. They do not
+require inbound data to take the form of a particular terminal type, nor do
+they cook outbound data to a particular terminal type.
+
+In order to ensure proper functioning of console applications, one must make
+sure that once connected to a /dev/hvcs console, the console's $TERM env
+variable is set to the exact terminal type of the terminal emulator used to
+launch the interactive I/O application. If one is using xterm and kermit to
+connect to /dev/hvcs0, then when the console prompt becomes available one
+should "export TERM=xterm" on the console. This tells ncurses
+applications that are invoked from the console that they should output
+control sequences that xterm can understand.
+
+As a precautionary measure an hvcs user should always "exit" from their
+session before disconnecting an application such as kermit from the device
+node. If this is not done, the next user to connect to the console will
+continue using the previous user's logged in session which includes
+using the $TERM variable that the previous user supplied.
+
+Hotplug add and remove of vty-server adapters affects which /dev/hvcs* node
+is used to connect to each vty-server adapter. In order to determine which
+vty-server adapter is associated with which /dev/hvcs* node a special sysfs
+attribute has been added to each vty-server sysfs entry. This entry is
+called "index" and showing it reveals an integer that refers to the
+/dev/hvcs* entry to use to connect to that device. For instance, cat'ing the
+index attribute of vty-server adapter 30000004 shows the following:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat index
+ 2
+
+This index of '2' means that in order to connect to vty-server adapter
+30000004 the user should interact with /dev/hvcs2.
+
+It should be noted that due to the system hotplug I/O capabilities of a
+system the /dev/hvcs* entry that interacts with a particular vty-server
+adapter is not guaranteed to remain the same across system reboots. Look
+in the Q & A section for more on this issue.
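+
+A quick way to map each vty-server adapter this driver manages to its
+/dev/hvcs* node is to loop over the sysfs entries and show each "index"
+attribute (a sketch; the adapters and output shown are illustrative):
+
+	Pow5:/sys/bus/vio/drivers/hvcs # for d in 30*; do
+	> echo "$d -> /dev/hvcs$(cat $d/index)"; done
+	30000003 -> /dev/hvcs0
+	30000004 -> /dev/hvcs2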
+
+---------------------------------------------------------------------------
+6. Disconnection
+
+As a security feature to prevent the delivery of stale data to an
+unintended target the Power5 system firmware disables the fetching of data
+and discards that data when a connection between a vty-server and a vty has
+been severed. As an example, when a vty-server is immediately disconnected
+from a vty following output of data to the vty, the vty adapter may not have
+enough time between when it received the data interrupt and when the
+connection was severed to fetch the data from firmware before the fetch is
+disabled by firmware.
+
+When hvcs is being used to serve consoles this behavior is not a huge issue
+because the adapter stays connected for large amounts of time following
+almost all data writes. When hvcs is being used as a tty conduit to tunnel
+data between two partitions [see Q & A below] this is a huge problem
+because the standard Linux behavior when cat'ing or dd'ing data to a device
+is to open the tty, send the data, and then close the tty. If this driver
+manually terminated vty-server connections on tty close this would close
+the vty-server and vty connection before the target vty has had a chance to
+fetch the data.
+
+Additionally, disconnecting a vty-server and vty only on module removal or
+adapter removal is impractical because other vty-servers in other
+partitions may require the usage of the target vty at any time.
+
+Due to this behavioral restriction, disconnection of vty-servers from the
+connected vty is a manual procedure using a write to a sysfs attribute
+outlined below. The initial vty-server connection to a vty, on the other
+hand, is established automatically by this driver, so manual vty-server
+connection is never required.
+
+In order to terminate the connection between a vty-server and vty the
+"vterm_state" sysfs attribute within each vty-server's sysfs entry is used.
+Reading this attribute reveals the current connection state of the
+vty-server adapter. A zero means that the vty-server is not connected to a
+vty. A one indicates that a connection is active.
+
+Writing a '0' (zero) to the vterm_state attribute will disconnect the VTERM
+connection between the vty-server and target vty ONLY if the vterm_state
+previously read '1'. The write directive is ignored if the vterm_state
+read '0' or if any value other than '0' was written to the vterm_state
+attribute. The following example will show the method used for verifying
+the vty-server connection status and disconnecting a vty-server connection.
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat vterm_state
+ 1
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # echo 0 > vterm_state
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat vterm_state
+ 0
+
+All vty-server connections are automatically terminated when the device is
+hotplug removed and when the module is removed.
+
+---------------------------------------------------------------------------
+7. Configuration
+
+Each vty-server has a sysfs entry in the /sys/devices/vio directory, which
+is symlinked in several other sysfs tree directories, notably under the
+hvcs driver entry, which looks like the following example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs # ls
+ . .. 30000003 30000004 rescan
+
+By design, firmware notifies the hvcs driver of vty-server lifetimes and
+partner vty removals but not the addition of partner vtys. Since an HMC
+Super Admin can add partner info dynamically we have provided the hvcs
+driver sysfs directory with the "rescan" update attribute which will query
+firmware and update the partner info for all the vty-servers that this
+driver manages. Writing a '1' to the attribute triggers the update. An
+explicit example follows:
+
+ Pow5:/sys/bus/vio/drivers/hvcs # echo 1 > rescan
+
+Reading the attribute will indicate a state of '1' or '0'. A one indicates
+that an update is in process. A zero indicates that an update has
+completed or was never executed.
+
+Vty-server entries in this directory are named by a 32 bit partition-unique
+unit address that is created by firmware. An example vty-server sysfs entry
+looks like the following:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # ls
+ . current_vty devspec name partner_vtys
+ .. detach_state index partner_clcs vterm_state
+
+Each entry is provided, by default, with a "name" attribute. Reading the
+"name" attribute will reveal the device type as shown in the following
+example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000003 # cat name
+ vty-server
+
+Each entry is also provided, by default, with a "devspec" attribute which
+reveals the full device specification when read, as shown in the following
+example:
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat devspec
+ /vdevice/vty-server@30000004
+
+Each vty-server sysfs dir is provided with two read-only attributes that
+provide lists of easily parsed partner vty data: "partner_vtys" and
+"partner_clcs".
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat partner_vtys
+ 30000000
+ 30000001
+ 30000002
+ 30000000
+ 30000000
+
+ Pow5:/sys/bus/vio/drivers/hvcs/30000004 # cat partner_clcs
+ U5112.428.103048A-V3-C0
+ U5112.428.103048A-V3-C2
+ U5112.428.103048A-V3-C3
+ U5112.428.103048A-V4-C0
+ U5112.428.103048A-V5-C0
+
+Reading partner_vtys returns a list of partner vtys. Vty unit address
+numbering is only per-partition-unique so entries will frequently repeat.
+
+Reading partner_clcs returns a list of "converged location codes" which are
+composed of a system serial number followed by "-V*", where the '*' is the
+target partition number, and "-C*", where the '*' is the slot of the
+adapter. The first vty partner corresponds to the first clc item, the
+second vty partner to the second clc item, etc.
+
+A vty-server can only be connected to a single vty at a time. The entry
+"current_vty" prints the clc of the currently selected partner vty when
+read.
+
+The current_vty can be changed by writing a valid partner clc to the entry
+as in the following example:
+
+	Pow5:/sys/bus/vio/drivers/hvcs/30000004 # echo U5112.428.103048A-V4-C0 > current_vty
+
+Changing the current_vty when a vty-server is already connected to a vty
+does not affect the current connection. The change takes effect when the
+currently open connection is freed.
+
+Information on the "vterm_state" attribute was covered earlier in the
+section entitled "Disconnection".
+
+---------------------------------------------------------------------------
+8. Questions & Answers:
+===========================================================================
+Q: What are the security concerns involving hvcs?
+
+A: There are three main security concerns:
+
+ 1. The creator of the /dev/hvcs* nodes has the ability to restrict
+ the access of the device entries to certain users or groups. It
+ may be best to create a special hvcs group privilege for providing
+ access to system consoles.
+
+ 2. To provide network security when grabbing the console it is
+ suggested that the user connect to the console hosting partition
+	using a secure method, such as SSH, or sit at a hardware console.
+
+ 3. Make sure to exit the user session when done with a console or
+ the next vty-server connection (which may be from another
+ partition) will experience the previously logged in session.
+
+---------------------------------------------------------------------------
+Q: How do I multiplex a console that I grab through hvcs so that other
+people can see it?
+
+A: You can use "screen" to directly connect to the /dev/hvcs* device and
+setup a session on your machine with the console group privileges. As
+pointed out earlier, by default screen doesn't provide the termcap settings
+for most terminal emulators to provide adequate character conversion from
+term type "screen" to others. This means that curses based programs may
+not display properly in screen sessions.
+
+---------------------------------------------------------------------------
+Q: Why are the colors all messed up?
+Q: Why are the control characters acting strange or not working?
+Q: Why is the console output all strange and unintelligible?
+
+A: Please see the preceding section on "Connection" for a discussion of how
+applications can affect the display of character control sequences.
+Additionally, just because you logged into the console using an xterm
+doesn't mean someone else didn't log into the console with the HMC console
+(vt320) before you and leave the session logged in. The best thing to do
+is to export TERM to the terminal type of your terminal emulator when you
+get the console. Additionally make sure to "exit" the console before you
+disconnect from the console. This will ensure that the next user gets
+their own TERM type set when they login.
+
+---------------------------------------------------------------------------
+Q: When I try to CONNECT kermit to an hvcs device I get:
+"Sorry, can't open connection: /dev/hvcs*"What is happening?
+
+A: Some other Power5 console mechanism has a connection to the vty and
+isn't giving it up. You can try to force disconnect the consoles from the
+HMC by right clicking on the partition and then selecting "close terminal".
+Otherwise you have to hunt down the people who have console authority. It
+is possible that you already have the console open using another kermit
+session and just forgot about it. Please review the console options for
+Power5 systems to determine the many ways a system console can be held.
+
+OR
+
+A: Another user may not have a connectivity method currently attached to a
+/dev/hvcs device but the vterm_state may reveal that they still have the
+vty-server connection established. They need to free this using the method
+outlined in the section on "Disconnection" in order for others to connect
+to the target vty.
+
+OR
+
+A: The user profile you are using to execute kermit probably doesn't have
+permissions to use the /dev/hvcs* device.
+
+OR
+
+A: You probably haven't inserted the hvcs.ko module yet but the /dev/hvcs*
+entry still exists (on systems without udev).
+
+OR
+
+A: There is not a corresponding vty-server device that maps to an existing
+/dev/hvcs* entry.
+
+---------------------------------------------------------------------------
+Q: When I try to CONNECT kermit to an hvcs device I get:
+"Sorry, write access to UUCP lockfile directory denied."
+
+A: The /dev/hvcs* entry you have specified doesn't exist where you said it
+does. Maybe you haven't inserted the module (on systems with udev).
+
+---------------------------------------------------------------------------
+Q: If I already have one Linux partition installed can I use hvcs on said
+partition to provide the console for the install of a second Linux
+partition?
+
+A: Yes, provided that you are connected to the /dev/hvcs* device using
+kermit or cu or some other program that doesn't provide terminal emulation.
+
+---------------------------------------------------------------------------
+Q: Can I connect to more than one partition's console at a time using this
+driver?
+
+A: Yes. Of course this means that there must be more than one vty-server
+configured for this partition and each must point to a disconnected vty.
+
+---------------------------------------------------------------------------
+Q: Does the hvcs driver support dynamic (hotplug) addition of devices?
+
+A: Yes, if you have dlpar and hotplug enabled for your system and it has
+been built into the kernel, the hvcs driver is configured to dynamically
+handle additions of new devices and removals of unused devices.
+
+---------------------------------------------------------------------------
+Q: For some reason /dev/hvcs* doesn't map to the same vty-server adapter
+after a reboot. What happened?
+
+A: Assignment of vty-server adapters to /dev/hvcs* entries is always done
+in the order that the adapters are exposed. Due to hotplug capabilities of
+this driver assignment of hotplug added vty-servers may be in a different
+order than how they would be exposed on module load. Rebooting or
+reloading the module after dynamic addition may result in the /dev/hvcs*
+and vty-server coupling changing if a vty-server adapter was added in a
+slot in between two other vty-server adapters. Refer to the section above
+on how to determine which vty-server goes with which /dev/hvcs* node.
+Hint: look at the sysfs "index" attribute for the vty-server.
+
+---------------------------------------------------------------------------
+Q: Can I use /dev/hvcs* as a conduit to another partition and use a tty
+device on that partition as the other end of the pipe?
+
+A: Yes, on Power5 platforms the hvc_console driver provides a tty interface
+for extra /dev/hvc* devices (where /dev/hvc0 is most likely the console).
+In order to get a tty conduit working between the two partitions the HMC
+Super Admin must create an additional "serial server" for the target
+partition with the HMC gui which will show up as /dev/hvc* when the target
+partition is rebooted.
+
+The HMC Super Admin then creates an additional "serial client" for the
+current partition and points this at the target partition's newly created
+"serial server" adapter (remember the slot). This shows up as an
+additional /dev/hvcs* device.
+
+Now a program on the target system can be configured to read or write to
+/dev/hvc* and another program on the current partition can be configured to
+read or write to /dev/hvcs*. Now you have a tty conduit between two
+partitions.
+
+---------------------------------------------------------------------------
+9. Reporting Bugs:
+
+The proper channel for reporting bugs is either through the Linux OS
+distribution company that provided your OS or by posting issues to the
+ppc64 development mailing list at:
+
+linuxppc64-dev@lists.linuxppc.org
+
+This request is to provide a documented and searchable public exchange
+of the problems and solutions surrounding this driver for the benefit of
+all users.
--- /dev/null
+Linux 2.6.x on MPC52xx family
+-----------------------------
+
+For the latest info, go to http://www.246tNt.com/mpc52xx/
+
+To compile/use :
+
+ - U-Boot:
+   # <edit Makefile to set ARCH=ppc & CROSS_COMPILE=... (also EXTRAVERSION
+     if you wish to)>
+ # make lite5200_defconfig
+ # make uImage
+
+ then, on U-boot:
+ => tftpboot 200000 uImage
+ => tftpboot 400000 pRamdisk
+ => bootm 200000 400000
+
+ - DBug:
+   # <edit Makefile to set ARCH=ppc & CROSS_COMPILE=... (also EXTRAVERSION
+     if you wish to)>
+ # make lite5200_defconfig
+ # cp your_initrd.gz arch/ppc/boot/images/ramdisk.image.gz
+ # make zImage.initrd
+ # make
+
+ then in DBug:
+ DBug> dn -i zImage.initrd.lite5200
+
+
+Some remarks:
+ - The port is named mpc52xx, and config options are PPC_MPC52xx. The MGT5100
+   is not supported, and I'm not sure anyone is interested in working on it.
+   I didn't take 5xxx because there are apparently a lot of 5xxx parts that
+   have nothing to do with the MPC5200. I also included the 'MPC' for the
+   same reason.
+ - Of course, I drew inspiration from the 2.4 port. If you think I forgot to
+   mention you/your company in the copyright of some code, I'll correct it
+   ASAP.
--- /dev/null
+Each CPU has a "base" scheduling domain (struct sched_domain). These are
+accessed via cpu_sched_domain(i) and this_sched_domain() macros. The domain
+hierarchy is built from these base domains via the ->parent pointer. ->parent
+MUST be NULL terminated, and domain structures should be per-CPU as they
+are locklessly updated.
+
+Each scheduling domain spans a number of CPUs (stored in the ->span field).
+A domain's span MUST be a superset of its child's span (this restriction could
+be relaxed if the need arises), and a base domain for CPU i MUST span at least
+i. The top domain for each CPU will generally span all CPUs in the system
+although strictly it doesn't have to, but this could lead to a case where some
+CPUs will never be given tasks to run unless the CPUs allowed mask is
+explicitly set. A sched domain's span means "balance process load among these
+CPUs".
+
+Each scheduling domain must have one or more CPU groups (struct sched_group)
+which are organised as a circular one way linked list from the ->groups
+pointer. The union of cpumasks of these groups MUST be the same as the
+domain's span. The intersection of cpumasks from any two of these groups
+MUST be the empty set. The group pointed to by the ->groups pointer MUST
+contain the CPU to which the domain belongs. Groups may be shared among
+CPUs as they contain read only data after they have been set up.
+
+Balancing within a sched domain occurs between groups. That is, each group
+is treated as one entity. The load of a group is defined as the sum of the
+load of each of its member CPUs, and only when the load of a group becomes
+out of balance are tasks moved between groups.
+
+In kernel/sched.c, rebalance_tick is run periodically on each CPU. This
+function takes its CPU's base sched domain and checks to see if it has reached
+its rebalance interval. If so, then it will run load_balance on that domain.
+rebalance_tick then checks the parent sched_domain (if it exists), and the
+parent of the parent and so forth.
+
+*** Implementing sched domains ***
+The "base" domain will "span" the first level of the hierarchy. In the case
+of SMT, you'll span all siblings of the physical CPU, with each group being
+a single virtual CPU.
+
+In SMP, the parent of the base domain will span all physical CPUs in the
+node, with each group being a single physical CPU. Then with NUMA, the
+parent of the SMP domain will span the entire machine, with each group
+having the cpumask of a node. Alternatively, you could do multi-level NUMA;
+Opteron, for example, might have just one domain covering its one NUMA
+level.
+
+The implementor should read comments in include/linux/sched.h:
+struct sched_domain fields, SD_FLAG_*, SD_*_INIT to get an idea of
+the specifics and what to tune.
+
+For SMT, the architecture must define CONFIG_SCHED_SMT and provide a
+cpumask_t cpu_sibling_map[NR_CPUS], where cpu_sibling_map[i] is the mask of
+all "i"'s siblings as well as "i" itself.
+
+Architectures may override the default SD_*_INIT flags while using the
+generic domain builder in kernel/sched.c if they wish to retain the
+traditional SMT->SMP->NUMA topology (or some subset of that). This can
+be done by #define'ing ARCH_HAS_SCHED_TUNE.
+
+Alternatively, the architecture may completely override the generic domain
+builder by #define'ing ARCH_HAS_SCHED_DOMAIN, and exporting your
+arch_init_sched_domains function. This function will attach domains to all
+CPUs using cpu_attach_domain.
+
+Implementors should change the line
+#undef SCHED_DOMAIN_DEBUG
+to
+#define SCHED_DOMAIN_DEBUG
+in kernel/sched.c as this enables an error checking parse of the sched domains
+which should catch most possible errors (described above). It also prints out
+the domain structure in a visual format.
--- /dev/null
+
+ SN9C10x PC Camera Controllers
+ Driver for Linux
+ =============================
+
+ - Documentation -
+
+
+Index
+=====
+1. Copyright
+2. Disclaimer
+3. License
+4. Overview
+5. Module dependencies
+6. Module loading
+7. Module parameters
+8. Optional device control through "sysfs"
+9. Supported devices
+10. How to add support for new image sensors
+11. Notes for V4L2 application developers
+12. Contact information
+13. Credits
+
+
+1. Copyright
+============
+Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it>
+
+
+2. Disclaimer
+=============
+SONiX is a trademark of SONiX Technology Company Limited, inc.
+This software is not sponsored or developed by SONiX.
+
+
+3. License
+==========
+This program is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 2 of the License, or
+(at your option) any later version.
+
+This program is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with this program; if not, write to the Free Software
+Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+
+
+4. Overview
+===========
+This driver attempts to support the video and audio streaming capabilities of
+the devices mounting the SONiX SN9C101, SN9C102 and SN9C103 (or SUI-102) PC
+Camera Controllers.
+
+It is worth noting that SONiX has never collaborated with the author during
+the development of this project, despite several requests for sufficiently
+detailed specifications of the register tables, compression engine and video
+data format of the above chips.
+
+The driver relies on the Video4Linux2 and USB core modules. It has been
+designed to run properly on SMP systems as well.
+
+The latest version of the SN9C10x driver can be found at the following URL:
+http://www.linux-projects.org/
+
+Some of the features of the driver are:
+
+- full compliance with the Video4Linux2 API (see also "Notes for V4L2
+ application developers" paragraph);
+- available mmap or read/poll methods for video streaming through isochronous
+ data transfers;
+- automatic detection of image sensor;
+- support for any window resolutions and optional panning within the maximum
+ pixel area of image sensor;
+- image downscaling with scaling factors of 1, 2 and 4 in both directions
+ (see "Notes for V4L2 application developers" paragraph);
+- two different video formats for uncompressed or compressed data (see also
+ "Notes for V4L2 application developers" paragraph);
+- full support for the capabilities of many of the possible image sensors that
+ can be connected to the SN9C10x bridges, including, for instance, red, green,
+ blue and global gain adjustments and exposure (see "Supported devices"
+ paragraph for details);
+- use of default color settings for sunlight conditions;
+- dynamic I/O interface for both SN9C10x and image sensor control (see
+ "Optional device control through 'sysfs'" paragraph);
+- dynamic driver control thanks to various module parameters (see "Module
+ parameters" paragraph);
+- up to 64 cameras can be handled at the same time; they can be connected and
+ disconnected from the host many times without turning off the computer, if
+ your system supports hotplugging;
+- no known bugs.
+
+
+5. Module dependencies
+======================
+For it to work properly, the driver needs kernel support for Video4Linux and
+USB.
+
+The following options of the kernel configuration file must be enabled and
+corresponding modules must be compiled:
+
+ # Multimedia devices
+ #
+ CONFIG_VIDEO_DEV=m
+
+ # USB support
+ #
+ CONFIG_USB=m
+
+In addition, depending on the hardware being used, the modules below are
+necessary:
+
+ # USB Host Controller Drivers
+ #
+ CONFIG_USB_EHCI_HCD=m
+ CONFIG_USB_UHCI_HCD=m
+ CONFIG_USB_OHCI_HCD=m
+
+And finally:
+
+ # USB Multimedia devices
+ #
+ CONFIG_USB_SN9C102=m
+
+
+6. Module loading
+=================
+To use the driver, it is necessary to load the "sn9c102" module into memory
+after the other required modules: "videodev", "usbcore" and, depending on
+the USB host controller you have, "ehci-hcd", "uhci-hcd" or "ohci-hcd".
+
+Loading can be done as shown below:
+
+ [root@localhost home]# modprobe sn9c102
+
+At this point the devices should be recognized. You can invoke "dmesg" to
+analyze kernel messages and verify that the loading process has gone well:
+
+ [user@localhost home]$ dmesg
+
+
+7. Module parameters
+====================
+Module parameters are listed below:
+-------------------------------------------------------------------------------
+Name: video_nr
+Type: int array (min = 0, max = 64)
+Syntax: <-1|n[,...]>
+Description: Specify the V4L2 minor number:
+ -1 = use next available
+ n = use minor number n
+ You can specify up to 64 cameras this way.
+ For example:
+ video_nr=-1,2,-1 would assign minor number 2 to the second
+ recognized camera and use auto for the first one and for every
+ other camera.
+Default: -1
+-------------------------------------------------------------------------------
+Name: debug
+Type: int
+Syntax: <n>
+Description: Debugging information level, from 0 to 3:
+ 0 = none (use carefully)
+ 1 = critical errors
+ 2 = significant information
+ 3 = more verbose messages
+ Level 3 is useful for testing only, when only one device
+ is used. It also shows some more information about the
+ hardware being detected. This parameter can be changed at
+ runtime thanks to the /sys filesystem.
+Default: 2
+-------------------------------------------------------------------------------
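+
+As a usage example, the following command line assigns minor number 2 to the
+second recognized camera and limits the log output to critical errors (the
+values are purely illustrative):
+
+	[root@localhost home]# modprobe sn9c102 video_nr=-1,2 debug=1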
+
+
+8. Optional device control through "sysfs"
+==========================================
+It is possible to read and write both the SN9C10x and the image sensor
+registers by using the "sysfs" filesystem interface.
+
+Every time a supported device is recognized, a write-only file named "green" is
+created in the /sys/class/video4linux/videoX directory. You can set the green
+channel's gain by writing the desired value to it. The value may range from 0
+to 15 for SN9C101 or SN9C102 bridges, from 0 to 127 for SN9C103 bridges.
+Similarly, only for SN9C103 controllers, blue and red gain control files are
+available in the same directory, for which accepted values may range from 0 to
+127.
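+
+For example, assuming the camera has been registered as "/dev/video0", the
+green gain can be set to 2 in this way:
+
+	[root@localhost #] echo 2 > /sys/class/video4linux/video0/green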
+
+There are four other entries in the directory above for each registered camera:
+"reg", "val", "i2c_reg" and "i2c_val". The first two files control the
+SN9C10x bridge, while the other two control the sensor chip. "reg" and
+"i2c_reg" hold the values of the current register index where the following
+reading/writing operations are addressed at through "val" and "i2c_val". Their
+use is not intended for end-users, unless you know what you are doing. Note
+that "i2c_reg" and "i2c_val" won't be created if the sensor does not actually
+support the standard I2C protocol. Also, remember that you must be logged in as
+root before writing to them.
+
+As an example, suppose we want to read the value contained in register
+number 1 of the sensor register table - which is usually the product
+identifier - of the camera registered as "/dev/video0":
+
+ [root@localhost #] cd /sys/class/video4linux/video0
+ [root@localhost #] echo 1 > i2c_reg
+ [root@localhost #] cat i2c_val
+
+Note that "cat" will fail if sensor registers cannot be read.
+
+Now let's set the green gain's register of the SN9C101 or SN9C102 chips to 2:
+
+ [root@localhost #] echo 0x11 > reg
+ [root@localhost #] echo 2 > val
+
+Note that the SN9C10x always returns 0 when some of its registers are read.
+To avoid race conditions, all the I/O accesses to the files are serialized.
+
+
+9. Supported devices
+====================
+Neither the names of the companies nor those of their products will be
+mentioned here. They have never collaborated with the author, so no advertising.
+
+From the point of view of a driver, what unambiguously identifies a device is
+its pair of vendor and product USB identifiers. Below is a list of known
+identifiers of
+devices mounting the SN9C10x PC camera controllers:
+
+Vendor ID Product ID
+--------- ----------
+0x0c45 0x6001
+0x0c45 0x6005
+0x0c45 0x6009
+0x0c45 0x600d
+0x0c45 0x6024
+0x0c45 0x6025
+0x0c45 0x6028
+0x0c45 0x6029
+0x0c45 0x602a
+0x0c45 0x602b
+0x0c45 0x602c
+0x0c45 0x6030
+0x0c45 0x6080
+0x0c45 0x6082
+0x0c45 0x6083
+0x0c45 0x6088
+0x0c45 0x608a
+0x0c45 0x608b
+0x0c45 0x608c
+0x0c45 0x608e
+0x0c45 0x608f
+0x0c45 0x60a0
+0x0c45 0x60a2
+0x0c45 0x60a3
+0x0c45 0x60a8
+0x0c45 0x60aa
+0x0c45 0x60ab
+0x0c45 0x60ac
+0x0c45 0x60ae
+0x0c45 0x60af
+0x0c45 0x60b0
+0x0c45 0x60b2
+0x0c45 0x60b3
+0x0c45 0x60b8
+0x0c45 0x60ba
+0x0c45 0x60bb
+0x0c45 0x60bc
+0x0c45 0x60be
+
+The list above does not imply that all those devices work with this driver: up
+until now only the ones that mount the following image sensors are supported;
+kernel messages will always tell you whether this is the case:
+
+Model Manufacturer
+----- ------------
+PAS106B PixArt Imaging Inc.
+PAS202BCB PixArt Imaging Inc.
+TAS5110C1B Taiwan Advanced Sensor Corporation
+TAS5130D1B Taiwan Advanced Sensor Corporation
+
+All the available control settings of each image sensor are supported through
+the V4L2 interface.
+
+If you think your camera is based on the above hardware and is not actually
+listed in the above table, you may try to add the specific USB VendorID and
+ProductID identifiers to the sn9c102_id_table[] in the file "sn9c102_sensor.h";
+then compile, load the module again and look at the kernel output.
+If this works, please send an email to the author reporting the kernel
+messages, so that a new entry in the list of supported devices can be added.
+
+Donations of new models for further testing and support would be much
+appreciated. Hardware that is not available to the author cannot be
+supported.
+
+
+10. How to add support for new image sensors
+============================================
+It should be easy to write code for new sensors by using the small API that I
+have created for this purpose, which is present in "sn9c102_sensor.h"
+(documentation is included there). As an example, have a look at the code in
+"sn9c102_pas106b.c", which uses the mentioned interface.
+
+At the moment, known but unsupported image sensors are: HV7131x series (VGA),
+MI03x series (VGA), OV7620 (VGA), OV7630 (VGA), CIS-VF10 (VGA).
+
+
+11. Notes for V4L2 application developers
+=========================================
+This driver follows the V4L2 API specifications. In particular, it enforces two
+rules:
+
+- exactly one I/O method, either "mmap" or "read", is associated with each
+file descriptor. Once it is selected, the application must close and reopen the
+device to switch to the other I/O method;
+
+- previously mapped buffer memory must always be unmapped before calling any
+of the "VIDIOC_S_CROP", "VIDIOC_TRY_FMT" and "VIDIOC_S_FMT" ioctl's. The same
+number of buffers as before will be allocated again to match the size of the
+new video frames, so you have to map the buffers again before any I/O attempts
+on them.
+
+Consistently with the hardware limits, this driver also supports image
+downscaling with scaling factors of 1, 2 and 4 in both directions.
+However, the V4L2 API specifications don't correctly define how the scaling
+factor can be chosen arbitrarily by the "negotiation" of the "source" and
+"target" rectangles. To work around this flaw, we have added the convention
+that, during the negotiation, whenever the "VIDIOC_S_CROP" ioctl is issued, the
+scaling factor is restored to 1.
+
+This driver supports two different video formats: the first one is the "8-bit
+Sequential Bayer" format and can be used to obtain uncompressed video data
+from the device through the current I/O method, while the second one provides
+"raw" compressed video data (without the initial and final frame headers). The
+compression quality may vary from 0 to 1 and can be selected or queried thanks
+to the VIDIOC_S_JPEGCOMP and VIDIOC_G_JPEGCOMP V4L2 ioctl's. For maximum
+flexibility, the default active video format depends on how the image sensor
+being used is initialized (as described in the documentation of the API for the
+image sensors supplied by this driver).
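+
+As a minimal sketch of how an application might select the uncompressed
+format (assuming here that the "8-bit Sequential Bayer" format is exposed
+as V4L2_PIX_FMT_SBGGR8; use VIDIOC_ENUM_FMT to query the real values):
+
+	#include <fcntl.h>
+	#include <stdio.h>
+	#include <string.h>
+	#include <unistd.h>
+	#include <sys/ioctl.h>
+	#include <linux/videodev2.h>
+
+	int main(void)
+	{
+		struct v4l2_format fmt;
+		int fd = open("/dev/video0", O_RDWR);
+
+		if (fd < 0)
+			return 1;
+
+		memset(&fmt, 0, sizeof(fmt));
+		fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+		fmt.fmt.pix.width = 352;
+		fmt.fmt.pix.height = 288;
+		fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_SBGGR8;
+
+		/* the driver is free to adjust width and height to the
+		   nearest window the sensor supports; fmt holds the
+		   negotiated values on return */
+		if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
+			perror("VIDIOC_S_FMT");
+		else
+			printf("negotiated %ux%u\n", fmt.fmt.pix.width,
+			       fmt.fmt.pix.height);
+
+		close(fd);
+		return 0;
+	}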
+
+
+12. Contact information
+=======================
+I may be contacted by e-mail at <luca.risolia@studio.unibo.it>.
+
+I can accept GPG/PGP encrypted e-mail. My GPG key ID is 'FCE635A4'.
+My public 1024-bit key should be available at any keyserver; the fingerprint
+is: '88E8 F32F 7244 68BA 3958 5D40 99DA 5D2A FCE6 35A4'.
+
+
+13. Credits
+===========
+I would like to thank the following people:
+
+- Stefano Mozzi, who donated 45 EUR;
+- Luca Capello for the donation of a webcam;
+- Mizuno Takafumi for the donation of a webcam;
+- Carlos Eduardo Medaglia Dyonisio, who added the support for the PAS202BCB
+ image sensor.
--- /dev/null
+/*
+ * linux/arch/arm/common/locomo.c
+ *
+ * Sharp LoCoMo support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This file contains all generic LoCoMo support.
+ *
+ * All initialization functions provided here are intended to be called
+ * from machine specific code with proper arguments when required.
+ *
+ * Based on sa1111.c
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+
+#include <asm/hardware/locomo.h>
+
+/* the following is the overall data for the locomo chip */
+struct locomo {
+ struct device *dev;
+ unsigned long phys;
+ unsigned int irq;
+ void *base;
+};
+
+struct locomo_dev_info {
+ unsigned long offset;
+ unsigned long length;
+ unsigned int devid;
+ unsigned int irq[1];
+ const char * name;
+};
+
+static struct locomo_dev_info locomo_devices[] = {
+};
+
+
+/*
+ * LoCoMo interrupt handling.
+ *
+ * NOTE: LoCoMo has a one-to-many mapping on all of its IRQs: there is
+ * only one real hardware interrupt, and we determine which interrupt
+ * actually fired by reading some I/O memory.  We have two levels of
+ * expansion: the handler for the hardware interrupt generates one of
+ * the IRQ_LOCOMO_*_BASE interrupts, and those handlers in turn generate
+ * more interrupts:
+ *
+ * hardware irq reads LOCOMO_ICR & 0x0f00
+ *   IRQ_LOCOMO_KEY_BASE
+ *   IRQ_LOCOMO_GPIO_BASE
+ *   IRQ_LOCOMO_LT_BASE
+ *   IRQ_LOCOMO_SPI_BASE
+ * IRQ_LOCOMO_KEY_BASE reads LOCOMO_KIC & 0x0001
+ *   IRQ_LOCOMO_KEY
+ * IRQ_LOCOMO_GPIO_BASE reads LOCOMO_GIR & LOCOMO_GPD & 0xffff
+ *   IRQ_LOCOMO_GPIO[0-15]
+ * IRQ_LOCOMO_LT_BASE reads LOCOMO_LTINT & 0x0001
+ *   IRQ_LOCOMO_LT
+ * IRQ_LOCOMO_SPI_BASE reads LOCOMO_SPIIR & 0x000F
+ *   IRQ_LOCOMO_SPI_RFR
+ *   IRQ_LOCOMO_SPI_RFW
+ *   IRQ_LOCOMO_SPI_OVRN
+ *   IRQ_LOCOMO_SPI_TEND
+ */
+
+#define LOCOMO_IRQ_START (IRQ_LOCOMO_KEY_BASE)
+#define LOCOMO_IRQ_KEY_START (IRQ_LOCOMO_KEY)
+#define LOCOMO_IRQ_GPIO_START (IRQ_LOCOMO_GPIO0)
+#define LOCOMO_IRQ_LT_START (IRQ_LOCOMO_LT)
+#define LOCOMO_IRQ_SPI_START (IRQ_LOCOMO_SPI_RFR)
+
+static void locomo_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ /* Acknowledge the parent IRQ */
+ desc->chip->ack(irq);
+
+ /* check why this interrupt was generated */
+ req = locomo_readl(mapbase + LOCOMO_ICR) & 0x0f00;
+
+ if (req) {
+ /* generate the next interrupt(s) */
+ irq = LOCOMO_IRQ_START;
+ d = irq_desc + irq;
+ for (i = 0; i <= 3; i++, d++, irq++) {
+ if (req & (0x0100 << i)) {
+ d->handle(irq, d, regs);
+ }
+
+ }
+ }
+}
+
+static void locomo_ack_irq(unsigned int irq)
+{
+}
+
+static void locomo_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_ICR);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_START));
+ locomo_writel(r, mapbase + LOCOMO_ICR);
+}
+
+static void locomo_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_ICR);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_START));
+ locomo_writel(r, mapbase + LOCOMO_ICR);
+}
+
+static struct irqchip locomo_chip = {
+ .ack = locomo_ack_irq,
+ .mask = locomo_mask_irq,
+ .unmask = locomo_unmask_irq,
+};
+
+static void locomo_key_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ if (locomo_readl(mapbase + LOCOMO_KIC) & 0x0001) {
+ d = irq_desc + LOCOMO_IRQ_KEY_START;
+ d->handle(LOCOMO_IRQ_KEY_START, d, regs);
+ }
+}
+
+static void locomo_key_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r &= ~(0x0100 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static void locomo_key_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static void locomo_key_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_KIC);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_KEY_START));
+ locomo_writel(r, mapbase + LOCOMO_KIC);
+}
+
+static struct irqchip locomo_key_chip = {
+ .ack = locomo_key_ack_irq,
+ .mask = locomo_key_mask_irq,
+ .unmask = locomo_key_unmask_irq,
+};
+
+static void locomo_gpio_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ req = locomo_readl(mapbase + LOCOMO_GIR) &
+ locomo_readl(mapbase + LOCOMO_GPD) &
+ 0xffff;
+
+ if (req) {
+ irq = LOCOMO_IRQ_GPIO_START;
+ d = irq_desc + LOCOMO_IRQ_GPIO_START;
+ for (i = 0; i <= 15; i++, irq++, d++) {
+ if (req & (0x0001 << i)) {
+ d->handle(irq, d, regs);
+ }
+ }
+ }
+}
+
+static void locomo_gpio_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
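+	/* The GIS status bits are write-protected: clearing a bit in GIS
+	 * only takes effect while the matching bit in GWE (write enable)
+	 * is set, so open the gate, clear the status bit, then close the
+	 * gate again (register semantics inferred from the GWE/GIS names).
+	 */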
+ r = locomo_readl(mapbase + LOCOMO_GWE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GWE);
+
+ r = locomo_readl(mapbase + LOCOMO_GIS);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIS);
+
+ r = locomo_readl(mapbase + LOCOMO_GWE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GWE);
+}
+
+static void locomo_gpio_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_GIE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIE);
+}
+
+static void locomo_gpio_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_GIE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_GPIO_START));
+ locomo_writel(r, mapbase + LOCOMO_GIE);
+}
+
+static struct irqchip locomo_gpio_chip = {
+ .ack = locomo_gpio_ack_irq,
+ .mask = locomo_gpio_mask_irq,
+ .unmask = locomo_gpio_unmask_irq,
+};
+
+static void locomo_lt_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ if (locomo_readl(mapbase + LOCOMO_LTINT) & 0x0001) {
+ d = irq_desc + LOCOMO_IRQ_LT_START;
+ d->handle(LOCOMO_IRQ_LT_START, d, regs);
+ }
+}
+
+static void locomo_lt_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r &= ~(0x0100 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static void locomo_lt_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r &= ~(0x0010 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static void locomo_lt_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_LTINT);
+ r |= (0x0010 << (irq - LOCOMO_IRQ_LT_START));
+ locomo_writel(r, mapbase + LOCOMO_LTINT);
+}
+
+static struct irqchip locomo_lt_chip = {
+ .ack = locomo_lt_ack_irq,
+ .mask = locomo_lt_mask_irq,
+ .unmask = locomo_lt_unmask_irq,
+};
+
+static void locomo_spi_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ int req, i;
+ struct irqdesc *d;
+ void *mapbase = get_irq_chipdata(irq);
+
+ req = locomo_readl(mapbase + LOCOMO_SPIIR) & 0x000F;
+ if (req) {
+ irq = LOCOMO_IRQ_SPI_START;
+ d = irq_desc + irq;
+
+ for (i = 0; i <= 3; i++, irq++, d++) {
+ if (req & (0x0001 << i)) {
+ d->handle(irq, d, regs);
+ }
+ }
+ }
+}
+
+static void locomo_spi_ack_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
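+	/* Same write-enable sequence as the GPIO ack: SPIIS status bits
+	 * are only writable while the matching SPIWE bit is set (inferred
+	 * from the register names).
+	 */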
+ r = locomo_readl(mapbase + LOCOMO_SPIWE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIWE);
+
+ r = locomo_readl(mapbase + LOCOMO_SPIIS);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIS);
+
+ r = locomo_readl(mapbase + LOCOMO_SPIWE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIWE);
+}
+
+static void locomo_spi_mask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_SPIIE);
+ r &= ~(0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIE);
+}
+
+static void locomo_spi_unmask_irq(unsigned int irq)
+{
+ void *mapbase = get_irq_chipdata(irq);
+ unsigned int r;
+ r = locomo_readl(mapbase + LOCOMO_SPIIE);
+ r |= (0x0001 << (irq - LOCOMO_IRQ_SPI_START));
+ locomo_writel(r, mapbase + LOCOMO_SPIIE);
+}
+
+static struct irqchip locomo_spi_chip = {
+ .ack = locomo_spi_ack_irq,
+ .mask = locomo_spi_mask_irq,
+ .unmask = locomo_spi_unmask_irq,
+};
+
+static void locomo_setup_irq(struct locomo *lchip)
+{
+ int irq;
+ void *irqbase = lchip->base;
+
+ /*
+ * Install handler for IRQ_LOCOMO_HW.
+ */
+ set_irq_type(lchip->irq, IRQT_FALLING);
+ set_irq_chipdata(lchip->irq, irqbase);
+ set_irq_chained_handler(lchip->irq, locomo_handler);
+
+ /* Install handlers for IRQ_LOCOMO_*_BASE */
+ set_irq_chip(IRQ_LOCOMO_KEY_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_KEY_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_KEY_BASE, locomo_key_handler);
+ set_irq_flags(IRQ_LOCOMO_KEY_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_GPIO_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_GPIO_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_GPIO_BASE, locomo_gpio_handler);
+ set_irq_flags(IRQ_LOCOMO_GPIO_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_LT_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_LT_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_LT_BASE, locomo_lt_handler);
+ set_irq_flags(IRQ_LOCOMO_LT_BASE, IRQF_VALID | IRQF_PROBE);
+
+ set_irq_chip(IRQ_LOCOMO_SPI_BASE, &locomo_chip);
+ set_irq_chipdata(IRQ_LOCOMO_SPI_BASE, irqbase);
+ set_irq_chained_handler(IRQ_LOCOMO_SPI_BASE, locomo_spi_handler);
+ set_irq_flags(IRQ_LOCOMO_SPI_BASE, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_KEY_BASE generated interrupts */
+ set_irq_chip(LOCOMO_IRQ_KEY_START, &locomo_key_chip);
+ set_irq_chipdata(LOCOMO_IRQ_KEY_START, irqbase);
+ set_irq_handler(LOCOMO_IRQ_KEY_START, do_edge_IRQ);
+ set_irq_flags(LOCOMO_IRQ_KEY_START, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_GPIO_BASE generated interrupts */
+ for (irq = LOCOMO_IRQ_GPIO_START; irq < LOCOMO_IRQ_GPIO_START + 16; irq++) {
+ set_irq_chip(irq, &locomo_gpio_chip);
+ set_irq_chipdata(irq, irqbase);
+ set_irq_handler(irq, do_edge_IRQ);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ }
+
+ /* install handlers for IRQ_LOCOMO_LT_BASE generated interrupts */
+ set_irq_chip(LOCOMO_IRQ_LT_START, &locomo_lt_chip);
+ set_irq_chipdata(LOCOMO_IRQ_LT_START, irqbase);
+ set_irq_handler(LOCOMO_IRQ_LT_START, do_edge_IRQ);
+ set_irq_flags(LOCOMO_IRQ_LT_START, IRQF_VALID | IRQF_PROBE);
+
+ /* install handlers for IRQ_LOCOMO_SPI_BASE generated interrupts */
+	for (irq = LOCOMO_IRQ_SPI_START; irq < LOCOMO_IRQ_SPI_START + 4; irq++) {
+ set_irq_chip(irq, &locomo_spi_chip);
+ set_irq_chipdata(irq, irqbase);
+ set_irq_handler(irq, do_edge_IRQ);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ }
+}
+
+
+static void locomo_dev_release(struct device *_dev)
+{
+ struct locomo_dev *dev = LOCOMO_DEV(_dev);
+
+ release_resource(&dev->res);
+ kfree(dev);
+}
+
+static int
+locomo_init_one_child(struct locomo *lchip, struct resource *parent,
+ struct locomo_dev_info *info)
+{
+ struct locomo_dev *dev;
+ int ret;
+
+ dev = kmalloc(sizeof(struct locomo_dev), GFP_KERNEL);
+ if (!dev) {
+ ret = -ENOMEM;
+ goto out;
+ }
+ memset(dev, 0, sizeof(struct locomo_dev));
+
+	strncpy(dev->dev.bus_id, info->name, sizeof(dev->dev.bus_id));
+ /*
+ * If the parent device has a DMA mask associated with it,
+ * propagate it down to the children.
+ */
+ if (lchip->dev->dma_mask) {
+ dev->dma_mask = *lchip->dev->dma_mask;
+ dev->dev.dma_mask = &dev->dma_mask;
+ }
+
+ dev->devid = info->devid;
+ dev->dev.parent = lchip->dev;
+ dev->dev.bus = &locomo_bus_type;
+ dev->dev.release = locomo_dev_release;
+ dev->dev.coherent_dma_mask = lchip->dev->coherent_dma_mask;
+ dev->res.start = lchip->phys + info->offset;
+	dev->res.end = dev->res.start + info->length - 1; /* end is inclusive */
+ dev->res.name = dev->dev.bus_id;
+ dev->res.flags = IORESOURCE_MEM;
+ dev->mapbase = lchip->base + info->offset;
+ memmove(dev->irq, info->irq, sizeof(dev->irq));
+
+ if (info->length) {
+ ret = request_resource(parent, &dev->res);
+ if (ret) {
+			printk(KERN_ERR
+			       "LoCoMo: failed to allocate resource for %s\n",
+			       dev->res.name);
+ goto out;
+ }
+ }
+
+ ret = device_register(&dev->dev);
+ if (ret) {
+ release_resource(&dev->res);
+ out:
+ kfree(dev);
+ }
+ return ret;
+}
+
+/**
+ * __locomo_probe - probe for a single LoCoMo chip.
+ * @me: host device
+ * @mem: memory resource covering the chip's registers
+ * @irq: interrupt line wired to the chip
+ *
+ * Probe for a LoCoMo chip. This must be called
+ * before any other locomo-specific code.
+ *
+ * Returns:
+ * %-ENODEV device not found.
+ * %-EBUSY physical address already marked in-use.
+ * %0 successful.
+ */
+static int
+__locomo_probe(struct device *me, struct resource *mem, int irq)
+{
+ struct locomo *lchip;
+ unsigned long r;
+ int i, ret = -ENODEV;
+
+ lchip = kmalloc(sizeof(struct locomo), GFP_KERNEL);
+ if (!lchip)
+ return -ENOMEM;
+
+ memset(lchip, 0, sizeof(struct locomo));
+
+ lchip->dev = me;
+ dev_set_drvdata(lchip->dev, lchip);
+
+ lchip->phys = mem->start;
+ lchip->irq = irq;
+
+ /*
+ * Map the whole region. This also maps the
+ * registers for our children.
+ */
+ lchip->base = ioremap(mem->start, PAGE_SIZE);
+ if (!lchip->base) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ /* locomo initialize */
+ locomo_writel(0, lchip->base + LOCOMO_ICR);
+ /* KEYBOARD */
+ locomo_writel(0, lchip->base + LOCOMO_KIC);
+
+ /* GPIO */
+ locomo_writel(0, lchip->base + LOCOMO_GPO);
+ locomo_writel( (LOCOMO_GPIO(2) | LOCOMO_GPIO(3) | LOCOMO_GPIO(13) | LOCOMO_GPIO(14))
+ , lchip->base + LOCOMO_GPE);
+ locomo_writel( (LOCOMO_GPIO(2) | LOCOMO_GPIO(3) | LOCOMO_GPIO(13) | LOCOMO_GPIO(14))
+ , lchip->base + LOCOMO_GPD);
+ locomo_writel(0, lchip->base + LOCOMO_GIE);
+
+ /* FrontLight */
+ locomo_writel(0, lchip->base + LOCOMO_ALS);
+ locomo_writel(0, lchip->base + LOCOMO_ALD);
+ /* Longtime timer */
+ locomo_writel(0, lchip->base + LOCOMO_LTINT);
+ /* SPI */
+ locomo_writel(0, lchip->base + LOCOMO_SPIIE);
+
+ locomo_writel(6 + 8 + 320 + 30 - 10, lchip->base + LOCOMO_ASD);
+ r = locomo_readl(lchip->base + LOCOMO_ASD);
+ r |= 0x8000;
+ locomo_writel(r, lchip->base + LOCOMO_ASD);
+
+ locomo_writel(6 + 8 + 320 + 30 - 10 - 128 + 4, lchip->base + LOCOMO_HSD);
+ r = locomo_readl(lchip->base + LOCOMO_HSD);
+ r |= 0x8000;
+ locomo_writel(r, lchip->base + LOCOMO_HSD);
+
+ locomo_writel(128 / 8, lchip->base + LOCOMO_HSC);
+
+ /* XON */
+ locomo_writel(0x80, lchip->base + LOCOMO_TADC);
+ udelay(1000);
+ /* CLK9MEN */
+ r = locomo_readl(lchip->base + LOCOMO_TADC);
+ r |= 0x10;
+ locomo_writel(r, lchip->base + LOCOMO_TADC);
+ udelay(100);
+
+ /* init DAC */
+ r = locomo_readl(lchip->base + LOCOMO_DAC);
+ r |= LOCOMO_DAC_SCLOEB | LOCOMO_DAC_SDAOEB;
+ locomo_writel(r, lchip->base + LOCOMO_DAC);
+
+ r = locomo_readl(lchip->base + LOCOMO_VER);
+ printk(KERN_INFO "LoCoMo Chip: %lu%lu\n", (r >> 8), (r & 0xff));
+
+ /*
+ * The interrupt controller must be initialised before any
+ * other device to ensure that the interrupts are available.
+ */
+ if (lchip->irq != NO_IRQ)
+ locomo_setup_irq(lchip);
+
+ for (i = 0; i < ARRAY_SIZE(locomo_devices); i++)
+ locomo_init_one_child(lchip, mem, &locomo_devices[i]);
+
+ return 0;
+
+ out:
+ kfree(lchip);
+ return ret;
+}
+
+static void __locomo_remove(struct locomo *lchip)
+{
+ struct list_head *l, *n;
+
+ list_for_each_safe(l, n, &lchip->dev->children) {
+ struct device *d = list_to_dev(l);
+
+ device_unregister(d);
+ }
+
+ if (lchip->irq != NO_IRQ) {
+ set_irq_chained_handler(lchip->irq, NULL);
+ set_irq_data(lchip->irq, NULL);
+ }
+
+ iounmap(lchip->base);
+ kfree(lchip);
+}
+
+static int locomo_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct resource *mem;
+ int irq;
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!mem)
+ return -EINVAL;
+ irq = platform_get_irq(pdev, 0);
+
+ return __locomo_probe(dev, mem, irq);
+}
+
+static int locomo_remove(struct device *dev)
+{
+ struct locomo *lchip = dev_get_drvdata(dev);
+
+ if (lchip) {
+ __locomo_remove(lchip);
+ dev_set_drvdata(dev, NULL);
+ }
+
+ return 0;
+}
+
+/*
+ * Not sure if this should be on the system bus or not yet.
+ * We really want some way to register a system device at
+ * the per-machine level, and then have this driver pick
+ * up the registered devices.
+ */
+static struct device_driver locomo_device_driver = {
+ .name = "locomo",
+ .bus = &platform_bus_type,
+ .probe = locomo_probe,
+ .remove = locomo_remove,
+};
+
+/*
+ * Get the parent device driver (us) structure
+ * from a child function device
+ */
+static inline struct locomo *locomo_chip_driver(struct locomo_dev *ldev)
+{
+ return (struct locomo *)dev_get_drvdata(ldev->dev.parent);
+}
+
+/*
+ * LoCoMo "Register Access Bus."
+ *
+ * We model this as a regular bus type, and hang devices directly
+ * off this.
+ */
+static int locomo_match(struct device *_dev, struct device_driver *_drv)
+{
+ struct locomo_dev *dev = LOCOMO_DEV(_dev);
+ struct locomo_driver *drv = LOCOMO_DRV(_drv);
+
+ return dev->devid == drv->devid;
+}
+
+static int locomo_bus_suspend(struct device *dev, u32 state)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv && drv->suspend)
+ ret = drv->suspend(ldev, state);
+ return ret;
+}
+
+static int locomo_bus_resume(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv && drv->resume)
+ ret = drv->resume(ldev);
+ return ret;
+}
+
+static int locomo_bus_probe(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = -ENODEV;
+
+ if (drv->probe)
+ ret = drv->probe(ldev);
+ return ret;
+}
+
+static int locomo_bus_remove(struct device *dev)
+{
+ struct locomo_dev *ldev = LOCOMO_DEV(dev);
+ struct locomo_driver *drv = LOCOMO_DRV(dev->driver);
+ int ret = 0;
+
+ if (drv->remove)
+ ret = drv->remove(ldev);
+ return ret;
+}
+
+struct bus_type locomo_bus_type = {
+ .name = "locomo-bus",
+ .match = locomo_match,
+ .suspend = locomo_bus_suspend,
+ .resume = locomo_bus_resume,
+};
+
+int locomo_driver_register(struct locomo_driver *driver)
+{
+ driver->drv.probe = locomo_bus_probe;
+ driver->drv.remove = locomo_bus_remove;
+ driver->drv.bus = &locomo_bus_type;
+ return driver_register(&driver->drv);
+}
+
+void locomo_driver_unregister(struct locomo_driver *driver)
+{
+ driver_unregister(&driver->drv);
+}
+
+static int __init locomo_init(void)
+{
+ int ret = bus_register(&locomo_bus_type);
+ if (ret == 0)
+ driver_register(&locomo_device_driver);
+ return ret;
+}
+
+static void __exit locomo_exit(void)
+{
+ driver_unregister(&locomo_device_driver);
+ bus_unregister(&locomo_bus_type);
+}
+
+module_init(locomo_init);
+module_exit(locomo_exit);
+
+MODULE_DESCRIPTION("Sharp LoCoMo core driver");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("John Lenz <lenz@cs.wisc.edu>");
+
+EXPORT_SYMBOL(locomo_driver_register);
+EXPORT_SYMBOL(locomo_driver_unregister);
--- /dev/null
+/*
+ * linux/arch/arm/common/time-acorn.c
+ *
+ * Copyright (c) 1996-2000 Russell King.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Changelog:
+ * 24-Sep-1996 RMK Created
+ * 10-Oct-1996 RMK Brought up to date with arch-sa110eval
+ * 04-Dec-1997 RMK Updated for new arch/arm/time.c
+ * 13-Jun-2004 DS Moved to arch/arm/common b/c shared w/CLPS7500
+ */
+#include <linux/timex.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/hardware/ioc.h>
+
+#include <asm/mach/time.h>
+
+unsigned long ioc_timer_gettimeoffset(void)
+{
+ unsigned int count1, count2, status;
+ long offset;
+
+	ioc_writeb(0, IOC_T0LATCH);
+	barrier();
+	count1 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+	barrier();
+	status = ioc_readb(IOC_IRQREQA);
+	barrier();
+	ioc_writeb(0, IOC_T0LATCH);
+	barrier();
+	count2 = ioc_readb(IOC_T0CNTL) | (ioc_readb(IOC_T0CNTH) << 8);
+
+ offset = count2;
+ if (count2 < count1) {
+ /*
+ * We have not had an interrupt between reading count1
+ * and count2.
+ */
+ if (status & (1 << 5))
+ offset -= LATCH;
+ } else if (count2 > count1) {
+ /*
+ * We have just had another interrupt between reading
+ * count1 and count2.
+ */
+ offset -= LATCH;
+ }
+
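+	/*
+	 * "offset" now holds the ticks remaining in the current period,
+	 * so LATCH - offset is the number of ticks elapsed since the
+	 * last timer interrupt.  Scale that by the jiffy length in
+	 * microseconds and divide by LATCH, rounding to the nearest
+	 * microsecond.
+	 */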
+ offset = (LATCH - offset) * (tick_nsec / 1000);
+ return (offset + LATCH/2) / LATCH;
+}
+
+void __init ioctime_init(void)
+{
+ ioc_writeb(LATCH & 255, IOC_T0LTCHL);
+ ioc_writeb(LATCH >> 8, IOC_T0LTCHH);
+ ioc_writeb(0, IOC_T0GO);
+}
+
+static irqreturn_t
+ioc_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ write_seqlock(&xtime_lock);
+ timer_tick(regs);
+ write_sequnlock(&xtime_lock);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction ioc_timer_irq = {
+ .name = "timer",
+ .flags = SA_INTERRUPT,
+ .handler = ioc_timer_interrupt
+};
+
+/*
+ * Set up timer interrupt.
+ */
+static void __init ioc_timer_init(void)
+{
+ ioctime_init();
+ setup_irq(IRQ_TIMER, &ioc_timer_irq);
+}
+
+struct sys_timer ioc_timer = {
+ .init = ioc_timer_init,
+ .offset = ioc_timer_gettimeoffset,
+};
+
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_ARM=y
+CONFIG_MMU=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+CONFIG_BSD_PROCESS_ACCT=y
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+CONFIG_MODVERSIONS=y
+CONFIG_KMOD=y
+
+#
+# System Type
+#
+# CONFIG_ARCH_ADIFCC is not set
+# CONFIG_ARCH_CLPS7500 is not set
+# CONFIG_ARCH_CLPS711X is not set
+# CONFIG_ARCH_CO285 is not set
+# CONFIG_ARCH_PXA is not set
+# CONFIG_ARCH_EBSA110 is not set
+# CONFIG_ARCH_CAMELOT is not set
+# CONFIG_ARCH_FOOTBRIDGE is not set
+# CONFIG_ARCH_INTEGRATOR is not set
+# CONFIG_ARCH_IOP3XX is not set
+CONFIG_ARCH_IXP4XX=y
+# CONFIG_ARCH_L7200 is not set
+# CONFIG_ARCH_RPC is not set
+# CONFIG_ARCH_SA1100 is not set
+# CONFIG_ARCH_SHARK is not set
+# CONFIG_ARCH_S3C2410 is not set
+# CONFIG_ARCH_OMAP is not set
+# CONFIG_ARCH_LH7A40X is not set
+# CONFIG_ARCH_VERSATILE_PB is not set
+
+#
+# CLPS711X/EP721X Implementations
+#
+
+#
+# Epxa10db
+#
+
+#
+# Footbridge Implementations
+#
+
+#
+# IOP3xx Implementation Options
+#
+# CONFIG_ARCH_IOP310 is not set
+# CONFIG_ARCH_IOP321 is not set
+
+#
+# IOP3xx Chipset Features
+#
+CONFIG_ARCH_SUPPORTS_BIG_ENDIAN=y
+
+#
+# Intel IXP4xx Implementation Options
+#
+
+#
+# IXP4xx Platforms
+#
+CONFIG_ARCH_IXDP425=y
+CONFIG_ARCH_IXCDP1100=y
+CONFIG_ARCH_PRPMC1100=y
+CONFIG_ARCH_ADI_COYOTE=y
+# CONFIG_ARCH_AVILA is not set
+CONFIG_ARCH_IXDP4XX=y
+
+#
+# IXP4xx Options
+#
+# CONFIG_IXP4XX_INDIRECT_PCI is not set
+
+#
+# Intel PXA250/210 Implementations
+#
+
+#
+# SA11x0 Implementations
+#
+
+#
+# TI OMAP Implementations
+#
+
+#
+# OMAP Core Type
+#
+
+#
+# OMAP Board Type
+#
+
+#
+# OMAP Feature Selections
+#
+
+#
+# S3C2410 Implementations
+#
+
+#
+# LH7A40X Implementations
+#
+CONFIG_DMABOUNCE=y
+
+#
+# Processor Type
+#
+CONFIG_CPU_32=y
+CONFIG_CPU_XSCALE=y
+CONFIG_CPU_32v5=y
+CONFIG_CPU_ABRT_EV5T=y
+CONFIG_CPU_TLB_V4WBI=y
+CONFIG_CPU_MINICACHE=y
+
+#
+# Processor Features
+#
+# CONFIG_ARM_THUMB is not set
+CONFIG_CPU_BIG_ENDIAN=y
+CONFIG_XSCALE_PMU=y
+
+#
+# General setup
+#
+CONFIG_PCI=y
+# CONFIG_ZBOOT_ROM is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+
+#
+# At least one math emulation must be selected
+#
+CONFIG_FPE_NWFPE=y
+# CONFIG_FPE_NWFPE_XP is not set
+# CONFIG_FPE_FASTFPE is not set
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_AOUT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Generic Driver Options
+#
+# CONFIG_DEBUG_DRIVER is not set
+CONFIG_PM=y
+# CONFIG_PREEMPT is not set
+CONFIG_APM=y
+# CONFIG_ARTHUR is not set
+CONFIG_CMDLINE="console=ttyS0,115200 ip=bootp root=/dev/nfs"
+CONFIG_ALIGNMENT_TRAP=y
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+CONFIG_MTD=y
+# CONFIG_MTD_DEBUG is not set
+CONFIG_MTD_PARTITIONS=y
+# CONFIG_MTD_CONCAT is not set
+CONFIG_MTD_REDBOOT_PARTS=y
+# CONFIG_MTD_CMDLINE_PARTS is not set
+# CONFIG_MTD_AFS_PARTS is not set
+
+#
+# User Modules And Translation Layers
+#
+CONFIG_MTD_CHAR=y
+CONFIG_MTD_BLOCK=y
+# CONFIG_FTL is not set
+# CONFIG_NFTL is not set
+# CONFIG_INFTL is not set
+
+#
+# RAM/ROM/Flash chip drivers
+#
+CONFIG_MTD_CFI=y
+# CONFIG_MTD_JEDECPROBE is not set
+CONFIG_MTD_GEN_PROBE=y
+# CONFIG_MTD_CFI_ADV_OPTIONS is not set
+CONFIG_MTD_CFI_INTELEXT=y
+# CONFIG_MTD_CFI_AMDSTD is not set
+# CONFIG_MTD_CFI_STAA is not set
+# CONFIG_MTD_RAM is not set
+# CONFIG_MTD_ROM is not set
+# CONFIG_MTD_ABSENT is not set
+# CONFIG_MTD_OBSOLETE_CHIPS is not set
+
+#
+# Mapping drivers for chip access
+#
+CONFIG_MTD_COMPLEX_MAPPINGS=y
+# CONFIG_MTD_PHYSMAP is not set
+# CONFIG_MTD_ARM_INTEGRATOR is not set
+CONFIG_MTD_IXP4XX=y
+# CONFIG_MTD_EDB7312 is not set
+# CONFIG_MTD_PCI is not set
+
+#
+# Self-contained MTD device drivers
+#
+# CONFIG_MTD_PMC551 is not set
+# CONFIG_MTD_SLRAM is not set
+# CONFIG_MTD_MTDRAM is not set
+# CONFIG_MTD_BLKMTD is not set
+
+#
+# Disk-On-Chip Device Drivers
+#
+# CONFIG_MTD_DOC2000 is not set
+# CONFIG_MTD_DOC2001 is not set
+# CONFIG_MTD_DOC2001PLUS is not set
+
+#
+# NAND Flash Device Drivers
+#
+CONFIG_MTD_NAND=m
+# CONFIG_MTD_NAND_VERIFY_WRITE is not set
+CONFIG_MTD_NAND_IDS=m
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+CONFIG_BLK_DEV_LOOP=y
+# CONFIG_BLK_DEV_CRYPTOLOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_BLK_DEV_INITRD=y
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=m
+CONFIG_PACKET_MMAP=y
+CONFIG_NETLINK_DEV=m
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+CONFIG_IP_MULTICAST=y
+CONFIG_IP_ADVANCED_ROUTER=y
+CONFIG_IP_MULTIPLE_TABLES=y
+CONFIG_IP_ROUTE_FWMARK=y
+CONFIG_IP_ROUTE_NAT=y
+CONFIG_IP_ROUTE_MULTIPATH=y
+CONFIG_IP_ROUTE_VERBOSE=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+CONFIG_IP_PNP_BOOTP=y
+# CONFIG_IP_PNP_RARP is not set
+CONFIG_NET_IPIP=m
+CONFIG_NET_IPGRE=m
+CONFIG_NET_IPGRE_BROADCAST=y
+CONFIG_IP_MROUTE=y
+CONFIG_IP_PIMSM_V1=y
+CONFIG_IP_PIMSM_V2=y
+# CONFIG_ARPD is not set
+CONFIG_SYN_COOKIES=y
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+
+#
+# IP: Virtual Server Configuration
+#
+CONFIG_IP_VS=m
+CONFIG_IP_VS_DEBUG=y
+CONFIG_IP_VS_TAB_BITS=12
+
+#
+# IPVS transport protocol load balancing support
+#
+# CONFIG_IP_VS_PROTO_TCP is not set
+# CONFIG_IP_VS_PROTO_UDP is not set
+# CONFIG_IP_VS_PROTO_ESP is not set
+# CONFIG_IP_VS_PROTO_AH is not set
+
+#
+# IPVS scheduler
+#
+CONFIG_IP_VS_RR=m
+CONFIG_IP_VS_WRR=m
+CONFIG_IP_VS_LC=m
+CONFIG_IP_VS_WLC=m
+CONFIG_IP_VS_LBLC=m
+CONFIG_IP_VS_LBLCR=m
+CONFIG_IP_VS_DH=m
+CONFIG_IP_VS_SH=m
+# CONFIG_IP_VS_SED is not set
+# CONFIG_IP_VS_NQ is not set
+
+#
+# IPVS application helper
+#
+# CONFIG_IPV6 is not set
+CONFIG_NETFILTER=y
+# CONFIG_NETFILTER_DEBUG is not set
+CONFIG_BRIDGE_NETFILTER=y
+
+#
+# IP: Netfilter Configuration
+#
+CONFIG_IP_NF_CONNTRACK=m
+CONFIG_IP_NF_FTP=m
+CONFIG_IP_NF_IRC=m
+# CONFIG_IP_NF_TFTP is not set
+# CONFIG_IP_NF_AMANDA is not set
+CONFIG_IP_NF_QUEUE=m
+CONFIG_IP_NF_IPTABLES=m
+CONFIG_IP_NF_MATCH_LIMIT=m
+# CONFIG_IP_NF_MATCH_IPRANGE is not set
+CONFIG_IP_NF_MATCH_MAC=m
+# CONFIG_IP_NF_MATCH_PKTTYPE is not set
+CONFIG_IP_NF_MATCH_MARK=m
+CONFIG_IP_NF_MATCH_MULTIPORT=m
+CONFIG_IP_NF_MATCH_TOS=m
+# CONFIG_IP_NF_MATCH_RECENT is not set
+# CONFIG_IP_NF_MATCH_ECN is not set
+# CONFIG_IP_NF_MATCH_DSCP is not set
+CONFIG_IP_NF_MATCH_AH_ESP=m
+CONFIG_IP_NF_MATCH_LENGTH=m
+CONFIG_IP_NF_MATCH_TTL=m
+CONFIG_IP_NF_MATCH_TCPMSS=m
+# CONFIG_IP_NF_MATCH_HELPER is not set
+CONFIG_IP_NF_MATCH_STATE=m
+# CONFIG_IP_NF_MATCH_CONNTRACK is not set
+CONFIG_IP_NF_MATCH_OWNER=m
+# CONFIG_IP_NF_MATCH_PHYSDEV is not set
+CONFIG_IP_NF_FILTER=m
+CONFIG_IP_NF_TARGET_REJECT=m
+CONFIG_IP_NF_NAT=m
+CONFIG_IP_NF_NAT_NEEDED=y
+CONFIG_IP_NF_TARGET_MASQUERADE=m
+CONFIG_IP_NF_TARGET_REDIRECT=m
+# CONFIG_IP_NF_TARGET_NETMAP is not set
+# CONFIG_IP_NF_TARGET_SAME is not set
+CONFIG_IP_NF_NAT_LOCAL=y
+CONFIG_IP_NF_NAT_SNMP_BASIC=m
+CONFIG_IP_NF_NAT_IRC=m
+CONFIG_IP_NF_NAT_FTP=m
+CONFIG_IP_NF_MANGLE=m
+CONFIG_IP_NF_TARGET_TOS=m
+# CONFIG_IP_NF_TARGET_ECN is not set
+# CONFIG_IP_NF_TARGET_DSCP is not set
+CONFIG_IP_NF_TARGET_MARK=m
+# CONFIG_IP_NF_TARGET_CLASSIFY is not set
+CONFIG_IP_NF_TARGET_LOG=m
+CONFIG_IP_NF_TARGET_ULOG=m
+CONFIG_IP_NF_TARGET_TCPMSS=m
+CONFIG_IP_NF_ARPTABLES=m
+CONFIG_IP_NF_ARPFILTER=m
+# CONFIG_IP_NF_ARP_MANGLE is not set
+CONFIG_IP_NF_COMPAT_IPCHAINS=m
+CONFIG_IP_NF_COMPAT_IPFWADM=m
+# CONFIG_IP_NF_RAW is not set
+
+#
+# Bridge: Netfilter Configuration
+#
+# CONFIG_BRIDGE_NF_EBTABLES is not set
+CONFIG_XFRM=y
+# CONFIG_XFRM_USER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+CONFIG_ATM=y
+CONFIG_ATM_CLIP=y
+# CONFIG_ATM_CLIP_NO_ICMP is not set
+CONFIG_ATM_LANE=m
+CONFIG_ATM_MPOA=m
+CONFIG_ATM_BR2684=m
+# CONFIG_ATM_BR2684_IPFILTER is not set
+CONFIG_BRIDGE=m
+CONFIG_VLAN_8021Q=m
+# CONFIG_DECNET is not set
+CONFIG_LLC=m
+# CONFIG_LLC2 is not set
+CONFIG_IPX=m
+# CONFIG_IPX_INTERN is not set
+CONFIG_ATALK=m
+CONFIG_DEV_APPLETALK=y
+CONFIG_IPDDP=m
+CONFIG_IPDDP_ENCAP=y
+CONFIG_IPDDP_DECAP=y
+CONFIG_X25=m
+CONFIG_LAPB=m
+# CONFIG_NET_DIVERT is not set
+CONFIG_ECONET=m
+CONFIG_ECONET_AUNUDP=y
+CONFIG_ECONET_NATIVE=y
+CONFIG_WAN_ROUTER=m
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+CONFIG_NET_SCHED=y
+CONFIG_NET_SCH_CBQ=m
+CONFIG_NET_SCH_HTB=m
+# CONFIG_NET_SCH_HFSC is not set
+CONFIG_NET_SCH_CSZ=m
+# CONFIG_NET_SCH_ATM is not set
+CONFIG_NET_SCH_PRIO=m
+CONFIG_NET_SCH_RED=m
+CONFIG_NET_SCH_SFQ=m
+CONFIG_NET_SCH_TEQL=m
+CONFIG_NET_SCH_TBF=m
+CONFIG_NET_SCH_GRED=m
+CONFIG_NET_SCH_DSMARK=m
+# CONFIG_NET_SCH_DELAY is not set
+CONFIG_NET_SCH_INGRESS=m
+CONFIG_NET_QOS=y
+CONFIG_NET_ESTIMATOR=y
+CONFIG_NET_CLS=y
+CONFIG_NET_CLS_TCINDEX=m
+CONFIG_NET_CLS_ROUTE4=m
+CONFIG_NET_CLS_ROUTE=y
+CONFIG_NET_CLS_FW=m
+CONFIG_NET_CLS_U32=m
+CONFIG_NET_CLS_RSVP=m
+CONFIG_NET_CLS_RSVP6=m
+CONFIG_NET_CLS_POLICE=y
+
+#
+# Network testing
+#
+CONFIG_NET_PKTGEN=m
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+CONFIG_DUMMY=y
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_DGRS is not set
+CONFIG_EEPRO100=y
+# CONFIG_EEPRO100_PIO is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+# CONFIG_8139TOO is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+# CONFIG_AIRO is not set
+CONFIG_HERMES=y
+# CONFIG_PLX_HERMES is not set
+# CONFIG_TMD_HERMES is not set
+CONFIG_PCI_HERMES=y
+# CONFIG_ATMEL is not set
+
+#
+# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
+#
+CONFIG_NET_WIRELESS=y
+
+#
+# Wan interfaces
+#
+CONFIG_WAN=y
+# CONFIG_DSCC4 is not set
+# CONFIG_LANMEDIA is not set
+# CONFIG_SYNCLINK_SYNCPPP is not set
+CONFIG_HDLC=m
+CONFIG_HDLC_RAW=y
+# CONFIG_HDLC_RAW_ETH is not set
+CONFIG_HDLC_CISCO=y
+CONFIG_HDLC_FR=y
+CONFIG_HDLC_PPP=y
+CONFIG_HDLC_X25=y
+# CONFIG_PCI200SYN is not set
+# CONFIG_WANXL is not set
+# CONFIG_PC300 is not set
+# CONFIG_FARSYNC is not set
+CONFIG_DLCI=m
+CONFIG_DLCI_COUNT=24
+CONFIG_DLCI_MAX=8
+CONFIG_WAN_ROUTER_DRIVERS=y
+# CONFIG_CYCLADES_SYNC is not set
+# CONFIG_LAPBETHER is not set
+# CONFIG_X25_ASY is not set
+
+#
+# ATM drivers
+#
+CONFIG_ATM_TCP=m
+# CONFIG_ATM_LANAI is not set
+# CONFIG_ATM_ENI is not set
+# CONFIG_ATM_FIRESTREAM is not set
+# CONFIG_ATM_ZATM is not set
+# CONFIG_ATM_NICSTAR is not set
+# CONFIG_ATM_IDT77252 is not set
+# CONFIG_ATM_AMBASSADOR is not set
+# CONFIG_ATM_HORIZON is not set
+# CONFIG_ATM_IA is not set
+# CONFIG_ATM_FORE200E_MAYBE is not set
+# CONFIG_ATM_HE is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_RCPCI is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+# CONFIG_IDEDISK_STROKE is not set
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+CONFIG_BLK_DEV_IDEPCI=y
+# CONFIG_IDEPCI_SHARE_IRQ is not set
+# CONFIG_BLK_DEV_OFFBOARD is not set
+# CONFIG_BLK_DEV_GENERIC is not set
+# CONFIG_BLK_DEV_OPTI621 is not set
+# CONFIG_BLK_DEV_SL82C105 is not set
+CONFIG_BLK_DEV_IDEDMA_PCI=y
+# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
+# CONFIG_IDEDMA_PCI_AUTO is not set
+CONFIG_BLK_DEV_ADMA=y
+# CONFIG_BLK_DEV_AEC62XX is not set
+# CONFIG_BLK_DEV_ALI15X3 is not set
+# CONFIG_BLK_DEV_AMD74XX is not set
+CONFIG_BLK_DEV_CMD64X=y
+# CONFIG_BLK_DEV_TRIFLEX is not set
+# CONFIG_BLK_DEV_CY82C693 is not set
+# CONFIG_BLK_DEV_CS5520 is not set
+# CONFIG_BLK_DEV_CS5530 is not set
+# CONFIG_BLK_DEV_HPT34X is not set
+CONFIG_BLK_DEV_HPT366=y
+# CONFIG_BLK_DEV_SC1200 is not set
+# CONFIG_BLK_DEV_PIIX is not set
+# CONFIG_BLK_DEV_NS87415 is not set
+# CONFIG_BLK_DEV_PDC202XX_OLD is not set
+CONFIG_BLK_DEV_PDC202XX_NEW=y
+# CONFIG_PDC202XX_FORCE is not set
+# CONFIG_BLK_DEV_SVWKS is not set
+# CONFIG_BLK_DEV_SIIMAGE is not set
+# CONFIG_BLK_DEV_SLC90E66 is not set
+# CONFIG_BLK_DEV_TRM290 is not set
+# CONFIG_BLK_DEV_VIA82CXXX is not set
+CONFIG_BLK_DEV_IDEDMA=y
+# CONFIG_IDEDMA_IVB is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Fusion MPT device support
+#
+# CONFIG_FUSION is not set
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=2
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+CONFIG_WATCHDOG=y
+# CONFIG_WATCHDOG_NOWAYOUT is not set
+
+#
+# Watchdog Device Drivers
+#
+# CONFIG_SOFT_WATCHDOG is not set
+CONFIG_IXP4XX_WATCHDOG=y
+
+#
+# PCI-based Watchdog Cards
+#
+# CONFIG_PCIPCWATCHDOG is not set
+# CONFIG_WDTPCI is not set
+# CONFIG_NVRAM is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+CONFIG_I2C=y
+CONFIG_I2C_CHARDEV=y
+
+#
+# I2C Algorithms
+#
+CONFIG_I2C_ALGOBIT=y
+# CONFIG_I2C_ALGOPCF is not set
+
+#
+# I2C Hardware Bus support
+#
+# CONFIG_I2C_ALI1535 is not set
+# CONFIG_I2C_ALI1563 is not set
+# CONFIG_I2C_ALI15X3 is not set
+# CONFIG_I2C_AMD756 is not set
+# CONFIG_I2C_AMD8111 is not set
+# CONFIG_I2C_I801 is not set
+# CONFIG_I2C_I810 is not set
+# CONFIG_I2C_ISA is not set
+CONFIG_I2C_IXP4XX=y
+# CONFIG_I2C_NFORCE2 is not set
+# CONFIG_I2C_PARPORT_LIGHT is not set
+# CONFIG_I2C_PIIX4 is not set
+# CONFIG_I2C_PROSAVAGE is not set
+# CONFIG_I2C_SAVAGE4 is not set
+# CONFIG_SCx200_ACB is not set
+# CONFIG_I2C_SIS5595 is not set
+# CONFIG_I2C_SIS630 is not set
+# CONFIG_I2C_SIS96X is not set
+# CONFIG_I2C_VIA is not set
+# CONFIG_I2C_VIAPRO is not set
+# CONFIG_I2C_VOODOO3 is not set
+
+#
+# Hardware Sensors Chip support
+#
+CONFIG_I2C_SENSOR=y
+# CONFIG_SENSORS_ADM1021 is not set
+# CONFIG_SENSORS_ASB100 is not set
+# CONFIG_SENSORS_DS1621 is not set
+# CONFIG_SENSORS_FSCHER is not set
+# CONFIG_SENSORS_GL518SM is not set
+# CONFIG_SENSORS_IT87 is not set
+# CONFIG_SENSORS_LM75 is not set
+# CONFIG_SENSORS_LM78 is not set
+# CONFIG_SENSORS_LM80 is not set
+# CONFIG_SENSORS_LM83 is not set
+# CONFIG_SENSORS_LM85 is not set
+# CONFIG_SENSORS_LM90 is not set
+# CONFIG_SENSORS_VIA686A is not set
+# CONFIG_SENSORS_W83781D is not set
+# CONFIG_SENSORS_W83L785TS is not set
+# CONFIG_SENSORS_W83627HF is not set
+
+#
+# Other I2C Chip support
+#
+CONFIG_SENSORS_EEPROM=y
+# CONFIG_SENSORS_PCF8574 is not set
+# CONFIG_SENSORS_PCF8591 is not set
+# CONFIG_I2C_DEBUG_CORE is not set
+# CONFIG_I2C_DEBUG_ALGO is not set
+# CONFIG_I2C_DEBUG_BUS is not set
+# CONFIG_I2C_DEBUG_CHIP is not set
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+CONFIG_EXT2_FS_XATTR=y
+CONFIG_EXT2_FS_POSIX_ACL=y
+# CONFIG_EXT2_FS_SECURITY is not set
+CONFIG_EXT3_FS=y
+CONFIG_EXT3_FS_XATTR=y
+CONFIG_EXT3_FS_POSIX_ACL=y
+# CONFIG_EXT3_FS_SECURITY is not set
+CONFIG_JBD=y
+# CONFIG_JBD_DEBUG is not set
+CONFIG_FS_MBCACHE=y
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+CONFIG_FS_POSIX_ACL=y
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_FAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_DEVPTS_FS_XATTR is not set
+CONFIG_TMPFS=y
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_JFFS_FS is not set
+CONFIG_JFFS2_FS=y
+CONFIG_JFFS2_FS_DEBUG=0
+# CONFIG_JFFS2_FS_NAND is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+CONFIG_NFS_V3=y
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+# CONFIG_NFSD is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_LOCKD_V4=y
+# CONFIG_EXPORTFS is not set
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_INTERMEZZO_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+CONFIG_PARTITION_ADVANCED=y
+# CONFIG_ACORN_PARTITION is not set
+# CONFIG_OSF_PARTITION is not set
+# CONFIG_AMIGA_PARTITION is not set
+# CONFIG_ATARI_PARTITION is not set
+# CONFIG_MAC_PARTITION is not set
+CONFIG_MSDOS_PARTITION=y
+# CONFIG_BSD_DISKLABEL is not set
+# CONFIG_MINIX_SUBPARTITION is not set
+# CONFIG_SOLARIS_X86_PARTITION is not set
+# CONFIG_UNIXWARE_DISKLABEL is not set
+# CONFIG_LDM_PARTITION is not set
+# CONFIG_NEC98_PARTITION is not set
+# CONFIG_SGI_PARTITION is not set
+# CONFIG_ULTRIX_PARTITION is not set
+# CONFIG_SUN_PARTITION is not set
+# CONFIG_EFI_PARTITION is not set
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Profiling support
+#
+# CONFIG_PROFILING is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# Misc devices
+#
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# Kernel hacking
+#
+CONFIG_FRAME_POINTER=y
+# CONFIG_DEBUG_USER is not set
+# CONFIG_DEBUG_INFO is not set
+CONFIG_DEBUG_KERNEL=y
+# CONFIG_DEBUG_SLAB is not set
+CONFIG_MAGIC_SYSRQ=y
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_WAITQ is not set
+CONFIG_DEBUG_BUGVERBOSE=y
+CONFIG_DEBUG_ERRORS=y
+CONFIG_DEBUG_LL=y
+# CONFIG_DEBUG_ICEDCC is not set
+# CONFIG_DEBUG_BDI2000_XSCALE is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
+CONFIG_ZLIB_INFLATE=y
+CONFIG_ZLIB_DEFLATE=y
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ebsa285/time.h
+ *
+ * Copyright (C) 1998 Russell King.
+ * Copyright (C) 1998 Phil Blundell
+ *
+ * CATS has a real-time clock, though the evaluation board doesn't.
+ *
+ * Changelog:
+ * 21-Mar-1998 RMK Created
+ * 27-Aug-1998 PJB CATS support
+ * 28-Dec-1998 APH Made leds optional
+ * 20-Jan-1999 RMK Started merge of EBSA285, CATS and NetWinder
+ * 16-Mar-1999 RMK More support for EBSA285-like machines with RTCs in
+ */
+
+#define RTC_PORT(x) (rtc_base+(x))
+#define RTC_ALWAYS_BCD 0
+
+#include <linux/timex.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/mc146818rtc.h>
+#include <linux/bcd.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+
+#include <asm/mach/time.h>
+#include "common.h"
+
+static int rtc_base;
+
+static unsigned long __init get_isa_cmos_time(void)
+{
+ unsigned int year, mon, day, hour, min, sec;
+ int i;
+
+	/* check to see if the RTC makes sense... */
+ if ((CMOS_READ(RTC_VALID) & RTC_VRT) == 0)
+ return mktime(1970, 1, 1, 0, 0, 0);
+
+ /* The Linux interpretation of the CMOS clock register contents:
+ * When the Update-In-Progress (UIP) flag goes from 1 to 0, the
+ * RTC registers show the second which has precisely just started.
+ * Let's hope other operating systems interpret the RTC the same way.
+ */
+ /* read RTC exactly on falling edge of update flag */
+ for (i = 0 ; i < 1000000 ; i++) /* may take up to 1 second... */
+ if (CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP)
+ break;
+
+ for (i = 0 ; i < 1000000 ; i++) /* must try at least 2.228 ms */
+ if (!(CMOS_READ(RTC_FREQ_SELECT) & RTC_UIP))
+ break;
+
+	do { /* Isn't this overkill? UIP above should guarantee consistency */
+ sec = CMOS_READ(RTC_SECONDS);
+ min = CMOS_READ(RTC_MINUTES);
+ hour = CMOS_READ(RTC_HOURS);
+ day = CMOS_READ(RTC_DAY_OF_MONTH);
+ mon = CMOS_READ(RTC_MONTH);
+ year = CMOS_READ(RTC_YEAR);
+ } while (sec != CMOS_READ(RTC_SECONDS));
+
+ if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BCD_TO_BIN(sec);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(hour);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(year);
+ }
+ if ((year += 1900) < 1970)
+ year += 100;
+ return mktime(year, mon, day, hour, min, sec);
+}
+
+static int set_isa_cmos_time(void)
+{
+ int retval = 0;
+ int real_seconds, real_minutes, cmos_minutes;
+ unsigned char save_control, save_freq_select;
+ unsigned long nowtime = xtime.tv_sec;
+
+ save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
+ CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
+
+ save_freq_select = CMOS_READ(RTC_FREQ_SELECT); /* stop and reset prescaler */
+ CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
+
+ cmos_minutes = CMOS_READ(RTC_MINUTES);
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
+ BCD_TO_BIN(cmos_minutes);
+
+ /*
+ * since we're only adjusting minutes and seconds,
+ * don't interfere with hour overflow. This avoids
+ * messing with unknown time zones but requires your
+ * RTC not to be off by more than 15 minutes
+ */
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
+ real_minutes += 30; /* correct for half hour time zone */
+ real_minutes %= 60;
+
+ if (abs(real_minutes - cmos_minutes) < 30) {
+ if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ BIN_TO_BCD(real_seconds);
+ BIN_TO_BCD(real_minutes);
+ }
+ CMOS_WRITE(real_seconds,RTC_SECONDS);
+ CMOS_WRITE(real_minutes,RTC_MINUTES);
+ } else
+ retval = -1;
+
+ /* The following flags have to be released exactly in this order,
+ * otherwise the DS12887 (popular MC146818A clone with integrated
+ * battery and quartz) will not reset the oscillator and will not
+ * update precisely 500 ms later. You won't find this mentioned in
+ * the Dallas Semiconductor data sheets, but who believes data
+ * sheets anyway ... -- Markus Kuhn
+ */
+ CMOS_WRITE(save_control, RTC_CONTROL);
+ CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
+
+ return retval;
+}
+
+void __init isa_rtc_init(void)
+{
+ if (machine_is_co285() ||
+ machine_is_personal_server())
+ /*
+ * Add-in 21285s shouldn't access the RTC
+ */
+ rtc_base = 0;
+ else
+ rtc_base = 0x70;
+
+ if (rtc_base) {
+ int reg_d, reg_b;
+
+ /*
+ * Probe for the RTC.
+ */
+ reg_d = CMOS_READ(RTC_REG_D);
+
+ /*
+ * make sure the divider is set
+ */
+ CMOS_WRITE(RTC_REF_CLCK_32KHZ, RTC_REG_A);
+
+ /*
+ * Set control reg B
+ * (24 hour mode, update enabled)
+ */
+ reg_b = CMOS_READ(RTC_REG_B) & 0x7f;
+ reg_b |= 2;
+ CMOS_WRITE(reg_b, RTC_REG_B);
+
+ if ((CMOS_READ(RTC_REG_A) & 0x7f) == RTC_REF_CLCK_32KHZ &&
+ CMOS_READ(RTC_REG_B) == reg_b) {
+ struct timespec tv;
+
+ /*
+ * We have a RTC. Check the battery
+ */
+ if ((reg_d & 0x80) == 0)
+ printk(KERN_WARNING "RTC: *** warning: CMOS battery bad\n");
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = get_isa_cmos_time();
+ do_settimeofday(&tv);
+ set_rtc = set_isa_cmos_time;
+ } else
+ rtc_base = 0;
+ }
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-integrator/clock.c
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+
+#include <asm/semaphore.h>
+#include <asm/hardware/clock.h>
+#include <asm/hardware/icst525.h>
+
+#include "clock.h"
+
+static LIST_HEAD(clocks);
+static DECLARE_MUTEX(clocks_sem);
+
+struct clk *clk_get(struct device *dev, const char *id)
+{
+ struct clk *p, *clk = ERR_PTR(-ENOENT);
+
+ down(&clocks_sem);
+ list_for_each_entry(p, &clocks, node) {
+ if (strcmp(id, p->name) == 0 && try_module_get(p->owner)) {
+ clk = p;
+ break;
+ }
+ }
+ up(&clocks_sem);
+
+ return clk;
+}
+EXPORT_SYMBOL(clk_get);
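+
+/*
+ * Typical consumer usage (a sketch; error checking elided):
+ *
+ *	struct clk *clk = clk_get(NULL, "UARTCLK");
+ *	unsigned long rate = clk_get_rate(clk);
+ *	clk_put(clk);
+ *
+ * With the fixed clocks below, "UARTCLK" would report 14745600.
+ */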
+
+void clk_put(struct clk *clk)
+{
+ module_put(clk->owner);
+}
+EXPORT_SYMBOL(clk_put);
+
+int clk_enable(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_enable);
+
+void clk_disable(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_disable);
+
+int clk_use(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_use);
+
+void clk_unuse(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_unuse);
+
+unsigned long clk_get_rate(struct clk *clk)
+{
+ return clk->rate;
+}
+EXPORT_SYMBOL(clk_get_rate);
+
+long clk_round_rate(struct clk *clk, unsigned long rate)
+{
+ struct icst525_vco vco;
+
+ vco = icst525_khz_to_vco(clk->params, rate / 1000);
+ return icst525_khz(clk->params, vco) * 1000;
+}
+EXPORT_SYMBOL(clk_round_rate);
+
+int clk_set_rate(struct clk *clk, unsigned long rate)
+{
+ int ret = -EIO;
+ if (clk->setvco) {
+ struct icst525_vco vco;
+
+ vco = icst525_khz_to_vco(clk->params, rate / 1000);
+ clk->rate = icst525_khz(clk->params, vco) * 1000;
+
+ printk("Clock %s: setting VCO reg params: S=%d R=%d V=%d\n",
+ clk->name, vco.s, vco.r, vco.v);
+
+ clk->setvco(clk, vco);
+ ret = 0;
+ }
+	return ret;
+}
+EXPORT_SYMBOL(clk_set_rate);
+
+/*
+ * These are fixed clocks.
+ */
+static struct clk kmi_clk = {
+ .name = "KMIREFCLK",
+ .rate = 24000000,
+};
+
+static struct clk uart_clk = {
+ .name = "UARTCLK",
+ .rate = 14745600,
+};
+
+int clk_register(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_add(&clk->node, &clocks);
+ up(&clocks_sem);
+ return 0;
+}
+EXPORT_SYMBOL(clk_register);
+
+void clk_unregister(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_del(&clk->node);
+ up(&clocks_sem);
+}
+EXPORT_SYMBOL(clk_unregister);
+
+static int __init clk_init(void)
+{
+ clk_register(&kmi_clk);
+ clk_register(&uart_clk);
+ return 0;
+}
+arch_initcall(clk_init);
--- /dev/null
+if ARCH_IXP4XX
+
+config ARCH_SUPPORTS_BIG_ENDIAN
+ bool
+ default y
+
+menu "Intel IXP4xx Implementation Options"
+
+comment "IXP4xx Platforms"
+
+config ARCH_AVILA
+ bool "Avila"
+ help
+ Say 'Y' here if you want your kernel to support the Gateworks
+ Avila Network Platform. For more information on this platform,
+ see Documentation/arm/IXP4xx.
+
+config ARCH_ADI_COYOTE
+ bool "Coyote"
+ help
+ Say 'Y' here if you want your kernel to support the ADI
+ Engineering Coyote Gateway Reference Platform. For more
+ information on this platform, see Documentation/arm/IXP4xx.
+
+config ARCH_IXDP425
+ bool "IXDP425"
+ help
+ Say 'Y' here if you want your kernel to support Intel's
+ IXDP425 Development Platform (Also known as Richfield).
+ For more information on this platform, see Documentation/arm/IXP4xx.
+
+config MACH_IXDPG425
+ bool "IXDPG425"
+ help
+ Say 'Y' here if you want your kernel to support Intel's
+ IXDPG425 Development Platform (Also known as Montajade).
+ For more information on this platform, see Documentation/arm/IXP4xx.
+
+#
+# IXCDP1100 is the exact same HW as IXDP425, but with a different machine
+# number from the bootloader due to marketing monkeys, so we just enable it
+# by default if IXDP425 is enabled.
+#
+config ARCH_IXCDP1100
+ bool
+ depends on ARCH_IXDP425
+ default y
+
+config ARCH_PRPMC1100
+ bool "PrPMC1100"
+ help
+ Say 'Y' here if you want your kernel to support the Motorola
+	  PrPMC1100 Processor Mezzanine Module. For more information on
+ this platform, see Documentation/arm/IXP4xx.
+
+#
+# Avila and IXDP share the same source for now. Will change in future
+#
+config ARCH_IXDP4XX
+ bool
+ depends on ARCH_IXDP425 || ARCH_AVILA
+ default y
+
+comment "IXP4xx Options"
+
+config IXP4XX_INDIRECT_PCI
+ bool "Use indirect PCI memory access"
+ help
+ IXP4xx provides two methods of accessing PCI memory space:
+
+ 1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
+ To access PCI via this space, we simply ioremap() the BAR
+ into the kernel and we can use the standard read[bwl]/write[bwl]
+	     macros. This is the preferred method due to speed but it
+ limits the system to just 64MB of PCI memory. This can be
+	     problematic if using video cards and other memory-heavy devices.
+
+ 2) If > 64MB of memory space is required, the IXP4xx can be
+	     configured to use indirect registers to access PCI. This allows
+ for up to 128MB (0x48000000 to 0x4fffffff) of memory on the bus.
+	     The disadvantage of this is that every PCI access requires
+ three local register accesses plus a spinlock, but in some
+ cases the performance hit is acceptable. In addition, you cannot
+ mmap() PCI devices in this case due to the indirect nature
+ of the PCI window.
+
+ By default, the direct method is used. Choose this option if you
+ need to use the indirect method instead. If you don't know
+ what you need, leave this option unselected.
+
+endmenu
+
+endif
--- /dev/null
+#
+# Makefile for the linux kernel.
+#
+
+obj-y += common.o common-pci.o
+
+obj-$(CONFIG_ARCH_IXDP4XX) += ixdp425-pci.o ixdp425-setup.o
+obj-$(CONFIG_MACH_IXDPG425) += ixdpg425-pci.o coyote-setup.o
+obj-$(CONFIG_ARCH_ADI_COYOTE) += coyote-pci.o coyote-setup.o
+obj-$(CONFIG_ARCH_PRPMC1100) += prpmc1100-pci.o prpmc1100-setup.o
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/common-pci.c
+ *
+ * IXP4XX PCI routines for all platforms
+ *
+ * Maintainer: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003 Greg Ungerer <gerg@snapgear.com>
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <asm/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/sizes.h>
+#include <asm/system.h>
+#include <asm/mach/pci.h>
+#include <asm/hardware.h>
+
+
+/*
+ * IXP4xx PCI read function is dependent on whether we are
+ * running A0 or B0 (AppleGate) silicon.
+ */
+int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data);
+
+/*
+ * Base address for PCI register region
+ */
+unsigned long ixp4xx_pci_reg_base = 0;
+
+/*
+ * PCI cfg and I/O routines are done by programming a
+ * command/byte enable register, and then reading/writing
+ * the data from a data register. We need to ensure
+ * these transactions are atomic or we will end up
+ * with corrupt data on the bus or in a driver.
+ */
+static spinlock_t ixp4xx_pci_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Read from PCI config space
+ */
+static void crp_read(u32 ad_cbe, u32 *data)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+ *PCI_CRP_AD_CBE = ad_cbe;
+ *data = *PCI_CRP_RDATA;
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+}
+
+/*
+ * Write to PCI config space
+ */
+static void crp_write(u32 ad_cbe, u32 data)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+ *PCI_CRP_AD_CBE = CRP_AD_CBE_WRITE | ad_cbe;
+ *PCI_CRP_WDATA = data;
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+}
+
+static inline int check_master_abort(void)
+{
+ /* check Master Abort bit after access */
+ unsigned long isr = *PCI_ISR;
+
+ if (isr & PCI_ISR_PFE) {
+ /* make sure the Master Abort bit is reset */
+ *PCI_ISR = PCI_ISR_PFE;
+ pr_debug("%s failed\n", __FUNCTION__);
+ return 1;
+ }
+
+ return 0;
+}
+
+int ixp4xx_pci_read_errata(u32 addr, u32 cmd, u32* data)
+{
+ unsigned long flags;
+ int retval = 0;
+ int i;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /*
+ * PCI workaround - only works if NP PCI space reads have
+	 * no side effects!!! Read 8 times; the last one will be good.
+ */
+ for (i = 0; i < 8; i++) {
+ *PCI_NP_CBE = cmd;
+ *data = *PCI_NP_RDATA;
+ *data = *PCI_NP_RDATA;
+ }
+
+	if (check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
+int ixp4xx_pci_read_no_errata(u32 addr, u32 cmd, u32* data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /* set up and execute the read */
+ *PCI_NP_CBE = cmd;
+
+ /* the result of the read is now in NP_RDATA */
+ *data = *PCI_NP_RDATA;
+
+	if (check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
+int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&ixp4xx_pci_lock, flags);
+
+ *PCI_NP_AD = addr;
+
+ /* set up the write */
+ *PCI_NP_CBE = cmd;
+
+ /* execute the write by writing to NP_WDATA */
+ *PCI_NP_WDATA = data;
+
+	if (check_master_abort())
+ retval = 1;
+
+ spin_unlock_irqrestore(&ixp4xx_pci_lock, flags);
+ return retval;
+}
+
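+/*
+ * Build the address written to PCI_NP_AD for a config cycle.  Bus 0
+ * gets a type 0 cycle, where the IDSEL for slot n is driven by
+ * address bit (32 - n): e.g. slot 1, function 0, register 0x04
+ * encodes as 0x80000004.  Devices behind a bridge get a type 1
+ * cycle with bus/device/function packed in and bit 0 set.
+ */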
+static u32 ixp4xx_config_addr(u8 bus_num, u16 devfn, int where)
+{
+ u32 addr;
+ if (!bus_num) {
+ /* type 0 */
+ addr = BIT(32-PCI_SLOT(devfn)) | ((PCI_FUNC(devfn)) << 8) |
+ (where & ~3);
+ } else {
+ /* type 1 */
+ addr = (bus_num << 16) | ((PCI_SLOT(devfn)) << 11) |
+ ((PCI_FUNC(devfn)) << 8) | (where & ~3) | 1;
+ }
+ return addr;
+}
+
+/*
+ * Mask table, bits to mask for quantity of size 1, 2 or 4 bytes.
+ * 0 and 3 are not valid indexes...
+ */
+static u32 bytemask[] = {
+ /*0*/ 0,
+ /*1*/ 0xff,
+ /*2*/ 0xffff,
+ /*3*/ 0,
+ /*4*/ 0xffffffff,
+};
+
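+/*
+ * PCI byte enables are active low: a cleared bit selects that byte
+ * lane.  E.g. a 2-byte access at offset 2 yields 0x3 (lanes 2 and 3
+ * enabled, lanes 0 and 1 masked), shifted into the byte-enable field.
+ */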
+static u32 local_byte_lane_enable_bits(u32 n, int size)
+{
+ if (size == 1)
+ return (0xf & ~BIT(n)) << CRP_AD_CBE_BESL;
+ if (size == 2)
+ return (0xf & ~(BIT(n) | BIT(n+1))) << CRP_AD_CBE_BESL;
+ if (size == 4)
+ return 0;
+ return 0xffffffff;
+}
+
+static int local_read_config(int where, int size, u32 *value)
+{
+ u32 n, data;
+ pr_debug("local_read_config from %d size %d\n", where, size);
+ n = where % 4;
+ crp_read(where & ~3, &data);
+ *value = (data >> (8*n)) & bytemask[size];
+ pr_debug("local_read_config read %#x\n", *value);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int local_write_config(int where, int size, u32 value)
+{
+ u32 n, byte_enables, data;
+ pr_debug("local_write_config %#x to %d size %d\n", value, where, size);
+ n = where % 4;
+ byte_enables = local_byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+ data = value << (8*n);
+ crp_write((where & ~3) | byte_enables, data);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static u32 byte_lane_enable_bits(u32 n, int size)
+{
+ if (size == 1)
+ return (0xf & ~BIT(n)) << 4;
+ if (size == 2)
+ return (0xf & ~(BIT(n) | BIT(n+1))) << 4;
+ if (size == 4)
+ return 0;
+ return 0xffffffff;
+}
+
+static int ixp4xx_pci_read_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *value)
+{
+ u32 n, byte_enables, addr, data;
+ u8 bus_num = bus->number;
+
+ pr_debug("read_config from %d size %d dev %d:%d:%d\n", where, size,
+ bus_num, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+ *value = 0xffffffff;
+ n = where % 4;
+ byte_enables = byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ addr = ixp4xx_config_addr(bus_num, devfn, where);
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_CONFIGREAD, &data))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ *value = (data >> (8*n)) & bytemask[size];
+ pr_debug("read_config_byte read %#x\n", *value);
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int ixp4xx_pci_write_config(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 value)
+{
+ u32 n, byte_enables, addr, data;
+ u8 bus_num = bus->number;
+
+ pr_debug("write_config_byte %#x to %d size %d dev %d:%d:%d\n", value, where,
+ size, bus_num, PCI_SLOT(devfn), PCI_FUNC(devfn));
+
+ n = where % 4;
+ byte_enables = byte_lane_enable_bits(n, size);
+ if (byte_enables == 0xffffffff)
+ return PCIBIOS_BAD_REGISTER_NUMBER;
+
+ addr = ixp4xx_config_addr(bus_num, devfn, where);
+ data = value << (8*n);
+ if (ixp4xx_pci_write(addr, byte_enables | NP_CMD_CONFIGWRITE, data))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+struct pci_ops ixp4xx_ops = {
+ .read = ixp4xx_pci_read_config,
+ .write = ixp4xx_pci_write_config,
+};
+
+/*
+ * PCI abort handler
+ */
+static int abort_handler(unsigned long addr, unsigned int fsr, struct pt_regs *regs)
+{
+ u32 isr, status;
+
+ isr = *PCI_ISR;
+ local_read_config(PCI_STATUS, 2, &status);
+ pr_debug("PCI: abort_handler addr = %#lx, isr = %#x, "
+ "status = %#x\n", addr, isr, status);
+
+ /* make sure the Master Abort bit is reset */
+ *PCI_ISR = PCI_ISR_PFE;
+ status |= PCI_STATUS_REC_MASTER_ABORT;
+ local_write_config(PCI_STATUS, 2, status);
+
+ /*
+ * If it was an imprecise abort, then we need to correct the
+ * return address to be _after_ the instruction.
+ */
+ if (fsr & (1 << 10))
+ regs->ARM_pc += 4;
+
+ return 0;
+}
+
+
+/*
+ * Setup DMA mask to 64MB on PCI devices. Ignore all other devices.
+ */
+static int ixp4xx_pci_platform_notify(struct device *dev)
+{
+	if (dev->bus == &pci_bus_type) {
+ *dev->dma_mask = SZ_64M - 1;
+ dev->coherent_dma_mask = SZ_64M - 1;
+ dmabounce_register_dev(dev, 2048, 4096);
+ }
+ return 0;
+}
+
+static int ixp4xx_pci_platform_notify_remove(struct device *dev)
+{
+	if (dev->bus == &pci_bus_type) {
+ dmabounce_unregister_dev(dev);
+ }
+ return 0;
+}
+
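+/*
+ * A buffer must be bounced if any part of it lies beyond the first
+ * 64MB, i.e. outside the inbound PCI window.
+ */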
+int dma_needs_bounce(struct device *dev, dma_addr_t dma_addr, size_t size)
+{
+	return (dev->bus == &pci_bus_type) && ((dma_addr + size) >= SZ_64M);
+}
+
+void __init ixp4xx_pci_preinit(void)
+{
+ unsigned long processor_id;
+
+ asm("mrc p15, 0, %0, cr0, cr0, 0;" : "=r"(processor_id) :);
+
+ /*
+ * Determine which PCI read method to use
+ */
+ if (!(processor_id & 0xf)) {
+ printk("PCI: IXP4xx A0 silicon detected - "
+ "PCI Non-Prefetch Workaround Enabled\n");
+ ixp4xx_pci_read = ixp4xx_pci_read_errata;
+ } else
+ ixp4xx_pci_read = ixp4xx_pci_read_no_errata;
+
+
+ /* hook in our fault handler for PCI errors */
+ hook_fault_code(16+6, abort_handler, SIGBUS, "imprecise external abort");
+
+ pr_debug("setup PCI-AHB(inbound) and AHB-PCI(outbound) address mappings\n");
+
+ /*
+ * We use identity AHB->PCI address translation
+ * in the 0x48000000 to 0x4bffffff address space
+ */
+ *PCI_PCIMEMBASE = 0x48494A4B;
+
+ /*
+ * We also use identity PCI->AHB address translation
+ * in 4 16MB BARs that begin at the physical memory start
+ */
+ *PCI_AHBMEMBASE = (PHYS_OFFSET & 0xFF000000) +
+ ((PHYS_OFFSET & 0xFF000000) >> 8) +
+ ((PHYS_OFFSET & 0xFF000000) >> 16) +
+ ((PHYS_OFFSET & 0xFF000000) >> 24) +
+ 0x00010203;
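+	/*
+	 * Each byte of PCI_AHBMEMBASE supplies the top 8 address bits
+	 * for one of the four inbound BARs.  A worked example: with
+	 * PHYS_OFFSET at 0x10000000 the register becomes 0x10111213,
+	 * pointing the BARs at SDRAM at 0x10000000, 0x11000000,
+	 * 0x12000000 and 0x13000000.
+	 */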
+
+ if (*PCI_CSR & PCI_CSR_HOST) {
+ printk("PCI: IXP4xx is host\n");
+
+ pr_debug("setup BARs in controller\n");
+
+ /*
+ * We configure the PCI inbound memory windows to be
+ * 1:1 mapped to SDRAM
+ */
+ local_write_config(PCI_BASE_ADDRESS_0, 4, PHYS_OFFSET + 0x00000000);
+ local_write_config(PCI_BASE_ADDRESS_1, 4, PHYS_OFFSET + 0x01000000);
+ local_write_config(PCI_BASE_ADDRESS_2, 4, PHYS_OFFSET + 0x02000000);
+ local_write_config(PCI_BASE_ADDRESS_3, 4, PHYS_OFFSET + 0x03000000);
+
+ /*
+ * Enable CSR window at 0xff000000.
+ */
+ local_write_config(PCI_BASE_ADDRESS_4, 4, 0xff000008);
+
+ /*
+ * Enable the IO window to be way up high, at 0xfffffc00
+ */
+ local_write_config(PCI_BASE_ADDRESS_5, 4, 0xfffffc01);
+ } else {
+ printk("PCI: IXP4xx is target - No bus scan performed\n");
+ }
+
+ printk("PCI: IXP4xx Using %s access for memory space\n",
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+ "direct"
+#else
+ "indirect"
+#endif
+ );
+
+ pr_debug("clear error bits in ISR\n");
+ *PCI_ISR = PCI_ISR_PSE | PCI_ISR_PFE | PCI_ISR_PPE | PCI_ISR_AHBE;
+
+ /*
+ * Set Initialize Complete in PCI Control Register: allow IXP4XX to
+ * respond to PCI configuration cycles. Specify that the AHB bus is
+ * operating in big endian mode. Set up byte lane swapping between
+ * little-endian PCI and the big-endian AHB bus
+ */
+#ifdef __ARMEB__
+ *PCI_CSR = PCI_CSR_IC | PCI_CSR_ABE | PCI_CSR_PDS | PCI_CSR_ADS;
+#else
+ *PCI_CSR = PCI_CSR_IC;
+#endif
+
+ pr_debug("DONE\n");
+}
+
+int ixp4xx_setup(int nr, struct pci_sys_data *sys)
+{
+ struct resource *res;
+
+ if (nr >= 1)
+ return 0;
+
+ res = kmalloc(sizeof(*res) * 2, GFP_KERNEL);
+ if (res == NULL) {
+ /*
+ * If we're out of memory this early, something is wrong,
+ * so we might as well catch it here.
+ */
+ panic("PCI: unable to allocate resources?\n");
+ }
+ memset(res, 0, sizeof(*res) * 2);
+
+ local_write_config(PCI_COMMAND, 2, PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY);
+
+ res[0].name = "PCI I/O Space";
+ res[0].start = 0x00001000;
+ res[0].end = 0xffff0000;
+ res[0].flags = IORESOURCE_IO;
+
+ res[1].name = "PCI Memory Space";
+ res[1].start = 0x48000000;
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+ res[1].end = 0x4bffffff;
+#else
+ res[1].end = 0x4fffffff;
+#endif
+ res[1].flags = IORESOURCE_MEM;
+
+ request_resource(&ioport_resource, &res[0]);
+ request_resource(&iomem_resource, &res[1]);
+
+ sys->resource[0] = &res[0];
+ sys->resource[1] = &res[1];
+ sys->resource[2] = NULL;
+
+ platform_notify = ixp4xx_pci_platform_notify;
+ platform_notify_remove = ixp4xx_pci_platform_notify_remove;
+
+ return 1;
+}
+
+struct pci_bus *ixp4xx_scan_bus(int nr, struct pci_sys_data *sys)
+{
+ return pci_scan_bus(sys->busnr, &ixp4xx_ops, sys);
+}
+
+/*
+ * We override these so we properly do dmabounce otherwise drivers
+ * are able to set the dma_mask to 0xffffffff and we can no longer
+ * trap bounces. :(
+ *
+ * We accept any mask that covers at least the first 64MB and fail
+ * on anything smaller, since dmabounce cannot handle devices that
+ * are unable to address that range.
+ */
+int
+pci_set_dma_mask(struct pci_dev *dev, u64 mask)
+{
+	if (mask >= SZ_64M - 1)
+ return 0;
+
+ return -EIO;
+}
+
+int
+pci_dac_set_dma_mask(struct pci_dev *dev, u64 mask)
+{
+	if (mask >= SZ_64M - 1)
+ return 0;
+
+ return -EIO;
+}
+
+int
+pci_set_consistent_dma_mask(struct pci_dev *dev, u64 mask)
+{
+	if (mask >= SZ_64M - 1)
+ return 0;
+
+ return -EIO;
+}
+
+EXPORT_SYMBOL(pci_set_dma_mask);
+EXPORT_SYMBOL(pci_dac_set_dma_mask);
+EXPORT_SYMBOL(pci_set_consistent_dma_mask);
+EXPORT_SYMBOL(ixp4xx_pci_read);
+EXPORT_SYMBOL(ixp4xx_pci_write);
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/common.c
+ *
+ * Generic code shared across all IXP4XX platforms
+ *
+ * Maintainer: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright 2002 (c) Intel Corporation
+ * Copyright 2003-2004 (c) MontaVista, Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/sched.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+#include <linux/bootmem.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/time.h>
+#include <linux/timex.h>
+
+#include <asm/hardware.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/irq.h>
+
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
+
+
+/*************************************************************************
+ * GPIO access functions
+ *************************************************************************/
+
+/*
+ * Configure GPIO line for input, interrupt, or output operation
+ *
+ * TODO: Enable/disable the irq_desc based on interrupt or output mode.
+ * TODO: Should these be named ixp4xx_gpio_?
+ */
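+/*
+ * Example usage (a sketch): configure GPIO line 5 as a falling-edge
+ * interrupt source, and line 6 as an output:
+ *
+ *	gpio_line_config(5, IXP4XX_GPIO_IN | IXP4XX_GPIO_FALLING_EDGE);
+ *	gpio_line_config(6, IXP4XX_GPIO_OUT);
+ */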
+void gpio_line_config(u8 line, u32 style)
+{
+ u32 enable;
+ volatile u32 *int_reg;
+ u32 int_style;
+
+ enable = *IXP4XX_GPIO_GPOER;
+
+ if (style & IXP4XX_GPIO_OUT) {
+ enable &= ~((1) << line);
+ } else if (style & IXP4XX_GPIO_IN) {
+ enable |= ((1) << line);
+
+ switch (style & IXP4XX_GPIO_INTSTYLE_MASK)
+ {
+ case (IXP4XX_GPIO_ACTIVE_HIGH):
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_HIGH;
+ break;
+ case (IXP4XX_GPIO_ACTIVE_LOW):
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_LOW;
+ break;
+ case (IXP4XX_GPIO_RISING_EDGE):
+ int_style = IXP4XX_GPIO_STYLE_RISING_EDGE;
+ break;
+ case (IXP4XX_GPIO_FALLING_EDGE):
+ int_style = IXP4XX_GPIO_STYLE_FALLING_EDGE;
+ break;
+ case (IXP4XX_GPIO_TRANSITIONAL):
+ int_style = IXP4XX_GPIO_STYLE_TRANSITIONAL;
+ break;
+ default:
+ int_style = IXP4XX_GPIO_STYLE_ACTIVE_HIGH;
+ break;
+ }
+
+ if (line >= 8) { /* pins 8-15 */
+ line -= 8;
+ int_reg = IXP4XX_GPIO_GPIT2R;
+ }
+ else { /* pins 0-7 */
+ int_reg = IXP4XX_GPIO_GPIT1R;
+ }
+
+ /* Clear the style for the appropriate pin */
+ *int_reg &= ~(IXP4XX_GPIO_STYLE_CLEAR <<
+ (line * IXP4XX_GPIO_STYLE_SIZE));
+
+ /* Set the new style */
+ *int_reg |= (int_style << (line * IXP4XX_GPIO_STYLE_SIZE));
+ }
+
+ *IXP4XX_GPIO_GPOER = enable;
+}
+
+EXPORT_SYMBOL(gpio_line_config);
+
+/*************************************************************************
+ * IXP4xx chipset I/O mapping
+ *************************************************************************/
+static struct map_desc ixp4xx_io_desc[] __initdata = {
+ { /* UART, Interrupt ctrl, GPIO, timers, NPEs, MACs, USB .... */
+ .virtual = IXP4XX_PERIPHERAL_BASE_VIRT,
+ .physical = IXP4XX_PERIPHERAL_BASE_PHYS,
+ .length = IXP4XX_PERIPHERAL_REGION_SIZE,
+ .type = MT_DEVICE
+ }, { /* Expansion Bus Config Registers */
+ .virtual = IXP4XX_EXP_CFG_BASE_VIRT,
+ .physical = IXP4XX_EXP_CFG_BASE_PHYS,
+ .length = IXP4XX_EXP_CFG_REGION_SIZE,
+ .type = MT_DEVICE
+ }, { /* PCI Registers */
+ .virtual = IXP4XX_PCI_CFG_BASE_VIRT,
+ .physical = IXP4XX_PCI_CFG_BASE_PHYS,
+ .length = IXP4XX_PCI_CFG_REGION_SIZE,
+ .type = MT_DEVICE
+ }
+};
+
+void __init ixp4xx_map_io(void)
+{
+ iotable_init(ixp4xx_io_desc, ARRAY_SIZE(ixp4xx_io_desc));
+}
+
+
+/*************************************************************************
+ * IXP4xx chipset IRQ handling
+ *
+ * TODO: GPIO IRQs should be marked invalid until the user of the IRQ
+ * (be it PCI or something else) configures that GPIO line
+ * as an IRQ. Also, we should use a different chip structure for
+ * level-based GPIO vs edge-based GPIO. Currently nobody needs this as
+ * all HW that's publicly available uses level IRQs, so we'll
+ * worry about it if/when we have HW to test.
+ **************************************************************************/
+static void ixp4xx_irq_mask(unsigned int irq)
+{
+ *IXP4XX_ICMR &= ~(1 << irq);
+}
+
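+	/*
+	 * Example: the RTC reads :50 while system time says :20.  The
+	 * 30 minute difference is treated as a half-hour zone offset,
+	 * so :50 is written back rather than "correcting" the RTC.
+	 */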
+static void ixp4xx_irq_mask_ack(unsigned int irq)
+{
+ ixp4xx_irq_mask(irq);
+}
+
+static void ixp4xx_irq_unmask(unsigned int irq)
+{
+ static int irq2gpio[NR_IRQS] = {
+ -1, -1, -1, -1, -1, -1, 0, 1,
+ -1, -1, -1, -1, -1, -1, -1, -1,
+ -1, -1, -1, 2, 3, 4, 5, 6,
+ 7, 8, 9, 10, 11, 12, -1, -1,
+ };
+ int line = irq2gpio[irq];
+
+ /*
+ * This only works for LEVEL gpio IRQs as per the IXP4xx developer's
+ * manual. If edge-triggered, need to move it to the mask_ack.
+ * Nobody seems to be using the edge-triggered mode on the GPIOs.
+ */
+ if (line >= 0)
+ gpio_line_isr_clear(line);
+
+ *IXP4XX_ICMR |= (1 << irq);
+}
+
+static struct irqchip ixp4xx_irq_chip = {
+ .ack = ixp4xx_irq_mask_ack,
+ .mask = ixp4xx_irq_mask,
+ .unmask = ixp4xx_irq_unmask,
+};
+
+void __init ixp4xx_init_irq(void)
+{
+ int i = 0;
+
+ /* Route all sources to IRQ instead of FIQ */
+ *IXP4XX_ICLR = 0x0;
+
+	/* Disable all interrupts */
+	*IXP4XX_ICMR = 0x0;
+
+	for (i = 0; i < NR_IRQS; i++) {
+		set_irq_chip(i, &ixp4xx_irq_chip);
+		set_irq_handler(i, do_level_IRQ);
+		set_irq_flags(i, IRQF_VALID);
+	}
+}
+
+
+/*************************************************************************
+ * IXP4xx timer tick
+ * We use OS timer1 on the CPU for the timer tick and the timestamp
+ * counter as a source of real clock ticks to account for missed jiffies.
+ *************************************************************************/
+
+static unsigned volatile last_jiffy_time;
+
+#define CLOCK_TICKS_PER_USEC ((CLOCK_TICK_RATE + USEC_PER_SEC/2) / USEC_PER_SEC)
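+/* i.e. timestamp counter ticks per microsecond, rounded to nearest */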
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long ixp4xx_gettimeoffset(void)
+{
+ u32 elapsed;
+
+ elapsed = *IXP4XX_OSTS - last_jiffy_time;
+
+ return elapsed / CLOCK_TICKS_PER_USEC;
+}
+
+static irqreturn_t ixp4xx_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ write_seqlock(&xtime_lock);
+
+ /* Clear Pending Interrupt by writing '1' to it */
+ *IXP4XX_OSST = IXP4XX_OSST_TIMER_1_PEND;
+
+ /*
+ * Catch up with the real idea of time
+ */
+ while ((*IXP4XX_OSTS - last_jiffy_time) > LATCH) {
+ timer_tick(regs);
+ last_jiffy_time += LATCH;
+ }
+
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction ixp4xx_timer_irq = {
+ .name = "IXP4xx Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = ixp4xx_timer_interrupt
+};
+
+static void __init ixp4xx_timer_init(void)
+{
+ /* Clear Pending Interrupt by writing '1' to it */
+ *IXP4XX_OSST = IXP4XX_OSST_TIMER_1_PEND;
+
+ /* Setup the Timer counter value */
+ *IXP4XX_OSRT1 = (LATCH & ~IXP4XX_OST_RELOAD_MASK) | IXP4XX_OST_ENABLE;
+
+ /* Reset time-stamp counter */
+ *IXP4XX_OSTS = 0;
+ last_jiffy_time = 0;
+
+ /* Connect the interrupt handler and enable the interrupt */
+ setup_irq(IRQ_IXP4XX_TIMER1, &ixp4xx_timer_irq);
+}
+
+struct sys_timer ixp4xx_timer = {
+ .init = ixp4xx_timer_init,
+ .offset = ixp4xx_gettimeoffset,
+};
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/coyote-setup.c
+ *
+ * Board setup for the ADI Engineering Coyote and Intel IXDPG425 boards
+ *
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/serial.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+
+#ifdef __ARMEB__
+#define REG_OFFSET 3
+#else
+#define REG_OFFSET 0
+#endif
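+
+/*
+ * With regshift = 2 each 8250 register occupies a 32-bit word.  On a
+ * big-endian (ARMEB) bus the byte-wide register sits in the last
+ * byte lane of the word, hence the extra offset of 3.
+ */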
+
+/*
+ * Only one serial port is connected on the Coyote & IXDPG425
+ */
+static struct uart_port coyote_serial_port = {
+ .membase = (char*)(IXP4XX_UART2_BASE_VIRT + REG_OFFSET),
+ .mapbase = (IXP4XX_UART2_BASE_PHYS),
+ .irq = IRQ_IXP4XX_UART2,
+ .flags = UPF_SKIP_TEST,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ .line = 0,
+ .type = PORT_XSCALE,
+ .fifosize = 32
+};
+
+void __init coyote_map_io(void)
+{
+ if (machine_is_ixdpg425()) {
+ coyote_serial_port.membase =
+ (char*)(IXP4XX_UART1_BASE_VIRT + REG_OFFSET);
+ coyote_serial_port.mapbase = IXP4XX_UART1_BASE_PHYS;
+ coyote_serial_port.irq = IRQ_IXP4XX_UART1;
+ }
+
+ early_serial_setup(&coyote_serial_port);
+
+ ixp4xx_map_io();
+}
+
+static struct flash_platform_data coyote_flash_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+};
+
+static struct resource coyote_flash_resource = {
+ .start = COYOTE_FLASH_BASE,
+ .end = COYOTE_FLASH_BASE + COYOTE_FLASH_SIZE,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device coyote_flash = {
+ .name = "IXP4XX-Flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &coyote_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &coyote_flash_resource,
+};
+
+static struct platform_device *coyote_devices[] __initdata = {
+ &coyote_flash
+};
+
+static void __init coyote_init(void)
+{
+ *IXP4XX_EXP_CS0 |= IXP4XX_FLASH_WRITABLE;
+ *IXP4XX_EXP_CS1 = *IXP4XX_EXP_CS0;
+
+	platform_add_devices(coyote_devices, ARRAY_SIZE(coyote_devices));
+}
+
+#ifdef CONFIG_ARCH_ADI_COYOTE
+MACHINE_START(ADI_COYOTE, "ADI Engineering Coyote")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(coyote_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(coyote_init)
+MACHINE_END
+#endif
+
+/*
+ * IXDPG425 is identical to Coyote except for which serial port
+ * is connected.
+ */
+#ifdef CONFIG_MACH_IXDPG425
+MACHINE_START(IXDPG425, "Intel IXDPG425")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(coyote_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(coyote_init)
+MACHINE_END
+#endif
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/ixdp425-setup.c
+ *
+ * IXDP425/IXCDP1100 board-setup
+ *
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/serial.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/irq.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+
+#ifdef __ARMEB__
+#define REG_OFFSET 3
+#else
+#define REG_OFFSET 0
+#endif
+
+/*
+ * IXDP425 uses both chipset serial ports
+ */
+static struct uart_port ixdp425_serial_ports[] = {
+ {
+ .membase = (char*)(IXP4XX_UART1_BASE_VIRT + REG_OFFSET),
+ .mapbase = (IXP4XX_UART1_BASE_PHYS),
+ .irq = IRQ_IXP4XX_UART1,
+ .flags = UPF_SKIP_TEST,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ .line = 0,
+ .type = PORT_XSCALE,
+ .fifosize = 32
+ } , {
+ .membase = (char*)(IXP4XX_UART2_BASE_VIRT + REG_OFFSET),
+ .mapbase = (IXP4XX_UART2_BASE_PHYS),
+ .irq = IRQ_IXP4XX_UART2,
+ .flags = UPF_SKIP_TEST,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ .line = 1,
+ .type = PORT_XSCALE,
+ .fifosize = 32
+ }
+};
+
+void __init ixdp425_map_io(void)
+{
+ early_serial_setup(&ixdp425_serial_ports[0]);
+ early_serial_setup(&ixdp425_serial_ports[1]);
+
+ ixp4xx_map_io();
+}
+
+static struct flash_platform_data ixdp425_flash_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+};
+
+static struct resource ixdp425_flash_resource = {
+ .start = IXDP425_FLASH_BASE,
+ .end = IXDP425_FLASH_BASE + IXDP425_FLASH_SIZE,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device ixdp425_flash = {
+ .name = "IXP4XX-Flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &ixdp425_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &ixdp425_flash_resource,
+};
+
+static struct ixp4xx_i2c_pins ixdp425_i2c_gpio_pins = {
+ .sda_pin = IXDP425_SDA_PIN,
+ .scl_pin = IXDP425_SCL_PIN,
+};
+
+static struct platform_device ixdp425_i2c_controller = {
+ .name = "IXP4XX-I2C",
+ .id = 0,
+ .dev = {
+ .platform_data = &ixdp425_i2c_gpio_pins,
+ },
+ .num_resources = 0
+};
+
+static struct platform_device *ixdp425_devices[] __initdata = {
+ &ixdp425_i2c_controller,
+ &ixdp425_flash
+};
+
+static void __init ixdp425_init(void)
+{
+	platform_add_devices(ixdp425_devices, ARRAY_SIZE(ixdp425_devices));
+}
+
+MACHINE_START(IXDP425, "Intel IXDP425 Development Platform")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(ixdp425_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(ixdp425_init)
+MACHINE_END
+
+MACHINE_START(IXCDP1100, "Intel IXCDP1100 Development Platform")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(ixdp425_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(ixdp425_init)
+MACHINE_END
+
+/*
+ * Avila is functionally equivalent to IXDP425 except that it adds
+ * a CF IDE slot hanging off the expansion bus. When we have a
+ * driver for IXP4xx CF IDE with driver model support we'll move
+ * Avila to its own setup file.
+ */
+#ifdef CONFIG_ARCH_AVILA
+MACHINE_START(AVILA, "Gateworks Avila Network Platform")
+ MAINTAINER("Deepak Saxena <dsaxena@plexity.net>")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(ixdp425_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(ixdp425_init)
+MACHINE_END
+#endif
+
--- /dev/null
+/*
+ * arch/arm/mach-ixp4xx/prpmc1100-setup.c
+ *
+ * Motorola PrPMC1100 board setup
+ *
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/serial.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+
+#ifdef __ARMEB__
+#define REG_OFFSET 3
+#else
+#define REG_OFFSET 0
+#endif
+
+/*
+ * Only one serial port is connected on the PrPMC1100
+ */
+static struct uart_port prpmc1100_serial_port = {
+ .membase = (char*)(IXP4XX_UART1_BASE_VIRT + REG_OFFSET),
+ .mapbase = (IXP4XX_UART1_BASE_PHYS),
+ .irq = IRQ_IXP4XX_UART1,
+ .flags = UPF_SKIP_TEST,
+ .iotype = UPIO_MEM,
+ .regshift = 2,
+ .uartclk = IXP4XX_UART_XTAL,
+ .line = 0,
+ .type = PORT_XSCALE,
+ .fifosize = 32
+};
+
+void __init prpmc1100_map_io(void)
+{
+ early_serial_setup(&prpmc1100_serial_port);
+
+ ixp4xx_map_io();
+}
+
+static struct flash_platform_data prpmc1100_flash_data = {
+ .map_name = "cfi_probe",
+ .width = 2,
+};
+
+static struct resource prpmc1100_flash_resource = {
+ .start = PRPMC1100_FLASH_BASE,
+ .end = PRPMC1100_FLASH_BASE + PRPMC1100_FLASH_SIZE,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device prpmc1100_flash = {
+ .name = "IXP4XX-Flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &prpmc1100_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &prpmc1100_flash_resource,
+};
+
+static struct platform_device *prpmc1100_devices[] __initdata = {
+ &prpmc1100_flash
+};
+
+static void __init prpmc1100_init(void)
+{
+	platform_add_devices(prpmc1100_devices, ARRAY_SIZE(prpmc1100_devices));
+}
+
+MACHINE_START(PRPMC1100, "Motorola PrPMC1100")
+ MAINTAINER("MontaVista Software, Inc.")
+ BOOT_MEM(PHYS_OFFSET, IXP4XX_PERIPHERAL_BASE_PHYS,
+ IXP4XX_PERIPHERAL_BASE_VIRT)
+ MAPIO(prpmc1100_map_io)
+ INITIRQ(ixp4xx_init_irq)
+ .timer = &ixp4xx_timer,
+ BOOT_PARAMS(0x0100)
+ INIT_MACHINE(prpmc1100_init)
+MACHINE_END
+
--- /dev/null
+/*
+ * arch/arm/mach-lh7a40x/time.c
+ *
+ * Copyright (C) 2004 Logic Product Development
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/leds.h>
+
+#include <asm/mach/time.h>
+#include "common.h"
+
+#if HZ < 100
+# define TIMER_CONTROL TIMER_CONTROL2
+# define TIMER_LOAD TIMER_LOAD2
+# define TIMER_CONSTANT (508469/HZ)
+# define TIMER_MODE (TIMER_C_ENABLE | TIMER_C_PERIODIC | TIMER_C_508KHZ)
+# define TIMER_EOI TIMER_EOI2
+# define TIMER_IRQ IRQ_T2UI
+#else
+# define TIMER_CONTROL TIMER_CONTROL3
+# define TIMER_LOAD TIMER_LOAD3
+# define TIMER_CONSTANT (3686400/HZ)
+# define TIMER_MODE (TIMER_C_ENABLE | TIMER_C_PERIODIC)
+# define TIMER_EOI TIMER_EOI3
+# define TIMER_IRQ IRQ_T3UI
+#endif
+
+static irqreturn_t
+lh7a40x_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ write_seqlock(&xtime_lock);
+
+ TIMER_EOI = 0;
+ timer_tick(regs);
+
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction lh7a40x_timer_irq = {
+	.name		= "LH7A40x Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = lh7a40x_timer_interrupt
+};
+
+static void __init lh7a40x_timer_init(void)
+{
+ /* Stop/disable all timers */
+ TIMER_CONTROL1 = 0;
+ TIMER_CONTROL2 = 0;
+ TIMER_CONTROL3 = 0;
+
+ setup_irq (TIMER_IRQ, &lh7a40x_timer_irq);
+
+ TIMER_LOAD = TIMER_CONSTANT;
+ TIMER_CONTROL = TIMER_MODE;
+}
+
+struct sys_timer lh7a40x_timer = {
+ .init = &lh7a40x_timer_init,
+};
--- /dev/null
+/*
+ * linux/arch/arm/mach-omap/time.c
+ *
+ * OMAP Timers
+ *
+ * Copyright (C) 2004 Nokia Corporation
+ * Partial timer rewrite and additional VST timer support by
+ * Tony Lindgren <tony@atomide.com> and
+ * Tuukka Tikkanen <tuukka.tikkanen@elektrobit.com>
+ *
+ * MPU timer code based on the older MPU timer code for OMAP
+ * Copyright (C) 2000 RidgeRun, Inc.
+ * Author: Greg Lonnon <glonnon@ridgerun.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/leds.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
+
+struct sys_timer omap_timer;
+
+/*
+ * ---------------------------------------------------------------------------
+ * MPU timer
+ * ---------------------------------------------------------------------------
+ */
+#define OMAP_MPU_TIMER1_BASE (0xfffec500)
+#define OMAP_MPU_TIMER2_BASE (0xfffec600)
+#define OMAP_MPU_TIMER3_BASE (0xfffec700)
+#define OMAP_MPU_TIMER_BASE OMAP_MPU_TIMER1_BASE
+#define OMAP_MPU_TIMER_OFFSET 0x100
+
+#define MPU_TIMER_FREE (1 << 6)
+#define MPU_TIMER_CLOCK_ENABLE (1 << 5)
+#define MPU_TIMER_AR (1 << 1)
+#define MPU_TIMER_ST (1 << 0)
+
+/*
+ * MPU_TICKS_PER_SEC must be an even number, otherwise machinecycles_to_usecs
+ * will break. On P2, the timer count rate is 6.5 MHz after programming PTV
+ * with 0. This divides the 13MHz input by 2, and is undocumented.
+ */
+#ifdef CONFIG_MACH_OMAP_PERSEUS2
+/* REVISIT: This ifdef construct should be replaced by a query to clock
+ * framework to see if timer base frequency is 12.0, 13.0 or 19.2 MHz.
+ */
+#define MPU_TICKS_PER_SEC (13000000 / 2)
+#else
+#define MPU_TICKS_PER_SEC (12000000 / 2)
+#endif
+
+#define MPU_TIMER_TICK_PERIOD ((MPU_TICKS_PER_SEC / HZ) - 1)
+
+typedef struct {
+ u32 cntl; /* CNTL_TIMER, R/W */
+ u32 load_tim; /* LOAD_TIM, W */
+ u32 read_tim; /* READ_TIM, R */
+} omap_mpu_timer_regs_t;
+
+#define omap_mpu_timer_base(n) \
+((volatile omap_mpu_timer_regs_t*)IO_ADDRESS(OMAP_MPU_TIMER_BASE + \
+ (n)*OMAP_MPU_TIMER_OFFSET))
+
+static inline unsigned long omap_mpu_timer_read(int nr)
+{
+ volatile omap_mpu_timer_regs_t* timer = omap_mpu_timer_base(nr);
+ return timer->read_tim;
+}
+
+static inline void omap_mpu_timer_start(int nr, unsigned long load_val)
+{
+ volatile omap_mpu_timer_regs_t* timer = omap_mpu_timer_base(nr);
+
+ timer->cntl = MPU_TIMER_CLOCK_ENABLE;
+ udelay(1);
+ timer->load_tim = load_val;
+ udelay(1);
+ timer->cntl = (MPU_TIMER_CLOCK_ENABLE | MPU_TIMER_AR | MPU_TIMER_ST);
+}
+
+unsigned long omap_mpu_timer_ticks_to_usecs(unsigned long nr_ticks)
+{
+ /* Round up to nearest usec */
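+	/*
+	 * This works out to (2 * usecs + 1) >> 1: the value is computed
+	 * at half-usec resolution and then rounded to the nearest usec.
+	 */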
+ return ((nr_ticks * 1000) / (MPU_TICKS_PER_SEC / 2 / 1000) + 1) >> 1;
+}
+
+/*
+ * Last processed system timer interrupt
+ */
+static unsigned long omap_mpu_timer_last = 0;
+
+/*
+ * Returns elapsed usecs since last system timer interrupt
+ */
+static unsigned long omap_mpu_timer_gettimeoffset(void)
+{
+ unsigned long now = 0 - omap_mpu_timer_read(0);
+ unsigned long elapsed = now - omap_mpu_timer_last;
+
+ return omap_mpu_timer_ticks_to_usecs(elapsed);
+}
+
+/*
+ * Elapsed time between interrupts is calculated using timer0.
+ * Latency during the interrupt is calculated using timer1.
+ * Both timer0 and timer1 are counting at 6MHz (P2 6.5MHz).
+ */
+static irqreturn_t omap_mpu_timer_interrupt(int irq, void *dev_id,
+ struct pt_regs *regs)
+{
+ unsigned long now, latency;
+
+ write_seqlock(&xtime_lock);
+ now = 0 - omap_mpu_timer_read(0);
+ latency = MPU_TICKS_PER_SEC / HZ - omap_mpu_timer_read(1);
+ omap_mpu_timer_last = now - latency;
+ timer_tick(regs);
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction omap_mpu_timer_irq = {
+ .name = "mpu timer",
+ .flags = SA_INTERRUPT,
+ .handler = omap_mpu_timer_interrupt
+};
+
+static __init void omap_init_mpu_timer(void)
+{
+ omap_timer.offset = omap_mpu_timer_gettimeoffset;
+ setup_irq(INT_TIMER2, &omap_mpu_timer_irq);
+ omap_mpu_timer_start(0, 0xffffffff);
+ omap_mpu_timer_start(1, MPU_TIMER_TICK_PERIOD);
+}
+
+/*
+ * ---------------------------------------------------------------------------
+ * Timer initialization
+ * ---------------------------------------------------------------------------
+ */
+void __init omap_timer_init(void)
+{
+ omap_init_mpu_timer();
+}
+
+struct sys_timer omap_timer = {
+ .init = omap_timer_init,
+ .offset = NULL, /* Initialized later */
+};
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/leds-mainstone.c
+ *
+ * Author: Nicolas Pitre
+ * Created: Nov 05, 2002
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+
+#include <asm/hardware.h>
+#include <asm/leds.h>
+#include <asm/system.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/mainstone.h>
+
+#include "leds.h"
+
+
+/* 8 discrete leds available for general use: */
+#define D28 (1 << 0)
+#define D27 (1 << 1)
+#define D26 (1 << 2)
+#define D25 (1 << 3)
+#define D24 (1 << 4)
+#define D23 (1 << 5)
+#define D22 (1 << 6)
+#define D21 (1 << 7)
+
+#define LED_STATE_ENABLED 1
+#define LED_STATE_CLAIMED 2
+
+static unsigned int led_state;
+static unsigned int hw_led_state;
+
+void mainstone_leds_event(led_event_t evt)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+
+ switch (evt) {
+ case led_start:
+ hw_led_state = 0;
+ led_state = LED_STATE_ENABLED;
+ break;
+
+ case led_stop:
+ led_state &= ~LED_STATE_ENABLED;
+ break;
+
+ case led_claim:
+ led_state |= LED_STATE_CLAIMED;
+ hw_led_state = 0;
+ break;
+
+ case led_release:
+ led_state &= ~LED_STATE_CLAIMED;
+ hw_led_state = 0;
+ break;
+
+#ifdef CONFIG_LEDS_TIMER
+ case led_timer:
+ hw_led_state ^= D26;
+ break;
+#endif
+
+#ifdef CONFIG_LEDS_CPU
+ case led_idle_start:
+ hw_led_state &= ~D27;
+ break;
+
+ case led_idle_end:
+ hw_led_state |= D27;
+ break;
+#endif
+
+ case led_halted:
+ break;
+
+ case led_green_on:
+		hw_led_state |= D21;
+ break;
+
+ case led_green_off:
+ hw_led_state &= ~D21;
+ break;
+
+ case led_amber_on:
+		hw_led_state |= D22;
+ break;
+
+ case led_amber_off:
+ hw_led_state &= ~D22;
+ break;
+
+ case led_red_on:
+		hw_led_state |= D23;
+ break;
+
+ case led_red_off:
+ hw_led_state &= ~D23;
+ break;
+
+ default:
+ break;
+ }
+
+ if (led_state & LED_STATE_ENABLED)
+ MST_LEDCTRL = (MST_LEDCTRL | 0xff) & ~hw_led_state;
+ else
+ MST_LEDCTRL |= 0xff;
+
+ local_irq_restore(flags);
+}
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/mainstone.c
+ *
+ * Support for the Intel HCDDBBVA0 Development Platform.
+ * (go figure how they came up with such a name...)
+ *
+ * Author: Nicolas Pitre
+ * Created: Nov 05, 2002
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <linux/bitops.h>
+#include <linux/fb.h>
+
+#include <asm/types.h>
+#include <asm/setup.h>
+#include <asm/memory.h>
+#include <asm/mach-types.h>
+#include <asm/hardware.h>
+#include <asm/irq.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/mainstone.h>
+#include <asm/arch/pxafb.h>
+#include <asm/arch/mmc.h>
+
+#include "generic.h"
+
+
+static unsigned long mainstone_irq_enabled;
+
+static void mainstone_mask_irq(unsigned int irq)
+{
+ int mainstone_irq = (irq - MAINSTONE_IRQ(0));
+ MST_INTMSKENA = (mainstone_irq_enabled &= ~(1 << mainstone_irq));
+}
+
+static void mainstone_unmask_irq(unsigned int irq)
+{
+ int mainstone_irq = (irq - MAINSTONE_IRQ(0));
+ /* the irq can be acknowledged only if deasserted, so it's done here */
+ MST_INTSETCLR &= ~(1 << mainstone_irq);
+ MST_INTMSKENA = (mainstone_irq_enabled |= (1 << mainstone_irq));
+}
+
+static struct irqchip mainstone_irq_chip = {
+ .ack = mainstone_mask_irq,
+ .mask = mainstone_mask_irq,
+ .unmask = mainstone_unmask_irq,
+};
+
+
+static void mainstone_irq_handler(unsigned int irq, struct irqdesc *desc,
+ struct pt_regs *regs)
+{
+ unsigned long pending = MST_INTSETCLR & mainstone_irq_enabled;
+ do {
+ GEDR(0) = GPIO_bit(0); /* clear useless edge notification */
+ if (likely(pending)) {
+ irq = MAINSTONE_IRQ(0) + __ffs(pending);
+ desc = irq_desc + irq;
+ desc->handle(irq, desc, regs);
+ }
+ pending = MST_INTSETCLR & mainstone_irq_enabled;
+ } while (pending);
+}
+
+static void __init mainstone_init_irq(void)
+{
+ int irq;
+
+ pxa_init_irq();
+
+ /* setup extra Mainstone irqs */
+	for (irq = MAINSTONE_IRQ(0); irq <= MAINSTONE_IRQ(15); irq++) {
+ set_irq_chip(irq, &mainstone_irq_chip);
+ set_irq_handler(irq, do_level_IRQ);
+ set_irq_flags(irq, IRQF_VALID | IRQF_PROBE);
+ }
+ set_irq_flags(MAINSTONE_IRQ(8), 0);
+ set_irq_flags(MAINSTONE_IRQ(12), 0);
+
+ MST_INTMSKENA = 0;
+ MST_INTSETCLR = 0;
+
+ set_irq_chained_handler(IRQ_GPIO(0), mainstone_irq_handler);
+ set_irq_type(IRQ_GPIO(0), IRQT_FALLING);
+}
+
+
+static struct resource smc91x_resources[] = {
+ [0] = {
+ .start = (MST_ETH_PHYS + 0x300),
+ .end = (MST_ETH_PHYS + 0xfffff),
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = MAINSTONE_IRQ(3),
+ .end = MAINSTONE_IRQ(3),
+ .flags = IORESOURCE_IRQ,
+ }
+};
+
+static struct platform_device smc91x_device = {
+ .name = "smc91x",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(smc91x_resources),
+ .resource = smc91x_resources,
+};
+
+
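+/*
+ * The backlight hangs off PWM0: a duty cycle of 0x3ff against a
+ * 0x3ff period is full brightness; a zero duty cycle turns it off.
+ */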
+static void mainstone_backlight_power(int on)
+{
+ if (on) {
+ pxa_gpio_mode(GPIO16_PWM0_MD);
+ pxa_set_cken(CKEN0_PWM0, 1);
+ PWM_CTRL0 = 0;
+ PWM_PWDUTY0 = 0x3ff;
+ PWM_PERVAL0 = 0x3ff;
+ } else {
+ PWM_CTRL0 = 0;
+ PWM_PWDUTY0 = 0x0;
+ PWM_PERVAL0 = 0x3FF;
+ pxa_set_cken(CKEN0_PWM0, 0);
+ }
+}
+
+static struct pxafb_mach_info toshiba_ltm04c380k __initdata = {
+ .pixclock = 50000,
+ .xres = 640,
+ .yres = 480,
+ .bpp = 16,
+ .hsync_len = 1,
+ .left_margin = 0x9f,
+ .right_margin = 1,
+ .vsync_len = 44,
+ .upper_margin = 0,
+ .lower_margin = 0,
+ .sync = FB_SYNC_HOR_HIGH_ACT|FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = LCCR0_Act,
+ .lccr3 = LCCR3_PCP,
+ .pxafb_backlight_power = mainstone_backlight_power,
+};
+
+static struct pxafb_mach_info toshiba_ltm035a776c __initdata = {
+ .pixclock = 110000,
+ .xres = 240,
+ .yres = 320,
+ .bpp = 16,
+ .hsync_len = 4,
+ .left_margin = 8,
+ .right_margin = 20,
+ .vsync_len = 3,
+ .upper_margin = 1,
+ .lower_margin = 10,
+ .sync = FB_SYNC_HOR_HIGH_ACT|FB_SYNC_VERT_HIGH_ACT,
+ .lccr0 = LCCR0_Act,
+ .lccr3 = LCCR3_PCP,
+ .pxafb_backlight_power = mainstone_backlight_power,
+};
+
+static int mainstone_mci_init(struct device *dev, irqreturn_t (*mstone_detect_int)(int, void *, struct pt_regs *), void *data)
+{
+ int err;
+
+ /*
+ * setup GPIO for PXA27x MMC controller
+ */
+ pxa_gpio_mode(GPIO32_MMCCLK_MD);
+ pxa_gpio_mode(GPIO112_MMCCMD_MD);
+ pxa_gpio_mode(GPIO92_MMCDAT0_MD);
+ pxa_gpio_mode(GPIO109_MMCDAT1_MD);
+ pxa_gpio_mode(GPIO110_MMCDAT2_MD);
+ pxa_gpio_mode(GPIO111_MMCDAT3_MD);
+
+ /* make sure SD/Memory Stick multiplexer's signals
+ * are routed to MMC controller
+ */
+ MST_MSCWR1 &= ~MST_MSCWR1_MS_SEL;
+
+ err = request_irq(MAINSTONE_MMC_IRQ, mstone_detect_int, SA_INTERRUPT,
+ "MMC card detect", data);
+ if (err) {
+ printk(KERN_ERR "mainstone_mci_init: MMC/SD: can't request MMC card detect IRQ\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static void mainstone_mci_setpower(struct device *dev, unsigned int vdd)
+{
+ struct pxamci_platform_data* p_d = dev->platform_data;
+
+ if (( 1 << vdd) & p_d->ocr_mask) {
+ printk(KERN_DEBUG "%s: on\n", __FUNCTION__);
+ MST_MSCWR1 |= MST_MSCWR1_MMC_ON;
+ MST_MSCWR1 &= ~MST_MSCWR1_MS_SEL;
+ } else {
+ printk(KERN_DEBUG "%s: off\n", __FUNCTION__);
+ MST_MSCWR1 &= ~MST_MSCWR1_MMC_ON;
+ }
+}
+
+static void mainstone_mci_exit(struct device *dev, void *data)
+{
+ free_irq(MAINSTONE_MMC_IRQ, data);
+}
+
+static struct pxamci_platform_data mainstone_mci_platform_data = {
+ .ocr_mask = MMC_VDD_32_33|MMC_VDD_33_34,
+ .init = mainstone_mci_init,
+ .setpower = mainstone_mci_setpower,
+ .exit = mainstone_mci_exit,
+};
+
+static void __init mainstone_init(void)
+{
+ platform_device_register(&smc91x_device);
+
+ /* reading Mainstone's "Virtual Configuration Register"
+ might be handy to select LCD type here */
+ if (0)
+ set_pxa_fb_info(&toshiba_ltm04c380k);
+ else
+ set_pxa_fb_info(&toshiba_ltm035a776c);
+
+ pxa_set_mci_info(&mainstone_mci_platform_data);
+}
+
+
+static struct map_desc mainstone_io_desc[] __initdata = {
+ { MST_FPGA_VIRT, MST_FPGA_PHYS, 0x00100000, MT_DEVICE }, /* CPLD */
+};
+
+static void __init mainstone_map_io(void)
+{
+ pxa_map_io();
+ iotable_init(mainstone_io_desc, ARRAY_SIZE(mainstone_io_desc));
+}
+
+MACHINE_START(MAINSTONE, "Intel HCDDBBVA0 Development Platform (aka Mainstone)")
+ MAINTAINER("MontaVista Software Inc.")
+ BOOT_MEM(0xa0000000, 0x40000000, io_p2v(0x40000000))
+ MAPIO(mainstone_map_io)
+ INITIRQ(mainstone_init_irq)
+ .timer = &pxa_timer,
+ INIT_MACHINE(mainstone_init)
+MACHINE_END
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/pxa25x.c
+ *
+ * Author: Nicolas Pitre
+ * Created: Jun 15, 2001
+ * Copyright: MontaVista Software Inc.
+ *
+ * Code specific to PXA21x/25x/26x variants.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Since this file should be linked before any other machine specific file,
+ * the __initcall() here will be executed first. This serves as default
+ * initialization stuff for PXA machines which can be overridden later if
+ * need be.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pm.h>
+
+#include <asm/hardware.h>
+#include <asm/arch/pxa-regs.h>
+
+#include "generic.h"
+
+/*
+ * Various clock factors driven by the CCCR register.
+ */
+
+/* Crystal Frequency to Memory Frequency Multiplier (L) */
+static unsigned char L_clk_mult[32] = { 0, 27, 32, 36, 40, 45, 0, };
+
+/* Memory Frequency to Run Mode Frequency Multiplier (M) */
+static unsigned char M_clk_mult[4] = { 0, 1, 2, 4 };
+
+/* Run Mode Frequency to Turbo Mode Frequency Multiplier (N) */
+/* Note: we store the value N * 2 here. */
+static unsigned char N2_clk_mult[8] = { 0, 0, 2, 3, 4, 0, 6, 0 };
+
+/* Crystal clock */
+#define BASE_CLK 3686400
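+
+/*
+ * The resulting clocks are:
+ *
+ * memory clock = L_mult * 3.6864MHz
+ * run mode clock = M_mult * memory clock
+ * turbo mode clock = (N2_mult / 2) * run mode clock
+ *
+ * e.g. L=1 (x27), M=1 (x1), N2=3 (x1.5) gives 99.53MHz memory and run
+ * mode clocks and a 149.30MHz turbo mode clock.
+ */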
+
+/*
+ * Get the clock frequency as reflected by CCCR and the turbo flag.
+ * We assume these values have been applied via an FCS (frequency
+ * change sequence). If info is non-zero we also display the current
+ * settings.
+ */
+unsigned int get_clk_frequency_khz(int info)
+{
+ unsigned long cccr, turbo;
+ unsigned int l, L, m, M, n2, N;
+
+ cccr = CCCR;
+ asm( "mrc\tp14, 0, %0, c6, c0, 0" : "=r" (turbo) );
+
+ l = L_clk_mult[(cccr >> 0) & 0x1f];
+ m = M_clk_mult[(cccr >> 5) & 0x03];
+ n2 = N2_clk_mult[(cccr >> 7) & 0x07];
+
+ L = l * BASE_CLK;
+ M = m * L;
+ N = n2 * M / 2;
+
+ if (info) {
+ L += 5000;
+ printk( KERN_INFO "Memory clock: %d.%02dMHz (*%d)\n",
+ L / 1000000, (L % 1000000) / 10000, l );
+ M += 5000;
+ printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
+ M / 1000000, (M % 1000000) / 10000, m );
+ N += 5000;
+ printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
+ N / 1000000, (N % 1000000) / 10000, n2 / 2, (n2 % 2) * 5,
+ (turbo & 1) ? "" : "in" );
+ }
+
+ return (turbo & 1) ? (N/1000) : (M/1000);
+}
+
+EXPORT_SYMBOL(get_clk_frequency_khz);
+
+/*
+ * Return the current memory clock frequency in units of 10kHz
+ */
+unsigned int get_memclk_frequency_10khz(void)
+{
+ return L_clk_mult[(CCCR >> 0) & 0x1f] * BASE_CLK / 10000;
+}
+
+EXPORT_SYMBOL(get_memclk_frequency_10khz);
+
+/*
+ * Return the current LCD clock frequency in units of 10kHz
+ */
+unsigned int get_lcdclk_frequency_10khz(void)
+{
+ return get_memclk_frequency_10khz();
+}
+
+EXPORT_SYMBOL(get_lcdclk_frequency_10khz);
--- /dev/null
+/*
+ * linux/arch/arm/mach-pxa/pxa27x.c
+ *
+ * Author: Nicolas Pitre
+ * Created: Nov 05, 2002
+ * Copyright: MontaVista Software Inc.
+ *
+ * Code specific to PXA27x aka Bulverde.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pm.h>
+#include <linux/device.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/arch/pxa-regs.h>
+
+#include "generic.h"
+
+/* Crystal clock: 13MHz */
+#define BASE_CLK 13000000
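+
+/*
+ * e.g. with L=16 and 2N=6 the run mode clock is 16 * 13MHz = 208MHz
+ * and the turbo mode clock 3 * 208MHz = 624MHz; with CCCR[A] clear
+ * the memory clock is then 208MHz / 2 = 104MHz.
+ */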
+
+/*
+ * Get the clock frequency as reflected by CCSR and the turbo flag.
+ * We assume these values have been applied via an FCS (frequency
+ * change sequence). If info is non-zero we also display the current
+ * settings.
+ */
+unsigned int get_clk_frequency_khz(int info)
+{
+ unsigned long ccsr, clkcfg;
+ unsigned int l, L, m, M, n2, N, S;
+ int cccr_a, t, ht, b;
+
+ ccsr = CCSR;
+ cccr_a = CCCR & (1 << 25);
+
+ /* Read clkcfg register: it has turbo, b, half-turbo (and f) */
+ asm( "mrc\tp14, 0, %0, c6, c0, 0" : "=r" (clkcfg) );
+ t = clkcfg & (1 << 1);
+ ht = clkcfg & (1 << 2);
+ b = clkcfg & (1 << 3);
+
+ l = ccsr & 0x1f;
+ n2 = (ccsr>>7) & 0xf;
+ m = (l <= 10) ? 1 : (l <= 20) ? 2 : 4;
+
+ L = l * BASE_CLK;
+ N = (L * n2) / 2;
+ M = (!cccr_a) ? (L/m) : ((b) ? L : (L/2));
+ S = (b) ? L : (L/2);
+
+ if (info) {
+ printk( KERN_INFO "Run Mode clock: %d.%02dMHz (*%d)\n",
+ L / 1000000, (L % 1000000) / 10000, l );
+ printk( KERN_INFO "Turbo Mode clock: %d.%02dMHz (*%d.%d, %sactive)\n",
+ N / 1000000, (N % 1000000)/10000, n2 / 2, (n2 % 2)*5,
+ (t) ? "" : "in" );
+ printk( KERN_INFO "Memory clock: %d.%02dMHz (/%d)\n",
+ M / 1000000, (M % 1000000) / 10000, m );
+ printk( KERN_INFO "System bus clock: %d.%02dMHz \n",
+ S / 1000000, (S % 1000000) / 10000 );
+ }
+
+ return (t) ? (N/1000) : (L/1000);
+}
+
+/*
+ * Return the current mem clock frequency in units of 10kHz as
+ * reflected by CCCR[A], B, and L
+ */
+unsigned int get_memclk_frequency_10khz(void)
+{
+ unsigned long ccsr, clkcfg;
+ unsigned int l, L, m, M;
+ int cccr_a, b;
+
+ ccsr = CCSR;
+ cccr_a = CCCR & (1 << 25);
+
+ /* Read clkcfg register: it has turbo, b, half-turbo (and f) */
+ asm( "mrc\tp14, 0, %0, c6, c0, 0" : "=r" (clkcfg) );
+ b = clkcfg & (1 << 3);
+
+ l = ccsr & 0x1f;
+ m = (l <= 10) ? 1 : (l <= 20) ? 2 : 4;
+
+ L = l * BASE_CLK;
+ M = (!cccr_a) ? (L/m) : ((b) ? L : (L/2));
+
+ return (M / 10000);
+}
+
+/*
+ * Return the current LCD clock frequency in units of 10kHz, as
+ * reflected by CCSR[L] and the derived LCD clock divider K.
+ */
+unsigned int get_lcdclk_frequency_10khz(void)
+{
+ unsigned long ccsr;
+ unsigned int l, L, k, K;
+
+ ccsr = CCSR;
+
+ l = ccsr & 0x1f;
+ k = (l <= 7) ? 1 : (l <= 16) ? 2 : 4;
+
+ L = l * BASE_CLK;
+ K = L / k;
+
+ return (K / 10000);
+}
+
+EXPORT_SYMBOL(get_clk_frequency_khz);
+EXPORT_SYMBOL(get_memclk_frequency_10khz);
+EXPORT_SYMBOL(get_lcdclk_frequency_10khz);
+
+
+/*
+ * device registration specific to PXA27x.
+ */
+
+static u64 pxa27x_dmamask = 0xffffffffUL;
+
+static struct resource pxa27x_ohci_resources[] = {
+ [0] = {
+ .start = 0x4C000000,
+ .end = 0x4C00ff6f,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_USBH1,
+ .end = IRQ_USBH1,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device ohci_device = {
+ .name = "pxa27x-ohci",
+ .id = -1,
+ .dev = {
+ .dma_mask = &pxa27x_dmamask,
+ .coherent_dma_mask = 0xffffffff,
+ },
+ .num_resources = ARRAY_SIZE(pxa27x_ohci_resources),
+ .resource = pxa27x_ohci_resources,
+};
+
+static struct platform_device *devices[] __initdata = {
+ &ohci_device,
+};
+
+static int __init pxa27x_init(void)
+{
+ return platform_add_devices(devices, ARRAY_SIZE(devices));
+}
+
+subsys_initcall(pxa27x_init);
--- /dev/null
+/*
+ * arch/arm/mach-pxa/time.c
+ *
+ * Author: Nicolas Pitre
+ * Created: Jun 15, 2001
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+
+#include <asm/system.h>
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/leds.h>
+#include <asm/irq.h>
+#include <asm/mach/irq.h>
+#include <asm/mach/time.h>
+#include <asm/arch/pxa-regs.h>
+
+
+static inline unsigned long pxa_get_rtc_time(void)
+{
+ return RCNR;
+}
+
+static int pxa_set_rtc(void)
+{
+ unsigned long current_time = xtime.tv_sec;
+
+ if (RTSR & RTSR_ALE) {
+ /* make sure not to forward the clock over an alarm */
+ unsigned long alarm = RTAR;
+ if (current_time >= alarm && alarm >= RCNR)
+ return -ERESTARTSYS;
+ }
+ RCNR = current_time;
+ return 0;
+}
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long pxa_gettimeoffset (void)
+{
+ long ticks_to_match, elapsed, usec;
+
+ /* Get ticks before next timer match */
+ ticks_to_match = OSMR0 - OSCR;
+
+ /* We need elapsed ticks since last match */
+ elapsed = LATCH - ticks_to_match;
+
+ /* don't get fooled by the workaround in pxa_timer_interrupt() */
+ if (elapsed <= 0)
+ return 0;
+
+ /* Now convert them to usec */
+ usec = (unsigned long)(elapsed * (tick_nsec / 1000))/LATCH;
+
+ return usec;
+}
+
+static irqreturn_t
+pxa_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ int next_match;
+
+ write_seqlock(&xtime_lock);
+
+ /* Loop until we get ahead of the free running timer.
+ * This ensures an exact clock tick count and time accuracy.
+ * IRQs are disabled inside the loop to ensure coherence between
+ * lost_ticks (updated in do_timer()) and the match reg value, so we
+ * can use do_gettimeofday() from interrupt handlers.
+ *
+ * HACK ALERT: it seems that the PXA timer regs aren't updated right
+ * away in all cases when a write occurs. We therefore compare with
+ * 8 instead of 0 in the while() condition below to avoid missing a
+ * match if OSCR has already reached the next OSMR value.
+ * Experience has shown that up to 6 ticks are needed to work around
+ * this problem, but let's use 8 to be conservative. Note that this
+ * affects things only when the timer IRQ has been delayed by nearly
+ * exactly one tick period which should be a pretty rare event.
+ */
+ do {
+ timer_tick(regs);
+ OSSR = OSSR_M0; /* Clear match on timer 0 */
+ next_match = (OSMR0 += LATCH);
+ } while( (signed long)(next_match - OSCR) <= 8 );
+
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction pxa_timer_irq = {
+ .name = "PXA Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = pxa_timer_interrupt
+};
+
+static void __init pxa_timer_init(void)
+{
+ struct timespec tv;
+
+ set_rtc = pxa_set_rtc;
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = pxa_get_rtc_time();
+ do_settimeofday(&tv);
+
+ OSMR0 = 0; /* set initial match at 0 */
+ OSSR = 0xf; /* clear status on all timers */
+ setup_irq(IRQ_OST0, &pxa_timer_irq);
+ OIER |= OIER_E0; /* enable match on timer 0 to cause interrupts */
+ OSCR = 0; /* initialize free-running timer, force first match */
+}
+
+#ifdef CONFIG_PM
+static unsigned long osmr[4], oier;
+
+static void pxa_timer_suspend(void)
+{
+ osmr[0] = OSMR0;
+ osmr[1] = OSMR1;
+ osmr[2] = OSMR2;
+ osmr[3] = OSMR3;
+ oier = OIER;
+}
+
+static void pxa_timer_resume(void)
+{
+ OSMR0 = osmr[0];
+ OSMR1 = osmr[1];
+ OSMR2 = osmr[2];
+ OSMR3 = osmr[3];
+ OIER = oier;
+
+ /*
+ * OSMR0 is the system timer: make sure OSCR is sufficiently behind
+ */
+ OSCR = OSMR0 - LATCH;
+}
+#else
+#define pxa_timer_suspend NULL
+#define pxa_timer_resume NULL
+#endif
+
+struct sys_timer pxa_timer = {
+ .init = pxa_timer_init,
+ .suspend = pxa_timer_suspend,
+ .resume = pxa_timer_resume,
+ .offset = pxa_gettimeoffset,
+};
--- /dev/null
+/* linux/arch/arm/mach-s3c2410/gpio.c
+ *
+ * Copyright (c) 2004 Simtec Electronics
+ * Ben Dooks <ben@simtec.co.uk>
+ *
+ * S3C2410 GPIO support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Changelog
+ * 13-Sep-2004 BJD Implemented change of MISCCR
+ * 14-Sep-2004 BJD Added getpin call
+ * 14-Sep-2004 BJD Fixed bug in setpin() call
+ * 30-Sep-2004 BJD Fixed cfgpin() mask bug
+ * 01-Oct-2004 BJD Added getcfg() to get pin configuration
+ * 01-Oct-2004 BJD Fixed mask bug in pullup() call
+ * 01-Oct-2004 BJD Added getirq() to turn pin into irqno
+ * 04-Oct-2004 BJD Added irq filter controls for GPIO
+ * 05-Nov-2004 BJD EXPORT_SYMBOL() added for all code
+ */
+
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/ioport.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+#include <asm/io.h>
+
+#include <asm/arch/regs-gpio.h>
+
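+/*
+ * Bank A uses a single configuration bit per pin while the other banks
+ * use two bits per pin, hence the different mask widths computed below.
+ */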
+void s3c2410_gpio_cfgpin(unsigned int pin, unsigned int function)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long mask;
+ unsigned long con;
+ unsigned long flags;
+
+ if (pin < S3C2410_GPIO_BANKB) {
+ mask = 1 << S3C2410_GPIO_OFFSET(pin);
+ } else {
+ mask = 3 << S3C2410_GPIO_OFFSET(pin)*2;
+ }
+
+ local_irq_save(flags);
+
+ con = __raw_readl(base + 0x00);
+ con &= ~mask;
+ con |= function;
+
+ __raw_writel(con, base + 0x00);
+
+ local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_cfgpin);
+
+unsigned int s3c2410_gpio_getcfg(unsigned int pin)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long mask;
+
+ if (pin < S3C2410_GPIO_BANKB) {
+ mask = 1 << S3C2410_GPIO_OFFSET(pin);
+ } else {
+ mask = 3 << S3C2410_GPIO_OFFSET(pin)*2;
+ }
+
+ return __raw_readl(base) & mask;
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_getcfg);
+
+void s3c2410_gpio_pullup(unsigned int pin, unsigned int to)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long offs = S3C2410_GPIO_OFFSET(pin);
+ unsigned long flags;
+ unsigned long up;
+
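+ /* bank A pins have no pull-up control */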
+ if (pin < S3C2410_GPIO_BANKB)
+ return;
+
+ local_irq_save(flags);
+
+ up = __raw_readl(base + 0x08);
+ up &= ~(1L << offs);
+ up |= to << offs;
+ __raw_writel(up, base + 0x08);
+
+ local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_pullup);
+
+void s3c2410_gpio_setpin(unsigned int pin, unsigned int to)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long offs = S3C2410_GPIO_OFFSET(pin);
+ unsigned long flags;
+ unsigned long dat;
+
+ local_irq_save(flags);
+
+ dat = __raw_readl(base + 0x04);
+ dat &= ~(1 << offs);
+ dat |= to << offs;
+ __raw_writel(dat, base + 0x04);
+
+ local_irq_restore(flags);
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_setpin);
+
+unsigned int s3c2410_gpio_getpin(unsigned int pin)
+{
+ unsigned long base = S3C2410_GPIO_BASE(pin);
+ unsigned long offs = S3C2410_GPIO_OFFSET(pin);
+
+ return __raw_readl(base + 0x04) & (1<< offs);
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_getpin);
+
+unsigned int s3c2410_modify_misccr(unsigned int clear, unsigned int change)
+{
+ unsigned long flags;
+ unsigned long misccr;
+
+ local_irq_save(flags);
+ misccr = __raw_readl(S3C2410_MISCCR);
+ misccr &= ~clear;
+ misccr ^= change;
+ __raw_writel(misccr, S3C2410_MISCCR);
+ local_irq_restore(flags);
+
+ return misccr;
+}
+
+EXPORT_SYMBOL(s3c2410_modify_misccr);
+
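+/*
+ * Translate a GPIO pin number to its external interrupt number:
+ * GPF0-3 map to EINT0-3, GPF4-7 to EINT4-7 and GPG0-15 to EINT8-23;
+ * all other pins have no interrupt capability and return -1.
+ */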
+int s3c2410_gpio_getirq(unsigned int pin)
+{
+ if (pin < S3C2410_GPF0 || pin > S3C2410_GPG15_EINT23)
+ return -1; /* not valid interrupts */
+
+ if (pin < S3C2410_GPG0 && pin > S3C2410_GPF7)
+ return -1; /* not valid pin */
+
+ if (pin < S3C2410_GPF4)
+ return (pin - S3C2410_GPF0) + IRQ_EINT0;
+
+ if (pin < S3C2410_GPG0)
+ return (pin - S3C2410_GPF4) + IRQ_EINT4;
+
+ return (pin - S3C2410_GPG0) + IRQ_EINT8;
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_getirq);
+
+int s3c2410_gpio_irqfilter(unsigned int pin, unsigned int on,
+ unsigned int config)
+{
+ unsigned long reg = S3C2410_EINFLT0;
+ unsigned long flags;
+ unsigned long val;
+
+ if (pin < S3C2410_GPG8 || pin > S3C2410_GPG15)
+ return -1;
+
+ config &= 0xff;
+
+ pin -= S3C2410_GPG8_EINT16;
+ reg += pin & ~3;
+
+ local_irq_save(flags);
+
+ /* update filter width and clock source */
+
+ val = __raw_readl(reg);
+ val &= ~(0xff << ((pin & 3) * 8));
+ val |= config << ((pin & 3) * 8);
+ __raw_writel(val, reg);
+
+ /* update filter enable */
+
+ val = __raw_readl(S3C2410_EXTINT2);
+ val &= ~(1 << ((pin * 4) + 3));
+ val |= on << ((pin * 4) + 3);
+ __raw_writel(val, S3C2410_EXTINT2);
+
+ local_irq_restore(flags);
+
+ return 0;
+}
+
+EXPORT_SYMBOL(s3c2410_gpio_irqfilter);
--- /dev/null
+/***********************************************************************
+ *
+ * linux/arch/arm/mach-s3c2410/mach-smdk2410.c
+ *
+ * Copyright (C) 2004 by FS Forth-Systeme GmbH
+ * All rights reserved.
+ *
+ * $Id: mach-smdk2410.c,v 1.1 2004/05/11 14:15:38 mpietrek Exp $
+ * @Author: Jonas Dietsche
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ *
+ * @History:
+ * derived from linux/arch/arm/mach-s3c2410/mach-bast.c, written by
+ * Ben Dooks <ben@simtec.co.uk>
+ ***********************************************************************/
+
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/timer.h>
+#include <linux/init.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/map.h>
+#include <asm/mach/irq.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/mach-types.h>
+
+#include <asm/arch/regs-serial.h>
+
+#include "s3c2410.h"
+#include "devs.h"
+#include "cpu.h"
+
+static struct map_desc smdk2410_iodesc[] __initdata = {
+ /* nothing here yet */
+};
+
+#define UCON S3C2410_UCON_DEFAULT
+#define ULCON (S3C2410_LCON_CS8 | S3C2410_LCON_PNONE | S3C2410_LCON_STOPB)
+#define UFCON (S3C2410_UFCON_RXTRIG8 | S3C2410_UFCON_FIFOMODE)
+
+static struct s3c2410_uartcfg smdk2410_uartcfgs[] = {
+ [0] = {
+ .hwport = 0,
+ .flags = 0,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ },
+ [1] = {
+ .hwport = 1,
+ .flags = 0,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ },
+ [2] = {
+ .hwport = 2,
+ .flags = 0,
+ .ucon = UCON,
+ .ulcon = ULCON,
+ .ufcon = UFCON,
+ }
+};
+
+static struct platform_device *smdk2410_devices[] __initdata = {
+ &s3c_device_usb,
+ &s3c_device_lcd,
+ &s3c_device_wdt,
+ &s3c_device_i2c,
+ &s3c_device_iis,
+};
+
+static struct s3c24xx_board smdk2410_board __initdata = {
+ .devices = smdk2410_devices,
+ .devices_count = ARRAY_SIZE(smdk2410_devices)
+};
+
+void __init smdk2410_map_io(void)
+{
+ s3c24xx_init_io(smdk2410_iodesc, ARRAY_SIZE(smdk2410_iodesc));
+ s3c2410_init_uarts(smdk2410_uartcfgs, ARRAY_SIZE(smdk2410_uartcfgs));
+ s3c24xx_set_board(&smdk2410_board);
+}
+
+void __init smdk2410_init_irq(void)
+{
+ s3c2410_init_irq();
+}
+
+MACHINE_START(SMDK2410, "SMDK2410") /* @TODO: request a new identifier and switch
+ * to SMDK2410 */
+ MAINTAINER("Jonas Dietsche")
+ BOOT_MEM(S3C2410_SDRAM_PA, S3C2410_PA_UART, S3C2410_VA_UART)
+ BOOT_PARAMS(S3C2410_SDRAM_PA + 0x100)
+ MAPIO(smdk2410_map_io)
+ INITIRQ(smdk2410_init_irq)
+ .timer = &s3c2410_timer,
+MACHINE_END
+
+
--- /dev/null
+/* linux/arch/arm/mach-s3c2410/time.c
+ *
+ * Copyright (C) 2003,2004 Simtec Electronics
+ * Ben Dooks, <ben@simtec.co.uk>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <asm/system.h>
+#include <asm/leds.h>
+#include <asm/mach-types.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/arch/map.h>
+#include <asm/arch/regs-timer.h>
+#include <asm/arch/regs-irq.h>
+#include <asm/mach/time.h>
+
+#include "clock.h"
+
+static unsigned long timer_startval;
+static unsigned long timer_usec_ticks;
+
+#define TIMER_USEC_SHIFT 16
+
+/* we use the shifted arithmetic to work out the ratio of timer ticks
+ * to usecs, as often the peripheral clock is not a nice even multiple
+ * of 1MHz.
+ *
+ * shifts of 14 and 15 are too low for the 12MHz case; 16 seems to be
+ * OK for the current HZ value of 200 without producing overflows.
+ *
+ * Original patch by Dimitry Andric, updated by Ben Dooks
+*/
+
+
+/* timer_mask_usec_ticks
+ *
+ * given a clock and divisor, make the value to pass into timer_ticks_to_usec
+ * to scale the ticks into usecs
+*/
+
+static inline unsigned long
+timer_mask_usec_ticks(unsigned long scaler, unsigned long pclk)
+{
+ unsigned long den = pclk / 1000;
+
+ return ((1000 << TIMER_USEC_SHIFT) * scaler + (den >> 1)) / den;
+}
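+
+/*
+ * e.g. for a 50.7MHz pclk and a scaler of 6 (8.45 ticks per usec) this
+ * gives timer_usec_ticks = ((1000 << 16) * 6 + 25350) / 50700 ~= 7756,
+ * so timer_ticks_to_usec() below computes roughly ticks * 0.1183 usec.
+ */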
+
+/* timer_ticks_to_usec
+ *
+ * convert timer ticks to usec.
+*/
+
+static inline unsigned long timer_ticks_to_usec(unsigned long ticks)
+{
+ unsigned long res;
+
+ res = ticks * timer_usec_ticks;
+ res += 1 << (TIMER_USEC_SHIFT - 4); /* round up slightly */
+
+ return res >> TIMER_USEC_SHIFT;
+}
+
+/*
+ * Returns microseconds since the last clock interrupt. Note that
+ * IRQs are disabled before entering here from do_gettimeofday().
+ */
+
+#define SRCPND_TIMER4 (1<<(IRQ_TIMER4 - IRQ_EINT0))
+
+static unsigned long s3c2410_gettimeoffset (void)
+{
+ unsigned long tdone;
+ unsigned long irqpend;
+ unsigned long tval;
+
+ /* work out how many ticks have gone since last timer interrupt */
+
+ tval = __raw_readl(S3C2410_TCNTO(4));
+ tdone = timer_startval - tval;
+
+ /* check to see if there is an interrupt pending */
+
+ irqpend = __raw_readl(S3C2410_SRCPND);
+ if (irqpend & SRCPND_TIMER4) {
+ /* re-read the timer, and try to fix up for the missed
+ * interrupt. Note that the interrupt may go off before the
+ * timer has re-loaded from wrapping.
+ */
+
+ tval = __raw_readl(S3C2410_TCNTO(4));
+ tdone = timer_startval - tval;
+
+ if (tval != 0)
+ tdone += timer_startval;
+ }
+
+ return timer_ticks_to_usec(tdone);
+}
+
+
+/*
+ * IRQ handler for the timer
+ */
+static irqreturn_t
+s3c2410_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ write_seqlock(&xtime_lock);
+ timer_tick(regs);
+ write_sequnlock(&xtime_lock);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction s3c2410_timer_irq = {
+ .name = "S3C2410 Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = s3c2410_timer_interrupt
+};
+
+/*
+ * Set up the timer tick interrupt.
+ *
+ * Currently we only use timer4, as it is the only timer which has no
+ * other function that can be exploited externally.
+ */
+static void s3c2410_timer_setup (void)
+{
+ unsigned long tcon;
+ unsigned long tcnt;
+ unsigned long tcfg1;
+ unsigned long tcfg0;
+
+ tcnt = 0xffff; /* default value for tcnt */
+
+ /* read the current timer configuration bits */
+
+ tcon = __raw_readl(S3C2410_TCON);
+ tcfg1 = __raw_readl(S3C2410_TCFG1);
+ tcfg0 = __raw_readl(S3C2410_TCFG0);
+
+ /* configure the system for whichever machine is in use */
+
+ if (machine_is_bast() || machine_is_vr1000()) {
+ /* timer is at 12MHz, scaler is 1 */
+ timer_usec_ticks = timer_mask_usec_ticks(1, 12000000);
+ tcnt = 12000000 / HZ;
+
+ tcfg1 &= ~S3C2410_TCFG1_MUX4_MASK;
+ tcfg1 |= S3C2410_TCFG1_MUX4_TCLK1;
+ } else {
+ /* for the h1940 (and others), we use the pclk from the core
+ * to generate the timer values. since values around 50 to
+ * 70MHz are not values we can directly generate the timer
+ * value from, we need to pre-scale and divide before using it.
+ *
+ * for instance, using 50.7MHz and dividing by 6 gives 8.45MHz
+ * (8.45 ticks per usec)
+ */
+
+ /* this is used as default if no other timer can be found */
+
+ timer_usec_ticks = timer_mask_usec_ticks(6, s3c24xx_pclk);
+
+ tcfg1 &= ~S3C2410_TCFG1_MUX4_MASK;
+ tcfg1 |= S3C2410_TCFG1_MUX4_DIV2;
+
+ tcfg0 &= ~S3C2410_TCFG_PRESCALER1_MASK;
+ tcfg0 |= ((6 - 1) / 2) << S3C2410_TCFG_PRESCALER1_SHIFT;
+
+ tcnt = (s3c24xx_pclk / 6) / HZ;
+ }
+
+ /* timers reload after counting zero, so reduce the count by 1 */
+
+ tcnt--;
+
+ printk("timer tcon=%08lx, tcnt %04lx, tcfg %08lx,%08lx, usec %08lx\n",
+ tcon, tcnt, tcfg0, tcfg1, timer_usec_ticks);
+
+ /* check to see if timer is within 16bit range... */
+ if (tcnt > 0xffff) {
+ panic("setup_timer: HZ is too small, cannot configure timer!");
+ return;
+ }
+
+ __raw_writel(tcfg1, S3C2410_TCFG1);
+ __raw_writel(tcfg0, S3C2410_TCFG0);
+
+ timer_startval = tcnt;
+ __raw_writel(tcnt, S3C2410_TCNTB(4));
+
+ /* ensure timer is stopped... */
+
+ tcon &= ~(7<<20);
+ tcon |= S3C2410_TCON_T4RELOAD;
+ tcon |= S3C2410_TCON_T4MANUALUPD;
+
+ __raw_writel(tcon, S3C2410_TCON);
+ __raw_writel(tcnt, S3C2410_TCNTB(4));
+ __raw_writel(tcnt, S3C2410_TCMPB(4));
+
+ /* start the timer running */
+ tcon |= S3C2410_TCON_T4START;
+ tcon &= ~S3C2410_TCON_T4MANUALUPD;
+ __raw_writel(tcon, S3C2410_TCON);
+}
+
+static void __init s3c2410_timer_init (void)
+{
+ s3c2410_timer_setup();
+ setup_irq(IRQ_TIMER4, &s3c2410_timer_irq);
+}
+
+struct sys_timer s3c2410_timer = {
+ .init = s3c2410_timer_init,
+ .offset = s3c2410_gettimeoffset,
+ .resume = s3c2410_timer_setup
+};
--- /dev/null
+/*
+ * linux/arch/arm/mach-sa1100/collie.c
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * This file contains all Collie-specific tweaks.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * ChangeLog:
+ * 03-06-2004 John Lenz <jelenz@wisc.edu>
+ * 06-04-2002 Chris Larson <kergoth@digitalnemesis.net>
+ * 04-16-2001 Lineo Japan, Inc. ...
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/tty.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/partitions.h>
+#include <linux/timer.h>
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/irq.h>
+#include <asm/setup.h>
+#include <asm/arch/collie.h>
+
+#include <asm/mach/arch.h>
+#include <asm/mach/flash.h>
+#include <asm/mach/map.h>
+#include <asm/mach/serial_sa1100.h>
+
+#include <asm/hardware/locomo.h>
+
+#include "generic.h"
+
+static void __init scoop_init(void)
+{
+
+#define COLLIE_SCP_INIT_DATA(adr,dat) (((adr)<<16)|(dat))
+#define COLLIE_SCP_INIT_DATA_END ((unsigned long)-1)
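+ /* each entry packs a SCOOP register address in the upper 16 bits
+ * and the value to write in the lower 16 bits */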
+ static const unsigned long scp_init[] = {
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0140), // 00
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_MCR, 0x0100),
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CDR, 0x0000), // 04
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CPR, 0x0000), // 0C
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_CCR, 0x0000), // 10
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IMR, 0x0000), // 18
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x00FF), // 14
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_ISR, 0x0000), // 1C
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_IRM, 0x0000),
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPCR, COLLIE_SCP_IO_DIR), // 20
+ COLLIE_SCP_INIT_DATA(COLLIE_SCP_GPWR, COLLIE_SCP_IO_OUT), // 24
+ COLLIE_SCP_INIT_DATA_END
+ };
+ int i;
+ for (i = 0; scp_init[i] != COLLIE_SCP_INIT_DATA_END; i++) {
+ int adr = scp_init[i] >> 16;
+ COLLIE_SCP_REG(adr) = scp_init[i] & 0xFFFF;
+ }
+
+}
+
+static struct resource locomo_resources[] = {
+ [0] = {
+ .start = 0x40000000,
+ .end = 0x40001fff,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_GPIO25,
+ .end = IRQ_GPIO25,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static struct platform_device locomo_device = {
+ .name = "locomo",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(locomo_resources),
+ .resource = locomo_resources,
+};
+
+static struct platform_device *devices[] __initdata = {
+ &locomo_device,
+};
+
+static struct mtd_partition collie_partitions[] = {
+ {
+ .name = "bootloader",
+ .offset = 0,
+ .size = 0x000C0000,
+ .mask_flags = MTD_WRITEABLE
+ }, {
+ .name = "kernel",
+ .offset = MTDPART_OFS_APPEND,
+ .size = 0x00100000,
+ }, {
+ .name = "rootfs",
+ .offset = MTDPART_OFS_APPEND,
+ .size = 0x00e20000,
+ }
+};
+
+static void collie_set_vpp(int vpp)
+{
+ COLLIE_SCP_REG_GPCR |= COLLIE_SCP_VPEN;
+ if (vpp) {
+ COLLIE_SCP_REG_GPWR |= COLLIE_SCP_VPEN;
+ } else {
+ COLLIE_SCP_REG_GPWR &= ~COLLIE_SCP_VPEN;
+ }
+}
+
+static struct flash_platform_data collie_flash_data = {
+ .map_name = "cfi_probe",
+ .set_vpp = collie_set_vpp,
+ .parts = collie_partitions,
+ .nr_parts = ARRAY_SIZE(collie_partitions),
+};
+
+static struct resource collie_flash_resources[] = {
+ {
+ .start = SA1100_CS0_PHYS,
+ .end = SA1100_CS0_PHYS + SZ_32M - 1,
+ .flags = IORESOURCE_MEM,
+ }
+};
+
+static void __init collie_init(void)
+{
+ int ret = 0;
+
+ /* cpu initialize */
+ GAFR = ( GPIO_SSP_TXD | \
+ GPIO_SSP_SCLK | GPIO_SSP_SFRM | GPIO_SSP_CLK | GPIO_TIC_ACK | \
+ GPIO_32_768kHz );
+
+ GPDR = ( GPIO_LDD8 | GPIO_LDD9 | GPIO_LDD10 | GPIO_LDD11 | GPIO_LDD12 | \
+ GPIO_LDD13 | GPIO_LDD14 | GPIO_LDD15 | GPIO_SSP_TXD | \
+ GPIO_SSP_SCLK | GPIO_SSP_SFRM | GPIO_SDLC_SCLK | \
+ GPIO_SDLC_AAF | GPIO_UART_SCLK1 | GPIO_32_768kHz );
+ GPLR = GPIO_GPIO18;
+
+ // PPC pin setting
+ PPDR = ( PPC_LDD0 | PPC_LDD1 | PPC_LDD2 | PPC_LDD3 | PPC_LDD4 | PPC_LDD5 | \
+ PPC_LDD6 | PPC_LDD7 | PPC_L_PCLK | PPC_L_LCLK | PPC_L_FCLK | PPC_L_BIAS | \
+ PPC_TXD1 | PPC_TXD2 | PPC_RXD2 | PPC_TXD3 | PPC_TXD4 | PPC_SCLK | PPC_SFRM );
+
+ PSDR = ( PPC_RXD1 | PPC_RXD2 | PPC_RXD3 | PPC_RXD4 );
+
+ GAFR |= GPIO_32_768kHz;
+ GPDR |= GPIO_32_768kHz;
+ TUCR = TUCR_32_768kHz;
+
+ scoop_init();
+
+ ret = platform_add_devices(devices, ARRAY_SIZE(devices));
+ if (ret) {
+ printk(KERN_WARNING "collie: Unable to register LoCoMo device\n");
+ }
+
+ sa11x0_set_flash_data(&collie_flash_data, collie_flash_resources,
+ ARRAY_SIZE(collie_flash_resources));
+}
+
+static struct map_desc collie_io_desc[] __initdata = {
+ /* virtual physical length type */
+ {0xe8000000, 0x00000000, 0x02000000, MT_DEVICE}, /* 32M main flash (cs0) */
+ {0xea000000, 0x08000000, 0x02000000, MT_DEVICE}, /* 32M boot flash (cs1) */
+ {0xf0000000, 0x40000000, 0x01000000, MT_DEVICE}, /* 16M LOCOMO & SCOOP (cs4) */
+};
+
+static void __init collie_map_io(void)
+{
+ sa1100_map_io();
+ iotable_init(collie_io_desc, ARRAY_SIZE(collie_io_desc));
+}
+
+MACHINE_START(COLLIE, "Sharp-Collie")
+ BOOT_MEM(0xc0000000, 0x80000000, 0xf8000000)
+ MAPIO(collie_map_io)
+ INITIRQ(sa1100_init_irq)
+ .timer = &sa1100_timer,
+ .init_machine = collie_init,
+MACHINE_END
--- /dev/null
+/*
+ * linux/arch/arm/mach-sa1100/time.c
+ *
+ * Copyright (C) 1998 Deborah Wallach.
+ * Twiddles (C) 1999 Hugo Fiennes <hugo@empeg.com>
+ *
+ * 2000/03/29 (C) Nicolas Pitre <nico@cam.org>
+ * Rewritten: big cleanup, much simpler, better HZ accuracy.
+ *
+ */
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/signal.h>
+
+#include <asm/mach/time.h>
+#include <asm/hardware.h>
+
+#define RTC_DEF_DIVIDER (32768 - 1)
+#define RTC_DEF_TRIM 0
+
+static unsigned long __init sa1100_get_rtc_time(void)
+{
+ /*
+ * According to the manual we should be able to let RTTR be zero
+ * and then a default divisor for a 32.768kHz clock is used.
+ * Apparently this doesn't work, at least for my SA1110 rev 5.
+ * If the clock divider is uninitialized then reset it to the
+ * default value to get the 1Hz clock.
+ */
+ if (RTTR == 0) {
+ RTTR = RTC_DEF_DIVIDER + (RTC_DEF_TRIM << 16);
+ printk(KERN_WARNING "Warning: uninitialized Real Time Clock\n");
+ /* The current RTC value probably doesn't make sense either */
+ RCNR = 0;
+ return 0;
+ }
+ return RCNR;
+}
+
+static int sa1100_set_rtc(void)
+{
+ unsigned long current_time = xtime.tv_sec;
+
+ if (RTSR & RTSR_ALE) {
+ /* make sure not to forward the clock over an alarm */
+ unsigned long alarm = RTAR;
+ if (current_time >= alarm && alarm >= RCNR)
+ return -ERESTARTSYS;
+ }
+ RCNR = current_time;
+ return 0;
+}
+
+/* IRQs are disabled before entering here from do_gettimeofday() */
+static unsigned long sa1100_gettimeoffset (void)
+{
+ unsigned long ticks_to_match, elapsed, usec;
+
+ /* Get ticks before next timer match */
+ ticks_to_match = OSMR0 - OSCR;
+
+ /* We need elapsed ticks since last match */
+ elapsed = LATCH - ticks_to_match;
+
+ /* Now convert them to usec */
+ usec = (unsigned long)(elapsed * (tick_nsec / 1000))/LATCH;
+
+ return usec;
+}
+
+/*
+ * We will be entered with IRQs enabled.
+ *
+ * Loop until we get ahead of the free running timer.
+ * This ensures an exact clock tick count and time accuracy.
+ * IRQs are disabled inside the loop to ensure coherence between
+ * lost_ticks (updated in do_timer()) and the match reg value, so we
+ * can use do_gettimeofday() from interrupt handlers.
+ */
+static irqreturn_t
+sa1100_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned int next_match;
+
+ write_seqlock(&xtime_lock);
+
+ do {
+ timer_tick(regs);
+ OSSR = OSSR_M0; /* Clear match on timer 0 */
+ next_match = (OSMR0 += LATCH);
+ } while ((signed long)(next_match - OSCR) <= 0);
+
+ write_sequnlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction sa1100_timer_irq = {
+ .name = "SA11xx Timer Tick",
+ .flags = SA_INTERRUPT,
+ .handler = sa1100_timer_interrupt
+};
+
+static void __init sa1100_timer_init(void)
+{
+ struct timespec tv;
+
+ set_rtc = sa1100_set_rtc;
+
+ tv.tv_nsec = 0;
+ tv.tv_sec = sa1100_get_rtc_time();
+ do_settimeofday(&tv);
+
+ OSMR0 = 0; /* set initial match at 0 */
+ OSSR = 0xf; /* clear status on all timers */
+ setup_irq(IRQ_OST0, &sa1100_timer_irq);
+ OIER |= OIER_E0; /* enable match on timer 0 to cause interrupts */
+ OSCR = 0; /* initialize free-running timer, force first match */
+}
+
+#ifdef CONFIG_PM
+unsigned long osmr[4], oier;
+
+static void sa1100_timer_suspend(void)
+{
+ osmr[0] = OSMR0;
+ osmr[1] = OSMR1;
+ osmr[2] = OSMR2;
+ osmr[3] = OSMR3;
+ oier = OIER;
+}
+
+static void sa1100_timer_resume(void)
+{
+ OSSR = 0x0f;
+ OSMR0 = osmr[0];
+ OSMR1 = osmr[1];
+ OSMR2 = osmr[2];
+ OSMR3 = osmr[3];
+ OIER = oier;
+
+ /*
+ * OSMR0 is the system timer: make sure OSCR is sufficiently behind
+ */
+ OSCR = OSMR0 - LATCH;
+}
+#else
+#define sa1100_timer_suspend NULL
+#define sa1100_timer_resume NULL
+#endif
+
+struct sys_timer sa1100_timer = {
+ .init = sa1100_timer_init,
+ .suspend = sa1100_timer_suspend,
+ .resume = sa1100_timer_resume,
+ .offset = sa1100_gettimeoffset,
+};
--- /dev/null
+/*
+ * linux/arch/arm/mach-versatile/clock.c
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/errno.h>
+#include <linux/err.h>
+
+#include <asm/semaphore.h>
+#include <asm/hardware/clock.h>
+#include <asm/hardware/icst307.h>
+
+#include "clock.h"
+
+static LIST_HEAD(clocks);
+static DECLARE_MUTEX(clocks_sem);
+
+struct clk *clk_get(struct device *dev, const char *id)
+{
+ struct clk *p, *clk = ERR_PTR(-ENOENT);
+
+ down(&clocks_sem);
+ list_for_each_entry(p, &clocks, node) {
+ if (strcmp(id, p->name) == 0 && try_module_get(p->owner)) {
+ clk = p;
+ break;
+ }
+ }
+ up(&clocks_sem);
+
+ return clk;
+}
+EXPORT_SYMBOL(clk_get);
+
+void clk_put(struct clk *clk)
+{
+ module_put(clk->owner);
+}
+EXPORT_SYMBOL(clk_put);
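+
+/*
+ * Typical use, given the fixed clocks registered below:
+ *
+ * struct clk *clk = clk_get(dev, "UARTCLK");
+ * if (!IS_ERR(clk))
+ * rate = clk_get_rate(clk); (24000000)
+ * clk_put(clk);
+ */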
+
+int clk_enable(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_enable);
+
+void clk_disable(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_disable);
+
+int clk_use(struct clk *clk)
+{
+ return 0;
+}
+EXPORT_SYMBOL(clk_use);
+
+void clk_unuse(struct clk *clk)
+{
+}
+EXPORT_SYMBOL(clk_unuse);
+
+unsigned long clk_get_rate(struct clk *clk)
+{
+ return clk->rate;
+}
+EXPORT_SYMBOL(clk_get_rate);
+
+long clk_round_rate(struct clk *clk, unsigned long rate)
+{
+ return rate;
+}
+EXPORT_SYMBOL(clk_round_rate);
+
+int clk_set_rate(struct clk *clk, unsigned long rate)
+{
+ int ret = -EIO;
+
+ if (clk->setvco) {
+ struct icst307_vco vco;
+
+ vco = icst307_khz_to_vco(clk->params, rate / 1000);
+ clk->rate = icst307_khz(clk->params, vco) * 1000;
+
+ printk("Clock %s: setting VCO reg params: S=%d R=%d V=%d\n",
+ clk->name, vco.s, vco.r, vco.v);
+
+ clk->setvco(clk, vco);
+ ret = 0;
+ }
+ return ret;
+}
+EXPORT_SYMBOL(clk_set_rate);
+
+/*
+ * These are fixed clocks.
+ */
+static struct clk kmi_clk = {
+ .name = "KMIREFCLK",
+ .rate = 24000000,
+};
+
+static struct clk uart_clk = {
+ .name = "UARTCLK",
+ .rate = 24000000,
+};
+
+static struct clk mmci_clk = {
+ .name = "MCLK",
+ .rate = 33000000,
+};
+
+int clk_register(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_add(&clk->node, &clocks);
+ up(&clocks_sem);
+ return 0;
+}
+EXPORT_SYMBOL(clk_register);
+
+void clk_unregister(struct clk *clk)
+{
+ down(&clocks_sem);
+ list_del(&clk->node);
+ up(&clocks_sem);
+}
+EXPORT_SYMBOL(clk_unregister);
+
+static int __init clk_init(void)
+{
+ clk_register(&kmi_clk);
+ clk_register(&uart_clk);
+ clk_register(&mmci_clk);
+ return 0;
+}
+arch_initcall(clk_init);
--- /dev/null
+/*
+ * linux/arch/arm/mach-versatile/clock.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+struct module;
+struct icst307_params;
+
+struct clk {
+ struct list_head node;
+ unsigned long rate;
+ struct module *owner;
+ const char *name;
+ const struct icst307_params *params;
+ void *data;
+ void (*setvco)(struct clk *, struct icst307_vco vco);
+};
+
+int clk_register(struct clk *clk);
+void clk_unregister(struct clk *clk);
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfp.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
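+/*
+ * "Jamming" shifts: any bits shifted out of the bottom are OR-ed back
+ * into the least significant bit, so the rounding code can still tell
+ * that non-zero bits were discarded (the classic sticky bit).
+ */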
+static inline u32 vfp_shiftright32jamming(u32 val, unsigned int shift)
+{
+ if (shift) {
+ if (shift < 32)
+ val = val >> shift | ((val << (32 - shift)) != 0);
+ else
+ val = val != 0;
+ }
+ return val;
+}
+
+static inline u64 vfp_shiftright64jamming(u64 val, unsigned int shift)
+{
+ if (shift) {
+ if (shift < 64)
+ val = val >> shift | ((val << (64 - shift)) != 0);
+ else
+ val = val != 0;
+ }
+ return val;
+}
+
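+/*
+ * Narrow a 64-bit value to its top 32 bits, jamming any non-zero low
+ * bits into bit 0: the cmp sets carry when the low word is non-zero,
+ * and orrcs then sticks that into the result.
+ */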
+static inline u32 vfp_hi64to32jamming(u64 val)
+{
+ u32 v;
+
+ asm(
+ "cmp %Q1, #1 @ vfp_hi64to32jamming\n\t"
+ "movcc %0, %R1\n\t"
+ "orrcs %0, %R1, #1"
+ : "=r" (v) : "r" (val) : "cc");
+
+ return v;
+}
+
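+/*
+ * 128-bit add/subtract built from the ARM carry chain; the %Q and %R
+ * operand modifiers select the low and high 32-bit halves of each
+ * 64-bit operand.
+ */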
+static inline void add128(u64 *resh, u64 *resl, u64 nh, u64 nl, u64 mh, u64 ml)
+{
+ asm( "adds %Q0, %Q2, %Q4\n\t"
+ "adcs %R0, %R2, %R4\n\t"
+ "adcs %Q1, %Q3, %Q5\n\t"
+ "adc %R1, %R3, %R5"
+ : "=r" (nl), "=r" (nh)
+ : "0" (nl), "1" (nh), "r" (ml), "r" (mh)
+ : "cc");
+ *resh = nh;
+ *resl = nl;
+}
+
+static inline void sub128(u64 *resh, u64 *resl, u64 nh, u64 nl, u64 mh, u64 ml)
+{
+ asm( "subs %Q0, %Q2, %Q4\n\t"
+ "sbcs %R0, %R2, %R4\n\t"
+ "sbcs %Q1, %Q3, %Q5\n\t"
+ "sbc %R1, %R3, %R5\n\t"
+ : "=r" (nl), "=r" (nh)
+ : "0" (nl), "1" (nh), "r" (ml), "r" (mh)
+ : "cc");
+ *resh = nh;
+ *resl = nl;
+}
+
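+/*
+ * 64x64 -> 128 bit multiply using four 32x32 -> 64 bit partial
+ * products, propagating the carries by hand.
+ */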
+static inline void mul64to128(u64 *resh, u64 *resl, u64 n, u64 m)
+{
+ u32 nh, nl, mh, ml;
+ u64 rh, rma, rmb, rl;
+
+ nl = n;
+ ml = m;
+ rl = (u64)nl * ml;
+
+ nh = n >> 32;
+ rma = (u64)nh * ml;
+
+ mh = m >> 32;
+ rmb = (u64)nl * mh;
+ rma += rmb;
+
+ rh = (u64)nh * mh;
+ rh += ((u64)(rma < rmb) << 32) + (rma >> 32);
+
+ rma <<= 32;
+ rl += rma;
+ rh += (rl < rma);
+
+ *resl = rl;
+ *resh = rh;
+}
+
+static inline void shift64left(u64 *resh, u64 *resl, u64 n)
+{
+ *resh = n >> 63;
+ *resl = n << 1;
+}
+
+static inline u64 vfp_hi64multiply64(u64 n, u64 m)
+{
+ u64 rh, rl;
+ mul64to128(&rh, &rl, n, m);
+ return rh | (rl != 0);
+}
+
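+/*
+ * Estimate the 64-bit quotient of the 128-bit value nh:nl divided by
+ * m: compute the upper 32 quotient bits from nh/mh, correct the
+ * partial remainder until it goes non-negative, then estimate the low
+ * 32 bits from the corrected remainder. The result may be slightly
+ * off in the last bits, hence "estimate".
+ */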
+static inline u64 vfp_estimate_div128to64(u64 nh, u64 nl, u64 m)
+{
+ u64 mh, ml, remh, reml, termh, terml, z;
+
+ if (nh >= m)
+ return ~0ULL;
+ mh = m >> 32;
+ z = (mh << 32 <= nh) ? 0xffffffff00000000ULL : (nh / mh) << 32;
+ mul64to128(&termh, &terml, m, z);
+ sub128(&remh, &reml, nh, nl, termh, terml);
+ ml = m << 32;
+ while ((s64)remh < 0) {
+ z -= 0x100000000ULL;
+ add128(&remh, &reml, remh, reml, mh, ml);
+ }
+ remh = (remh << 32) | (reml >> 32);
+ z |= (mh << 32 <= remh) ? 0xffffffff : remh / mh;
+ return z;
+}
+
+/*
+ * Operations on unpacked elements
+ */
+#define vfp_sign_negate(sign) (sign ^ 0x8000)
+
+/*
+ * Single-precision
+ */
+struct vfp_single {
+ s16 exponent;
+ u16 sign;
+ u32 significand;
+};
+
+extern s32 vfp_get_float(unsigned int reg);
+extern void vfp_put_float(unsigned int reg, s32 val);
+
+/*
+ * VFP_SINGLE_MANTISSA_BITS - number of bits in the mantissa
+ * VFP_SINGLE_EXPONENT_BITS - number of bits in the exponent
+ * VFP_SINGLE_LOW_BITS - number of low bits in the unpacked significand
+ * which are not propagated to the float upon packing.
+ */
+#define VFP_SINGLE_MANTISSA_BITS (23)
+#define VFP_SINGLE_EXPONENT_BITS (8)
+#define VFP_SINGLE_LOW_BITS (32 - VFP_SINGLE_MANTISSA_BITS - 2)
+#define VFP_SINGLE_LOW_BITS_MASK ((1 << VFP_SINGLE_LOW_BITS) - 1)
+
+/*
+ * The bit in an unpacked float which indicates that it is a quiet NaN
+ */
+#define VFP_SINGLE_SIGNIFICAND_QNAN (1 << (VFP_SINGLE_MANTISSA_BITS - 1 + VFP_SINGLE_LOW_BITS))
+
+/*
+ * Operations on packed single-precision numbers
+ */
+#define vfp_single_packed_sign(v) ((v) & 0x80000000)
+#define vfp_single_packed_negate(v) ((v) ^ 0x80000000)
+#define vfp_single_packed_abs(v) ((v) & ~0x80000000)
+#define vfp_single_packed_exponent(v) (((v) >> VFP_SINGLE_MANTISSA_BITS) & ((1 << VFP_SINGLE_EXPONENT_BITS) - 1))
+#define vfp_single_packed_mantissa(v) ((v) & ((1 << VFP_SINGLE_MANTISSA_BITS) - 1))
+
+/*
+ * Unpack a single-precision float. Note that this returns the magnitude
+ * of the single-precision float mantissa with the implicit leading 1
+ * restored where necessary, aligned to bit 30.
+ */
+static inline void vfp_single_unpack(struct vfp_single *s, s32 val)
+{
+ u32 significand;
+
+ s->sign = vfp_single_packed_sign(val) >> 16,
+ s->exponent = vfp_single_packed_exponent(val);
+
+ significand = (u32) val;
+ significand = (significand << (32 - VFP_SINGLE_MANTISSA_BITS)) >> 2;
+ if (s->exponent && s->exponent != 255)
+ significand |= 0x40000000;
+ s->significand = significand;
+}
+
+/*
+ * Re-pack a single-precision float. This assumes that the float is
+ * already normalised such that the MSB is bit 30, _not_ bit 31.
+ */
+static inline s32 vfp_single_pack(struct vfp_single *s)
+{
+ u32 val;
+ val = (s->sign << 16) +
+ (s->exponent << VFP_SINGLE_MANTISSA_BITS) +
+ (s->significand >> VFP_SINGLE_LOW_BITS);
+ return (s32)val;
+}
+
+#define VFP_NUMBER (1<<0)
+#define VFP_ZERO (1<<1)
+#define VFP_DENORMAL (1<<2)
+#define VFP_INFINITY (1<<3)
+#define VFP_NAN (1<<4)
+#define VFP_NAN_SIGNAL (1<<5)
+
+#define VFP_QNAN (VFP_NAN)
+#define VFP_SNAN (VFP_NAN|VFP_NAN_SIGNAL)
+
+static inline int vfp_single_type(struct vfp_single *s)
+{
+ int type = VFP_NUMBER;
+ if (s->exponent == 255) {
+ if (s->significand == 0)
+ type = VFP_INFINITY;
+ else if (s->significand & VFP_SINGLE_SIGNIFICAND_QNAN)
+ type = VFP_QNAN;
+ else
+ type = VFP_SNAN;
+ } else if (s->exponent == 0) {
+ if (s->significand == 0)
+ type |= VFP_ZERO;
+ else
+ type |= VFP_DENORMAL;
+ }
+ return type;
+}
+
+#ifndef DEBUG
+#define vfp_single_normaliseround(sd,vsd,fpscr,except,func) __vfp_single_normaliseround(sd,vsd,fpscr,except)
+u32 __vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions);
+#else
+u32 vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions, const char *func);
+#endif
+
+/*
+ * Double-precision
+ */
+struct vfp_double {
+ s16 exponent;
+ u16 sign;
+ u64 significand;
+};
+
+/*
+ * VFP_REG_ZERO is a special register number for vfp_get_double
+ * which returns (double)0.0. This is useful for the compare with
+ * zero instructions.
+ */
+#define VFP_REG_ZERO 16
+extern u64 vfp_get_double(unsigned int reg);
+extern void vfp_put_double(unsigned int reg, u64 val);
+
+#define VFP_DOUBLE_MANTISSA_BITS (52)
+#define VFP_DOUBLE_EXPONENT_BITS (11)
+#define VFP_DOUBLE_LOW_BITS (64 - VFP_DOUBLE_MANTISSA_BITS - 2)
+#define VFP_DOUBLE_LOW_BITS_MASK ((1 << VFP_DOUBLE_LOW_BITS) - 1)
+
+/*
+ * The bit in an unpacked double which indicates that it is a quiet NaN
+ */
+#define VFP_DOUBLE_SIGNIFICAND_QNAN (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1 + VFP_DOUBLE_LOW_BITS))
+
+/*
+ * Operations on packed double-precision numbers
+ */
+#define vfp_double_packed_sign(v) ((v) & (1ULL << 63))
+#define vfp_double_packed_negate(v) ((v) ^ (1ULL << 63))
+#define vfp_double_packed_abs(v) ((v) & ~(1ULL << 63))
+#define vfp_double_packed_exponent(v) (((v) >> VFP_DOUBLE_MANTISSA_BITS) & ((1 << VFP_DOUBLE_EXPONENT_BITS) - 1))
+#define vfp_double_packed_mantissa(v) ((v) & ((1ULL << VFP_DOUBLE_MANTISSA_BITS) - 1))
+
+/*
+ * Unpack a double-precision float. Note that this returns the magnitude
+ * of the double-precision float mantissa with the implicit leading 1
+ * restored where necessary, aligned to bit 62.
+ */
+static inline void vfp_double_unpack(struct vfp_double *s, s64 val)
+{
+ u64 significand;
+
+ s->sign = vfp_double_packed_sign(val) >> 48;
+ s->exponent = vfp_double_packed_exponent(val);
+
+ significand = (u64) val;
+ significand = (significand << (64 - VFP_DOUBLE_MANTISSA_BITS)) >> 2;
+ if (s->exponent && s->exponent != 2047)
+ significand |= (1ULL << 62);
+ s->significand = significand;
+}
+
+/*
+ * Re-pack a double-precision float. This assumes that the float is
+ * already normalised such that the MSB is bit 62, _not_ bit 63.
+ */
+static inline s64 vfp_double_pack(struct vfp_double *s)
+{
+ u64 val;
+ val = ((u64)s->sign << 48) +
+ ((u64)s->exponent << VFP_DOUBLE_MANTISSA_BITS) +
+ (s->significand >> VFP_DOUBLE_LOW_BITS);
+ return (s64)val;
+}
+
+static inline int vfp_double_type(struct vfp_double *s)
+{
+ int type = VFP_NUMBER;
+ if (s->exponent == 2047) {
+ if (s->significand == 0)
+ type = VFP_INFINITY;
+ else if (s->significand & VFP_DOUBLE_SIGNIFICAND_QNAN)
+ type = VFP_QNAN;
+ else
+ type = VFP_SNAN;
+ } else if (s->exponent == 0) {
+ if (s->significand == 0)
+ type |= VFP_ZERO;
+ else
+ type |= VFP_DENORMAL;
+ }
+ return type;
+}
+
+u32 vfp_double_normaliseround(int dd, struct vfp_double *vd, u32 fpscr, u32 exceptions, const char *func);
+
+/*
+ * System registers
+ */
+extern u32 vfp_get_sys(unsigned int reg);
+extern void vfp_put_sys(unsigned int reg, u32 val);
+
+u32 vfp_estimate_sqrt_significand(u32 exponent, u32 significand);
+
+/*
+ * A special flag to tell the normalisation code not to normalise.
+ */
+#define VFP_NAN_FLAG 0x100
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpdouble.c
+ *
+ * This code is derived in part from John R. Hauser's SoftFloat library, which
+ * carries the following notice:
+ *
+ * ===========================================================================
+ * This C source file is part of the SoftFloat IEC/IEEE Floating-point
+ * Arithmetic Package, Release 2.
+ *
+ * Written by John R. Hauser. This work was made possible in part by the
+ * International Computer Science Institute, located at Suite 600, 1947 Center
+ * Street, Berkeley, California 94704. Funding was partially provided by the
+ * National Science Foundation under grant MIP-9311980. The original version
+ * of this code was written as part of a project to build a fixed-point vector
+ * processor in collaboration with the University of California at Berkeley,
+ * overseen by Profs. Nelson Morgan and John Wawrzynek. More information
+ * is available through the web page `http://HTTP.CS.Berkeley.EDU/~jhauser/
+ * arithmetic/softfloat.html'.
+ *
+ * THIS SOFTWARE IS DISTRIBUTED AS IS, FOR FREE. Although reasonable effort
+ * has been made to avoid it, THIS SOFTWARE MAY CONTAIN FAULTS THAT WILL AT
+ * TIMES RESULT IN INCORRECT BEHAVIOR. USE OF THIS SOFTWARE IS RESTRICTED TO
+ * PERSONS AND ORGANIZATIONS WHO CAN AND WILL TAKE FULL RESPONSIBILITY FOR ANY
+ * AND ALL LOSSES, COSTS, OR OTHER PROBLEMS ARISING FROM ITS USE.
+ *
+ * Derivative works are acceptable, even for commercial purposes, so long as
+ * (1) they include prominent notice that the work is derivative, and (2) they
+ * include prominent notice akin to these three paragraphs for those parts of
+ * this code that are retained.
+ * ===========================================================================
+ */
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <asm/ptrace.h>
+#include <asm/vfp.h>
+
+#include "vfpinstr.h"
+#include "vfp.h"
+
+static struct vfp_double vfp_double_default_qnan = {
+ .exponent = 2047,
+ .sign = 0,
+ .significand = VFP_DOUBLE_SIGNIFICAND_QNAN,
+};
+
+static void vfp_double_dump(const char *str, struct vfp_double *d)
+{
+ pr_debug("VFP: %s: sign=%d exponent=%d significand=%016llx\n",
+ str, d->sign != 0, d->exponent, d->significand);
+}
+
+static void vfp_double_normalise_denormal(struct vfp_double *vd)
+{
+ int bits = 31 - fls(vd->significand >> 32);
+ if (bits == 31)
+ bits = 62 - fls(vd->significand);
+
+ vfp_double_dump("normalise_denormal: in", vd);
+
+ if (bits) {
+ vd->exponent -= bits - 1;
+ vd->significand <<= bits;
+ }
+
+ vfp_double_dump("normalise_denormal: out", vd);
+}
+
+u32 vfp_double_normaliseround(int dd, struct vfp_double *vd, u32 fpscr, u32 exceptions, const char *func)
+{
+ u64 significand, incr;
+ int exponent, shift, underflow;
+ u32 rmode;
+
+ vfp_double_dump("pack: in", vd);
+
+ /*
+ * Infinities and NaNs are a special case.
+ */
+ if (vd->exponent == 2047 && (vd->significand == 0 || exceptions))
+ goto pack;
+
+ /*
+ * Special-case zero.
+ */
+ if (vd->significand == 0) {
+ vd->exponent = 0;
+ goto pack;
+ }
+
+ exponent = vd->exponent;
+ significand = vd->significand;
+
+ shift = 32 - fls(significand >> 32);
+ if (shift == 32)
+ shift = 64 - fls(significand);
+ if (shift) {
+ exponent -= shift;
+ significand <<= shift;
+ }
+
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: normalised", vd);
+#endif
+
+ /*
+ * Tiny number?
+ */
+ underflow = exponent < 0;
+ if (underflow) {
+ significand = vfp_shiftright64jamming(significand, -exponent);
+ exponent = 0;
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: tiny number", vd);
+#endif
+ if (!(significand & ((1ULL << (VFP_DOUBLE_LOW_BITS + 1)) - 1)))
+ underflow = 0;
+ }
+
+ /*
+ * Select rounding increment.
+ */
+ incr = 0;
+ rmode = fpscr & FPSCR_RMODE_MASK;
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
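+ /* add half an LSB; for an exact tie, back the increment off
+ * by one so that the result rounds to even */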
+ incr = 1ULL << VFP_DOUBLE_LOW_BITS;
+ if ((significand & (1ULL << (VFP_DOUBLE_LOW_BITS + 1))) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vd->sign != 0))
+ incr = (1ULL << (VFP_DOUBLE_LOW_BITS + 1)) - 1;
+
+ pr_debug("VFP: rounding increment = 0x%08llx\n", incr);
+
+ /*
+ * Is our rounding going to overflow?
+ */
+ if ((significand + incr) < significand) {
+ exponent += 1;
+ significand = (significand >> 1) | (significand & 1);
+ incr >>= 1;
+#ifdef DEBUG
+ vd->exponent = exponent;
+ vd->significand = significand;
+ vfp_double_dump("pack: overflow", vd);
+#endif
+ }
+
+ /*
+ * If any of the low bits (which will be shifted out of the
+ * number) are non-zero, the result is inexact.
+ */
+ if (significand & ((1 << (VFP_DOUBLE_LOW_BITS + 1)) - 1))
+ exceptions |= FPSCR_IXC;
+
+ /*
+ * Do our rounding.
+ */
+ significand += incr;
+
+ /*
+ * Infinity?
+ */
+ if (exponent >= 2046) {
+ exceptions |= FPSCR_OFC | FPSCR_IXC;
+ if (incr == 0) {
+ vd->exponent = 2045;
+ vd->significand = 0x7fffffffffffffffULL;
+ } else {
+ vd->exponent = 2047; /* infinity */
+ vd->significand = 0;
+ }
+ } else {
+ if (significand >> (VFP_DOUBLE_LOW_BITS + 1) == 0)
+ exponent = 0;
+ if (exponent || significand > 0x8000000000000000ULL)
+ underflow = 0;
+ if (underflow)
+ exceptions |= FPSCR_UFC;
+ vd->exponent = exponent;
+ vd->significand = significand >> 1;
+ }
+
+ pack:
+ vfp_double_dump("pack: final", vd);
+ {
+ s64 d = vfp_double_pack(vd);
+ pr_debug("VFP: %s: d(d%d)=%016llx exceptions=%08x\n", func,
+ dd, d, exceptions);
+ vfp_put_double(dd, d);
+ }
+ return exceptions & ~VFP_NAN_FLAG;
+}
+
+/*
+ * Propagate the NaN, setting exceptions if it is signalling.
+ * 'n' is always a NaN. 'm' may be a number, NaN or infinity.
+ */
+static u32
+vfp_propagate_nan(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ struct vfp_double *nan;
+ int tn, tm = 0;
+
+ tn = vfp_double_type(vdn);
+
+ if (vdm)
+ tm = vfp_double_type(vdm);
+
+ if (fpscr & FPSCR_DEFAULT_NAN)
+ /*
+ * Default NaN mode - always returns a quiet NaN
+ */
+ nan = &vfp_double_default_qnan;
+ else {
+ /*
+ * Contemporary mode - select the first signalling
+ * NaN, or if neither is signalling, the first
+ * quiet NaN.
+ */
+ if (tn == VFP_SNAN || (tm != VFP_SNAN && tn == VFP_QNAN))
+ nan = vdn;
+ else
+ nan = vdm;
+ /*
+ * Make the NaN quiet.
+ */
+ nan->significand |= VFP_DOUBLE_SIGNIFICAND_QNAN;
+ }
+
+ *vdd = *nan;
+
+ /*
+ * If one was a signalling NAN, raise invalid operation.
+ */
+ return tn == VFP_SNAN || tm == VFP_SNAN ? FPSCR_IOC : VFP_NAN_FLAG;
+}
+
+/*
+ * Extended operations
+ */
+static u32 vfp_double_fabs(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_double_packed_abs(vfp_get_double(dm)));
+ return 0;
+}
+
+static u32 vfp_double_fcpy(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_get_double(dm));
+ return 0;
+}
+
+static u32 vfp_double_fneg(int dd, int unused, int dm, u32 fpscr)
+{
+ vfp_put_double(dd, vfp_double_packed_negate(vfp_get_double(dm)));
+ return 0;
+}
+
+static u32 vfp_double_fsqrt(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm, vdd;
+ int ret, tm;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ tm = vfp_double_type(&vdm);
+ if (tm & (VFP_NAN|VFP_INFINITY)) {
+ struct vfp_double *vdp = &vdd;
+
+ if (tm & VFP_NAN)
+ ret = vfp_propagate_nan(vdp, &vdm, NULL, fpscr);
+ else if (vdm.sign == 0) {
+ sqrt_copy:
+ vdp = &vdm;
+ ret = 0;
+ } else {
+ sqrt_invalid:
+ vdp = &vfp_double_default_qnan;
+ ret = FPSCR_IOC;
+ }
+ vfp_put_double(dd, vfp_double_pack(vdp));
+ return ret;
+ }
+
+ /*
+ * sqrt(+/- 0) == +/- 0
+ */
+ if (tm & VFP_ZERO)
+ goto sqrt_copy;
+
+ /*
+ * Normalise a denormalised number
+ */
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * sqrt(<0) = invalid
+ */
+ if (vdm.sign)
+ goto sqrt_invalid;
+
+ vfp_double_dump("sqrt", &vdm);
+
+ /*
+ * Estimate the square root.
+ */
+ vdd.sign = 0;
+ vdd.exponent = ((vdm.exponent - 1023) >> 1) + 1023;
+ vdd.significand = (u64)vfp_estimate_sqrt_significand(vdm.exponent, vdm.significand >> 32) << 31;
+
+ vfp_double_dump("sqrt estimate1", &vdd);
+
+ vdm.significand >>= 1 + (vdm.exponent & 1);
+ vdd.significand += 2 + vfp_estimate_div128to64(vdm.significand, 0, vdd.significand);
+
+ vfp_double_dump("sqrt estimate2", &vdd);
+
+ /*
+ * And now adjust.
+ */
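+ /*
+ * If the estimate sits near a rounding boundary, square it and
+ * compare with the input: decrement until the remainder is
+ * non-negative, then OR in a sticky bit recording any non-zero
+ * remainder.
+ */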
+ if ((vdd.significand & VFP_DOUBLE_LOW_BITS_MASK) <= 5) {
+ if (vdd.significand < 2) {
+ vdd.significand = ~0ULL;
+ } else {
+ u64 termh, terml, remh, reml;
+ vdm.significand <<= 2;
+ mul64to128(&termh, &terml, vdd.significand, vdd.significand);
+ sub128(&remh, &reml, vdm.significand, 0, termh, terml);
+ while ((s64)remh < 0) {
+ vdd.significand -= 1;
+ shift64left(&termh, &terml, vdd.significand);
+ terml |= 1;
+ add128(&remh, &reml, remh, reml, termh, terml);
+ }
+ vdd.significand |= (remh | reml) != 0;
+ }
+ }
+ vdd.significand = vfp_shiftright64jamming(vdd.significand, 1);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, 0, "fsqrt");
+}
+
+/*
+ * Equal := ZC
+ * Less than := N
+ * Greater than := C
+ * Unordered := CV
+ */
+static u32 vfp_compare(int dd, int signal_on_qnan, int dm, u32 fpscr)
+{
+ s64 d, m;
+ u32 ret = 0;
+
+ m = vfp_get_double(dm);
+ if (vfp_double_packed_exponent(m) == 2047 && vfp_double_packed_mantissa(m)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_double_packed_mantissa(m) & (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ d = vfp_get_double(dd);
+ if (vfp_double_packed_exponent(d) == 2047 && vfp_double_packed_mantissa(d)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_double_packed_mantissa(d) & (1ULL << (VFP_DOUBLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (ret == 0) {
+ if (d == m || vfp_double_packed_abs(d | m) == 0) {
+ /*
+ * equal
+ */
+ ret |= FPSCR_Z | FPSCR_C;
+ } else if (vfp_double_packed_sign(d ^ m)) {
+ /*
+ * different signs
+ */
+ if (vfp_double_packed_sign(d))
+ /*
+ * d is negative, so d < m
+ */
+ ret |= FPSCR_N;
+ else
+ /*
+ * d is positive, so d > m
+ */
+ ret |= FPSCR_C;
+ } else if ((vfp_double_packed_sign(d) != 0) ^ (d < m)) {
+ /*
+ * d < m
+ */
+ ret |= FPSCR_N;
+ } else if ((vfp_double_packed_sign(d) != 0) ^ (d > m)) {
+ /*
+ * d > m
+ */
+ ret |= FPSCR_C;
+ }
+ }
+
+ return ret;
+}
+
+static u32 vfp_double_fcmp(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 0, dm, fpscr);
+}
+
+static u32 vfp_double_fcmpe(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 1, dm, fpscr);
+}
+
+static u32 vfp_double_fcmpz(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 0, VFP_REG_ZERO, fpscr);
+}
+
+static u32 vfp_double_fcmpez(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_compare(dd, 1, VFP_REG_ZERO, fpscr);
+}
+
+static u32 vfp_double_fcvts(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ struct vfp_single vsd;
+ int tm;
+ u32 exceptions = 0;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ tm = vfp_double_type(&vdm);
+
+ /*
+ * If we have a signalling NaN, signal invalid operation.
+ */
+ if (tm == VFP_SNAN)
+ exceptions = FPSCR_IOC;
+
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ vsd.sign = vdm.sign;
+ vsd.significand = vfp_hi64to32jamming(vdm.significand);
+
+ /*
+ * If we have an infinity or a NaN, the exponent must be 255
+ */
+ if (tm & (VFP_INFINITY|VFP_NAN)) {
+ vsd.exponent = 255;
+ if (tm & VFP_NAN)
+ vsd.significand |= VFP_SINGLE_SIGNIFICAND_QNAN;
+ goto pack_nan;
+ } else if (tm & VFP_ZERO)
+ vsd.exponent = 0;
+ else
+ vsd.exponent = vdm.exponent - (1023 - 127);
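+ /* (1023 - 127) rebiases from the double to the single exponent bias */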
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fcvts");
+
+ pack_nan:
+ vfp_put_float(sd, vfp_single_pack(&vsd));
+ return exceptions;
+}
+
+static u32 vfp_double_fuito(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 m = vfp_get_float(dm);
+
+ vdm.sign = 0;
+ vdm.exponent = 1023 + 63 - 1;
+ vdm.significand = (u64)m;
+
+ return vfp_double_normaliseround(dd, &vdm, fpscr, 0, "fuito");
+}
+
+static u32 vfp_double_fsito(int dd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 m = vfp_get_float(dm);
+
+ vdm.sign = (m & 0x80000000) >> 16;
+ vdm.exponent = 1023 + 63 - 1;
+ vdm.significand = vdm.sign ? -m : m;
+
+ return vfp_double_normaliseround(dd, &vdm, fpscr, 0, "fsito");
+}
+
+static u32 vfp_double_ftoui(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+ int tm;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ /*
+ * Do we have a denormalised number?
+ */
+ tm = vfp_double_type(&vdm);
+ if (tm & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (tm & VFP_NAN)
+ vdm.sign = 0;
+
+ if (vdm.exponent >= 1023 + 32) {
+ d = vdm.sign ? 0 : 0xffffffff;
+ exceptions = FPSCR_IOC;
+ } else if (vdm.exponent >= 1023 - 1) {
+ int shift = 1023 + 63 - vdm.exponent;
+ u64 rem, incr = 0;
+
+ /*
+ * 2^0 <= m < 2^32-2^8
+ */
+ d = (vdm.significand << 1) >> shift;
+ rem = vdm.significand << (65 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x8000000000000000ULL;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vdm.sign != 0)) {
+ incr = ~0ULL;
+ }
+
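+ /*
+ * Unsigned wraparound in "rem + incr" below indicates a carry out
+ * of the fraction, i.e. rounding bumps the integer part by one.
+ */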
+ if ((rem + incr) < rem) {
+ if (d < 0xffffffff)
+ d += 1;
+ else
+ exceptions |= FPSCR_IOC;
+ }
+
+ if (d && vdm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+ } else {
+ d = 0;
+ if (vdm.exponent | vdm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vdm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vdm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ }
+ }
+ }
+
+ pr_debug("VFP: ftoui: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, d);
+
+ return exceptions;
+}
+
+static u32 vfp_double_ftouiz(int sd, int unused, int dm, u32 fpscr)
+{
+ return vfp_double_ftoui(sd, unused, dm, FPSCR_ROUND_TOZERO);
+}
+
+static u32 vfp_double_ftosi(int sd, int unused, int dm, u32 fpscr)
+{
+ struct vfp_double vdm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ vfp_double_dump("VDM", &vdm);
+
+ /*
+ * Do we have denormalised number?
+ */
+ if (vfp_double_type(&vdm) & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (vdm.exponent >= 1023 + 32) {
+ d = 0x7fffffff;
+ if (vdm.sign)
+ d = ~d;
+ exceptions |= FPSCR_IOC;
+ } else if (vdm.exponent >= 1023 - 1) {
+ int shift = 1023 + 63 - vdm.exponent; /* 58 */
+ u64 rem, incr = 0;
+
+ d = (vdm.significand << 1) >> shift;
+ rem = vdm.significand << (65 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x8000000000000000ULL;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vdm.sign != 0)) {
+ incr = ~0ULL;
+ }
+
+ if ((rem + incr) < rem && d < 0xffffffff)
+ d += 1;
+ if (d > 0x7fffffff + (vdm.sign != 0)) {
+ d = 0x7fffffff + (vdm.sign != 0);
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
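+
+ /*
+ * Note the asymmetric saturation limit: 0x7fffffff for positive
+ * results but 0x80000000 in magnitude for negative ones, hence
+ * the "+ (vdm.sign != 0)" above.
+ */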
+
+ if (vdm.sign)
+ d = -d;
+ } else {
+ d = 0;
+ if (vdm.exponent | vdm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vdm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vdm.sign)
+ d = -1;
+ }
+ }
+
+ pr_debug("VFP: ftosi: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, (s32)d);
+
+ return exceptions;
+}
+
+static u32 vfp_double_ftosiz(int dd, int unused, int dm, u32 fpscr)
+{
+ return vfp_double_ftosi(dd, unused, dm, FPSCR_ROUND_TOZERO);
+}
+
+
+static u32 (* const fop_extfns[32])(int dd, int unused, int dm, u32 fpscr) = {
+ [FEXT_TO_IDX(FEXT_FCPY)] = vfp_double_fcpy,
+ [FEXT_TO_IDX(FEXT_FABS)] = vfp_double_fabs,
+ [FEXT_TO_IDX(FEXT_FNEG)] = vfp_double_fneg,
+ [FEXT_TO_IDX(FEXT_FSQRT)] = vfp_double_fsqrt,
+ [FEXT_TO_IDX(FEXT_FCMP)] = vfp_double_fcmp,
+ [FEXT_TO_IDX(FEXT_FCMPE)] = vfp_double_fcmpe,
+ [FEXT_TO_IDX(FEXT_FCMPZ)] = vfp_double_fcmpz,
+ [FEXT_TO_IDX(FEXT_FCMPEZ)] = vfp_double_fcmpez,
+ [FEXT_TO_IDX(FEXT_FCVT)] = vfp_double_fcvts,
+ [FEXT_TO_IDX(FEXT_FUITO)] = vfp_double_fuito,
+ [FEXT_TO_IDX(FEXT_FSITO)] = vfp_double_fsito,
+ [FEXT_TO_IDX(FEXT_FTOUI)] = vfp_double_ftoui,
+ [FEXT_TO_IDX(FEXT_FTOUIZ)] = vfp_double_ftouiz,
+ [FEXT_TO_IDX(FEXT_FTOSI)] = vfp_double_ftosi,
+ [FEXT_TO_IDX(FEXT_FTOSIZ)] = vfp_double_ftosiz,
+};
+
+static u32
+vfp_double_fadd_nonnumber(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ struct vfp_double *vdp;
+ u32 exceptions = 0;
+ int tn, tm;
+
+ tn = vfp_double_type(vdn);
+ tm = vfp_double_type(vdm);
+
+ if (tn & tm & VFP_INFINITY) {
+ /*
+ * Two infinities. Are they different signs?
+ */
+ if (vdn->sign ^ vdm->sign) {
+ /*
+ * different signs -> invalid
+ */
+ exceptions = FPSCR_IOC;
+ vdp = &vfp_double_default_qnan;
+ } else {
+ /*
+ * same signs -> valid
+ */
+ vdp = vdn;
+ }
+ } else if (tn & VFP_INFINITY && tm & VFP_NUMBER) {
+ /*
+ * One infinity and one number -> infinity
+ */
+ vdp = vdn;
+ } else {
+ /*
+ * 'n' is a NaN of some type
+ */
+ return vfp_propagate_nan(vdd, vdn, vdm, fpscr);
+ }
+ *vdd = *vdp;
+ return exceptions;
+}
+
+static u32
+vfp_double_add(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ u32 exp_diff;
+ u64 m_sig;
+
+ if (vdn->significand & (1ULL << 63) ||
+ vdm->significand & (1ULL << 63)) {
+ pr_info("VFP: bad FP values in %s\n", __func__);
+ vfp_double_dump("VDN", vdn);
+ vfp_double_dump("VDM", vdm);
+ }
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vdn->exponent < vdm->exponent) {
+ struct vfp_double *t = vdn;
+ vdn = vdm;
+ vdm = t;
+ }
+
+ /*
+ * Is 'n' an infinity or a NaN? Note that 'm' may be a number,
+ * infinity or a NaN here.
+ */
+ if (vdn->exponent == 2047)
+ return vfp_double_fadd_nonnumber(vdd, vdn, vdm, fpscr);
+
+ /*
+ * We have two proper numbers, where 'vdn' is the larger magnitude.
+ *
+ * Copy 'n' to 'd' before doing the arithmetic.
+ */
+ *vdd = *vdn;
+
+ /*
+ * Align 'm' with the result.
+ */
+ exp_diff = vdn->exponent - vdm->exponent;
+ m_sig = vfp_shiftright64jamming(vdm->significand, exp_diff);
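+ /*
+ * The "jamming" shift ORs any bits shifted out into the least
+ * significant bit of the result, keeping a sticky record of
+ * inexactness for the rounding step in normaliseround.
+ */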
+
+ /*
+ * If the signs are different, we are really subtracting.
+ */
+ if (vdn->sign ^ vdm->sign) {
+ m_sig = vdn->significand - m_sig;
+ if ((s64)m_sig < 0) {
+ vdd->sign = vfp_sign_negate(vdd->sign);
+ m_sig = -m_sig;
+ }
+ } else {
+ m_sig += vdn->significand;
+ }
+ vdd->significand = m_sig;
+
+ return 0;
+}
+
+static u32
+vfp_double_multiply(struct vfp_double *vdd, struct vfp_double *vdn,
+ struct vfp_double *vdm, u32 fpscr)
+{
+ vfp_double_dump("VDN", vdn);
+ vfp_double_dump("VDM", vdm);
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vdn->exponent < vdm->exponent) {
+ struct vfp_double *t = vdn;
+ vdn = vdm;
+ vdm = t;
+ pr_debug("VFP: swapping M <-> N\n");
+ }
+
+ vdd->sign = vdn->sign ^ vdm->sign;
+
+ /*
+ * If 'n' is an infinity or NaN, handle it. 'm' may be anything.
+ */
+ if (vdn->exponent == 2047) {
+ if (vdn->significand || (vdm->exponent == 2047 && vdm->significand))
+ return vfp_propagate_nan(vdd, vdn, vdm, fpscr);
+ if ((vdm->exponent | vdm->significand) == 0) {
+ *vdd = vfp_double_default_qnan;
+ return FPSCR_IOC;
+ }
+ vdd->exponent = vdn->exponent;
+ vdd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * If 'm' is zero, the result is always zero. In this case,
+ * 'n' may be zero or a number, but it doesn't matter which.
+ */
+ if ((vdm->exponent | vdm->significand) == 0) {
+ vdd->exponent = 0;
+ vdd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * We add 2 to the destination exponent for the same reason
+ * as the addition case - though this time we have +1 from
+ * each input operand.
+ */
+ vdd->exponent = vdn->exponent + vdm->exponent - 1023 + 2;
+ vdd->significand = vfp_hi64multiply64(vdn->significand, vdm->significand);
+
+ vfp_double_dump("VDD", vdd);
+ return 0;
+}
+
+#define NEG_MULTIPLY (1 << 0)
+#define NEG_SUBTRACT (1 << 1)
+
+static u32
+vfp_double_multiply_accumulate(int dd, int dn, int dm, u32 fpscr, u32 negate, char *func)
+{
+ struct vfp_double vdd, vdp, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdp, &vdn, &vdm, fpscr);
+ if (negate & NEG_MULTIPLY)
+ vdp.sign = vfp_sign_negate(vdp.sign);
+
+ vfp_double_unpack(&vdn, vfp_get_double(dd));
+ if (negate & NEG_SUBTRACT)
+ vdn.sign = vfp_sign_negate(vdn.sign);
+
+ exceptions |= vfp_double_add(&vdd, &vdn, &vdp, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, func);
+}
+
+/*
+ * Standard operations
+ */
+
+/*
+ * sd = sd + (sn * sm)
+ */
+static u32 vfp_double_fmac(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, 0, "fmac");
+}
+
+/*
+ * sd = sd - (sn * sm)
+ */
+static u32 vfp_double_fnmac(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_MULTIPLY, "fnmac");
+}
+
+/*
+ * sd = -sd + (sn * sm)
+ */
+static u32 vfp_double_fmsc(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_SUBTRACT, "fmsc");
+}
+
+/*
+ * sd = -sd - (sn * sm)
+ */
+static u32 vfp_double_fnmsc(int dd, int dn, int dm, u32 fpscr)
+{
+ return vfp_double_multiply_accumulate(dd, dn, dm, fpscr, NEG_SUBTRACT | NEG_MULTIPLY, "fnmsc");
+}
+
+/*
+ * sd = sn * sm
+ */
+static u32 vfp_double_fmul(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdd, &vdn, &vdm, fpscr);
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fmul");
+}
+
+/*
+ * sd = -(sn * sm)
+ */
+static u32 vfp_double_fnmul(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_multiply(&vdd, &vdn, &vdm, fpscr);
+ vdd.sign = vfp_sign_negate(vdd.sign);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fnmul");
+}
+
+/*
+ * sd = sn + sm
+ */
+static u32 vfp_double_fadd(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ exceptions = vfp_double_add(&vdd, &vdn, &vdm, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fadd");
+}
+
+/*
+ * sd = sn - sm
+ */
+static u32 vfp_double_fsub(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ if (vdn.exponent == 0 && vdn.significand)
+ vfp_double_normalise_denormal(&vdn);
+
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+ if (vdm.exponent == 0 && vdm.significand)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * Subtraction is like addition, but with a negated operand.
+ */
+ vdm.sign = vfp_sign_negate(vdm.sign);
+
+ exceptions = vfp_double_add(&vdd, &vdn, &vdm, fpscr);
+
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fsub");
+}
+
+/*
+ * sd = sn / sm
+ */
+static u32 vfp_double_fdiv(int dd, int dn, int dm, u32 fpscr)
+{
+ struct vfp_double vdd, vdn, vdm;
+ u32 exceptions = 0;
+ int tm, tn;
+
+ vfp_double_unpack(&vdn, vfp_get_double(dn));
+ vfp_double_unpack(&vdm, vfp_get_double(dm));
+
+ vdd.sign = vdn.sign ^ vdm.sign;
+
+ tn = vfp_double_type(&vdn);
+ tm = vfp_double_type(&vdm);
+
+ /*
+ * Is n a NAN?
+ */
+ if (tn & VFP_NAN)
+ goto vdn_nan;
+
+ /*
+ * Is m a NAN?
+ */
+ if (tm & VFP_NAN)
+ goto vdm_nan;
+
+ /*
+ * If n and m are infinity, the result is invalid
+ * If n and m are zero, the result is invalid
+ */
+ if (tm & tn & (VFP_INFINITY|VFP_ZERO))
+ goto invalid;
+
+ /*
+ * If n is infinity, the result is infinity
+ */
+ if (tn & VFP_INFINITY)
+ goto infinity;
+
+ /*
+ * If m is zero, raise a div0 exception
+ */
+ if (tm & VFP_ZERO)
+ goto divzero;
+
+ /*
+ * If m is infinity, or n is zero, the result is zero
+ */
+ if (tm & VFP_INFINITY || tn & VFP_ZERO)
+ goto zero;
+
+ if (tn & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdn);
+ if (tm & VFP_DENORMAL)
+ vfp_double_normalise_denormal(&vdm);
+
+ /*
+ * Ok, we have two numbers, we can perform division.
+ */
+ vdd.exponent = vdn.exponent - vdm.exponent + 1023 - 1;
+ vdm.significand <<= 1;
+ if (vdm.significand <= (2 * vdn.significand)) {
+ vdn.significand >>= 1;
+ vdd.exponent++;
+ }
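+ /*
+ * The scaling above ensures n is below m (as in SoftFloat's
+ * float64_div), so the 64-bit quotient estimate cannot overflow;
+ * the exponent is bumped to compensate.
+ */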
+ vdd.significand = vfp_estimate_div128to64(vdn.significand, 0, vdm.significand);
+ if ((vdd.significand & 0x1ff) <= 2) {
+ u64 termh, terml, remh, reml;
+ mul64to128(&termh, &terml, vdm.significand, vdd.significand);
+ sub128(&remh, &reml, vdn.significand, 0, termh, terml);
+ while ((s64)remh < 0) {
+ vdd.significand -= 1;
+ add128(&remh, &reml, remh, reml, 0, vdm.significand);
+ }
+ vdd.significand |= (reml != 0);
+ }
+ return vfp_double_normaliseround(dd, &vdd, fpscr, 0, "fdiv");
+
+ vdn_nan:
+ exceptions = vfp_propagate_nan(&vdd, &vdn, &vdm, fpscr);
+ pack:
+ vfp_put_double(dd, vfp_double_pack(&vdd));
+ return exceptions;
+
+ vdm_nan:
+ exceptions = vfp_propagate_nan(&vdd, &vdm, &vdn, fpscr);
+ goto pack;
+
+ zero:
+ vdd.exponent = 0;
+ vdd.significand = 0;
+ goto pack;
+
+ divzero:
+ exceptions = FPSCR_DZC;
+ infinity:
+ vdd.exponent = 2047;
+ vdd.significand = 0;
+ goto pack;
+
+ invalid:
+ vfp_put_double(dd, vfp_double_pack(&vfp_double_default_qnan));
+ return FPSCR_IOC;
+}
+
+static u32 (* const fop_fns[16])(int dd, int dn, int dm, u32 fpscr) = {
+ [FOP_TO_IDX(FOP_FMAC)] = vfp_double_fmac,
+ [FOP_TO_IDX(FOP_FNMAC)] = vfp_double_fnmac,
+ [FOP_TO_IDX(FOP_FMSC)] = vfp_double_fmsc,
+ [FOP_TO_IDX(FOP_FNMSC)] = vfp_double_fnmsc,
+ [FOP_TO_IDX(FOP_FMUL)] = vfp_double_fmul,
+ [FOP_TO_IDX(FOP_FNMUL)] = vfp_double_fnmul,
+ [FOP_TO_IDX(FOP_FADD)] = vfp_double_fadd,
+ [FOP_TO_IDX(FOP_FSUB)] = vfp_double_fsub,
+ [FOP_TO_IDX(FOP_FDIV)] = vfp_double_fdiv,
+};
+
+#define FREG_BANK(x) ((x) & 0x0c)
+#define FREG_IDX(x) ((x) & 3)
+
+u32 vfp_double_cpdo(u32 inst, u32 fpscr)
+{
+ u32 op = inst & FOP_MASK;
+ u32 exceptions = 0;
+ unsigned int dd = vfp_get_sd(inst);
+ unsigned int dn = vfp_get_sn(inst);
+ unsigned int dm = vfp_get_sm(inst);
+ unsigned int vecitr, veclen, vecstride;
+ u32 (*fop)(int, int, s32, u32);
+
+ veclen = fpscr & FPSCR_LENGTH_MASK;
+ vecstride = (1 + ((fpscr & FPSCR_STRIDE_MASK) == FPSCR_STRIDE_MASK)) * 2;
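+ /*
+ * The stride is scaled by 2 here because double registers are
+ * indexed by their even single-precision register number.
+ */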
+
+ /*
+ * If destination bank is zero, vector length is always '1'.
+ * ARM DDI0100F C5.1.3, C5.3.2.
+ */
+ if (FREG_BANK(dd) == 0)
+ veclen = 0;
+
+ pr_debug("VFP: vecstride=%u veclen=%u\n", vecstride,
+ (veclen >> FPSCR_LENGTH_BIT) + 1);
+
+ fop = (op == FOP_EXT) ? fop_extfns[dn] : fop_fns[FOP_TO_IDX(op)];
+ if (!fop)
+ goto invalid;
+
+ for (vecitr = 0; vecitr <= veclen; vecitr += 1 << FPSCR_LENGTH_BIT) {
+ u32 except;
+
+ if (op == FOP_EXT)
+ pr_debug("VFP: itr%d (d%u.%u) = op[%u] (d%u.%u)\n",
+ vecitr >> FPSCR_LENGTH_BIT,
+ dd >> 1, dd & 1, dn,
+ dm >> 1, dm & 1);
+ else
+ pr_debug("VFP: itr%d (d%u.%u) = (d%u.%u) op[%u] (d%u.%u)\n",
+ vecitr >> FPSCR_LENGTH_BIT,
+ dd >> 1, dd & 1,
+ dn >> 1, dn & 1,
+ FOP_TO_IDX(op),
+ dm >> 1, dm & 1);
+
+ except = fop(dd, dn, dm, fpscr);
+ pr_debug("VFP: itr%d: exceptions=%08x\n",
+ vecitr >> FPSCR_LENGTH_BIT, except);
+
+ exceptions |= except;
+
+ /*
+ * This ensures that comparisons only operate on scalars;
+ * comparisons always return with one FPSCR status bit set.
+ */
+ if (except & (FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V))
+ break;
+
+ /*
+ * CHECK: It appears to be undefined whether we stop when
+ * we encounter an exception. We continue.
+ */
+
+ dd = FREG_BANK(dd) + ((FREG_IDX(dd) + vecstride) & 6);
+ dn = FREG_BANK(dn) + ((FREG_IDX(dn) + vecstride) & 6);
+ if (FREG_BANK(dm) != 0)
+ dm = FREG_BANK(dm) + ((FREG_IDX(dm) + vecstride) & 6);
+ }
+ return exceptions;
+
+ invalid:
+ return ~0;
+}
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfphw.S
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This code is called from the kernel's undefined instruction trap.
+ * r9 holds the return address for successful handling.
+ * lr holds the return address for unrecognised instructions.
+ * r10 points at the start of the private FP workspace in the thread structure
+ * sp points to a struct pt_regs (as defined in include/asm/proc/ptrace.h)
+ */
+#include <asm/thread_info.h>
+#include <asm/vfpmacros.h>
+#include "../kernel/entry-header.S"
+
+ .macro DBGSTR, str
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+ .macro DBGSTR1, str, arg
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ mov r1, \arg
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+ .macro DBGSTR3, str, arg1, arg2, arg3
+#ifdef DEBUG
+ stmfd sp!, {r0-r3, ip, lr}
+ mov r3, \arg3
+ mov r2, \arg2
+ mov r1, \arg1
+ add r0, pc, #4
+ bl printk
+ b 1f
+ .asciz "<7>VFP: \str\n"
+ .balign 4
+1: ldmfd sp!, {r0-r3, ip, lr}
+#endif
+ .endm
+
+
+@ VFP hardware support entry point.
+@
+@ r0 = faulted instruction
+@ r5 = faulted PC+4
+@ r9 = successful return
+@ r10 = vfp_state union
+@ lr = failure return
+
+ .globl vfp_support_entry
+vfp_support_entry:
+ DBGSTR3 "instr %08x pc %08x state %p", r0, r5, r10
+
+ VFPFMRX r1, FPEXC @ Is the VFP enabled?
+ DBGSTR1 "fpexc %08x", r1
+ tst r1, #FPEXC_ENABLE
+ bne look_for_VFP_exceptions @ VFP is already enabled
+
+ DBGSTR1 "enable %x", r10
+ ldr r3, last_VFP_context_address
+ orr r1, r1, #FPEXC_ENABLE @ user FPEXC has the enable bit set
+ ldr r4, [r3] @ last_VFP_context pointer
+ bic r2, r1, #FPEXC_EXCEPTION @ make sure exceptions are disabled
+ cmp r4, r10
+ beq check_for_exception @ we are returning to the same
+ @ process, so the registers are
+ @ still there. In this case, we do
+ @ not want to drop a pending exception.
+
+ VFPFMXR FPEXC, r2 @ enable VFP, disable any pending
+ @ exceptions, so we can get at the
+ @ rest of it
+
+ @ Save out the current registers to the old thread state
+
+ DBGSTR1 "save old state %p", r4
+ cmp r4, #0
+ beq no_old_VFP_process
+ VFPFMRX r2, FPSCR @ current status
+ VFPFMRX r6, FPINST @ FPINST (always there, rev0 onwards)
+ tst r1, #FPEXC_FPV2 @ is there an FPINST2 to read?
+ VFPFMRX r8, FPINST2, NE @ FPINST2 if needed - avoids reading
+ @ nonexistent reg on rev0
+ VFPFSTMIA r4 @ save the working registers
+ add r4, r4, #8*16+4 @ advance r4 past the register
+ @ dump to the control word area
+ stmia r4, {r1, r2, r6, r8} @ save FPEXC, FPSCR, FPINST, FPINST2
+
+no_old_VFP_process:
+ DBGSTR1 "load state %p", r10
+ str r10, [r3] @ update the last_VFP_context pointer
+ @ Load the saved state back into the VFP
+ add r4, r10, #8*16+4
+ ldmia r4, {r1, r2, r6, r8} @ load FPEXC, FPSCR, FPINST, FPINST2
+ VFPFLDMIA r10 @ reload the working registers while
+ @ FPEXC is in a safe state
+ tst r1, #FPEXC_FPV2 @ is there an FPINST2 to write?
+ VFPFMXR FPINST2, r8, NE @ FPINST2 if needed - avoids writing
+ @ nonexistent reg on rev0
+ VFPFMXR FPINST, r6
+ VFPFMXR FPSCR, r2 @ restore status
+
+check_for_exception:
+ tst r1, #FPEXC_EXCEPTION
+ bne process_exception @ handle the pending exception
+ @ before retrying; branch out before
+ @ setting an FPEXC that stops us
+ @ reading the VFP registers
+ VFPFMXR FPEXC, r1 @ restore FPEXC last
+ sub r5, r5, #4
+ str r5, [sp, #S_PC] @ retry the instruction
+ mov pc, r9 @ we think we have handled things
+
+
+look_for_VFP_exceptions:
+ tst r1, #FPEXC_EXCEPTION
+ bne process_exception
+ VFPFMRX r2, FPSCR
+ tst r2, #FPSCR_IXE @ IXE doesn't set FPEXC_EXCEPTION!
+ bne process_exception
+
+ @ Fall through - hand on to the next handler: the coprocessor
+ @ instruction was not recognised by the VFP
+
+ DBGSTR "not VFP"
+ mov pc, lr
+
+process_exception:
+ DBGSTR "bounce"
+ sub r5, r5, #4
+ str r5, [sp, #S_PC] @ retry the instruction on exit from
+ @ the imprecise exception handling in
+ @ the support code
+ mov r2, sp @ nothing stacked - regdump is at TOS
+ mov lr, r9 @ setup for a return to the user code.
+
+ @ Now call the C code to package up the bounce to the support code
+ @ r0 holds the trigger instruction
+ @ r1 holds the FPEXC value
+ @ r2 pointer to register dump
+ b VFP9_bounce @ we have handled this - the support
+ @ code will raise an exception if
+ @ required. If not, the user code will
+ @ retry the faulted instruction
+
+last_VFP_context_address:
+ .word last_VFP_context
+
+ .globl vfp_get_float
+vfp_get_float:
+ add pc, pc, r0, lsl #3
+ mov r0, r0
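+@ The "add pc, pc, r0, lsl #3" above indexes a jump table of 8-byte
+@ entries: reading pc on ARM yields the current instruction + 8, so
+@ r0 = 0 lands on the first .irp entry and the "mov r0, r0" nop
+@ merely pads the slot in between.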
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mrc p10, 0, r0, c\dr, c0, 0 @ fmrs r0, s0
+ mov pc, lr
+ mrc p10, 0, r0, c\dr, c0, 4 @ fmrs r0, s1
+ mov pc, lr
+ .endr
+
+ .globl vfp_put_float
+vfp_put_float:
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mcr p10, 0, r1, c\dr, c0, 0 @ fmsr s0, r1
+ mov pc, lr
+ mcr p10, 0, r1, c\dr, c0, 4 @ fmsr s1, r1
+ mov pc, lr
+ .endr
+
+ .globl vfp_get_double
+vfp_get_double:
+ mov r0, r0, lsr #1
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mrrc p10, 1, r0, r1, c\dr @ fmrrd r0, r1, d\dr
+ mov pc, lr
+ .endr
+
+ @ virtual register 16 for compare with zero
+ mov r0, #0
+ mov r1, #0
+ mov pc, lr
+
+ .globl vfp_put_double
+vfp_put_double:
+ mov r0, r0, lsr #1
+ add pc, pc, r0, lsl #3
+ mov r0, r0
+ .irp dr,0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
+ mcrr p10, 1, r1, r2, c\dr @ fmdrr d\dr, r1, r2
+ mov pc, lr
+ .endr
--- /dev/null
+/*
+ * linux/arch/arm/vfp/vfpsingle.c
+ *
+ * This code is derived in part from John R. Hauser's SoftFloat library, which
+ * carries the following notice:
+ *
+ * ===========================================================================
+ * This C source file is part of the SoftFloat IEC/IEEE Floating-point
+ * Arithmetic Package, Release 2.
+ *
+ * Written by John R. Hauser. This work was made possible in part by the
+ * International Computer Science Institute, located at Suite 600, 1947 Center
+ * Street, Berkeley, California 94704. Funding was partially provided by the
+ * National Science Foundation under grant MIP-9311980. The original version
+ * of this code was written as part of a project to build a fixed-point vector
+ * processor in collaboration with the University of California at Berkeley,
+ * overseen by Profs. Nelson Morgan and John Wawrzynek. More information
+ * is available through the web page `http://HTTP.CS.Berkeley.EDU/~jhauser/
+ * arithmetic/softfloat.html'.
+ *
+ * THIS SOFTWARE IS DISTRIBUTED AS IS, FOR FREE. Although reasonable effort
+ * has been made to avoid it, THIS SOFTWARE MAY CONTAIN FAULTS THAT WILL AT
+ * TIMES RESULT IN INCORRECT BEHAVIOR. USE OF THIS SOFTWARE IS RESTRICTED TO
+ * PERSONS AND ORGANIZATIONS WHO CAN AND WILL TAKE FULL RESPONSIBILITY FOR ANY
+ * AND ALL LOSSES, COSTS, OR OTHER PROBLEMS ARISING FROM ITS USE.
+ *
+ * Derivative works are acceptable, even for commercial purposes, so long as
+ * (1) they include prominent notice that the work is derivative, and (2) they
+ * include prominent notice akin to these three paragraphs for those parts of
+ * this code that are retained.
+ * ===========================================================================
+ */
+#include <linux/kernel.h>
+#include <linux/bitops.h>
+#include <asm/ptrace.h>
+#include <asm/vfp.h>
+
+#include "vfpinstr.h"
+#include "vfp.h"
+
+static struct vfp_single vfp_single_default_qnan = {
+ .exponent = 255,
+ .sign = 0,
+ .significand = VFP_SINGLE_SIGNIFICAND_QNAN,
+};
+
+static void vfp_single_dump(const char *str, struct vfp_single *s)
+{
+ pr_debug("VFP: %s: sign=%d exponent=%d significand=%08x\n",
+ str, s->sign != 0, s->exponent, s->significand);
+}
+
+static void vfp_single_normalise_denormal(struct vfp_single *vs)
+{
+ int bits = 31 - fls(vs->significand);
+
+ vfp_single_dump("normalise_denormal: in", vs);
+
+ if (bits) {
+ vs->exponent -= bits - 1;
+ vs->significand <<= bits;
+ }
+
+ vfp_single_dump("normalise_denormal: out", vs);
+}
+
+#ifndef DEBUG
+#define vfp_single_normaliseround(sd,vsd,fpscr,except,func) __vfp_single_normaliseround(sd,vsd,fpscr,except)
+u32 __vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions)
+#else
+u32 vfp_single_normaliseround(int sd, struct vfp_single *vs, u32 fpscr, u32 exceptions, const char *func)
+#endif
+{
+ u32 significand, incr, rmode;
+ int exponent, shift, underflow;
+
+ vfp_single_dump("pack: in", vs);
+
+ /*
+ * Infinities and NaNs are a special case.
+ */
+ if (vs->exponent == 255 && (vs->significand == 0 || exceptions))
+ goto pack;
+
+ /*
+ * Special-case zero.
+ */
+ if (vs->significand == 0) {
+ vs->exponent = 0;
+ goto pack;
+ }
+
+ exponent = vs->exponent;
+ significand = vs->significand;
+
+ /*
+ * Normalise first. Note that we shift the significand up to
+ * bit 31, so we have VFP_SINGLE_LOW_BITS + 1 below the least
+ * significant bit.
+ */
+ shift = 32 - fls(significand);
+ if (shift < 32 && shift) {
+ exponent -= shift;
+ significand <<= shift;
+ }
+
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: normalised", vs);
+#endif
+
+ /*
+ * Tiny number?
+ */
+ underflow = exponent < 0;
+ if (underflow) {
+ significand = vfp_shiftright32jamming(significand, -exponent);
+ exponent = 0;
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: tiny number", vs);
+#endif
+ if (!(significand & ((1 << (VFP_SINGLE_LOW_BITS + 1)) - 1)))
+ underflow = 0;
+ }
+
+ /*
+ * Select rounding increment.
+ */
+ incr = 0;
+ rmode = fpscr & FPSCR_RMODE_MASK;
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 1 << VFP_SINGLE_LOW_BITS;
+ if ((significand & (1 << (VFP_SINGLE_LOW_BITS + 1))) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vs->sign != 0))
+ incr = (1 << (VFP_SINGLE_LOW_BITS + 1)) - 1;
+
+ pr_debug("VFP: rounding increment = 0x%08x\n", incr);
+
+ /*
+ * Is our rounding going to overflow?
+ */
+ if ((significand + incr) < significand) {
+ exponent += 1;
+ significand = (significand >> 1) | (significand & 1);
+ incr >>= 1;
+#ifdef DEBUG
+ vs->exponent = exponent;
+ vs->significand = significand;
+ vfp_single_dump("pack: overflow", vs);
+#endif
+ }
+
+ /*
+ * If any of the low bits (which will be shifted out of the
+ * number) are non-zero, the result is inexact.
+ */
+ if (significand & ((1 << (VFP_SINGLE_LOW_BITS + 1)) - 1))
+ exceptions |= FPSCR_IXC;
+
+ /*
+ * Do our rounding.
+ */
+ significand += incr;
+
+ /*
+ * Infinity?
+ */
+ if (exponent >= 254) {
+ exceptions |= FPSCR_OFC | FPSCR_IXC;
+ if (incr == 0) {
+ vs->exponent = 253;
+ vs->significand = 0x7fffffff;
+ } else {
+ vs->exponent = 255; /* infinity */
+ vs->significand = 0;
+ }
+ } else {
+ if (significand >> (VFP_SINGLE_LOW_BITS + 1) == 0)
+ exponent = 0;
+ if (exponent || significand > 0x80000000)
+ underflow = 0;
+ if (underflow)
+ exceptions |= FPSCR_UFC;
+ vs->exponent = exponent;
+ vs->significand = significand >> 1;
+ }
+
+ pack:
+ vfp_single_dump("pack: final", vs);
+ {
+ s32 d = vfp_single_pack(vs);
+ pr_debug("VFP: %s: d(s%d)=%08x exceptions=%08x\n", func,
+ sd, d, exceptions);
+ vfp_put_float(sd, d);
+ }
+
+ return exceptions & ~VFP_NAN_FLAG;
+}
+
+/*
+ * Propagate the NaN, setting exceptions if it is signalling.
+ * 'n' is always a NaN. 'm' may be a number, NaN or infinity.
+ */
+static u32
+vfp_propagate_nan(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ struct vfp_single *nan;
+ int tn, tm = 0;
+
+ tn = vfp_single_type(vsn);
+
+ if (vsm)
+ tm = vfp_single_type(vsm);
+
+ if (fpscr & FPSCR_DEFAULT_NAN)
+ /*
+ * Default NaN mode - always returns a quiet NaN
+ */
+ nan = &vfp_single_default_qnan;
+ else {
+ /*
+ * Contemporary mode - select the first signalling
+ * NaN, or if neither is signalling, the first
+ * quiet NaN.
+ */
+ if (tn == VFP_SNAN || (tm != VFP_SNAN && tn == VFP_QNAN))
+ nan = vsn;
+ else
+ nan = vsm;
+ /*
+ * Make the NaN quiet.
+ */
+ nan->significand |= VFP_SINGLE_SIGNIFICAND_QNAN;
+ }
+
+ *vsd = *nan;
+
+ /*
+ * If one was a signalling NaN, raise invalid operation.
+ */
+ return tn == VFP_SNAN || tm == VFP_SNAN ? FPSCR_IOC : VFP_NAN_FLAG;
+}
+
+
+/*
+ * Extended operations
+ */
+static u32 vfp_single_fabs(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, vfp_single_packed_abs(m));
+ return 0;
+}
+
+static u32 vfp_single_fcpy(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, m);
+ return 0;
+}
+
+static u32 vfp_single_fneg(int sd, int unused, s32 m, u32 fpscr)
+{
+ vfp_put_float(sd, vfp_single_packed_negate(m));
+ return 0;
+}
+
+static const u16 sqrt_oddadjust[] = {
+ 0x0004, 0x0022, 0x005d, 0x00b1, 0x011d, 0x019f, 0x0236, 0x02e0,
+ 0x039c, 0x0468, 0x0545, 0x0631, 0x072b, 0x0832, 0x0946, 0x0a67
+};
+
+static const u16 sqrt_evenadjust[] = {
+ 0x0a2d, 0x08af, 0x075a, 0x0629, 0x051a, 0x0429, 0x0356, 0x029e,
+ 0x0200, 0x0179, 0x0109, 0x00af, 0x0068, 0x0034, 0x0012, 0x0002
+};
+
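+/*
+ * Estimate the square root of the given significand, whose top two
+ * bits must be 01 (0x40000000 <= significand < 0x80000000); the low
+ * bit of the exponent selects the odd or even adjustment table.
+ * This follows SoftFloat's estimateSqrt32().
+ */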
+u32 vfp_estimate_sqrt_significand(u32 exponent, u32 significand)
+{
+ int index;
+ u32 z, a;
+
+ if ((significand & 0xc0000000) != 0x40000000) {
+ printk(KERN_WARNING "VFP: estimate_sqrt: invalid significand\n");
+ }
+
+ a = significand << 1;
+ index = (a >> 27) & 15;
+ if (exponent & 1) {
+ z = 0x4000 + (a >> 17) - sqrt_oddadjust[index];
+ z = ((a / z) << 14) + (z << 15);
+ a >>= 1;
+ } else {
+ z = 0x8000 + (a >> 17) - sqrt_evenadjust[index];
+ z = a / z + z;
+ z = (z >= 0x20000) ? 0xffff8000 : (z << 15);
+ if (z <= a)
+ return (s32)a >> 1;
+ }
+ return (u32)(((u64)a << 31) / z) + (z >> 1);
+}
+
+static u32 vfp_single_fsqrt(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm, vsd;
+ int ret, tm;
+
+ vfp_single_unpack(&vsm, m);
+ tm = vfp_single_type(&vsm);
+ if (tm & (VFP_NAN|VFP_INFINITY)) {
+ struct vfp_single *vsp = &vsd;
+
+ if (tm & VFP_NAN)
+ ret = vfp_propagate_nan(vsp, &vsm, NULL, fpscr);
+ else if (vsm.sign == 0) {
+ sqrt_copy:
+ vsp = &vsm;
+ ret = 0;
+ } else {
+ sqrt_invalid:
+ vsp = &vfp_single_default_qnan;
+ ret = FPSCR_IOC;
+ }
+ vfp_put_float(sd, vfp_single_pack(vsp));
+ return ret;
+ }
+
+ /*
+ * sqrt(+/- 0) == +/- 0
+ */
+ if (tm & VFP_ZERO)
+ goto sqrt_copy;
+
+ /*
+ * Normalise a denormalised number
+ */
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ /*
+ * sqrt(<0) = invalid
+ */
+ if (vsm.sign)
+ goto sqrt_invalid;
+
+ vfp_single_dump("sqrt", &vsm);
+
+ /*
+ * Estimate the square root.
+ */
+ vsd.sign = 0;
+ vsd.exponent = ((vsm.exponent - 127) >> 1) + 127;
+ vsd.significand = vfp_estimate_sqrt_significand(vsm.exponent, vsm.significand) + 2;
+
+ vfp_single_dump("sqrt estimate", &vsd);
+
+ /*
+ * And now adjust.
+ */
+ if ((vsd.significand & VFP_SINGLE_LOW_BITS_MASK) <= 5) {
+ if (vsd.significand < 2) {
+ vsd.significand = 0xffffffff;
+ } else {
+ u64 term;
+ s64 rem;
+ vsm.significand <<= !(vsm.exponent & 1);
+ term = (u64)vsd.significand * vsd.significand;
+ rem = ((u64)vsm.significand << 32) - term;
+
+ pr_debug("VFP: term=%016llx rem=%016llx\n", term, rem);
+
+ while (rem < 0) {
+ vsd.significand -= 1;
+ rem += ((u64)vsd.significand << 1) | 1;
+ }
+ vsd.significand |= rem != 0;
+ }
+ }
+ vsd.significand = vfp_shiftright32jamming(vsd.significand, 1);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, 0, "fsqrt");
+}
+
+/*
+ * Equal := ZC
+ * Less than := N
+ * Greater than := C
+ * Unordered := CV
+ */
+static u32 vfp_compare(int sd, int signal_on_qnan, s32 m, u32 fpscr)
+{
+ s32 d;
+ u32 ret = 0;
+
+ d = vfp_get_float(sd);
+ if (vfp_single_packed_exponent(m) == 255 && vfp_single_packed_mantissa(m)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_single_packed_mantissa(m) & (1 << (VFP_SINGLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (vfp_single_packed_exponent(d) == 255 && vfp_single_packed_mantissa(d)) {
+ ret |= FPSCR_C | FPSCR_V;
+ if (signal_on_qnan || !(vfp_single_packed_mantissa(d) & (1 << (VFP_SINGLE_MANTISSA_BITS - 1))))
+ /*
+ * Signalling NaN, or signalling on quiet NaN
+ */
+ ret |= FPSCR_IOC;
+ }
+
+ if (ret == 0) {
+ if (d == m || vfp_single_packed_abs(d | m) == 0) {
+ /*
+ * equal
+ */
+ ret |= FPSCR_Z | FPSCR_C;
+ } else if (vfp_single_packed_sign(d ^ m)) {
+ /*
+ * different signs
+ */
+ if (vfp_single_packed_sign(d))
+ /*
+ * d is negative, so d < m
+ */
+ ret |= FPSCR_N;
+ else
+ /*
+ * d is positive, so d > m
+ */
+ ret |= FPSCR_C;
+ } else if ((vfp_single_packed_sign(d) != 0) ^ (d < m)) {
+ /*
+ * d < m
+ */
+ ret |= FPSCR_N;
+ } else if ((vfp_single_packed_sign(d) != 0) ^ (d > m)) {
+ /*
+ * d > m
+ */
+ ret |= FPSCR_C;
+ }
+ }
+ return ret;
+}
+
+static u32 vfp_single_fcmp(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 0, m, fpscr);
+}
+
+static u32 vfp_single_fcmpe(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 1, m, fpscr);
+}
+
+static u32 vfp_single_fcmpz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 0, 0, fpscr);
+}
+
+static u32 vfp_single_fcmpez(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_compare(sd, 1, 0, fpscr);
+}
+
+static u32 vfp_single_fcvtd(int dd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ struct vfp_double vdd;
+ int tm;
+ u32 exceptions = 0;
+
+ vfp_single_unpack(&vsm, m);
+
+ tm = vfp_single_type(&vsm);
+
+ /*
+ * If we have a signalling NaN, signal invalid operation.
+ */
+ if (tm == VFP_SNAN)
+ exceptions = FPSCR_IOC;
+
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ vdd.sign = vsm.sign;
+ vdd.significand = (u64)vsm.significand << 32;
+
+ /*
+ * If we have an infinity or NaN, the exponent must be 2047.
+ */
+ if (tm & (VFP_INFINITY|VFP_NAN)) {
+ vdd.exponent = 2047;
+ if (tm & VFP_NAN)
+ vdd.significand |= VFP_DOUBLE_SIGNIFICAND_QNAN;
+ goto pack_nan;
+ } else if (tm & VFP_ZERO)
+ vdd.exponent = 0;
+ else
+ vdd.exponent = vsm.exponent + (1023 - 127);
+
+ /*
+ * Technically, if bit 0 of dd is set, this is an invalid
+ * instruction. However, we ignore this for efficiency.
+ */
+ return vfp_double_normaliseround(dd, &vdd, fpscr, exceptions, "fcvtd");
+
+ pack_nan:
+ vfp_put_double(dd, vfp_double_pack(&vdd));
+ return exceptions;
+}
+
+static u32 vfp_single_fuito(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vs;
+
+ vs.sign = 0;
+ vs.exponent = 127 + 31 - 1;
+ vs.significand = (u32)m;
+
+ return vfp_single_normaliseround(sd, &vs, fpscr, 0, "fuito");
+}
+
+static u32 vfp_single_fsito(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vs;
+
+ vs.sign = (m & 0x80000000) >> 16;
+ vs.exponent = 127 + 31 - 1;
+ vs.significand = vs.sign ? -m : m;
+
+ return vfp_single_normaliseround(sd, &vs, fpscr, 0, "fsito");
+}
+
+static u32 vfp_single_ftoui(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+ int tm;
+
+ vfp_single_unpack(&vsm, m);
+ vfp_single_dump("VSM", &vsm);
+
+ /*
+ * Do we have a denormalised number?
+ */
+ tm = vfp_single_type(&vsm);
+ if (tm & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (tm & VFP_NAN)
+ vsm.sign = 0;
+
+ if (vsm.exponent >= 127 + 32) {
+ d = vsm.sign ? 0 : 0xffffffff;
+ exceptions = FPSCR_IOC;
+ } else if (vsm.exponent >= 127 - 1) {
+ int shift = 127 + 31 - vsm.exponent;
+ u32 rem, incr = 0;
+
+ /*
+ * 2^0 <= m < 2^32-2^8
+ */
+ d = (vsm.significand << 1) >> shift;
+ rem = vsm.significand << (33 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x80000000;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vsm.sign != 0)) {
+ incr = ~0;
+ }
+
+ if ((rem + incr) < rem) {
+ if (d < 0xffffffff)
+ d += 1;
+ else
+ exceptions |= FPSCR_IOC;
+ }
+
+ if (d && vsm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+ } else {
+ d = 0;
+ if (vsm.exponent | vsm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vsm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vsm.sign) {
+ d = 0;
+ exceptions |= FPSCR_IOC;
+ }
+ }
+ }
+
+ pr_debug("VFP: ftoui: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, d);
+
+ return exceptions;
+}
+
+static u32 vfp_single_ftouiz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_single_ftoui(sd, unused, m, FPSCR_ROUND_TOZERO);
+}
+
+static u32 vfp_single_ftosi(int sd, int unused, s32 m, u32 fpscr)
+{
+ struct vfp_single vsm;
+ u32 d, exceptions = 0;
+ int rmode = fpscr & FPSCR_RMODE_MASK;
+
+ vfp_single_unpack(&vsm, m);
+ vfp_single_dump("VSM", &vsm);
+
+ /*
+ * Do we have a denormalised number?
+ */
+ if (vfp_single_type(&vsm) & VFP_DENORMAL)
+ exceptions |= FPSCR_IDC;
+
+ if (vsm.exponent >= 127 + 32) {
+ /*
+ * m >= 2^31-2^7: invalid
+ */
+ d = 0x7fffffff;
+ if (vsm.sign)
+ d = ~d;
+ exceptions |= FPSCR_IOC;
+ } else if (vsm.exponent >= 127 - 1) {
+ int shift = 127 + 31 - vsm.exponent;
+ u32 rem, incr = 0;
+
+ /* 2^0 <= m <= 2^31-2^7 */
+ d = (vsm.significand << 1) >> shift;
+ rem = vsm.significand << (33 - shift);
+
+ if (rmode == FPSCR_ROUND_NEAREST) {
+ incr = 0x80000000;
+ if ((d & 1) == 0)
+ incr -= 1;
+ } else if (rmode == FPSCR_ROUND_TOZERO) {
+ incr = 0;
+ } else if ((rmode == FPSCR_ROUND_PLUSINF) ^ (vsm.sign != 0)) {
+ incr = ~0;
+ }
+
+ if ((rem + incr) < rem && d < 0xffffffff)
+ d += 1;
+ if (d > 0x7fffffff + (vsm.sign != 0)) {
+ d = 0x7fffffff + (vsm.sign != 0);
+ exceptions |= FPSCR_IOC;
+ } else if (rem)
+ exceptions |= FPSCR_IXC;
+
+ if (vsm.sign)
+ d = -d;
+ } else {
+ d = 0;
+ if (vsm.exponent | vsm.significand) {
+ exceptions |= FPSCR_IXC;
+ if (rmode == FPSCR_ROUND_PLUSINF && vsm.sign == 0)
+ d = 1;
+ else if (rmode == FPSCR_ROUND_MINUSINF && vsm.sign)
+ d = -1;
+ }
+ }
+
+ pr_debug("VFP: ftosi: d(s%d)=%08x exceptions=%08x\n", sd, d, exceptions);
+
+ vfp_put_float(sd, (s32)d);
+
+ return exceptions;
+}
+
+static u32 vfp_single_ftosiz(int sd, int unused, s32 m, u32 fpscr)
+{
+ return vfp_single_ftosi(sd, unused, m, FPSCR_ROUND_TOZERO);
+}
+
+static u32 (* const fop_extfns[32])(int sd, int unused, s32 m, u32 fpscr) = {
+ [FEXT_TO_IDX(FEXT_FCPY)] = vfp_single_fcpy,
+ [FEXT_TO_IDX(FEXT_FABS)] = vfp_single_fabs,
+ [FEXT_TO_IDX(FEXT_FNEG)] = vfp_single_fneg,
+ [FEXT_TO_IDX(FEXT_FSQRT)] = vfp_single_fsqrt,
+ [FEXT_TO_IDX(FEXT_FCMP)] = vfp_single_fcmp,
+ [FEXT_TO_IDX(FEXT_FCMPE)] = vfp_single_fcmpe,
+ [FEXT_TO_IDX(FEXT_FCMPZ)] = vfp_single_fcmpz,
+ [FEXT_TO_IDX(FEXT_FCMPEZ)] = vfp_single_fcmpez,
+ [FEXT_TO_IDX(FEXT_FCVT)] = vfp_single_fcvtd,
+ [FEXT_TO_IDX(FEXT_FUITO)] = vfp_single_fuito,
+ [FEXT_TO_IDX(FEXT_FSITO)] = vfp_single_fsito,
+ [FEXT_TO_IDX(FEXT_FTOUI)] = vfp_single_ftoui,
+ [FEXT_TO_IDX(FEXT_FTOUIZ)] = vfp_single_ftouiz,
+ [FEXT_TO_IDX(FEXT_FTOSI)] = vfp_single_ftosi,
+ [FEXT_TO_IDX(FEXT_FTOSIZ)] = vfp_single_ftosiz,
+};
+
+static u32
+vfp_single_fadd_nonnumber(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ struct vfp_single *vsp;
+ u32 exceptions = 0;
+ int tn, tm;
+
+ tn = vfp_single_type(vsn);
+ tm = vfp_single_type(vsm);
+
+ if (tn & tm & VFP_INFINITY) {
+ /*
+ * Two infinities. Are they different signs?
+ */
+ if (vsn->sign ^ vsm->sign) {
+ /*
+ * different signs -> invalid
+ */
+ exceptions = FPSCR_IOC;
+ vsp = &vfp_single_default_qnan;
+ } else {
+ /*
+ * same signs -> valid
+ */
+ vsp = vsn;
+ }
+ } else if (tn & VFP_INFINITY && tm & VFP_NUMBER) {
+ /*
+ * One infinity and one number -> infinity
+ */
+ vsp = vsn;
+ } else {
+ /*
+ * 'n' is a NaN of some type
+ */
+ return vfp_propagate_nan(vsd, vsn, vsm, fpscr);
+ }
+ *vsd = *vsp;
+ return exceptions;
+}
+
+static u32
+vfp_single_add(struct vfp_single *vsd, struct vfp_single *vsn,
+ struct vfp_single *vsm, u32 fpscr)
+{
+ u32 exp_diff, m_sig;
+
+ if (vsn->significand & 0x80000000 ||
+ vsm->significand & 0x80000000) {
+ pr_info("VFP: bad FP values in %s\n", __func__);
+ vfp_single_dump("VSN", vsn);
+ vfp_single_dump("VSM", vsm);
+ }
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vsn->exponent < vsm->exponent) {
+ struct vfp_single *t = vsn;
+ vsn = vsm;
+ vsm = t;
+ }
+
+ /*
+ * Is 'n' an infinity or a NaN? Note that 'm' may be a number,
+ * infinity or a NaN here.
+ */
+ if (vsn->exponent == 255)
+ return vfp_single_fadd_nonnumber(vsd, vsn, vsm, fpscr);
+
+ /*
+ * We have two proper numbers, where 'vsn' is the larger magnitude.
+ *
+ * Copy 'n' to 'd' before doing the arithmetic.
+ */
+ *vsd = *vsn;
+
+ /*
+ * Align 'm' with the result.
+ */
+ exp_diff = vsn->exponent - vsm->exponent;
+ m_sig = vfp_shiftright32jamming(vsm->significand, exp_diff);
+
+ /*
+ * If the signs are different, we are really subtracting.
+ */
+ if (vsn->sign ^ vsm->sign) {
+ m_sig = vsn->significand - m_sig;
+ if ((s32)m_sig < 0) {
+ vsd->sign = vfp_sign_negate(vsd->sign);
+ m_sig = -m_sig;
+ } else if (m_sig == 0) {
+ vsd->sign = (fpscr & FPSCR_RMODE_MASK) ==
+ FPSCR_ROUND_MINUSINF ? 0x8000 : 0;
+ }
+ } else {
+ m_sig = vsn->significand + m_sig;
+ }
+ vsd->significand = m_sig;
+
+ return 0;
+}
+
+static u32
+vfp_single_multiply(struct vfp_single *vsd, struct vfp_single *vsn, struct vfp_single *vsm, u32 fpscr)
+{
+ vfp_single_dump("VSN", vsn);
+ vfp_single_dump("VSM", vsm);
+
+ /*
+ * Ensure that 'n' is the largest magnitude number. Note that
+ * if 'n' and 'm' have equal exponents, we do not swap them.
+ * This ensures that NaN propagation works correctly.
+ */
+ if (vsn->exponent < vsm->exponent) {
+ struct vfp_single *t = vsn;
+ vsn = vsm;
+ vsm = t;
+ pr_debug("VFP: swapping M <-> N\n");
+ }
+
+ vsd->sign = vsn->sign ^ vsm->sign;
+
+ /*
+ * If 'n' is an infinity or NaN, handle it. 'm' may be anything.
+ */
+ if (vsn->exponent == 255) {
+ if (vsn->significand || (vsm->exponent == 255 && vsm->significand))
+ return vfp_propagate_nan(vsd, vsn, vsm, fpscr);
+ if ((vsm->exponent | vsm->significand) == 0) {
+ *vsd = vfp_single_default_qnan;
+ return FPSCR_IOC;
+ }
+ vsd->exponent = vsn->exponent;
+ vsd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * If 'm' is zero, the result is always zero. In this case,
+ * 'n' may be zero or a number, but it doesn't matter which.
+ */
+ if ((vsm->exponent | vsm->significand) == 0) {
+ vsd->exponent = 0;
+ vsd->significand = 0;
+ return 0;
+ }
+
+ /*
+ * We add 2 to the destination exponent for the same reason as
+ * the addition case - though this time we have +1 from each
+ * input operand.
+ */
+ vsd->exponent = vsn->exponent + vsm->exponent - 127 + 2;
+ vsd->significand = vfp_hi64to32jamming((u64)vsn->significand * vsm->significand);
+
+ vfp_single_dump("VSD", vsd);
+ return 0;
+}
+
+#define NEG_MULTIPLY (1 << 0)
+#define NEG_SUBTRACT (1 << 1)
+
+static u32
+vfp_single_multiply_accumulate(int sd, int sn, s32 m, u32 fpscr, u32 negate, char *func)
+{
+ struct vfp_single vsd, vsp, vsn, vsm;
+ u32 exceptions;
+ s32 v;
+
+ v = vfp_get_float(sn);
+ pr_debug("VFP: s%u = %08x\n", sn, v);
+ vfp_single_unpack(&vsn, v);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsp, &vsn, &vsm, fpscr);
+ if (negate & NEG_MULTIPLY)
+ vsp.sign = vfp_sign_negate(vsp.sign);
+
+ v = vfp_get_float(sd);
+ pr_debug("VFP: s%u = %08x\n", sd, v);
+ vfp_single_unpack(&vsn, v);
+ if (negate & NEG_SUBTRACT)
+ vsn.sign = vfp_sign_negate(vsn.sign);
+
+ exceptions |= vfp_single_add(&vsd, &vsn, &vsp, fpscr);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, func);
+}
+
+/*
+ * Standard operations
+ */
+
+/*
+ * sd = sd + (sn * sm)
+ */
+static u32 vfp_single_fmac(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, 0, "fmac");
+}
+
+/*
+ * sd = sd - (sn * sm)
+ */
+static u32 vfp_single_fnmac(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_MULTIPLY, "fnmac");
+}
+
+/*
+ * sd = -sd + (sn * sm)
+ */
+static u32 vfp_single_fmsc(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_SUBTRACT, "fmsc");
+}
+
+/*
+ * sd = -sd - (sn * sm)
+ */
+static u32 vfp_single_fnmsc(int sd, int sn, s32 m, u32 fpscr)
+{
+ return vfp_single_multiply_accumulate(sd, sn, m, fpscr, NEG_SUBTRACT | NEG_MULTIPLY, "fnmsc");
+}
+
+/*
+ * sd = sn * sm
+ */
+static u32 vfp_single_fmul(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsd, &vsn, &vsm, fpscr);
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fmul");
+}
+
+/*
+ * sd = -(sn * sm)
+ */
+static u32 vfp_single_fnmul(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_multiply(&vsd, &vsn, &vsm, fpscr);
+ vsd.sign = vfp_sign_negate(vsd.sign);
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fnmul");
+}
+
+/*
+ * sd = sn + sm
+ */
+static u32 vfp_single_fadd(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions;
+ s32 n = vfp_get_float(sn);
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ /*
+ * Unpack and normalise denormals.
+ */
+ vfp_single_unpack(&vsn, n);
+ if (vsn.exponent == 0 && vsn.significand)
+ vfp_single_normalise_denormal(&vsn);
+
+ vfp_single_unpack(&vsm, m);
+ if (vsm.exponent == 0 && vsm.significand)
+ vfp_single_normalise_denormal(&vsm);
+
+ exceptions = vfp_single_add(&vsd, &vsn, &vsm, fpscr);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, exceptions, "fadd");
+}
+
+/*
+ * sd = sn - sm
+ */
+static u32 vfp_single_fsub(int sd, int sn, s32 m, u32 fpscr)
+{
+ /*
+ * Subtraction is addition with one sign inverted.
+ */
+ return vfp_single_fadd(sd, sn, vfp_single_packed_negate(m), fpscr);
+}
+
+/*
+ * sd = sn / sm
+ */
+static u32 vfp_single_fdiv(int sd, int sn, s32 m, u32 fpscr)
+{
+ struct vfp_single vsd, vsn, vsm;
+ u32 exceptions = 0;
+ s32 n = vfp_get_float(sn);
+ int tm, tn;
+
+ pr_debug("VFP: s%u = %08x\n", sn, n);
+
+ vfp_single_unpack(&vsn, n);
+ vfp_single_unpack(&vsm, m);
+
+ vsd.sign = vsn.sign ^ vsm.sign;
+
+ tn = vfp_single_type(&vsn);
+ tm = vfp_single_type(&vsm);
+
+ /*
+ * Is n a NAN?
+ */
+ if (tn & VFP_NAN)
+ goto vsn_nan;
+
+ /*
+ * Is m a NAN?
+ */
+ if (tm & VFP_NAN)
+ goto vsm_nan;
+
+ /*
+ * If n and m are infinity, the result is invalid
+ * If n and m are zero, the result is invalid
+ */
+ if (tm & tn & (VFP_INFINITY|VFP_ZERO))
+ goto invalid;
+
+ /*
+ * If n is infinity, the result is infinity
+ */
+ if (tn & VFP_INFINITY)
+ goto infinity;
+
+ /*
+ * If m is zero, raise div0 exception
+ */
+ if (tm & VFP_ZERO)
+ goto divzero;
+
+ /*
+ * If m is infinity, or n is zero, the result is zero
+ */
+ if (tm & VFP_INFINITY || tn & VFP_ZERO)
+ goto zero;
+
+ if (tn & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsn);
+ if (tm & VFP_DENORMAL)
+ vfp_single_normalise_denormal(&vsm);
+
+ /*
+ * Ok, we have two numbers, we can perform division.
+ */
+ vsd.exponent = vsn.exponent - vsm.exponent + 127 - 1;
+ vsm.significand <<= 1;
+ if (vsm.significand <= (2 * vsn.significand)) {
+ vsn.significand >>= 1;
+ vsd.exponent++;
+ }
+ vsd.significand = ((u64)vsn.significand << 32) / vsm.significand;
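+ /*
+ * If the low quotient bits are all zero the division may have been
+ * exact: multiply back and set the sticky bit only if the product
+ * differs from the dividend.
+ */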
+ if ((vsd.significand & 0x3f) == 0)
+ vsd.significand |= ((u64)vsm.significand * vsd.significand != (u64)vsn.significand << 32);
+
+ return vfp_single_normaliseround(sd, &vsd, fpscr, 0, "fdiv");
+
+ vsn_nan:
+ exceptions = vfp_propagate_nan(&vsd, &vsn, &vsm, fpscr);
+ pack:
+ vfp_put_float(sd, vfp_single_pack(&vsd));
+ return exceptions;
+
+ vsm_nan:
+ exceptions = vfp_propagate_nan(&vsd, &vsm, &vsn, fpscr);
+ goto pack;
+
+ zero:
+ vsd.exponent = 0;
+ vsd.significand = 0;
+ goto pack;
+
+ divzero:
+ exceptions = FPSCR_DZC;
+ infinity:
+ vsd.exponent = 255;
+ vsd.significand = 0;
+ goto pack;
+
+ invalid:
+ vfp_put_float(sd, vfp_single_pack(&vfp_single_default_qnan));
+ return FPSCR_IOC;
+}
+
+static u32 (* const fop_fns[16])(int sd, int sn, s32 m, u32 fpscr) = {
+ [FOP_TO_IDX(FOP_FMAC)] = vfp_single_fmac,
+ [FOP_TO_IDX(FOP_FNMAC)] = vfp_single_fnmac,
+ [FOP_TO_IDX(FOP_FMSC)] = vfp_single_fmsc,
+ [FOP_TO_IDX(FOP_FNMSC)] = vfp_single_fnmsc,
+ [FOP_TO_IDX(FOP_FMUL)] = vfp_single_fmul,
+ [FOP_TO_IDX(FOP_FNMUL)] = vfp_single_fnmul,
+ [FOP_TO_IDX(FOP_FADD)] = vfp_single_fadd,
+ [FOP_TO_IDX(FOP_FSUB)] = vfp_single_fsub,
+ [FOP_TO_IDX(FOP_FDIV)] = vfp_single_fdiv,
+};
+
+#define FREG_BANK(x) ((x) & 0x18)
+#define FREG_IDX(x) ((x) & 7)
+
+u32 vfp_single_cpdo(u32 inst, u32 fpscr)
+{
+ u32 op = inst & FOP_MASK;
+ u32 exceptions = 0;
+ unsigned int sd = vfp_get_sd(inst);
+ unsigned int sn = vfp_get_sn(inst);
+ unsigned int sm = vfp_get_sm(inst);
+ unsigned int vecitr, veclen, vecstride;
+ u32 (*fop)(int, int, s32, u32);
+
+ veclen = fpscr & FPSCR_LENGTH_MASK;
+ vecstride = 1 + ((fpscr & FPSCR_STRIDE_MASK) == FPSCR_STRIDE_MASK);
+
+ /*
+ * If destination bank is zero, vector length is always '1'.
+ * ARM DDI0100F C5.1.3, C5.3.2.
+ */
+ if (FREG_BANK(sd) == 0)
+ veclen = 0;
+
+ pr_debug("VFP: vecstride=%u veclen=%u\n", vecstride,
+ (veclen >> FPSCR_LENGTH_BIT) + 1);
+
+ fop = (op == FOP_EXT) ? fop_extfns[sn] : fop_fns[FOP_TO_IDX(op)];
+ if (!fop)
+ goto invalid;
+
+ for (vecitr = 0; vecitr <= veclen; vecitr += 1 << FPSCR_LENGTH_BIT) {
+ s32 m = vfp_get_float(sm);
+ u32 except;
+
+ if (op == FOP_EXT)
+ pr_debug("VFP: itr%d (s%u) = op[%u] (s%u=%08x)\n",
+ vecitr >> FPSCR_LENGTH_BIT, sd, sn, sm, m);
+ else
+ pr_debug("VFP: itr%d (s%u) = (s%u) op[%u] (s%u=%08x)\n",
+ vecitr >> FPSCR_LENGTH_BIT, sd, sn,
+ FOP_TO_IDX(op), sm, m);
+
+ except = fop(sd, sn, m, fpscr);
+ pr_debug("VFP: itr%d: exceptions=%08x\n",
+ vecitr >> FPSCR_LENGTH_BIT, except);
+
+ exceptions |= except;
+
+ /*
+ * This ensures that comparisons only operate on scalars;
+ * comparisons always return with one FPSCR status bit set.
+ */
+ if (except & (FPSCR_N|FPSCR_Z|FPSCR_C|FPSCR_V))
+ break;
+
+ /*
+ * CHECK: It appears to be undefined whether we stop when
+ * we encounter an exception. We continue.
+ */
+
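+ /*
+ * Step each operand to the next register in its bank; indices
+ * wrap modulo 8 within the bank. A scalar third operand
+ * (bank 0) is not stepped.
+ */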
+ sd = FREG_BANK(sd) + ((FREG_IDX(sd) + vecstride) & 7);
+ sn = FREG_BANK(sn) + ((FREG_IDX(sn) + vecstride) & 7);
+ if (FREG_BANK(sm) != 0)
+ sm = FREG_BANK(sm) + ((FREG_IDX(sm) + vecstride) & 7);
+ }
+ return exceptions;
+
+ invalid:
+ return (u32)-1;
+}
--- /dev/null
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/user.h>
+#include <linux/elfcore.h>
+#include <linux/sched.h>
+#include <linux/in6.h>
+#include <linux/interrupt.h>
+#include <linux/smp_lock.h>
+#include <linux/pm.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/tty.h>
+
+#include <asm/semaphore.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <asm/pgtable.h>
+#include <asm/fasttimer.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern unsigned long get_cmos_time(void);
+extern void __Udiv(void);
+extern void __Umod(void);
+extern void __Div(void);
+extern void __Mod(void);
+extern void __ashrdi3(void);
+extern void iounmap(void *addr);
+
+/* Platform dependent support */
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(kernel_thread);
+EXPORT_SYMBOL(get_cmos_time);
+EXPORT_SYMBOL(loops_per_usec);
+
+/* String functions */
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memmove);
+EXPORT_SYMBOL(strpbrk);
+EXPORT_SYMBOL(strstr);
+EXPORT_SYMBOL(strcpy);
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strcmp);
+EXPORT_SYMBOL(strlen);
+EXPORT_SYMBOL(strcat);
+EXPORT_SYMBOL(strncat);
+EXPORT_SYMBOL(strncmp);
+EXPORT_SYMBOL(strncpy);
+
+/* Math functions */
+EXPORT_SYMBOL(__Udiv);
+EXPORT_SYMBOL(__Umod);
+EXPORT_SYMBOL(__Div);
+EXPORT_SYMBOL(__Mod);
+EXPORT_SYMBOL(__ashrdi3);
+
+/* Memory functions */
+EXPORT_SYMBOL(__ioremap);
+EXPORT_SYMBOL(iounmap);
+
+/* Semaphore functions */
+EXPORT_SYMBOL(__up);
+EXPORT_SYMBOL(__down);
+EXPORT_SYMBOL(__down_interruptible);
+EXPORT_SYMBOL(__down_trylock);
+
+/* Export shadow registers for the CPU I/O pins */
+EXPORT_SYMBOL(genconfig_shadow);
+EXPORT_SYMBOL(port_pa_data_shadow);
+EXPORT_SYMBOL(port_pa_dir_shadow);
+EXPORT_SYMBOL(port_pb_data_shadow);
+EXPORT_SYMBOL(port_pb_dir_shadow);
+EXPORT_SYMBOL(port_pb_config_shadow);
+EXPORT_SYMBOL(port_g_data_shadow);
+
+/* Userspace access functions */
+EXPORT_SYMBOL(__copy_user_zeroing);
+EXPORT_SYMBOL(__copy_user);
+
+/* Cache flush functions */
+EXPORT_SYMBOL(flush_etrax_cache);
+EXPORT_SYMBOL(prepare_rx_descriptor);
+
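+/*
+ * The architecture headers may define memcpy/memset as macros or
+ * builtins; undefine them so the extern declarations below refer to
+ * the real functions that can be exported.
+ */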
+#undef memcpy
+#undef memset
+extern void * memset(void *, int, __kernel_size_t);
+extern void * memcpy(void *, const void *, __kernel_size_t);
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);
+
+#ifdef CONFIG_ETRAX_FAST_TIMER
+/* Fast timer functions */
+EXPORT_SYMBOL(fast_timer_list);
+EXPORT_SYMBOL(start_one_shot_timer);
+EXPORT_SYMBOL(del_fast_timer);
+EXPORT_SYMBOL(schedule_usleep);
+#endif
+
--- /dev/null
+// -------------------------------------------------------------------------
+// Copyright (c) 2001, Dr Brian Gladman < >, Worcester, UK.
+// All rights reserved.
+//
+// LICENSE TERMS
+//
+// The free distribution and use of this software in both source and binary
+// form is allowed (with or without changes) provided that:
+//
+// 1. distributions of this source code include the above copyright
+// notice, this list of conditions and the following disclaimer;
+//
+// 2. distributions in binary form include the above copyright
+// notice, this list of conditions and the following disclaimer
+// in the documentation and/or other associated materials;
+//
+// 3. the copyright holder's name is not used to endorse products
+// built using this software without specific written permission.
+//
+//
+// ALTERNATIVELY, provided that this notice is retained in full, this product
+// may be distributed under the terms of the GNU General Public License (GPL),
+// in which case the provisions of the GPL apply INSTEAD OF those given above.
+//
+// Copyright (c) 2004 Linus Torvalds <torvalds@osdl.org>
+// Copyright (c) 2004 Red Hat, Inc., James Morris <jmorris@redhat.com>
+
+// DISCLAIMER
+//
+// This software is provided 'as is' with no explicit or implied warranties
+// in respect of its properties including, but not limited to, correctness
+// and fitness for purpose.
+// -------------------------------------------------------------------------
+// Issue Date: 29/07/2002
+
+.file "aes-i586-asm.S"
+.text
+
+// aes_rval aes_enc_blk(const unsigned char in_blk[], unsigned char out_blk[], const aes_ctx cx[1]);
+// aes_rval aes_dec_blk(const unsigned char in_blk[], unsigned char out_blk[], const aes_ctx cx[1]);
+
+#define tlen 1024 // length of each of 4 'xor' arrays (256 32-bit words)
+
+// offsets to parameters with one register pushed onto stack
+
+#define in_blk 8 // input byte array address parameter
+#define out_blk 12 // output byte array address parameter
+#define ctx 16 // AES context structure
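+// (added note: after the 'push %ebp' in each entry point, 0(%esp)
+// holds the saved %ebp and 4(%esp) the return address, so the first
+// C argument sits at 8(%esp))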
+
+// offsets in context structure
+
+#define ekey 0 // encryption key schedule base address
+#define nrnd 256 // number of rounds
+#define dkey 260 // decryption key schedule base address
+
+// register mapping for encrypt and decrypt subroutines
+
+#define r0 eax
+#define r1 ebx
+#define r2 ecx
+#define r3 edx
+#define r4 esi
+#define r5 edi
+
+#define eaxl al
+#define eaxh ah
+#define ebxl bl
+#define ebxh bh
+#define ecxl cl
+#define ecxh ch
+#define edxl dl
+#define edxh dh
+
+#define _h(reg) reg##h
+#define h(reg) _h(reg)
+
+#define _l(reg) reg##l
+#define l(reg) _l(reg)
+
+// This macro takes a 32-bit word representing a column and uses
+// each of its four bytes to index into four tables of 256 32-bit
+// words to obtain values that are then xored into the appropriate
+// output registers r0, r1, r4 or r5.
+
+// Parameters:
+// table table base address
+// a1 out_state[0]
+// a2 out_state[1]
+// a3 out_state[2]
+// a4 out_state[3]
+// idx input register for the round (destroyed)
+// tmp scratch register for the round
+// sched key schedule
+
+#define do_col(table, a1,a2,a3,a4, idx, tmp) \
+ movzx %l(idx),%tmp; \
+ xor table(,%tmp,4),%a1; \
+ movzx %h(idx),%tmp; \
+ shr $16,%idx; \
+ xor table+tlen(,%tmp,4),%a2; \
+ movzx %l(idx),%tmp; \
+ movzx %h(idx),%idx; \
+ xor table+2*tlen(,%tmp,4),%a3; \
+ xor table+3*tlen(,%idx,4),%a4;
+
+// initialise output registers from the key schedule
+// NB1: original value of a3 is in idx on exit
+// NB2: original values of a1,a2,a4 aren't used
+#define do_fcol(table, a1,a2,a3,a4, idx, tmp, sched) \
+ mov 0 sched,%a1; \
+ movzx %l(idx),%tmp; \
+ mov 12 sched,%a2; \
+ xor table(,%tmp,4),%a1; \
+ mov 4 sched,%a4; \
+ movzx %h(idx),%tmp; \
+ shr $16,%idx; \
+ xor table+tlen(,%tmp,4),%a2; \
+ movzx %l(idx),%tmp; \
+ movzx %h(idx),%idx; \
+ xor table+3*tlen(,%idx,4),%a4; \
+ mov %a3,%idx; \
+ mov 8 sched,%a3; \
+ xor table+2*tlen(,%tmp,4),%a3;
+
+// initialise output registers from the key schedule
+// NB1: original value of a3 is in idx on exit
+// NB2: original values of a1,a2,a4 aren't used
+#define do_icol(table, a1,a2,a3,a4, idx, tmp, sched) \
+ mov 0 sched,%a1; \
+ movzx %l(idx),%tmp; \
+ mov 4 sched,%a2; \
+ xor table(,%tmp,4),%a1; \
+ mov 12 sched,%a4; \
+ movzx %h(idx),%tmp; \
+ shr $16,%idx; \
+ xor table+tlen(,%tmp,4),%a2; \
+ movzx %l(idx),%tmp; \
+ movzx %h(idx),%idx; \
+ xor table+3*tlen(,%idx,4),%a4; \
+ mov %a3,%idx; \
+ mov 8 sched,%a3; \
+ xor table+2*tlen(,%tmp,4),%a3;
+
+
+// original Gladman had conditional saves to MMX regs.
+#define save(a1, a2) \
+ mov %a2,4*a1(%esp)
+
+#define restore(a1, a2) \
+ mov 4*a2(%esp),%a1
+
+// These macros perform a forward encryption cycle. They are entered
+// with the previous round's column values in r0,r1,r4,r5 and exit
+// with the final values in the same registers, using the stack for
+// temporary storage.
+
+// round column values
+// on entry: r0,r1,r4,r5
+// on exit: r2,r1,r4,r5
+#define fwd_rnd1(arg, table) \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_fcol(table, r2,r5,r4,r1, r0,r3, arg); /* idx=r0 */ \
+ do_col (table, r4,r1,r2,r5, r0,r3); /* idx=r4 */ \
+ restore(r0,0); \
+ do_col (table, r1,r2,r5,r4, r0,r3); /* idx=r1 */ \
+ restore(r0,1); \
+ do_col (table, r5,r4,r1,r2, r0,r3); /* idx=r5 */
+
+// round column values
+// on entry: r2,r1,r4,r5
+// on exit: r0,r1,r4,r5
+#define fwd_rnd2(arg, table) \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_fcol(table, r0,r5,r4,r1, r2,r3, arg); /* idx=r2 */ \
+ do_col (table, r4,r1,r0,r5, r2,r3); /* idx=r4 */ \
+ restore(r2,0); \
+ do_col (table, r1,r0,r5,r4, r2,r3); /* idx=r1 */ \
+ restore(r2,1); \
+ do_col (table, r5,r4,r1,r0, r2,r3); /* idx=r5 */
+
+// These macros perform an inverse (decryption) cycle. They are entered
+// with the previous round's column values in r0,r1,r4,r5 and exit
+// with the final values in the same registers, using the stack for
+// temporary storage.
+
+// round column values
+// on entry: r0,r1,r4,r5
+// on exit: r2,r1,r4,r5
+#define inv_rnd1(arg, table) \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_icol(table, r2,r1,r4,r5, r0,r3, arg); /* idx=r0 */ \
+ do_col (table, r4,r5,r2,r1, r0,r3); /* idx=r4 */ \
+ restore(r0,0); \
+ do_col (table, r1,r4,r5,r2, r0,r3); /* idx=r1 */ \
+ restore(r0,1); \
+ do_col (table, r5,r2,r1,r4, r0,r3); /* idx=r5 */
+
+// round column values
+// on entry: r2,r1,r4,r5
+// on exit: r0,r1,r4,r5
+#define inv_rnd2(arg, table) \
+ save (0,r1); \
+ save (1,r5); \
+ \
+ /* compute new column values */ \
+ do_icol(table, r0,r1,r4,r5, r2,r3, arg); /* idx=r2 */ \
+ do_col (table, r4,r5,r0,r1, r2,r3); /* idx=r4 */ \
+ restore(r2,0); \
+ do_col (table, r1,r4,r5,r0, r2,r3); /* idx=r1 */ \
+ restore(r2,1); \
+ do_col (table, r5,r0,r1,r4, r2,r3); /* idx=r5 */
+
+// AES (Rijndael) Encryption Subroutine
+
+.global aes_enc_blk
+
+.extern ft_tab
+.extern fl_tab
+
+.align 4
+
+aes_enc_blk:
+ push %ebp
+ mov ctx(%esp),%ebp // pointer to context
+
+// CAUTION: the order and the values used in these assigns
+// rely on the register mappings
+
+1: push %ebx
+ mov in_blk+4(%esp),%r2
+ push %esi
+ mov nrnd(%ebp),%r3 // number of rounds
+ push %edi
+#if ekey != 0
+ lea ekey(%ebp),%ebp // key pointer
+#endif
+
+// input four columns and xor in first round key
+
+ mov (%r2),%r0
+ mov 4(%r2),%r1
+ mov 8(%r2),%r4
+ mov 12(%r2),%r5
+ xor (%ebp),%r0
+ xor 4(%ebp),%r1
+ xor 8(%ebp),%r4
+ xor 12(%ebp),%r5
+
+ sub $8,%esp // space for register saves on stack
+ add $16,%ebp // increment to next round key
+ sub $10,%r3
+ je 4f // 10 rounds for 128-bit key
+ add $32,%ebp
+ sub $2,%r3
+ je 3f // 12 rounds for 192-bit key
+ add $32,%ebp
+
+2: fwd_rnd1( -64(%ebp) ,ft_tab) // 14 rounds for 256-bit key
+ fwd_rnd2( -48(%ebp) ,ft_tab)
+3: fwd_rnd1( -32(%ebp) ,ft_tab) // 12 rounds for 192-bit key
+ fwd_rnd2( -16(%ebp) ,ft_tab)
+4: fwd_rnd1( (%ebp) ,ft_tab) // 10 rounds for 128-bit key
+ fwd_rnd2( +16(%ebp) ,ft_tab)
+ fwd_rnd1( +32(%ebp) ,ft_tab)
+ fwd_rnd2( +48(%ebp) ,ft_tab)
+ fwd_rnd1( +64(%ebp) ,ft_tab)
+ fwd_rnd2( +80(%ebp) ,ft_tab)
+ fwd_rnd1( +96(%ebp) ,ft_tab)
+ fwd_rnd2(+112(%ebp) ,ft_tab)
+ fwd_rnd1(+128(%ebp) ,ft_tab)
+ fwd_rnd2(+144(%ebp) ,fl_tab) // last round uses a different table
+
+// move final values to the output array. CAUTION: the
+// order of these assigns relies on the register mappings
+
+ add $8,%esp
+ mov out_blk+12(%esp),%ebp
+ mov %r5,12(%ebp)
+ pop %edi
+ mov %r4,8(%ebp)
+ pop %esi
+ mov %r1,4(%ebp)
+ pop %ebx
+ mov %r0,(%ebp)
+ pop %ebp
+ mov $1,%eax
+ ret
+
+// AES (Rijndael) Decryption Subroutine
+
+.global aes_dec_blk
+
+.extern it_tab
+.extern il_tab
+
+.align 4
+
+aes_dec_blk:
+ push %ebp
+ mov ctx(%esp),%ebp // pointer to context
+
+// CAUTION: the order and the values used in these assigns
+// rely on the register mappings
+
+1: push %ebx
+ mov in_blk+4(%esp),%r2
+ push %esi
+ mov nrnd(%ebp),%r3 // number of rounds
+ push %edi
+#if dkey != 0
+ lea dkey(%ebp),%ebp // key pointer
+#endif
+ mov %r3,%r0
+ shl $4,%r0
+ add %r0,%ebp
+
+// input four columns and xor in first round key
+
+ mov (%r2),%r0
+ mov 4(%r2),%r1
+ mov 8(%r2),%r4
+ mov 12(%r2),%r5
+ xor (%ebp),%r0
+ xor 4(%ebp),%r1
+ xor 8(%ebp),%r4
+ xor 12(%ebp),%r5
+
+ sub $8,%esp // space for register saves on stack
+ sub $16,%ebp // increment to next round key
+ sub $10,%r3
+ je 4f // 10 rounds for 128-bit key
+ sub $32,%ebp
+ sub $2,%r3
+ je 3f // 12 rounds for 192-bit key
+ sub $32,%ebp
+
+2: inv_rnd1( +64(%ebp), it_tab) // 14 rounds for 256-bit key
+ inv_rnd2( +48(%ebp), it_tab)
+3: inv_rnd1( +32(%ebp), it_tab) // 12 rounds for 192-bit key
+ inv_rnd2( +16(%ebp), it_tab)
+4: inv_rnd1( (%ebp), it_tab) // 10 rounds for 128-bit key
+ inv_rnd2( -16(%ebp), it_tab)
+ inv_rnd1( -32(%ebp), it_tab)
+ inv_rnd2( -48(%ebp), it_tab)
+ inv_rnd1( -64(%ebp), it_tab)
+ inv_rnd2( -80(%ebp), it_tab)
+ inv_rnd1( -96(%ebp), it_tab)
+ inv_rnd2(-112(%ebp), it_tab)
+ inv_rnd1(-128(%ebp), it_tab)
+ inv_rnd2(-144(%ebp), il_tab) // last round uses a different table
+
+// move final values to the output array. CAUTION: the
+// order of these assigns relies on the register mappings
+
+ add $8,%esp
+ mov out_blk+12(%esp),%ebp
+ mov %r5,12(%ebp)
+ pop %edi
+ mov %r4,8(%ebp)
+ pop %esi
+ mov %r1,4(%ebp)
+ pop %ebx
+ mov %r0,(%ebp)
+ pop %ebp
+ mov $1,%eax
+ ret
+
--- /dev/null
+#include <linux/init.h>
+#include <asm/processor.h>
+
+#define LVL_1_INST 1
+#define LVL_1_DATA 2
+#define LVL_2 3
+#define LVL_3 4
+#define LVL_TRACE 5
+
+struct _cache_table
+{
+ unsigned char descriptor;
+ char cache_type;
+ short size;
+};
+
+/* all the cache descriptor types we care about (no TLB or trace cache entries) */
+static struct _cache_table cache_table[] __initdata =
+{
+ { 0x06, LVL_1_INST, 8 }, /* 4-way set assoc, 32 byte line size */
+ { 0x08, LVL_1_INST, 16 }, /* 4-way set assoc, 32 byte line size */
+ { 0x0a, LVL_1_DATA, 8 }, /* 2 way set assoc, 32 byte line size */
+ { 0x0c, LVL_1_DATA, 16 }, /* 4-way set assoc, 32 byte line size */
+ { 0x22, LVL_3, 512 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x23, LVL_3, 1024 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x25, LVL_3, 2048 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x29, LVL_3, 4096 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x2c, LVL_1_DATA, 32 }, /* 8-way set assoc, 64 byte line size */
+ { 0x30, LVL_1_INST, 32 }, /* 8-way set assoc, 64 byte line size */
+ { 0x39, LVL_2, 128 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x3b, LVL_2, 128 }, /* 2-way set assoc, sectored cache, 64 byte line size */
+ { 0x3c, LVL_2, 256 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x41, LVL_2, 128 }, /* 4-way set assoc, 32 byte line size */
+ { 0x42, LVL_2, 256 }, /* 4-way set assoc, 32 byte line size */
+ { 0x43, LVL_2, 512 }, /* 4-way set assoc, 32 byte line size */
+ { 0x44, LVL_2, 1024 }, /* 4-way set assoc, 32 byte line size */
+ { 0x45, LVL_2, 2048 }, /* 4-way set assoc, 32 byte line size */
+ { 0x60, LVL_1_DATA, 16 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x66, LVL_1_DATA, 8 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x67, LVL_1_DATA, 16 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x68, LVL_1_DATA, 32 }, /* 4-way set assoc, sectored cache, 64 byte line size */
+ { 0x70, LVL_TRACE, 12 }, /* 8-way set assoc */
+ { 0x71, LVL_TRACE, 16 }, /* 8-way set assoc */
+ { 0x72, LVL_TRACE, 32 }, /* 8-way set assoc */
+ { 0x78, LVL_2, 1024 }, /* 4-way set assoc, 64 byte line size */
+ { 0x79, LVL_2, 128 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x7a, LVL_2, 256 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x7b, LVL_2, 512 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x7c, LVL_2, 1024 }, /* 8-way set assoc, sectored cache, 64 byte line size */
+ { 0x7d, LVL_2, 2048 }, /* 8-way set assoc, 64 byte line size */
+ { 0x7f, LVL_2, 512 }, /* 2-way set assoc, 64 byte line size */
+ { 0x82, LVL_2, 256 }, /* 8-way set assoc, 32 byte line size */
+ { 0x83, LVL_2, 512 }, /* 8-way set assoc, 32 byte line size */
+ { 0x84, LVL_2, 1024 }, /* 8-way set assoc, 32 byte line size */
+ { 0x85, LVL_2, 2048 }, /* 8-way set assoc, 32 byte line size */
+ { 0x86, LVL_2, 512 }, /* 4-way set assoc, 64 byte line size */
+ { 0x87, LVL_2, 1024 }, /* 8-way set assoc, 64 byte line size */
+ { 0x00, 0, 0}
+};
+
+unsigned int init_intel_cacheinfo(struct cpuinfo_x86 *c)
+{
+ unsigned int trace = 0, l1i = 0, l1d = 0, l2 = 0, l3 = 0; /* Cache sizes */
+
+ if (c->cpuid_level > 1) {
+ /* supports eax=2 call */
+ int i, j, n;
+ int regs[4];
+ unsigned char *dp = (unsigned char *)regs;
+
+ /* Number of times to iterate */
+ n = cpuid_eax(2) & 0xFF;
+
+ for ( i = 0 ; i < n ; i++ ) {
+ cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);
+
+ /* If bit 31 is set, this is an unknown format */
+ for ( j = 0 ; j < 3 ; j++ ) {
+ if ( regs[j] < 0 ) regs[j] = 0;
+ }
+
+ /* Byte 0 is level count, not a descriptor */
+ for ( j = 1 ; j < 16 ; j++ ) {
+ unsigned char des = dp[j];
+ unsigned char k = 0;
+
+ /* look up this descriptor in the table */
+ while (cache_table[k].descriptor != 0)
+ {
+ if (cache_table[k].descriptor == des) {
+ switch (cache_table[k].cache_type) {
+ case LVL_1_INST:
+ l1i += cache_table[k].size;
+ break;
+ case LVL_1_DATA:
+ l1d += cache_table[k].size;
+ break;
+ case LVL_2:
+ l2 += cache_table[k].size;
+ break;
+ case LVL_3:
+ l3 += cache_table[k].size;
+ break;
+ case LVL_TRACE:
+ trace += cache_table[k].size;
+ break;
+ }
+
+ break;
+ }
+
+ k++;
+ }
+ }
+ }
+
+ if ( trace )
+ printk (KERN_INFO "CPU: Trace cache: %dK uops", trace);
+ else if ( l1i )
+ printk (KERN_INFO "CPU: L1 I cache: %dK", l1i);
+ if ( l1d )
+ printk(", L1 D cache: %dK\n", l1d);
+ else
+ printk("\n");
+ if ( l2 )
+ printk(KERN_INFO "CPU: L2 cache: %dK\n", l2);
+ if ( l3 )
+ printk(KERN_INFO "CPU: L3 cache: %dK\n", l3);
+
+ /*
+ * This assumes the L3 cache is shared; it typically lives in
+ * the northbridge. The L1 caches are included by the L2
+ * cache, and so should not be included for the purpose of
+ * SMP switching weights.
+ */
+ c->x86_cache_size = l2 ? l2 : (l1i+l1d);
+ }
+
+ return l2;
+}
--- /dev/null
+/*
+ * Written by: Garry Forsgren, Unisys Corporation
+ * Natalie Protasevich, Unisys Corporation
+ * This file contains the code to configure and interface
+ * with Unisys ES7000 series hardware system manager.
+ *
+ * Copyright (c) 2003 Unisys Corporation. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Unisys Corporation, Township Line & Union Meeting
+ * Roads-A, Unisys Way, Blue Bell, Pennsylvania, 19424, or:
+ *
+ * http://www.unisys.com
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/smp.h>
+#include <linux/string.h>
+#include <linux/spinlock.h>
+#include <linux/errno.h>
+#include <linux/notifier.h>
+#include <linux/reboot.h>
+#include <linux/init.h>
+#include <linux/acpi.h>
+#include <asm/io.h>
+#include <asm/nmi.h>
+#include <asm/smp.h>
+#include <asm/apicdef.h>
+#include "es7000.h"
+
+/*
+ * ES7000 Globals
+ */
+
+volatile unsigned long *psai = NULL;
+struct mip_reg *mip_reg;
+struct mip_reg *host_reg;
+int mip_port;
+unsigned long mip_addr, host_addr;
+
+#if defined(CONFIG_X86_IO_APIC) && (defined(CONFIG_ACPI_INTERPRETER) || defined(CONFIG_ACPI_BOOT))
+
+/*
+ * GSI override for ES7000 platforms.
+ */
+
+static unsigned int base;
+
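+/*
+ * Added note on what follows: legacy GSIs (0-15 on the first IO-APIC)
+ * are remapped past the pins of all IO-APICs; 'base' caches the total
+ * pin count on first use.
+ */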
+static int
+es7000_rename_gsi(int ioapic, int gsi)
+{
+ if (!base) {
+ int i;
+ for (i = 0; i < nr_ioapics; i++)
+ base += nr_ioapic_registers[i];
+ }
+
+ if (!ioapic && (gsi < 16))
+ gsi += base;
+ return gsi;
+}
+
+#endif // (CONFIG_X86_IO_APIC) && (CONFIG_ACPI_INTERPRETER || CONFIG_ACPI_BOOT)
+
+/*
+ * Parse the OEM Table
+ */
+
+int __init
+parse_unisys_oem (char *oemptr, int oem_entries)
+{
+ int i;
+ int success = 0;
+ unsigned char type, size;
+ unsigned long val;
+ char *tp = NULL;
+ struct psai *psaip = NULL;
+ struct mip_reg_info *mi;
+ struct mip_reg *host, *mip;
+
+ tp = oemptr;
+
+ tp += 8;
+
+ for (i=0; i <= oem_entries; i++) {
+ type = *tp++;
+ size = *tp++;
+ tp -= 2;
+ switch (type) {
+ case MIP_REG:
+ mi = (struct mip_reg_info *)tp;
+ val = MIP_RD_LO(mi->host_reg);
+ host_addr = val;
+ host = (struct mip_reg *)val;
+ host_reg = __va(host);
+ val = MIP_RD_LO(mi->mip_reg);
+ mip_port = MIP_PORT(mi->mip_info);
+ mip_addr = val;
+ mip = (struct mip_reg *)val;
+ mip_reg = __va(mip);
+ Dprintk("es7000_mipcfg: host_reg = 0x%lx \n",
+ (unsigned long)host_reg);
+ Dprintk("es7000_mipcfg: mip_reg = 0x%lx \n",
+ (unsigned long)mip_reg);
+ success++;
+ break;
+ case MIP_PSAI_REG:
+ psaip = (struct psai *)tp;
+ if (tp != NULL) {
+ if (psaip->addr)
+ psai = __va(psaip->addr);
+ else
+ psai = NULL;
+ success++;
+ }
+ break;
+ default:
+ break;
+ }
+ if (i == 6) break;
+ tp += size;
+ }
+
+ if (success < 2) {
+ es7000_plat = 0;
+ } else {
+ printk("\nEnabling ES7000 specific features...\n");
+ es7000_plat = 1;
+ ioapic_renumber_irq = es7000_rename_gsi;
+ }
+ return es7000_plat;
+}
+
+int __init
+find_unisys_acpi_oem_table(unsigned long *oem_addr, int *length)
+{
+ struct acpi_table_rsdp *rsdp = NULL;
+ unsigned long rsdp_phys = 0;
+ struct acpi_table_header *header = NULL;
+ int i;
+ struct acpi_table_sdt sdt;
+
+ rsdp_phys = acpi_find_rsdp();
+ rsdp = __va(rsdp_phys);
+ if (rsdp->rsdt_address) {
+ struct acpi_table_rsdt *mapped_rsdt = NULL;
+ sdt.pa = rsdp->rsdt_address;
+
+ header = (struct acpi_table_header *)
+ __acpi_map_table(sdt.pa, sizeof(struct acpi_table_header));
+ if (!header)
+ return -ENODEV;
+
+ sdt.count = (header->length - sizeof(struct acpi_table_header)) >> 3;
+ mapped_rsdt = (struct acpi_table_rsdt *)
+ __acpi_map_table(sdt.pa, header->length);
+ if (!mapped_rsdt)
+ return -ENODEV;
+
+ header = &mapped_rsdt->header;
+
+ for (i = 0; i < sdt.count; i++)
+ sdt.entry[i].pa = (unsigned long) mapped_rsdt->entry[i];
+ }
+ for (i = 0; i < sdt.count; i++) {
+
+ header = (struct acpi_table_header *)
+ __acpi_map_table(sdt.entry[i].pa,
+ sizeof(struct acpi_table_header));
+ if (!header)
+ continue;
+ if (!strncmp((char *) &header->signature, "OEM1", 4)) {
+ if (!strncmp((char *) &header->oem_id, "UNISYS", 6)) {
+ void *addr;
+ struct oem_table *t;
+ acpi_table_print(header, sdt.entry[i].pa);
+ t = (struct oem_table *) __acpi_map_table(sdt.entry[i].pa, header->length);
+ addr = (void *) __acpi_map_table(t->OEMTableAddr, t->OEMTableSize);
+ *length = header->length;
+ *oem_addr = (unsigned long) addr;
+ return 0;
+ }
+ }
+ }
+ Dprintk("ES7000: did not find Unisys ACPI OEM table!\n");
+ return -1;
+}
+
+static void
+es7000_spin(int n)
+{
+ int i = 0;
+
+ while (i++ < n)
+ rep_nop();
+}
+
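+/*
+ * Hand one command block to the system manager: wait for the host
+ * valid flag to clear, copy the request into the host mailbox, ring
+ * the doorbell through mip_port, then poll for the MIP valid flag
+ * and pull the status field out of off_0.
+ */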
+static int __init
+es7000_mip_write(struct mip_reg *mip_reg)
+{
+ int status = 0;
+ int spin;
+
+ spin = MIP_SPIN;
+ while (((unsigned long long)host_reg->off_38 &
+ (unsigned long long)MIP_VALID) != 0) {
+ if (--spin <= 0) {
+ printk("es7000_mip_write: Timeout waiting for Host Valid Flag");
+ return -1;
+ }
+ es7000_spin(MIP_SPIN);
+ }
+
+ memcpy(host_reg, mip_reg, sizeof(struct mip_reg));
+ outb(1, mip_port);
+
+ spin = MIP_SPIN;
+
+ while (((unsigned long long)mip_reg->off_38 &
+ (unsigned long long)MIP_VALID) == 0) {
+ if (--spin <= 0) {
+ printk("es7000_mip_write: Timeout waiting for MIP Valid Flag");
+ return -1;
+ }
+ es7000_spin(MIP_SPIN);
+ }
+
+ status = ((unsigned long long)mip_reg->off_0 &
+ (unsigned long long)0xffff0000000000ULL) >> 48;
+ mip_reg->off_38 = ((unsigned long long)mip_reg->off_38 &
+ (unsigned long long)~MIP_VALID);
+ return status;
+}
+
+int
+es7000_start_cpu(int cpu, unsigned long eip)
+{
+ unsigned long vect = 0, psaival = 0;
+
+ if (psai == NULL)
+ return -1;
+
+ vect = ((unsigned long)__pa(eip)/0x1000) << 16;
+ psaival = (0x1000000 | vect | cpu);
+
+ while (*psai & 0x1000000)
+ ;
+
+ *psai = psaival;
+
+ return 0;
+}
+
+int
+es7000_stop_cpu(int cpu)
+{
+ int startup;
+
+ if (psai == NULL)
+ return -1;
+
+ startup = (0x1000000 | cpu);
+
+ while ((*psai & 0xff00ffff) != startup)
+ ;
+
+ startup = (*psai & 0xff0000) >> 16;
+ *psai &= 0xffffff;
+
+ return 0;
+}
+
+void __init
+es7000_sw_apic(void)
+{
+ if (es7000_plat) {
+ int mip_status;
+ struct mip_reg es7000_mip_reg;
+
+ printk("ES7000: Enabling APIC mode.\n");
+ memset(&es7000_mip_reg, 0, sizeof(struct mip_reg));
+ es7000_mip_reg.off_0 = MIP_SW_APIC;
+ es7000_mip_reg.off_38 = (MIP_VALID);
+ while ((mip_status = es7000_mip_write(&es7000_mip_reg)) != 0)
+ printk("es7000_sw_apic: command failed, status = %x\n",
+ mip_status);
+ return;
+ }
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_IOERROR_H
+#define _ASM_IA64_SN_IOERROR_H
+
+/*
+ * IO error structure.
+ *
+ * This structure would expand to hold the information retrieved from
+ * all IO related error registers.
+ *
+ * This structure is defined to hold all system specific
+ * information related to a single error.
+ *
+ * This serves a couple of purposes:
+ * - Error handling often involves translating one form of address to
+ * another. So, instead of having different data structures at each
+ * level, we have a single structure, and the appropriate fields get
+ * filled in at each layer.
+ * - It provides a way to dump all error related information at any
+ * layer of error handling (a debugging aid).
+ *
+ * A second possibility is to allow each layer to define its own error
+ * data structure, and fill in the proper fields. This has the advantage
+ * of isolating the layers.
+ * A big concern is the potential stack usage (and overflow) if each
+ * layer defines these structures on the stack (assuming we don't want
+ * to use kmalloc).
+ *
+ * Any layer wishing to pass extra information to the layer next to it
+ * in the error handling hierarchy can do so as a separate parameter.
+ */
+
+typedef struct io_error_s {
+ /* Bit fields indicating which structure fields are valid */
+ union {
+ struct {
+ unsigned ievb_errortype:1;
+ unsigned ievb_widgetnum:1;
+ unsigned ievb_widgetdev:1;
+ unsigned ievb_srccpu:1;
+ unsigned ievb_srcnode:1;
+ unsigned ievb_errnode:1;
+ unsigned ievb_sysioaddr:1;
+ unsigned ievb_xtalkaddr:1;
+ unsigned ievb_busspace:1;
+ unsigned ievb_busaddr:1;
+ unsigned ievb_vaddr:1;
+ unsigned ievb_memaddr:1;
+ unsigned ievb_epc:1;
+ unsigned ievb_ef:1;
+ unsigned ievb_tnum:1;
+ } iev_b;
+ unsigned iev_a;
+ } ie_v;
+
+ short ie_errortype; /* error type: extra info about error */
+ short ie_widgetnum; /* Widget number that's in error */
+ short ie_widgetdev; /* Device within widget in error */
+ cpuid_t ie_srccpu; /* CPU on srcnode generating error */
+ cnodeid_t ie_srcnode; /* Node which caused the error */
+ cnodeid_t ie_errnode; /* Node where error was noticed */
+ iopaddr_t ie_sysioaddr; /* Sys specific IO address */
+ iopaddr_t ie_xtalkaddr; /* Xtalk (48bit) addr of Error */
+ iopaddr_t ie_busspace; /* Bus specific address space */
+ iopaddr_t ie_busaddr; /* Bus specific address */
+ caddr_t ie_vaddr; /* Virtual address of error */
+ iopaddr_t ie_memaddr; /* Physical memory address */
+ caddr_t ie_epc; /* pc when error reported */
+ caddr_t ie_ef; /* eframe when error reported */
+ short ie_tnum; /* Xtalk TNUM field */
+} ioerror_t;
+
+#define IOERROR_INIT(e) do { (e)->ie_v.iev_a = 0; } while (0)
+#define IOERROR_SETVALUE(e,f,v) do { (e)->ie_ ## f = (v); (e)->ie_v.iev_b.ievb_ ## f = 1; } while (0)
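+
+/*
+ * Illustrative (hypothetical) usage: record the widget number on an
+ * error and mark that field valid in one step:
+ *
+ *	ioerror_t e;
+ *	IOERROR_INIT(&e);
+ *	IOERROR_SETVALUE(&e, widgetnum, 2);
+ */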
+
+#endif /* _ASM_IA64_SN_IOERROR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIBR_PROVIDER_H
+#define _ASM_IA64_SN_PCI_PCIBR_PROVIDER_H
+
+/* Workarounds */
+#define PV907516 (1 << 1) /* TIOCP: Don't write the write buffer flush reg */
+
+#define BUSTYPE_MASK 0x1
+
+/* Macros given a pcibus structure */
+#define IS_PCIX(ps) ((ps)->pbi_bridge_mode & BUSTYPE_MASK)
+#define IS_PCI_BRIDGE_ASIC(asic) ((asic) == PCIIO_ASIC_TYPE_PIC || \
+ (asic) == PCIIO_ASIC_TYPE_TIOCP)
+#define IS_PIC_SOFT(ps) ((ps)->pbi_bridge_type == PCIBR_BRIDGETYPE_PIC)
+
+
+/*
+ * The different PCI Bridge types supported on the SGI Altix platforms
+ */
+#define PCIBR_BRIDGETYPE_UNKNOWN -1
+#define PCIBR_BRIDGETYPE_PIC 2
+#define PCIBR_BRIDGETYPE_TIOCP 3
+
+/*
+ * Bridge 64bit Direct Map Attributes
+ */
+#define PCI64_ATTR_PREF (1ull << 59)
+#define PCI64_ATTR_PREC (1ull << 58)
+#define PCI64_ATTR_VIRTUAL (1ull << 57)
+#define PCI64_ATTR_BAR (1ull << 56)
+#define PCI64_ATTR_SWAP (1ull << 55)
+#define PCI64_ATTR_VIRTUAL1 (1ull << 54)
+
+#define PCI32_LOCAL_BASE 0
+#define PCI32_MAPPED_BASE 0x40000000
+#define PCI32_DIRECT_BASE 0x80000000
+
+#define IS_PCI32_MAPPED(x) ((uint64_t)(x) < PCI32_DIRECT_BASE && \
+ (uint64_t)(x) >= PCI32_MAPPED_BASE)
+#define IS_PCI32_DIRECT(x) ((uint64_t)(x) >= PCI32_MAPPED_BASE)
+
+
+/*
+ * Bridge PMU Address Translation Entry Attributes
+ */
+#define PCI32_ATE_V (0x1 << 0)
+#define PCI32_ATE_CO (0x1 << 1)
+#define PCI32_ATE_PREC (0x1 << 2)
+#define PCI32_ATE_PREF (0x1 << 3)
+#define PCI32_ATE_BAR (0x1 << 4)
+#define PCI32_ATE_ADDR_SHFT 12
+
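+/*
+ * Added note: MINIMAL_ATES_REQUIRED is true when a buffer of 'size'
+ * starting at 'addr' crosses no more IO-page boundaries than a
+ * page-aligned buffer of the same size would, i.e. the page offset
+ * does not force an extra ATE.
+ */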
+#define MINIMAL_ATES_REQUIRED(addr, size) \
+ (IOPG(IOPGOFF(addr) + (size) - 1) == IOPG((size) - 1))
+
+#define MINIMAL_ATE_FLAG(addr, size) \
+ (MINIMAL_ATES_REQUIRED((uint64_t)addr, size) ? 1 : 0)
+
+/* bit 29 of the pci address is the SWAP bit */
+#define ATE_SWAPSHIFT 29
+#define ATE_SWAP_ON(x) ((x) |= (1 << ATE_SWAPSHIFT))
+#define ATE_SWAP_OFF(x) ((x) &= ~(1 << ATE_SWAPSHIFT))
+
+/*
+ * I/O page size
+ */
+#if PAGE_SIZE < 16384
+#define IOPFNSHIFT 12 /* 4K per mapped page */
+#else
+#define IOPFNSHIFT 14 /* 16K per mapped page */
+#endif
+
+#define IOPGSIZE (1 << IOPFNSHIFT)
+#define IOPG(x) ((x) >> IOPFNSHIFT)
+#define IOPGOFF(x) ((x) & (IOPGSIZE-1))
+
+#define PCIBR_DEV_SWAP_DIR (1ull << 19)
+#define PCIBR_CTRL_PAGE_SIZE (0x1 << 21)
+
+/*
+ * PMU resources.
+ */
+struct ate_resource{
+ uint64_t *ate;
+ uint64_t num_ate;
+ uint64_t lowest_free_index;
+};
+
+struct pcibus_info {
+ struct pcibus_bussoft pbi_buscommon; /* common header */
+ uint32_t pbi_moduleid;
+ short pbi_bridge_type;
+ short pbi_bridge_mode;
+
+ struct ate_resource pbi_int_ate_resource;
+ uint64_t pbi_int_ate_size;
+
+ uint64_t pbi_dir_xbase;
+ char pbi_hub_xid;
+
+ uint64_t pbi_devreg[8];
+ spinlock_t pbi_lock;
+
+ uint32_t pbi_valid_devices;
+ uint32_t pbi_enabled_devices;
+};
+
+/*
+ * pcibus_info structure locking macros
+ */
+static inline unsigned long
+pcibr_lock(struct pcibus_info *pcibus_info)
+{
+ unsigned long flag;
+ spin_lock_irqsave(&pcibus_info->pbi_lock, flag);
+ return flag;
+}
+#define pcibr_unlock(pcibus_info, flag) spin_unlock_irqrestore(&pcibus_info->pbi_lock, flag)
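+
+/*
+ * Sketch of typical usage (illustrative only):
+ *
+ *	unsigned long flag = pcibr_lock(pcibus_info);
+ *	... manipulate ATE or device register state ...
+ *	pcibr_unlock(pcibus_info, flag);
+ */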
+
+extern void *pcibr_bus_fixup(struct pcibus_bussoft *);
+extern uint64_t pcibr_dma_map(struct pcidev_info *, unsigned long, size_t, unsigned int);
+extern void pcibr_dma_unmap(struct pcidev_info *, dma_addr_t, int);
+
+/*
+ * prototypes for the bridge asic register access routines in pcibr_reg.c
+ */
+extern void pcireg_control_bit_clr(struct pcibus_info *, uint64_t);
+extern void pcireg_control_bit_set(struct pcibus_info *, uint64_t);
+extern uint64_t pcireg_tflush_get(struct pcibus_info *);
+extern uint64_t pcireg_intr_status_get(struct pcibus_info *);
+extern void pcireg_intr_enable_bit_clr(struct pcibus_info *, uint64_t);
+extern void pcireg_intr_enable_bit_set(struct pcibus_info *, uint64_t);
+extern void pcireg_intr_addr_addr_set(struct pcibus_info *, int, uint64_t);
+extern void pcireg_force_intr_set(struct pcibus_info *, int);
+extern uint64_t pcireg_wrb_flush_get(struct pcibus_info *, int);
+extern void pcireg_int_ate_set(struct pcibus_info *, int, uint64_t);
+extern uint64_t * pcireg_int_ate_addr(struct pcibus_info *, int);
+extern void pcibr_force_interrupt(struct sn_irq_info *sn_irq_info);
+extern void pcibr_change_devices_irq(struct sn_irq_info *sn_irq_info);
+extern int pcibr_ate_alloc(struct pcibus_info *, int);
+extern void pcibr_ate_free(struct pcibus_info *, int);
+extern void ate_write(struct pcibus_info *, int, int, uint64_t);
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIBUS_PROVIDER_H
+#define _ASM_IA64_SN_PCI_PCIBUS_PROVIDER_H
+
+/*
+ * SN pci asic types. Do not ever renumber these or reuse values. The
+ * values must agree with what prom thinks they are.
+ */
+
+#define PCIIO_ASIC_TYPE_UNKNOWN 0
+#define PCIIO_ASIC_TYPE_PPB 1
+#define PCIIO_ASIC_TYPE_PIC 2
+#define PCIIO_ASIC_TYPE_TIOCP 3
+
+/*
+ * Common pciio bus provider data. There should be one of these as the
+ * first field in any pciio based provider soft structure (e.g.
+ * pcibr_soft, tioca_soft, etc.).
+ */
+
+struct pcibus_bussoft {
+ uint32_t bs_asic_type; /* chipset type */
+ uint32_t bs_xid; /* xwidget id */
+ uint64_t bs_persist_busnum; /* Persistent Bus Number */
+ uint64_t bs_legacy_io; /* legacy io pio addr */
+ uint64_t bs_legacy_mem; /* legacy mem pio addr */
+ uint64_t bs_base; /* widget base */
+ struct xwidget_info *bs_xwidget_info;
+};
+
+/*
+ * DMA mapping flags
+ */
+
+#define SN_PCIDMA_CONSISTENT 0x0001
+
+#endif /* _ASM_IA64_SN_PCI_PCIBUS_PROVIDER_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PCIDEV_H
+#define _ASM_IA64_SN_PCI_PCIDEV_H
+
+#include <linux/pci.h>
+
+extern struct sn_irq_info **sn_irq;
+
+#define SN_PCIDEV_INFO(pci_dev) \
+ ((struct pcidev_info *)(pci_dev)->sysdata)
+
+/*
+ * Given a pci_bus, return the sn pcibus_bussoft struct. Note that
+ * this only works for root busses, not for busses represented by PPB's.
+ */
+
+#define SN_PCIBUS_BUSSOFT(pci_bus) \
+ ((struct pcibus_bussoft *)(PCI_CONTROLLER((pci_bus))->platform_data))
+
+/*
+ * Given a struct pci_dev, return the sn pcibus_bussoft struct. Note
+ * that this is not equivalent to SN_PCIBUS_BUSSOFT(pci_dev->bus) due
+ * to possible PPBs in the path.
+ */
+
+#define SN_PCIDEV_BUSSOFT(pci_dev) \
+ (SN_PCIDEV_INFO(pci_dev)->pdi_host_pcidev_info->pdi_pcibus_info)
+
+#define PCIIO_BUS_NONE 255 /* bus 255 reserved */
+#define PCIIO_SLOT_NONE 255
+#define PCIIO_FUNC_NONE 255
+#define PCIIO_VENDOR_ID_NONE (-1)
+
+struct pcidev_info {
+ uint64_t pdi_pio_mapped_addr[7]; /* 6 BARs PLUS 1 ROM */
+ uint64_t pdi_slot_host_handle; /* Bus and devfn Host pci_dev */
+
+ struct pcibus_bussoft *pdi_pcibus_info; /* Kernel common bus soft */
+ struct pcidev_info *pdi_host_pcidev_info; /* Kernel Host pci_dev */
+ struct pci_dev *pdi_linux_pcidev; /* Kernel pci_dev */
+
+ struct sn_irq_info *pdi_sn_irq_info;
+};
+
+extern void sn_irq_fixup(struct pci_dev *pci_dev,
+ struct sn_irq_info *sn_irq_info);
+
+#endif /* _ASM_IA64_SN_PCI_PCIDEV_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_PIC_H
+#define _ASM_IA64_SN_PCI_PIC_H
+
+/*
+ * PIC AS DEVICE ZERO
+ * ------------------
+ *
+ * PIC handles PCI/X busses. PCI/X requires that the 'bridge' (i.e. PIC)
+ * be designated as 'device 0'. That is a departure from earlier SGI
+ * PCI bridges. Because of that we use config space 1 to access the
+ * config space of the first actual PCI device on the bus.
+ * Here's what the PIC manual says:
+ *
+ * The current PCI-X bus specification now defines that the parent
+ * host's bus bridge (PIC for example) must be device 0 on bus 0. PIC
+ * reduced the total number of devices from 8 to 4 and removed the
+ * device registers and windows, now only supporting devices 0, 1, 2,
+ * and 3. PIC did leave all 8 configuration space windows, since there
+ * was nothing to gain by removing them. Herein lies the problem.
+ * The device numbering we use, 0 through 3, is unrelated to the device
+ * numbering which PCI-X requires in configuration space. In the past we
+ * correlated config space and our device space 0 <-> 0, 1 <-> 1, etc.
+ * PCI-X requires we start at 1, not 0, and currently the PX brick
+ * associates our:
+ *
+ * device 0 with configuration space window 1,
+ * device 1 with configuration space window 2,
+ * device 2 with configuration space window 3,
+ * device 3 with configuration space window 4.
+ *
+ * The net effect is that all config space accesses are off-by-one
+ * relative to other per-slot accesses on the PIC.
+ * Here is a table that shows some of that:
+ *
+ * Internal Slot#
+ * |
+ * | 0 1 2 3
+ * ----------|---------------------------------------
+ * config | 0x21000 0x22000 0x23000 0x24000
+ * |
+ * even rrb | 0[0] n/a 1[0] n/a [] == implied even/odd
+ * |
+ * odd rrb | n/a 0[1] n/a 1[1]
+ * |
+ * int dev | 00 01 10 11
+ * |
+ * ext slot# | 1 2 3 4
+ * ----------|---------------------------------------
+ */
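+
+/*
+ * Illustrative sketch (not part of the hardware definition): given the
+ * off-by-one mapping above, a hypothetical accessor for the type-0
+ * config space of internal slot 'n' would index window n + 1:
+ *
+ *	uint32_t *cfg = (uint32_t *)&pic->p_type0_cfg_dev[n + 1].l[0];
+ */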
+
+#define PIC_ATE_TARGETID_SHFT 8
+#define PIC_HOST_INTR_ADDR 0x0000FFFFFFFFFFFFUL
+#define PIC_PCI64_ATTR_TARG_SHFT 60
+
+
+/*****************************************************************************
+ *********************** PIC MMR structure mapping ***************************
+ *****************************************************************************/
+
+/* NOTE: PIC WAR. PV#854697. PIC does not allow writes just to [31:0]
+ * of a 64-bit register. When writing PIC registers, always write the
+ * entire 64 bits.
+ */
+
+struct pic {
+
+ /* 0x000000-0x00FFFF -- Local Registers */
+
+ /* 0x000000-0x000057 -- Standard Widget Configuration */
+ uint64_t p_wid_id; /* 0x000000 */
+ uint64_t p_wid_stat; /* 0x000008 */
+ uint64_t p_wid_err_upper; /* 0x000010 */
+ uint64_t p_wid_err_lower; /* 0x000018 */
+ #define p_wid_err p_wid_err_lower
+ uint64_t p_wid_control; /* 0x000020 */
+ uint64_t p_wid_req_timeout; /* 0x000028 */
+ uint64_t p_wid_int_upper; /* 0x000030 */
+ uint64_t p_wid_int_lower; /* 0x000038 */
+ #define p_wid_int p_wid_int_lower
+ uint64_t p_wid_err_cmdword; /* 0x000040 */
+ uint64_t p_wid_llp; /* 0x000048 */
+ uint64_t p_wid_tflush; /* 0x000050 */
+
+ /* 0x000058-0x00007F -- Bridge-specific Widget Configuration */
+ uint64_t p_wid_aux_err; /* 0x000058 */
+ uint64_t p_wid_resp_upper; /* 0x000060 */
+ uint64_t p_wid_resp_lower; /* 0x000068 */
+ #define p_wid_resp p_wid_resp_lower
+ uint64_t p_wid_tst_pin_ctrl; /* 0x000070 */
+ uint64_t p_wid_addr_lkerr; /* 0x000078 */
+
+ /* 0x000080-0x00008F -- PMU & MAP */
+ uint64_t p_dir_map; /* 0x000080 */
+ uint64_t _pad_000088; /* 0x000088 */
+
+ /* 0x000090-0x00009F -- SSRAM */
+ uint64_t p_map_fault; /* 0x000090 */
+ uint64_t _pad_000098; /* 0x000098 */
+
+ /* 0x0000A0-0x0000AF -- Arbitration */
+ uint64_t p_arb; /* 0x0000A0 */
+ uint64_t _pad_0000A8; /* 0x0000A8 */
+
+ /* 0x0000B0-0x0000BF -- Number In A Can or ATE Parity Error */
+ uint64_t p_ate_parity_err; /* 0x0000B0 */
+ uint64_t _pad_0000B8; /* 0x0000B8 */
+
+ /* 0x0000C0-0x0000FF -- PCI/GIO */
+ uint64_t p_bus_timeout; /* 0x0000C0 */
+ uint64_t p_pci_cfg; /* 0x0000C8 */
+ uint64_t p_pci_err_upper; /* 0x0000D0 */
+ uint64_t p_pci_err_lower; /* 0x0000D8 */
+ #define p_pci_err p_pci_err_lower
+ uint64_t _pad_0000E0[4]; /* 0x0000{E0..F8} */
+
+ /* 0x000100-0x0001FF -- Interrupt */
+ uint64_t p_int_status; /* 0x000100 */
+ uint64_t p_int_enable; /* 0x000108 */
+ uint64_t p_int_rst_stat; /* 0x000110 */
+ uint64_t p_int_mode; /* 0x000118 */
+ uint64_t p_int_device; /* 0x000120 */
+ uint64_t p_int_host_err; /* 0x000128 */
+ uint64_t p_int_addr[8]; /* 0x0001{30,,,68} */
+ uint64_t p_err_int_view; /* 0x000170 */
+ uint64_t p_mult_int; /* 0x000178 */
+ uint64_t p_force_always[8]; /* 0x0001{80,,,B8} */
+ uint64_t p_force_pin[8]; /* 0x0001{C0,,,F8} */
+
+ /* 0x000200-0x000298 -- Device */
+ uint64_t p_device[4]; /* 0x0002{00,,,18} */
+ uint64_t _pad_000220[4]; /* 0x0002{20,,,38} */
+ uint64_t p_wr_req_buf[4]; /* 0x0002{40,,,58} */
+ uint64_t _pad_000260[4]; /* 0x0002{60,,,78} */
+ uint64_t p_rrb_map[2]; /* 0x0002{80,,,88} */
+ #define p_even_resp p_rrb_map[0] /* 0x000280 */
+ #define p_odd_resp p_rrb_map[1] /* 0x000288 */
+ uint64_t p_resp_status; /* 0x000290 */
+ uint64_t p_resp_clear; /* 0x000298 */
+
+ uint64_t _pad_0002A0[12]; /* 0x0002{A0..F8} */
+
+ /* 0x000300-0x0003F8 -- Buffer Address Match Registers */
+ struct {
+ uint64_t upper; /* 0x0003{00,,,F0} */
+ uint64_t lower; /* 0x0003{08,,,F8} */
+ } p_buf_addr_match[16];
+
+ /* 0x000400-0x0005FF -- Performance Monitor Registers (even only) */
+ struct {
+ uint64_t flush_w_touch; /* 0x000{400,,,5C0} */
+ uint64_t flush_wo_touch; /* 0x000{408,,,5C8} */
+ uint64_t inflight; /* 0x000{410,,,5D0} */
+ uint64_t prefetch; /* 0x000{418,,,5D8} */
+ uint64_t total_pci_retry; /* 0x000{420,,,5E0} */
+ uint64_t max_pci_retry; /* 0x000{428,,,5E8} */
+ uint64_t max_latency; /* 0x000{430,,,5F0} */
+ uint64_t clear_all; /* 0x000{438,,,5F8} */
+ } p_buf_count[8];
+
+
+ /* 0x000600-0x0009FF -- PCI/X registers */
+ uint64_t p_pcix_bus_err_addr; /* 0x000600 */
+ uint64_t p_pcix_bus_err_attr; /* 0x000608 */
+ uint64_t p_pcix_bus_err_data; /* 0x000610 */
+ uint64_t p_pcix_pio_split_addr; /* 0x000618 */
+ uint64_t p_pcix_pio_split_attr; /* 0x000620 */
+ uint64_t p_pcix_dma_req_err_attr; /* 0x000628 */
+ uint64_t p_pcix_dma_req_err_addr; /* 0x000630 */
+ uint64_t p_pcix_timeout; /* 0x000638 */
+
+ uint64_t _pad_000640[120]; /* 0x000{640,,,9F8} */
+
+ /* 0x000A00-0x000BFF -- PCI/X Read&Write Buffer */
+ struct {
+ uint64_t p_buf_addr; /* 0x000{A00,,,AF0} */
+ uint64_t p_buf_attr; /* 0X000{A08,,,AF8} */
+ } p_pcix_read_buf_64[16];
+
+ struct {
+ uint64_t p_buf_addr; /* 0x000{B00,,,BE0} */
+ uint64_t p_buf_attr; /* 0x000{B08,,,BE8} */
+ uint64_t p_buf_valid; /* 0x000{B10,,,BF0} */
+ uint64_t __pad1; /* 0x000{B18,,,BF8} */
+ } p_pcix_write_buf_64[8];
+
+ /* End of Local Registers -- Start of Address Map space */
+
+ char _pad_000c00[0x010000 - 0x000c00];
+
+ /* 0x010000-0x011fff -- Internal ATE RAM (Auto Parity Generation) */
+ uint64_t p_int_ate_ram[1024]; /* 0x010000-0x011fff */
+
+ /* 0x012000-0x013fff -- Internal ATE RAM (Manual Parity Generation) */
+ uint64_t p_int_ate_ram_mp[1024]; /* 0x012000-0x013fff */
+
+ char _pad_014000[0x18000 - 0x014000];
+
+ /* 0x18000-0x197F8 -- PIC Write Request Ram */
+ uint64_t p_wr_req_lower[256]; /* 0x18000 - 0x187F8 */
+ uint64_t p_wr_req_upper[256]; /* 0x18800 - 0x18FF8 */
+ uint64_t p_wr_req_parity[256]; /* 0x19000 - 0x197F8 */
+
+ char _pad_019800[0x20000 - 0x019800];
+
+ /* 0x020000-0x027FFF -- PCI Device Configuration Spaces */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x02{0000,,,7FFF} */
+ uint16_t s[0x1000 / 2]; /* 0x02{0000,,,7FFF} */
+ uint32_t l[0x1000 / 4]; /* 0x02{0000,,,7FFF} */
+ uint64_t d[0x1000 / 8]; /* 0x02{0000,,,7FFF} */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } p_type0_cfg_dev[8]; /* 0x02{0000,,,7FFF} */
+
+ /* 0x028000-0x028FFF -- PCI Type 1 Configuration Space */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x028000-0x029000 */
+ uint16_t s[0x1000 / 2]; /* 0x028000-0x029000 */
+ uint32_t l[0x1000 / 4]; /* 0x028000-0x029000 */
+ uint64_t d[0x1000 / 8]; /* 0x028000-0x029000 */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } p_type1_cfg; /* 0x028000-0x029000 */
+
+ char _pad_029000[0x030000-0x029000];
+
+ /* 0x030000-0x030007 -- PCI Interrupt Acknowledge Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } p_pci_iack; /* 0x030000-0x030007 */
+
+ char _pad_030007[0x040000-0x030008];
+
+ /* 0x040000-0x040007 -- PCIX Special Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } p_pcix_cycle; /* 0x040000-0x040007 */
+};
+
+#endif /* _ASM_IA64_SN_PCI_PIC_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_PCI_TIOCP_H
+#define _ASM_IA64_SN_PCI_TIOCP_H
+
+#define TIOCP_HOST_INTR_ADDR 0x003FFFFFFFFFFFFFUL
+#define TIOCP_PCI64_CMDTYPE_MEM (0x1ull << 60)
+
+
+/*****************************************************************************
+ *********************** TIOCP MMR structure mapping ***************************
+ *****************************************************************************/
+
+struct tiocp {
+
+ /* 0x000000-0x00FFFF -- Local Registers */
+
+ /* 0x000000-0x000057 -- (Legacy Widget Space) Configuration */
+ uint64_t cp_id; /* 0x000000 */
+ uint64_t cp_stat; /* 0x000008 */
+ uint64_t cp_err_upper; /* 0x000010 */
+ uint64_t cp_err_lower; /* 0x000018 */
+ #define cp_err cp_err_lower
+ uint64_t cp_control; /* 0x000020 */
+ uint64_t cp_req_timeout; /* 0x000028 */
+ uint64_t cp_intr_upper; /* 0x000030 */
+ uint64_t cp_intr_lower; /* 0x000038 */
+ #define cp_intr cp_intr_lower
+ uint64_t cp_err_cmdword; /* 0x000040 */
+ uint64_t _pad_000048; /* 0x000048 */
+ uint64_t cp_tflush; /* 0x000050 */
+
+ /* 0x000058-0x00007F -- Bridge-specific Configuration */
+ uint64_t cp_aux_err; /* 0x000058 */
+ uint64_t cp_resp_upper; /* 0x000060 */
+ uint64_t cp_resp_lower; /* 0x000068 */
+ #define cp_resp cp_resp_lower
+ uint64_t cp_tst_pin_ctrl; /* 0x000070 */
+ uint64_t cp_addr_lkerr; /* 0x000078 */
+
+ /* 0x000080-0x00008F -- PMU & MAP */
+ uint64_t cp_dir_map; /* 0x000080 */
+ uint64_t _pad_000088; /* 0x000088 */
+
+ /* 0x000090-0x00009F -- SSRAM */
+ uint64_t cp_map_fault; /* 0x000090 */
+ uint64_t _pad_000098; /* 0x000098 */
+
+ /* 0x0000A0-0x0000AF -- Arbitration */
+ uint64_t cp_arb; /* 0x0000A0 */
+ uint64_t _pad_0000A8; /* 0x0000A8 */
+
+ /* 0x0000B0-0x0000BF -- Number In A Can or ATE Parity Error */
+ uint64_t cp_ate_parity_err; /* 0x0000B0 */
+ uint64_t _pad_0000B8; /* 0x0000B8 */
+
+ /* 0x0000C0-0x0000FF -- PCI/GIO */
+ uint64_t cp_bus_timeout; /* 0x0000C0 */
+ uint64_t cp_pci_cfg; /* 0x0000C8 */
+ uint64_t cp_pci_err_upper; /* 0x0000D0 */
+ uint64_t cp_pci_err_lower; /* 0x0000D8 */
+ #define cp_pci_err cp_pci_err_lower
+ uint64_t _pad_0000E0[4]; /* 0x0000{E0..F8} */
+
+ /* 0x000100-0x0001FF -- Interrupt */
+ uint64_t cp_int_status; /* 0x000100 */
+ uint64_t cp_int_enable; /* 0x000108 */
+ uint64_t cp_int_rst_stat; /* 0x000110 */
+ uint64_t cp_int_mode; /* 0x000118 */
+ uint64_t cp_int_device; /* 0x000120 */
+ uint64_t cp_int_host_err; /* 0x000128 */
+ uint64_t cp_int_addr[8]; /* 0x0001{30,,,68} */
+ uint64_t cp_err_int_view; /* 0x000170 */
+ uint64_t cp_mult_int; /* 0x000178 */
+ uint64_t cp_force_always[8]; /* 0x0001{80,,,B8} */
+ uint64_t cp_force_pin[8]; /* 0x0001{C0,,,F8} */
+
+ /* 0x000200-0x000298 -- Device */
+ uint64_t cp_device[4]; /* 0x0002{00,,,18} */
+ uint64_t _pad_000220[4]; /* 0x0002{20,,,38} */
+ uint64_t cp_wr_req_buf[4]; /* 0x0002{40,,,58} */
+ uint64_t _pad_000260[4]; /* 0x0002{60,,,78} */
+ uint64_t cp_rrb_map[2]; /* 0x0002{80,,,88} */
+ #define cp_even_resp cp_rrb_map[0] /* 0x000280 */
+ #define cp_odd_resp cp_rrb_map[1] /* 0x000288 */
+ uint64_t cp_resp_status; /* 0x000290 */
+ uint64_t cp_resp_clear; /* 0x000298 */
+
+ uint64_t _pad_0002A0[12]; /* 0x0002{A0..F8} */
+
+ /* 0x000300-0x0003F8 -- Buffer Address Match Registers */
+ struct {
+ uint64_t upper; /* 0x0003{00,,,F0} */
+ uint64_t lower; /* 0x0003{08,,,F8} */
+ } cp_buf_addr_match[16];
+
+ /* 0x000400-0x0005FF -- Performance Monitor Registers (even only) */
+ struct {
+ uint64_t flush_w_touch; /* 0x000{400,,,5C0} */
+ uint64_t flush_wo_touch; /* 0x000{408,,,5C8} */
+ uint64_t inflight; /* 0x000{410,,,5D0} */
+ uint64_t prefetch; /* 0x000{418,,,5D8} */
+ uint64_t total_pci_retry; /* 0x000{420,,,5E0} */
+ uint64_t max_pci_retry; /* 0x000{428,,,5E8} */
+ uint64_t max_latency; /* 0x000{430,,,5F0} */
+ uint64_t clear_all; /* 0x000{438,,,5F8} */
+ } cp_buf_count[8];
+
+
+ /* 0x000600-0x0009FF -- PCI/X registers */
+ uint64_t cp_pcix_bus_err_addr; /* 0x000600 */
+ uint64_t cp_pcix_bus_err_attr; /* 0x000608 */
+ uint64_t cp_pcix_bus_err_data; /* 0x000610 */
+ uint64_t cp_pcix_pio_split_addr; /* 0x000618 */
+ uint64_t cp_pcix_pio_split_attr; /* 0x000620 */
+ uint64_t cp_pcix_dma_req_err_attr; /* 0x000628 */
+ uint64_t cp_pcix_dma_req_err_addr; /* 0x000630 */
+ uint64_t cp_pcix_timeout; /* 0x000638 */
+
+ uint64_t _pad_000640[24]; /* 0x000{640,,,6F8} */
+
+ /* 0x000700-0x000737 -- Debug Registers */
+ uint64_t cp_ct_debug_ctl; /* 0x000700 */
+ uint64_t cp_br_debug_ctl; /* 0x000708 */
+ uint64_t cp_mux3_debug_ctl; /* 0x000710 */
+ uint64_t cp_mux4_debug_ctl; /* 0x000718 */
+ uint64_t cp_mux5_debug_ctl; /* 0x000720 */
+ uint64_t cp_mux6_debug_ctl; /* 0x000728 */
+ uint64_t cp_mux7_debug_ctl; /* 0x000730 */
+
+ uint64_t _pad_000738[89]; /* 0x000{738,,,9F8} */
+
+ /* 0x000A00-0x000BFF -- PCI/X Read&Write Buffer */
+ struct {
+ uint64_t cp_buf_addr; /* 0x000{A00,,,AF0} */
+ uint64_t cp_buf_attr; /* 0X000{A08,,,AF8} */
+ } cp_pcix_read_buf_64[16];
+
+ struct {
+ uint64_t cp_buf_addr; /* 0x000{B00,,,BE0} */
+ uint64_t cp_buf_attr; /* 0x000{B08,,,BE8} */
+ uint64_t cp_buf_valid; /* 0x000{B10,,,BF0} */
+ uint64_t __pad1; /* 0x000{B18,,,BF8} */
+ } cp_pcix_write_buf_64[8];
+
+ /* End of Local Registers -- Start of Address Map space */
+
+ char _pad_000c00[0x010000 - 0x000c00];
+
+ /* 0x010000-0x011FF8 -- Internal ATE RAM (Auto Parity Generation) */
+ uint64_t cp_int_ate_ram[1024]; /* 0x010000-0x011FF8 */
+
+ char _pad_012000[0x14000 - 0x012000];
+
+ /* 0x014000-0x015FF8 -- Internal ATE RAM (Manual Parity Generation) */
+ uint64_t cp_int_ate_ram_mp[1024]; /* 0x014000-0x015FF8 */
+
+ char _pad_016000[0x18000 - 0x016000];
+
+ /* 0x18000-0x197F8 -- TIOCP Write Request Ram */
+ uint64_t cp_wr_req_lower[256]; /* 0x18000 - 0x187F8 */
+ uint64_t cp_wr_req_upper[256]; /* 0x18800 - 0x18FF8 */
+ uint64_t cp_wr_req_parity[256]; /* 0x19000 - 0x197F8 */
+
+ char _pad_019800[0x1C000 - 0x019800];
+
+ /* 0x1C000-0x1EFF8 -- TIOCP Read Response Ram */
+ uint64_t cp_rd_resp_lower[512]; /* 0x1C000 - 0x1CFF8 */
+ uint64_t cp_rd_resp_upper[512]; /* 0x1D000 - 0x1DFF8 */
+ uint64_t cp_rd_resp_parity[512]; /* 0x1E000 - 0x1EFF8 */
+
+ char _pad_01F000[0x20000 - 0x01F000];
+
+ /* 0x020000-0x021FFF -- Host Device (CP) Configuration Space (not used) */
+ char _pad_020000[0x021000 - 0x20000];
+
+ /* 0x021000-0x027FFF -- PCI Device Configuration Spaces */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x02{0000,,,7FFF} */
+ uint16_t s[0x1000 / 2]; /* 0x02{0000,,,7FFF} */
+ uint32_t l[0x1000 / 4]; /* 0x02{0000,,,7FFF} */
+ uint64_t d[0x1000 / 8]; /* 0x02{0000,,,7FFF} */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } cp_type0_cfg_dev[7]; /* 0x02{1000,,,7FFF} */
+
+ /* 0x028000-0x028FFF -- PCI Type 1 Configuration Space */
+ union {
+ uint8_t c[0x1000 / 1]; /* 0x028000-0x029000 */
+ uint16_t s[0x1000 / 2]; /* 0x028000-0x029000 */
+ uint32_t l[0x1000 / 4]; /* 0x028000-0x029000 */
+ uint64_t d[0x1000 / 8]; /* 0x028000-0x029000 */
+ union {
+ uint8_t c[0x100 / 1];
+ uint16_t s[0x100 / 2];
+ uint32_t l[0x100 / 4];
+ uint64_t d[0x100 / 8];
+ } f[8];
+ } cp_type1_cfg; /* 0x028000-0x029000 */
+
+ char _pad_029000[0x030000-0x029000];
+
+ /* 0x030000-0x030007 -- PCI Interrupt Acknowledge Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } cp_pci_iack; /* 0x030000-0x030007 */
+
+ char _pad_030007[0x040000-0x030008];
+
+ /* 0x040000-0x040007 -- PCIX Special Cycle */
+ union {
+ uint8_t c[8 / 1];
+ uint16_t s[8 / 2];
+ uint32_t l[8 / 4];
+ uint64_t d[8 / 8];
+ } cp_pcix_cycle; /* 0x040000-0x040007 */
+
+ char _pad_040007[0x200000-0x040008];
+
+ /* 0x200000-0x7FFFFF -- PCI/GIO Device Spaces */
+ union {
+ uint8_t c[0x100000 / 1];
+ uint16_t s[0x100000 / 2];
+ uint32_t l[0x100000 / 4];
+ uint64_t d[0x100000 / 8];
+ } cp_devio_raw[6]; /* 0x200000-0x7FFFFF */
+
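+ /* Added note: the large device windows are interleaved. Devices 0
+ * and 1 map to raw spaces 0 and 2, while devices 2 and 3 map to
+ * raw spaces 4 and 5. */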
+ #define cp_devio(n) cp_devio_raw[((n) < 2) ? ((n) * 2) : ((n) + 2)]
+
+ char _pad_800000[0xA00000-0x800000];
+
+ /* 0xA00000-0xBFFFFF -- PCI/GIO Device Spaces w/flush */
+ union {
+ uint8_t c[0x100000 / 1];
+ uint16_t s[0x100000 / 2];
+ uint32_t l[0x100000 / 4];
+ uint64_t d[0x100000 / 8];
+ } cp_devio_raw_flush[6]; /* 0xA00000-0xBFFFFF */
+
+ #define cp_devio_flush(n) cp_devio_raw_flush[((n) < 2) ? ((n) * 2) : ((n) + 2)]
+
+};
+
+#endif /* _ASM_IA64_SN_PCI_TIOCP_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SHUB_H
+#define _ASM_IA64_SN_SHUB_H
+
+
+#define MD_MEM_BANKS 4
+
+
+/*
+ * Junk Bus Address Space
+ * The junk bus is used to access the PROM, LEDs, and UARTs. It is
+ * accessed through the local block MMR space. The data path is
+ * 16 bits wide. This space requires address bits 31-27 to be set, and
+ * is further divided by address bits 26:15.
+ * The LED addresses are write-only. To read the LEDs, you need to use
+ * SH_JUNK_BUS_LED0-3, defined in shub_mmr.h.
+ */
+#define SH_REAL_JUNK_BUS_LED0 0x7fed00000UL
+#define SH_REAL_JUNK_BUS_LED1 0x7fed10000UL
+#define SH_REAL_JUNK_BUS_LED2 0x7fed20000UL
+#define SH_REAL_JUNK_BUS_LED3 0x7fed30000UL
+#define SH_JUNK_BUS_UART0 0x7fed40000UL
+#define SH_JUNK_BUS_UART1 0x7fed40008UL
+#define SH_JUNK_BUS_UART2 0x7fed40010UL
+#define SH_JUNK_BUS_UART3 0x7fed40018UL
+#define SH_JUNK_BUS_UART4 0x7fed40020UL
+#define SH_JUNK_BUS_UART5 0x7fed40028UL
+#define SH_JUNK_BUS_UART6 0x7fed40030UL
+#define SH_JUNK_BUS_UART7 0x7fed40038UL
+
+#endif /* _ASM_IA64_SN_SHUB_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SHUBIO_H
+#define _ASM_IA64_SN_SHUBIO_H
+
+#define HUB_WIDGET_ID_MAX 0xf
+#define IIO_NUM_ITTES 7
+#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1)
+
+#define IIO_WID 0x00400000 /* Crosstalk Widget Identification */
+ /* This register is also accessible from
+ * Crosstalk at address 0x0. */
+#define IIO_WSTAT 0x00400008 /* Crosstalk Widget Status */
+#define IIO_WCR 0x00400020 /* Crosstalk Widget Control Register */
+#define IIO_ILAPR 0x00400100 /* IO Local Access Protection Register */
+#define IIO_ILAPO 0x00400108 /* IO Local Access Protection Override */
+#define IIO_IOWA 0x00400110 /* IO Outbound Widget Access */
+#define IIO_IIWA 0x00400118 /* IO Inbound Widget Access */
+#define IIO_IIDEM 0x00400120 /* IO Inbound Device Error Mask */
+#define IIO_ILCSR 0x00400128 /* IO LLP Control and Status Register */
+#define IIO_ILLR 0x00400130 /* IO LLP Log Register */
+#define IIO_IIDSR 0x00400138 /* IO Interrupt Destination */
+
+#define IIO_IGFX0 0x00400140 /* IO Graphics Node-Widget Map 0 */
+#define IIO_IGFX1 0x00400148 /* IO Graphics Node-Widget Map 1 */
+
+#define IIO_ISCR0 0x00400150 /* IO Scratch Register 0 */
+#define IIO_ISCR1 0x00400158 /* IO Scratch Register 1 */
+
+#define IIO_ITTE1 0x00400160 /* IO Translation Table Entry 1 */
+#define IIO_ITTE2 0x00400168 /* IO Translation Table Entry 2 */
+#define IIO_ITTE3 0x00400170 /* IO Translation Table Entry 3 */
+#define IIO_ITTE4 0x00400178 /* IO Translation Table Entry 4 */
+#define IIO_ITTE5 0x00400180 /* IO Translation Table Entry 5 */
+#define IIO_ITTE6 0x00400188 /* IO Translation Table Entry 6 */
+#define IIO_ITTE7 0x00400190 /* IO Translation Table Entry 7 */
+
+#define IIO_IPRB0 0x00400198 /* IO PRB Entry 0 */
+#define IIO_IPRB8 0x004001A0 /* IO PRB Entry 8 */
+#define IIO_IPRB9 0x004001A8 /* IO PRB Entry 9 */
+#define IIO_IPRBA 0x004001B0 /* IO PRB Entry A */
+#define IIO_IPRBB 0x004001B8 /* IO PRB Entry B */
+#define IIO_IPRBC 0x004001C0 /* IO PRB Entry C */
+#define IIO_IPRBD 0x004001C8 /* IO PRB Entry D */
+#define IIO_IPRBE 0x004001D0 /* IO PRB Entry E */
+#define IIO_IPRBF 0x004001D8 /* IO PRB Entry F */
+
+#define IIO_IXCC 0x004001E0 /* IO Crosstalk Credit Count Timeout */
+#define IIO_IMEM 0x004001E8 /* IO Miscellaneous Error Mask */
+#define IIO_IXTT 0x004001F0 /* IO Crosstalk Timeout Threshold */
+#define IIO_IECLR 0x004001F8 /* IO Error Clear Register */
+#define IIO_IBCR 0x00400200 /* IO BTE Control Register */
+
+#define IIO_IXSM 0x00400208 /* IO Crosstalk Spurious Message */
+#define IIO_IXSS 0x00400210 /* IO Crosstalk Spurious Sideband */
+
+#define IIO_ILCT 0x00400218 /* IO LLP Channel Test */
+
+#define IIO_IIEPH1 0x00400220 /* IO Incoming Error Packet Header, Part 1 */
+#define IIO_IIEPH2 0x00400228 /* IO Incoming Error Packet Header, Part 2 */
+
+
+#define IIO_ISLAPR 0x00400230 /* IO SXB Local Access Protection Register */
+#define IIO_ISLAPO 0x00400238 /* IO SXB Local Access Protection Override */
+
+#define IIO_IWI 0x00400240 /* IO Wrapper Interrupt Register */
+#define IIO_IWEL 0x00400248 /* IO Wrapper Error Log Register */
+#define IIO_IWC 0x00400250 /* IO Wrapper Control Register */
+#define IIO_IWS 0x00400258 /* IO Wrapper Status Register */
+#define IIO_IWEIM 0x00400260 /* IO Wrapper Error Interrupt Masking Register */
+
+#define IIO_IPCA 0x00400300 /* IO PRB Counter Adjust */
+
+#define IIO_IPRTE0_A 0x00400308 /* IO PIO Read Address Table Entry 0, Part A */
+#define IIO_IPRTE1_A 0x00400310 /* IO PIO Read Address Table Entry 1, Part A */
+#define IIO_IPRTE2_A 0x00400318 /* IO PIO Read Address Table Entry 2, Part A */
+#define IIO_IPRTE3_A 0x00400320 /* IO PIO Read Address Table Entry 3, Part A */
+#define IIO_IPRTE4_A 0x00400328 /* IO PIO Read Address Table Entry 4, Part A */
+#define IIO_IPRTE5_A 0x00400330 /* IO PIO Read Address Table Entry 5, Part A */
+#define IIO_IPRTE6_A 0x00400338 /* IO PIO Read Address Table Entry 6, Part A */
+#define IIO_IPRTE7_A 0x00400340 /* IO PIO Read Address Table Entry 7, Part A */
+
+#define IIO_IPRTE0_B 0x00400348 /* IO PIO Read Address Table Entry 0, Part B */
+#define IIO_IPRTE1_B 0x00400350 /* IO PIO Read Address Table Entry 1, Part B */
+#define IIO_IPRTE2_B 0x00400358 /* IO PIO Read Address Table Entry 2, Part B */
+#define IIO_IPRTE3_B 0x00400360 /* IO PIO Read Address Table Entry 3, Part B */
+#define IIO_IPRTE4_B 0x00400368 /* IO PIO Read Address Table Entry 4, Part B */
+#define IIO_IPRTE5_B 0x00400370 /* IO PIO Read Address Table Entry 5, Part B */
+#define IIO_IPRTE6_B 0x00400378 /* IO PIO Read Address Table Entry 6, Part B */
+#define IIO_IPRTE7_B 0x00400380 /* IO PIO Read Address Table Entry 7, Part B */
+
+#define IIO_IPDR 0x00400388 /* IO PIO Deallocation Register */
+#define IIO_ICDR 0x00400390 /* IO CRB Entry Deallocation Register */
+#define IIO_IFDR 0x00400398 /* IO IOQ FIFO Depth Register */
+#define IIO_IIAP 0x004003A0 /* IO IIQ Arbitration Parameters */
+#define IIO_ICMR 0x004003A8 /* IO CRB Management Register */
+#define IIO_ICCR 0x004003B0 /* IO CRB Control Register */
+#define IIO_ICTO 0x004003B8 /* IO CRB Timeout */
+#define IIO_ICTP 0x004003C0 /* IO CRB Timeout Prescaler */
+
+#define IIO_ICRB0_A 0x00400400 /* IO CRB Entry 0_A */
+#define IIO_ICRB0_B 0x00400408 /* IO CRB Entry 0_B */
+#define IIO_ICRB0_C 0x00400410 /* IO CRB Entry 0_C */
+#define IIO_ICRB0_D 0x00400418 /* IO CRB Entry 0_D */
+#define IIO_ICRB0_E 0x00400420 /* IO CRB Entry 0_E */
+
+#define IIO_ICRB1_A 0x00400430 /* IO CRB Entry 1_A */
+#define IIO_ICRB1_B 0x00400438 /* IO CRB Entry 1_B */
+#define IIO_ICRB1_C 0x00400440 /* IO CRB Entry 1_C */
+#define IIO_ICRB1_D 0x00400448 /* IO CRB Entry 1_D */
+#define IIO_ICRB1_E 0x00400450 /* IO CRB Entry 1_E */
+
+#define IIO_ICRB2_A 0x00400460 /* IO CRB Entry 2_A */
+#define IIO_ICRB2_B 0x00400468 /* IO CRB Entry 2_B */
+#define IIO_ICRB2_C 0x00400470 /* IO CRB Entry 2_C */
+#define IIO_ICRB2_D 0x00400478 /* IO CRB Entry 2_D */
+#define IIO_ICRB2_E 0x00400480 /* IO CRB Entry 2_E */
+
+#define IIO_ICRB3_A 0x00400490 /* IO CRB Entry 3_A */
+#define IIO_ICRB3_B 0x00400498 /* IO CRB Entry 3_B */
+#define IIO_ICRB3_C 0x004004a0 /* IO CRB Entry 3_C */
+#define IIO_ICRB3_D 0x004004a8 /* IO CRB Entry 3_D */
+#define IIO_ICRB3_E 0x004004b0 /* IO CRB Entry 3_E */
+
+#define IIO_ICRB4_A 0x004004c0 /* IO CRB Entry 4_A */
+#define IIO_ICRB4_B 0x004004c8 /* IO CRB Entry 4_B */
+#define IIO_ICRB4_C 0x004004d0 /* IO CRB Entry 4_C */
+#define IIO_ICRB4_D 0x004004d8 /* IO CRB Entry 4_D */
+#define IIO_ICRB4_E 0x004004e0 /* IO CRB Entry 4_E */
+
+#define IIO_ICRB5_A 0x004004f0 /* IO CRB Entry 5_A */
+#define IIO_ICRB5_B 0x004004f8 /* IO CRB Entry 5_B */
+#define IIO_ICRB5_C 0x00400500 /* IO CRB Entry 5_C */
+#define IIO_ICRB5_D 0x00400508 /* IO CRB Entry 5_D */
+#define IIO_ICRB5_E 0x00400510 /* IO CRB Entry 5_E */
+
+#define IIO_ICRB6_A 0x00400520 /* IO CRB Entry 6_A */
+#define IIO_ICRB6_B 0x00400528 /* IO CRB Entry 6_B */
+#define IIO_ICRB6_C 0x00400530 /* IO CRB Entry 6_C */
+#define IIO_ICRB6_D 0x00400538 /* IO CRB Entry 6_D */
+#define IIO_ICRB6_E 0x00400540 /* IO CRB Entry 6_E */
+
+#define IIO_ICRB7_A 0x00400550 /* IO CRB Entry 7_A */
+#define IIO_ICRB7_B 0x00400558 /* IO CRB Entry 7_B */
+#define IIO_ICRB7_C 0x00400560 /* IO CRB Entry 7_C */
+#define IIO_ICRB7_D 0x00400568 /* IO CRB Entry 7_D */
+#define IIO_ICRB7_E 0x00400570 /* IO CRB Entry 7_E */
+
+#define IIO_ICRB8_A 0x00400580 /* IO CRB Entry 8_A */
+#define IIO_ICRB8_B 0x00400588 /* IO CRB Entry 8_B */
+#define IIO_ICRB8_C 0x00400590 /* IO CRB Entry 8_C */
+#define IIO_ICRB8_D 0x00400598 /* IO CRB Entry 8_D */
+#define IIO_ICRB8_E 0x004005a0 /* IO CRB Entry 8_E */
+
+#define IIO_ICRB9_A 0x004005b0 /* IO CRB Entry 9_A */
+#define IIO_ICRB9_B 0x004005b8 /* IO CRB Entry 9_B */
+#define IIO_ICRB9_C 0x004005c0 /* IO CRB Entry 9_C */
+#define IIO_ICRB9_D 0x004005c8 /* IO CRB Entry 9_D */
+#define IIO_ICRB9_E 0x004005d0 /* IO CRB Entry 9_E */
+
+#define IIO_ICRBA_A 0x004005e0 /* IO CRB Entry A_A */
+#define IIO_ICRBA_B 0x004005e8 /* IO CRB Entry A_B */
+#define IIO_ICRBA_C 0x004005f0 /* IO CRB Entry A_C */
+#define IIO_ICRBA_D 0x004005f8 /* IO CRB Entry A_D */
+#define IIO_ICRBA_E 0x00400600 /* IO CRB Entry A_E */
+
+#define IIO_ICRBB_A 0x00400610 /* IO CRB Entry B_A */
+#define IIO_ICRBB_B 0x00400618 /* IO CRB Entry B_B */
+#define IIO_ICRBB_C 0x00400620 /* IO CRB Entry B_C */
+#define IIO_ICRBB_D 0x00400628 /* IO CRB Entry B_D */
+#define IIO_ICRBB_E 0x00400630 /* IO CRB Entry B_E */
+
+#define IIO_ICRBC_A 0x00400640 /* IO CRB Entry C_A */
+#define IIO_ICRBC_B 0x00400648 /* IO CRB Entry C_B */
+#define IIO_ICRBC_C 0x00400650 /* IO CRB Entry C_C */
+#define IIO_ICRBC_D 0x00400658 /* IO CRB Entry C_D */
+#define IIO_ICRBC_E 0x00400660 /* IO CRB Entry C_E */
+
+#define IIO_ICRBD_A 0x00400670 /* IO CRB Entry D_A */
+#define IIO_ICRBD_B 0x00400678 /* IO CRB Entry D_B */
+#define IIO_ICRBD_C 0x00400680 /* IO CRB Entry D_C */
+#define IIO_ICRBD_D 0x00400688 /* IO CRB Entry D_D */
+#define IIO_ICRBD_E 0x00400690 /* IO CRB Entry D_E */
+
+#define IIO_ICRBE_A 0x004006a0 /* IO CRB Entry E_A */
+#define IIO_ICRBE_B 0x004006a8 /* IO CRB Entry E_B */
+#define IIO_ICRBE_C 0x004006b0 /* IO CRB Entry E_C */
+#define IIO_ICRBE_D 0x004006b8 /* IO CRB Entry E_D */
+#define IIO_ICRBE_E 0x004006c0 /* IO CRB Entry E_E */
+
+#define IIO_ICSML 0x00400700 /* IO CRB Spurious Message Low */
+#define IIO_ICSMM 0x00400708 /* IO CRB Spurious Message Middle */
+#define IIO_ICSMH 0x00400710 /* IO CRB Spurious Message High */
+
+#define IIO_IDBSS 0x00400718 /* IO Debug Submenu Select */
+
+#define IIO_IBLS0 0x00410000 /* IO BTE Length Status 0 */
+#define IIO_IBSA0 0x00410008 /* IO BTE Source Address 0 */
+#define IIO_IBDA0 0x00410010 /* IO BTE Destination Address 0 */
+#define IIO_IBCT0 0x00410018 /* IO BTE Control Terminate 0 */
+#define IIO_IBNA0 0x00410020 /* IO BTE Notification Address 0 */
+#define IIO_IBIA0 0x00410028 /* IO BTE Interrupt Address 0 */
+#define IIO_IBLS1 0x00420000 /* IO BTE Length Status 1 */
+#define IIO_IBSA1 0x00420008 /* IO BTE Source Address 1 */
+#define IIO_IBDA1 0x00420010 /* IO BTE Destination Address 1 */
+#define IIO_IBCT1 0x00420018 /* IO BTE Control Terminate 1 */
+#define IIO_IBNA1 0x00420020 /* IO BTE Notification Address 1 */
+#define IIO_IBIA1 0x00420028 /* IO BTE Interrupt Address 1 */
+
+#define IIO_IPCR 0x00430000 /* IO Performance Control */
+#define IIO_IPPR 0x00430008 /* IO Performance Profiling */
+
+
+/************************************************************************
+ * *
+ * Description: This register echoes some information from the *
+ * LB_REV_ID register. It is available through Crosstalk as described *
+ * above. The REV_NUM and MFG_NUM fields receive their values from *
+ * the REVISION and MANUFACTURER fields in the LB_REV_ID register. *
+ * The PART_NUM field's value is the Crosstalk device ID number that *
+ * Steve Miller assigned to the SHub chip. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wid_u {
+ uint64_t ii_wid_regval;
+ struct {
+ uint64_t w_rsvd_1 : 1;
+ uint64_t w_mfg_num : 11;
+ uint64_t w_part_num : 16;
+ uint64_t w_rev_num : 4;
+ uint64_t w_rsvd : 32;
+ } ii_wid_fld_s;
+} ii_wid_u_t;
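+
+/*
+ * Usage sketch (illustrative only): every register in this file
+ * follows the same union pattern, overlaying the raw 64-bit MMR
+ * value with named bit fields, so a value read from IIO_WID can be
+ * decoded without explicit shifting and masking. read_ii_reg() is
+ * a hypothetical MMR accessor, not something defined here:
+ *
+ * ii_wid_u_t wid;
+ * wid.ii_wid_regval = read_ii_reg(IIO_WID);
+ * part_num = wid.ii_wid_fld_s.w_part_num;
+ * rev_num = wid.ii_wid_fld_s.w_rev_num;
+ */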
+
+
+/************************************************************************
+ * *
+ * The fields in this register are set upon detection of an error *
+ * and cleared by various mechanisms, as explained in the *
+ * description. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wstat_u {
+ uint64_t ii_wstat_regval;
+ struct {
+ uint64_t w_pending : 4;
+ uint64_t w_xt_crd_to : 1;
+ uint64_t w_xt_tail_to : 1;
+ uint64_t w_rsvd_3 : 3;
+ uint64_t w_tx_mx_rty : 1;
+ uint64_t w_rsvd_2 : 6;
+ uint64_t w_llp_tx_cnt : 8;
+ uint64_t w_rsvd_1 : 8;
+ uint64_t w_crazy : 1;
+ uint64_t w_rsvd : 31;
+ } ii_wstat_fld_s;
+} ii_wstat_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This is a read-write enabled register. It controls *
+ * various aspects of the Crosstalk flow control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_wcr_u {
+ uint64_t ii_wcr_regval;
+ struct {
+ uint64_t w_wid : 4;
+ uint64_t w_tag : 1;
+ uint64_t w_rsvd_1 : 8;
+ uint64_t w_dst_crd : 3;
+ uint64_t w_f_bad_pkt : 1;
+ uint64_t w_dir_con : 1;
+ uint64_t w_e_thresh : 5;
+ uint64_t w_rsvd : 41;
+ } ii_wcr_fld_s;
+} ii_wcr_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register's value is a bit vector that guards *
+ * access to local registers within the II as well as to external *
+ * Crosstalk widgets. Each bit in the register corresponds to a *
+ * particular region in the system; a region consists of one, two or *
+ * four nodes (depending on the value of the REGION_SIZE field in the *
+ * LB_REV_ID register, which is documented in Section 8.3.1.1). The *
+ * protection provided by this register applies to PIO read *
+ * operations as well as PIO write operations. The II will perform a *
+ * PIO read or write request only if the bit for the requestor's *
+ * region is set; otherwise, the II will not perform the requested *
+ * operation and will return an error response. When a PIO read or *
+ * write request targets an external Crosstalk widget, then not only *
+ * must the bit for the requestor's region be set in the ILAPR, but *
+ * also the target widget's bit in the IOWA register must be set in *
+ * order for the II to perform the requested operation; otherwise, *
+ * the II will return an error response. Hence, the protection *
+ * provided by the IOWA register supplements the protection provided *
+ * by the ILAPR for requests that target external Crosstalk widgets. *
+ * This register itself can be accessed only by the nodes whose *
+ * region ID bits are enabled in this same register. It can also be *
+ * accessed through the IAlias space by the local processors. *
+ * The reset value of this register allows access by all nodes. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilapr_u {
+ uint64_t ii_ilapr_regval;
+ struct {
+ uint64_t i_region : 64;
+ } ii_ilapr_fld_s;
+} ii_ilapr_u_t;
+
+
+
+
+/************************************************************************
+ * *
+ * Description: A write to this register of the 64-bit value *
+ * "SGIrules" in ASCII, will cause the bit in the ILAPR register *
+ * corresponding to the region of the requestor to be set (allow *
+ * access). A write of any other value will be ignored. Access *
+ * protection for this register is "SGIrules". *
+ * This register can also be accessed through the IAlias space. *
+ * However, this access will not change the access permissions in the *
+ * ILAPR. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilapo_u {
+ uint64_t ii_ilapo_regval;
+ struct {
+ uint64_t i_io_ovrride : 64;
+ } ii_ilapo_fld_s;
+} ii_ilapo_u_t;
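+
+/*
+ * Illustrative sketch only: granting this node access by writing
+ * the ASCII override key described above to IIO_ILAPO. The constant
+ * assumes the natural big-endian reading of "SGIrules" ('S' in the
+ * most significant byte); write_ii_reg() is a hypothetical MMR
+ * accessor, not something defined here:
+ *
+ * ii_ilapo_u_t ovr;
+ * ovr.ii_ilapo_fld_s.i_io_ovrride = 0x53474972756c6573UL;
+ * write_ii_reg(IIO_ILAPO, ovr.ii_ilapo_regval);
+ */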
+
+
+
+/************************************************************************
+ * *
+ * This register qualifies all the PIO and Graphics writes launched *
+ * from the SHUB towards a widget. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iowa_u {
+ uint64_t ii_iowa_regval;
+ struct {
+ uint64_t i_w0_oac : 1;
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_wx_oac : 8;
+ uint64_t i_rsvd : 48;
+ } ii_iowa_fld_s;
+} ii_iowa_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the requests launched *
+ * from a widget towards the Shub. This register is intended to be *
+ * used by software in case of misbehaving widgets. *
+ * *
+ * *
+ ************************************************************************/
+
+typedef union ii_iiwa_u {
+ uint64_t ii_iiwa_regval;
+ struct {
+ uint64_t i_w0_iac : 1;
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_wx_iac : 8;
+ uint64_t i_rsvd : 48;
+ } ii_iiwa_fld_s;
+} ii_iiwa_u_t;
+
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the operations launched *
+ * from a widget towards the SHub. It allows individual access *
+ * control for up to 8 devices per widget. A device refers to an      *
+ * individual DMA master hosted by a widget.                          *
+ * The bits in each field of this register are cleared by the Shub    *
+ * upon detection of an error which requires the device to be         *
+ * disabled. These fields assume that 0 <= TNUM <= 7 (i.e., Bridge-centric *
+ * Crosstalk). Whether or not a device has access rights to this *
+ * Shub is determined by an AND of the device enable bit in the *
+ * appropriate field of this register and the corresponding bit in *
+ * the Wx_IAC field (for the widget which this device belongs to). *
+ * The bits in this field are set by writing a 1 to them. Incoming *
+ * replies from Crosstalk are not subject to this access control *
+ * mechanism. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iidem_u {
+ uint64_t ii_iidem_regval;
+ struct {
+ uint64_t i_w8_dxs : 8;
+ uint64_t i_w9_dxs : 8;
+ uint64_t i_wa_dxs : 8;
+ uint64_t i_wb_dxs : 8;
+ uint64_t i_wc_dxs : 8;
+ uint64_t i_wd_dxs : 8;
+ uint64_t i_we_dxs : 8;
+ uint64_t i_wf_dxs : 8;
+ } ii_iidem_fld_s;
+} ii_iidem_u_t;
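+
+/*
+ * Illustrative sketch of the access rule described above (not a
+ * definition made by this header): a device is allowed only if its
+ * enable bit in IIDEM AND the owning widget's bit in IIWA's Wx_IAC
+ * field are both set. The bit indexing below (device number within
+ * the widget field, widget 8 as bit 0 of Wx_IAC) is an assumption:
+ *
+ * int w8_dev_allowed(ii_iidem_u_t dem, ii_iiwa_u_t iiwa, int dev)
+ * {
+ * int dev_ok = (dem.ii_iidem_fld_s.i_w8_dxs >> dev) & 1;
+ * int wid_ok = iiwa.ii_iiwa_fld_s.i_wx_iac & 1;
+ * return dev_ok && wid_ok;
+ * }
+ */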
+
+
+/************************************************************************
+ * *
+ * This register contains the various programmable fields necessary *
+ * for controlling and observing the LLP signals. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilcsr_u {
+ uint64_t ii_ilcsr_regval;
+ struct {
+ uint64_t i_nullto : 6;
+ uint64_t i_rsvd_4 : 2;
+ uint64_t i_wrmrst : 1;
+ uint64_t i_rsvd_3 : 1;
+ uint64_t i_llp_en : 1;
+ uint64_t i_bm8 : 1;
+ uint64_t i_llp_stat : 2;
+ uint64_t i_remote_power : 1;
+ uint64_t i_rsvd_2 : 1;
+ uint64_t i_maxrtry : 10;
+ uint64_t i_d_avail_sel : 2;
+ uint64_t i_rsvd_1 : 4;
+ uint64_t i_maxbrst : 10;
+ uint64_t i_rsvd : 22;
+ } ii_ilcsr_fld_s;
+} ii_ilcsr_u_t;
+
+
+/************************************************************************
+ * *
+ * This is simply a status register that monitors the LLP error       *
+ * rate.                                                              *
+ * *
+ ************************************************************************/
+
+typedef union ii_illr_u {
+ uint64_t ii_illr_regval;
+ struct {
+ uint64_t i_sn_cnt : 16;
+ uint64_t i_cb_cnt : 16;
+ uint64_t i_rsvd : 32;
+ } ii_illr_fld_s;
+} ii_illr_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: All II-detected non-BTE error interrupts are *
+ * specified via this register. *
+ * NOTE: The PI interrupt register address is hardcoded in the II. If *
+ * PI_ID==0, then the II sends an interrupt request (Duplonet PWRI *
+ * packet) to address offset 0x0180_0090 within the local register *
+ * address space of PI0 on the node specified by the NODE field. If *
+ * PI_ID==1, then the II sends the interrupt request to address *
+ * offset 0x01A0_0090 within the local register address space of PI1 *
+ * on the node specified by the NODE field. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iidsr_u {
+ uint64_t ii_iidsr_regval;
+ struct {
+ uint64_t i_level : 8;
+ uint64_t i_pi_id : 1;
+ uint64_t i_node : 11;
+ uint64_t i_rsvd_3 : 4;
+ uint64_t i_enable : 1;
+ uint64_t i_rsvd_2 : 3;
+ uint64_t i_int_sent : 2;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_pi0_forward_int : 1;
+ uint64_t i_pi1_forward_int : 1;
+ uint64_t i_rsvd : 30;
+ } ii_iidsr_fld_s;
+} ii_iidsr_u_t;
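+
+/*
+ * Illustrative sketch of the interrupt-target rule described above:
+ * the hardwired PI register offset depends only on the PI_ID field,
+ * and the destination node comes from the NODE field.
+ *
+ * ii_iidsr_u_t dsr;
+ * dsr.ii_iidsr_regval = raw_iidsr_value;
+ * pi_off = dsr.ii_iidsr_fld_s.i_pi_id ? 0x01A00090 : 0x01800090;
+ * (the PWRI packet is sent to pi_off on node i_node)
+ */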
+
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. This register is used *
+ * for matching up the incoming responses from the graphics widget to *
+ * the processor that initiated the graphics operation. The *
+ * write-responses are converted to graphics credits and returned to *
+ * the processor so that the processor interface can manage the flow *
+ * control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_igfx0_u {
+ uint64_t ii_igfx0_regval;
+ struct {
+ uint64_t i_w_num : 4;
+ uint64_t i_pi_id : 1;
+ uint64_t i_n_num : 12;
+ uint64_t i_p_num : 1;
+ uint64_t i_rsvd : 46;
+ } ii_igfx0_fld_s;
+} ii_igfx0_u_t;
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register. This register is used *
+ * for matching up the incoming responses from the graphics widget to *
+ * the processor that initiated the graphics operation. The *
+ * write-responses are converted to graphics credits and returned to *
+ * the processor so that the processor interface can manage the flow *
+ * control. *
+ * *
+ ************************************************************************/
+
+typedef union ii_igfx1_u {
+ uint64_t ii_igfx1_regval;
+ struct {
+ uint64_t i_w_num : 4;
+ uint64_t i_pi_id : 1;
+ uint64_t i_n_num : 12;
+ uint64_t i_p_num : 1;
+ uint64_t i_rsvd : 46;
+ } ii_igfx1_fld_s;
+} ii_igfx1_u_t;
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register, both used as scratch     *
+ * registers by software.                                             *
+ * *
+ ************************************************************************/
+
+typedef union ii_iscr0_u {
+ uint64_t ii_iscr0_regval;
+ struct {
+ uint64_t i_scratch : 64;
+ } ii_iscr0_fld_s;
+} ii_iscr0_u_t;
+
+
+
+/************************************************************************
+ * *
+ * There are two instances of this register, both used as scratch     *
+ * registers by software.                                             *
+ * *
+ ************************************************************************/
+
+typedef union ii_iscr1_u {
+ uint64_t ii_iscr1_regval;
+ struct {
+ uint64_t i_scratch : 64;
+ } ii_iscr1_fld_s;
+} ii_iscr1_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte1_u {
+ uint64_t ii_itte1_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte1_fld_s;
+} ii_itte1_u_t;
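+
+/*
+ * Illustrative sketch of the M-mode address synthesis described
+ * above: Crosstalk[33:29] comes from OFFSET and Crosstalk[28:0]
+ * from SysAD[28:0], with bits 47:34 zero. This is a reading of the
+ * comment, not a definition made by this header:
+ *
+ * uint64_t xtalk_addr(ii_itte1_u_t t, uint64_t sysad)
+ * {
+ * return ((uint64_t)t.ii_itte1_fld_s.i_offset << 29)
+ * | (sysad & ((1UL << 29) - 1));
+ * }
+ * (the target widget is taken from t.ii_itte1_fld_s.i_w_num)
+ */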
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte2_u {
+ uint64_t ii_itte2_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte2_fld_s;
+} ii_itte2_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte3_u {
+ uint64_t ii_itte3_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte3_fld_s;
+} ii_itte3_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a SHub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte4_u {
+ uint64_t ii_itte4_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte4_fld_s;
+} ii_itte4_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a SHub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte5_u {
+ uint64_t ii_itte5_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte5_fld_s;
+} ii_itte5_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte6_u {
+ uint64_t ii_itte6_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte6_fld_s;
+} ii_itte6_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are seven instances of translation table entry *
+ * registers. Each register maps a Shub Big Window to a 48-bit *
+ * address on Crosstalk. *
+ * For M-mode (128 nodes, 8 GBytes/node), SysAD[31:29] (Big Window *
+ * number) are used to select one of these 7 registers. The Widget *
+ * number field is then derived from the W_NUM field for synthesizing *
+ * a Crosstalk packet. The 5 bits of OFFSET are concatenated with *
+ * SysAD[28:0] to form Crosstalk[33:0]. The upper Crosstalk[47:34] *
+ * are padded with zeros. Although the maximum Crosstalk space        *
+ * addressable by the SHub is thus the lower 16 GBytes per widget     *
+ * (M-mode), only 7/32nds of this space can be accessed.              *
+ * For the N-mode (256 nodes, 4 GBytes/node), SysAD[30:28] (Big       *
+ * Window number) are used to select one of these 7 registers. The    *
+ * Widget number field is then derived from the W_NUM field for       *
+ * synthesizing a Crosstalk packet. The 5 bits of OFFSET are          *
+ * concatenated with SysAD[27:0] to form Crosstalk[33:0]. The IOSP    *
+ * field is used as Crosstalk[47], and the remainder of the           *
+ * Crosstalk address bits (Crosstalk[46:34]) are always zero. While   *
+ * the maximum Crosstalk space addressable by the SHub is thus the    *
+ * lower 8 GBytes per widget (N-mode), only 7/32nds of this space     *
+ * can be accessed.                                                   *
+ * *
+ ************************************************************************/
+
+typedef union ii_itte7_u {
+ uint64_t ii_itte7_regval;
+ struct {
+ uint64_t i_offset : 5;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_w_num : 4;
+ uint64_t i_iosp : 1;
+ uint64_t i_rsvd : 51;
+ } ii_itte7_fld_s;
+} ii_itte7_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb0_u {
+ uint64_t ii_iprb0_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprb0_fld_s;
+} ii_iprb0_u_t;
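+
+/*
+ * Illustrative sketch of clearing SPUR_WR as described above: the
+ * bit persists until the IPRB register itself is rewritten, which
+ * corrects the C field and recaptures its reference value. A
+ * plausible read-modify-write (MMR accessors hypothetical):
+ *
+ * ii_iprb0_u_t prb;
+ * prb.ii_iprb0_regval = read_ii_reg(IIO_IPRB0);
+ * write_ii_reg(IIO_IPRB0, prb.ii_iprb0_regval);
+ * (writing back corrects C and clears the stored SPUR_WR state)
+ */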
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb8_u {
+ uint64_t ii_iprb8_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprb8_fld_s;
+} ii_iprb8_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprb9_u {
+ uint64_t ii_iprb9_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprb9_fld_s;
+} ii_iprb9_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprba_u {
+ uint64_t ii_iprba_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprba_fld_s;
+} ii_iprba_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbb_u {
+ uint64_t ii_iprbb_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprbb_fld_s;
+} ii_iprbb_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbc_u {
+ uint64_t ii_iprbc_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprbc_fld_s;
+} ii_iprbc_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbd_u {
+ uint64_t ii_iprbd_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprbd_fld_s;
+} ii_iprbd_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of SHub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbe_u {
+ uint64_t ii_iprbe_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprbe_fld_s;
+} ii_iprbe_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 9 instances of this register, one per *
+ * actual widget in this implementation of Shub and Crossbow. *
+ * Note: Crossbow only has ports for Widgets 8 through F; widget 0    *
+ * refers to Crossbow's internal space.                               *
+ * This register contains the state elements per widget that are     *
+ * necessary to manage the PIO flow control on Crosstalk and on the  *
+ * Router Network. See the PIO Flow Control chapter for a complete   *
+ * description of this register.                                     *
+ * The SPUR_WR bit requires some explanation. When this register is *
+ * written, the new value of the C field is captured in an internal *
+ * register so the hardware can remember what the programmer wrote *
+ * into the credit counter. The SPUR_WR bit sets whenever the C field *
+ * increments above this stored value, which indicates that there *
+ * have been more responses received than requests sent. The SPUR_WR *
+ * bit cannot be cleared until a value is written to the IPRBx *
+ * register; the write will correct the C field and capture its new *
+ * value in the internal register. Even if IECLR[E_PRB_x] is set, the *
+ * SPUR_WR bit will persist if IPRBx hasn't yet been written. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprbf_u {
+ uint64_t ii_iprbf_regval;
+ struct {
+ uint64_t i_c : 8;
+ uint64_t i_na : 14;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_nb : 14;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_m : 2;
+ uint64_t i_f : 1;
+ uint64_t i_of_cnt : 5;
+ uint64_t i_error : 1;
+ uint64_t i_rd_to : 1;
+ uint64_t i_spur_wr : 1;
+ uint64_t i_spur_rd : 1;
+ uint64_t i_rsvd : 11;
+ uint64_t i_mult_err : 1;
+ } ii_iprbf_fld_s;
+} ii_iprbf_u_t;
+
+
+/************************************************************************
+ * *
+ * This register specifies the timeout value to use for monitoring *
+ * Crosstalk credits which are used outbound to Crosstalk. An *
+ * internal counter called the Crosstalk Credit Timeout Counter *
+ * increments every 128 II clocks. The counter starts counting *
+ * anytime the credit count drops below a threshold, and resets to *
+ * zero (stops counting) anytime the credit count is at or above the *
+ * threshold. The threshold is 1 credit in direct connect mode and 2 *
+ * in Crossbow connect mode. When the internal Crosstalk Credit *
+ * Timeout Counter reaches the value programmed in this register, a *
+ * Crosstalk Credit Timeout has occurred. The internal counter is not *
+ * readable from software, and stops counting at its maximum value, *
+ * so it cannot cause more than one interrupt. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixcc_u {
+ uint64_t ii_ixcc_regval;
+ struct {
+ uint64_t i_time_out : 26;
+ uint64_t i_rsvd : 38;
+ } ii_ixcc_fld_s;
+} ii_ixcc_u_t;
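+
+/*
+ * Worked example of the timeout arithmetic described above (the
+ * clock frequency is an assumption, not stated here): the internal
+ * counter ticks once per 128 II clocks, so a TIME_OUT value N
+ * corresponds to roughly N * 128 II clocks. At an assumed 200 MHz
+ * II clock, N = 0x100000 gives (2^20 * 128) / 200e6, i.e. about
+ * 0.67 seconds before a Crosstalk Credit Timeout is declared.
+ */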
+
+
+/************************************************************************
+ * *
+ * Description: This register qualifies all the PIO and DMA *
+ * operations launched from widget 0 towards the SHub. In             *
+ * addition, it qualifies accesses by the BTE streams.                *
+ * The bits in each field of this register are cleared by the SHub *
+ * upon detection of an error which requires widget 0 or the BTE *
+ * streams to be terminated. Whether or not widget x has access *
+ * rights to this SHub is determined by an AND of the device *
+ * enable bit in the appropriate field of this register and bit 0 in *
+ * the Wx_IAC field. The bits in this field are set by writing a 1 to *
+ * them. Incoming replies from Crosstalk are not subject to this *
+ * access control mechanism. *
+ * *
+ ************************************************************************/
+
+typedef union ii_imem_u {
+ uint64_t ii_imem_regval;
+ struct {
+ uint64_t i_w0_esd : 1;
+ uint64_t i_rsvd_3 : 3;
+ uint64_t i_b0_esd : 1;
+ uint64_t i_rsvd_2 : 3;
+ uint64_t i_b1_esd : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_clr_precise : 1;
+ uint64_t i_rsvd : 51;
+ } ii_imem_fld_s;
+} ii_imem_u_t;
+
+
+
+/************************************************************************
+ * *
+ * Description: This register specifies the timeout value to use for *
+ * monitoring Crosstalk tail flits coming into the Shub in the *
+ * TAIL_TO field. An internal counter associated with this register *
+ * is incremented every 128 II internal clocks (7 bits). The counter *
+ * starts counting anytime a header micropacket is received and stops *
+ * counting (and resets to zero) any time a micropacket with a Tail *
+ * bit is received. Once the counter reaches the threshold value *
+ * programmed in this register, it generates an interrupt to the *
+ * processor that is programmed into the IIDSR. The counter saturates *
+ * (does not roll over) at its maximum value, so it cannot cause *
+ * another interrupt until after it is cleared. *
+ * The register also contains the Read Response Timeout values. The *
+ * Prescaler is 23 bits, and counts II clocks. An internal counter    *
+ * increments on every II clock and when it reaches the value in the *
+ * Prescaler field, all IPRTE registers with their valid bits set     *
+ * have their Read Response timers bumped. Whenever any of them match *
+ * the value in the RRSP_TO field, a Read Response Timeout has *
+ * occurred, and error handling occurs as described in the Error *
+ * Handling section of this document. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixtt_u {
+ uint64_t ii_ixtt_regval;
+ struct {
+ uint64_t i_tail_to : 26;
+ uint64_t i_rsvd_1 : 6;
+ uint64_t i_rrsp_ps : 23;
+ uint64_t i_rrsp_to : 5;
+ uint64_t i_rsvd : 4;
+ } ii_ixtt_fld_s;
+} ii_ixtt_u_t;
+
+
+/************************************************************************
+ * *
+ * Writing a 1 to the fields of this register clears the appropriate *
+ * error bits in other areas of SHub. Note that when the *
+ * E_PRB_x bits are used to clear error bits in PRB registers, *
+ * SPUR_RD and SPUR_WR may persist, because they require additional *
+ * action to clear them. See the IPRBx and IXSS Register *
+ * specifications. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ieclr_u {
+ uint64_t ii_ieclr_regval;
+ struct {
+ uint64_t i_e_prb_0 : 1;
+ uint64_t i_rsvd : 7;
+ uint64_t i_e_prb_8 : 1;
+ uint64_t i_e_prb_9 : 1;
+ uint64_t i_e_prb_a : 1;
+ uint64_t i_e_prb_b : 1;
+ uint64_t i_e_prb_c : 1;
+ uint64_t i_e_prb_d : 1;
+ uint64_t i_e_prb_e : 1;
+ uint64_t i_e_prb_f : 1;
+ uint64_t i_e_crazy : 1;
+ uint64_t i_e_bte_0 : 1;
+ uint64_t i_e_bte_1 : 1;
+ uint64_t i_reserved_1 : 10;
+ uint64_t i_spur_rd_hdr : 1;
+ uint64_t i_cam_intr_to : 1;
+ uint64_t i_cam_overflow : 1;
+ uint64_t i_cam_read_miss : 1;
+ uint64_t i_ioq_rep_underflow : 1;
+ uint64_t i_ioq_req_underflow : 1;
+ uint64_t i_ioq_rep_overflow : 1;
+ uint64_t i_ioq_req_overflow : 1;
+ uint64_t i_iiq_rep_overflow : 1;
+ uint64_t i_iiq_req_overflow : 1;
+ uint64_t i_ii_xn_rep_cred_overflow : 1;
+ uint64_t i_ii_xn_req_cred_overflow : 1;
+ uint64_t i_ii_xn_invalid_cmd : 1;
+ uint64_t i_xn_ii_invalid_cmd : 1;
+ uint64_t i_reserved_2 : 21;
+ } ii_ieclr_fld_s;
+} ii_ieclr_u_t;
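+
+/*
+ * Illustrative sketch of the write-1-to-clear behavior described
+ * above, here clearing the PRB 8 error state (write_ii_reg() is a
+ * hypothetical MMR accessor, not something defined here):
+ *
+ * ii_ieclr_u_t clr;
+ * clr.ii_ieclr_regval = 0;
+ * clr.ii_ieclr_fld_s.i_e_prb_8 = 1;
+ * write_ii_reg(IIO_IECLR, clr.ii_ieclr_regval);
+ *
+ * Note that SPUR_RD/SPUR_WR in the PRB may still persist, as the
+ * comment above explains.
+ */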
+
+
+/************************************************************************
+ * *
+ * This register controls both BTEs. SOFT_RESET is intended for *
+ * recovery after an error. COUNT controls the total number of CRBs *
+ * that both BTEs (combined) can use, which affects total BTE *
+ * bandwidth. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibcr_u {
+ uint64_t ii_ibcr_regval;
+ struct {
+ uint64_t i_count : 4;
+ uint64_t i_rsvd_1 : 4;
+ uint64_t i_soft_reset : 1;
+ uint64_t i_rsvd : 55;
+ } ii_ibcr_fld_s;
+} ii_ibcr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the header of a spurious read response *
+ * received from Crosstalk. A spurious read response is defined as a *
+ * read response received by II from a widget for which (1) the SIDN *
+ * has a value between 1 and 7, inclusive (II never sends requests    *
+ * to these widgets), (2) there is no valid IPRTE register which      *
+ * corresponds to the TNUM, or (3) the widget indicated in SIDN is    *
+ * not the same as the widget recorded in the IPRTE register          *
+ * referenced by the TNUM. If this condition is true, and if the      *
+ * IXSS[VALID] bit is clear, then the header of the spurious read     *
+ * response is captured in IXSM and IXSS, and IXSS[VALID] is set. The *
+ * errant header is thereby captured, and no further spurious read    *
+ * responses are captured until IXSS[VALID] is cleared by setting the *
+ * appropriate bit in IECLR. Every time a spurious read response is   *
+ * detected, the SPUR_RD bit of the PRB corresponding to the incoming *
+ * message's SIDN field is set. This always happens, regardless of    *
+ * whether a header is captured. The programmer should check          *
+ * IXSM[SIDN] to determine which widget sent the spurious response,   *
+ * because there may be more than one SPUR_RD bit set in the PRB      *
+ * registers. The widget indicated by IXSM[SIDN] was the first        *
+ * spurious read response to be received since the last time          *
+ * IXSS[VALID] was clear. The SPUR_RD bit of the corresponding PRB    *
+ * will be set. Any SPUR_RD bits in any other PRB registers indicate  *
+ * spurious messages from other widgets which were detected after the *
+ * header was captured.                                               *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixsm_u {
+ uint64_t ii_ixsm_regval;
+ struct {
+ uint64_t i_byte_en : 32;
+ uint64_t i_reserved : 1;
+ uint64_t i_tag : 3;
+ uint64_t i_alt_pactyp : 4;
+ uint64_t i_bo : 1;
+ uint64_t i_error : 1;
+ uint64_t i_vbpm : 1;
+ uint64_t i_gbr : 1;
+ uint64_t i_ds : 2;
+ uint64_t i_ct : 1;
+ uint64_t i_tnum : 5;
+ uint64_t i_pactyp : 4;
+ uint64_t i_sidn : 4;
+ uint64_t i_didn : 4;
+ } ii_ixsm_fld_s;
+} ii_ixsm_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the sideband bits of a spurious read *
+ * response received from Crosstalk. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ixss_u {
+ uint64_t ii_ixss_regval;
+ struct {
+ uint64_t i_sideband : 8;
+ uint64_t i_rsvd : 55;
+ uint64_t i_valid : 1;
+ } ii_ixss_fld_s;
+} ii_ixss_u_t;
+
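+/*
+ * Sketch only: the capture protocol described above, expressed in
+ * code. Given raw IXSS and IXSM values read by the caller, returns
+ * the SIDN of the first captured spurious read response, or -1 if
+ * nothing has been captured. Clearing IXSS[VALID] afterwards is done
+ * through IECLR, as noted in the IXSM description.
+ */
+static inline int ii_spurious_rd_sidn(uint64_t ixss_val, uint64_t ixsm_val)
+{
+ ii_ixss_u_t ixss;
+ ii_ixsm_u_t ixsm;
+
+ ixss.ii_ixss_regval = ixss_val;
+ if (!ixss.ii_ixss_fld_s.i_valid)
+  return -1; /* no header captured */
+ ixsm.ii_ixsm_regval = ixsm_val;
+ return ixsm.ii_ixsm_fld_s.i_sidn; /* widget that sent the response */
+}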
+
+/************************************************************************
+ * *
+ * This register enables software to access the II LLP's test port. *
+ * Refer to the LLP 2.5 documentation for an explanation of the test *
+ * port. Software can write to this register to program the values *
+ * for the control fields (TestErrCapture, TestClear, TestFlit, *
+ * TestMask and TestSeed). Similarly, software can read from this *
+ * register to obtain the values of the test port's status outputs *
+ * (TestCBerr, TestValid and TestData). *
+ * *
+ ************************************************************************/
+
+typedef union ii_ilct_u {
+ uint64_t ii_ilct_regval;
+ struct {
+ uint64_t i_test_seed : 20;
+ uint64_t i_test_mask : 8;
+ uint64_t i_test_data : 20;
+ uint64_t i_test_valid : 1;
+ uint64_t i_test_cberr : 1;
+ uint64_t i_test_flit : 3;
+ uint64_t i_test_clear : 1;
+ uint64_t i_test_err_capture : 1;
+ uint64_t i_rsvd : 9;
+ } ii_ilct_fld_s;
+} ii_ilct_u_t;
+
+
+/************************************************************************
+ * *
+ * If the II detects an illegal incoming Duplonet packet (request or *
+ * reply) when VALID==0 in the IIEPH1 register, then it saves the *
+ * contents of the packet's header flit in the IIEPH1 and IIEPH2 *
+ * registers, sets the VALID bit in IIEPH1, clears the OVERRUN bit, *
+ * and assigns a value to the ERR_TYPE field which indicates the *
+ * specific nature of the error. The II recognizes four different *
+ * types of errors: short request packets (ERR_TYPE==2), short reply *
+ * packets (ERR_TYPE==3), long request packets (ERR_TYPE==4) and long *
+ * reply packets (ERR_TYPE==5). The encodings for these types of *
+ * errors were chosen to be consistent with the same types of errors *
+ * indicated by the ERR_TYPE field in the LB_ERROR_HDR1 register (in *
+ * the LB unit). If the II detects an illegal incoming Duplonet *
+ * packet when VALID==1 in the IIEPH1 register, then it merely sets *
+ * the OVERRUN bit to indicate that a subsequent error has happened, *
+ * and does nothing further. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iieph1_u {
+ uint64_t ii_iieph1_regval;
+ struct {
+ uint64_t i_command : 7;
+ uint64_t i_rsvd_5 : 1;
+ uint64_t i_suppl : 14;
+ uint64_t i_rsvd_4 : 1;
+ uint64_t i_source : 14;
+ uint64_t i_rsvd_3 : 1;
+ uint64_t i_err_type : 4;
+ uint64_t i_rsvd_2 : 4;
+ uint64_t i_overrun : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_valid : 1;
+ uint64_t i_rsvd : 13;
+ } ii_iieph1_fld_s;
+} ii_iieph1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register holds the Address field from the header flit of an *
+ * incoming erroneous Duplonet packet, along with the tail bit which *
+ * accompanied this header flit. This register is essentially an *
+ * extension of IIEPH1. Two registers are needed because a single *
+ * 64-bit register is insufficient to capture the entire header *
+ * flit of an erroneous packet. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iieph2_u {
+ uint64_t ii_iieph2_regval;
+ struct {
+ uint64_t i_rsvd_0 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_rsvd_1 : 10;
+ uint64_t i_tail : 1;
+ uint64_t i_rsvd : 3;
+ } ii_iieph2_fld_s;
+} ii_iieph2_u_t;
+
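+/*
+ * Sketch only: decoding the ERR_TYPE classification described above.
+ * The encodings 2..5 are the four error types listed in the IIEPH1
+ * description.
+ */
+static inline const char *ii_iieph1_err_name(uint64_t iieph1_val)
+{
+ ii_iieph1_u_t eph;
+
+ eph.ii_iieph1_regval = iieph1_val;
+ if (!eph.ii_iieph1_fld_s.i_valid)
+  return "none";
+ switch (eph.ii_iieph1_fld_s.i_err_type) {
+ case 2: return "short request";
+ case 3: return "short reply";
+ case 4: return "long request";
+ case 5: return "long reply";
+ default: return "unknown";
+ }
+}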
+
+/******************************/
+
+
+
+/************************************************************************
+ * *
+ * This register's value is a bit vector that guards access from SXBs *
+ * to local registers within the II as well as to external Crosstalk *
+ * widgets *
+ * *
+ ************************************************************************/
+
+typedef union ii_islapr_u {
+ uint64_t ii_islapr_regval;
+ struct {
+ uint64_t i_region : 64;
+ } ii_islapr_fld_s;
+} ii_islapr_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register of the 56-bit value "Pup+Bun" will cause *
+ * the bit in the ISLAPR register corresponding to the region of the *
+ * requestor to be set (access allowed). *
+ * *
+ ************************************************************************/
+
+typedef union ii_islapo_u {
+ uint64_t ii_islapo_regval;
+ struct {
+ uint64_t i_io_sbx_ovrride : 56;
+ uint64_t i_rsvd : 8;
+ } ii_islapo_fld_s;
+} ii_islapo_u_t;
+
+/************************************************************************
+ * *
+ * Determines how long the wrapper will wait after an interrupt is *
+ * initially issued from the II before it times out the outstanding *
+ * interrupt and drops it from the interrupt queue. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwi_u {
+ uint64_t ii_iwi_regval;
+ struct {
+ uint64_t i_prescale : 24;
+ uint64_t i_rsvd : 8;
+ uint64_t i_timeout : 8;
+ uint64_t i_rsvd1 : 8;
+ uint64_t i_intrpt_retry_period : 8;
+ uint64_t i_rsvd2 : 8;
+ } ii_iwi_fld_s;
+} ii_iwi_u_t;
+
+/************************************************************************
+ * *
+ * Log errors which have occurred in the II wrapper. The errors are *
+ * cleared by writing to the IECLR register. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwel_u {
+ uint64_t ii_iwel_regval;
+ struct {
+ uint64_t i_intr_timed_out : 1;
+ uint64_t i_rsvd : 7;
+ uint64_t i_cam_overflow : 1;
+ uint64_t i_cam_read_miss : 1;
+ uint64_t i_rsvd1 : 2;
+ uint64_t i_ioq_rep_underflow : 1;
+ uint64_t i_ioq_req_underflow : 1;
+ uint64_t i_ioq_rep_overflow : 1;
+ uint64_t i_ioq_req_overflow : 1;
+ uint64_t i_iiq_rep_overflow : 1;
+ uint64_t i_iiq_req_overflow : 1;
+ uint64_t i_rsvd2 : 6;
+ uint64_t i_ii_xn_rep_cred_over_under: 1;
+ uint64_t i_ii_xn_req_cred_over_under: 1;
+ uint64_t i_rsvd3 : 6;
+ uint64_t i_ii_xn_invalid_cmd : 1;
+ uint64_t i_xn_ii_invalid_cmd : 1;
+ uint64_t i_rsvd4 : 30;
+ } ii_iwel_fld_s;
+} ii_iwel_u_t;
+
+/************************************************************************
+ * *
+ * Controls the II wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iwc_u {
+ uint64_t ii_iwc_regval;
+ struct {
+ uint64_t i_dma_byte_swap : 1;
+ uint64_t i_rsvd : 3;
+ uint64_t i_cam_read_lines_reset : 1;
+ uint64_t i_rsvd1 : 3;
+ uint64_t i_ii_xn_cred_over_under_log: 1;
+ uint64_t i_rsvd2 : 19;
+ uint64_t i_xn_rep_iq_depth : 5;
+ uint64_t i_rsvd3 : 3;
+ uint64_t i_xn_req_iq_depth : 5;
+ uint64_t i_rsvd4 : 3;
+ uint64_t i_iiq_depth : 6;
+ uint64_t i_rsvd5 : 12;
+ uint64_t i_force_rep_cred : 1;
+ uint64_t i_force_req_cred : 1;
+ } ii_iwc_fld_s;
+} ii_iwc_u_t;
+
+/************************************************************************
+ * *
+ * Status in the II wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iws_u {
+ uint64_t ii_iws_regval;
+ struct {
+ uint64_t i_xn_rep_iq_credits : 5;
+ uint64_t i_rsvd : 3;
+ uint64_t i_xn_req_iq_credits : 5;
+ uint64_t i_rsvd1 : 51;
+ } ii_iws_fld_s;
+} ii_iws_u_t;
+
+/************************************************************************
+ * *
+ * Masks errors in the IWEL register. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iweim_u {
+ uint64_t ii_iweim_regval;
+ struct {
+ uint64_t i_intr_timed_out : 1;
+ uint64_t i_rsvd : 7;
+ uint64_t i_cam_overflow : 1;
+ uint64_t i_cam_read_miss : 1;
+ uint64_t i_rsvd1 : 2;
+ uint64_t i_ioq_rep_underflow : 1;
+ uint64_t i_ioq_req_underflow : 1;
+ uint64_t i_ioq_rep_overflow : 1;
+ uint64_t i_ioq_req_overflow : 1;
+ uint64_t i_iiq_rep_overflow : 1;
+ uint64_t i_iiq_req_overflow : 1;
+ uint64_t i_rsvd2 : 6;
+ uint64_t i_ii_xn_rep_cred_overflow : 1;
+ uint64_t i_ii_xn_req_cred_overflow : 1;
+ uint64_t i_rsvd3 : 6;
+ uint64_t i_ii_xn_invalid_cmd : 1;
+ uint64_t i_xn_ii_invalid_cmd : 1;
+ uint64_t i_rsvd4 : 30;
+ } ii_iweim_fld_s;
+} ii_iweim_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register causes a particular field in the *
+ * corresponding widget's PRB entry to be adjusted up or down by 1. *
+ * This counter should be used when recovering from error and reset *
+ * conditions. Note that software would be capable of causing *
+ * inadvertent overflow or underflow of these counters. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipca_u {
+ uint64_t ii_ipca_regval;
+ struct {
+ uint64_t i_wid : 4;
+ uint64_t i_adjust : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_field : 2;
+ uint64_t i_rsvd : 54;
+ } ii_ipca_fld_s;
+} ii_ipca_u_t;
+
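+/*
+ * Sketch only: packing an IPCA write. Which PRB field each FIELD
+ * encoding selects, and the ADJUST direction encoding, are hardware
+ * details not spelled out here; this just shows the bit layout.
+ */
+static inline uint64_t ii_ipca_val(int widget, int adjust, int field)
+{
+ ii_ipca_u_t ipca;
+
+ ipca.ii_ipca_regval = 0;
+ ipca.ii_ipca_fld_s.i_wid = widget; /* PRB entry to adjust */
+ ipca.ii_ipca_fld_s.i_adjust = adjust; /* direction of the +/- 1 */
+ ipca.ii_ipca_fld_s.i_field = field; /* which field in the entry */
+ return ipca.ii_ipca_regval;
+}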
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+
+typedef union ii_iprte0a_u {
+ uint64_t ii_iprte0a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte0a_fld_s;
+} ii_iprte0a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte1a_u {
+ uint64_t ii_iprte1a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte1a_fld_s;
+} ii_iprte1a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte2a_u {
+ uint64_t ii_iprte2a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte2a_fld_s;
+} ii_iprte2a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte3a_u {
+ uint64_t ii_iprte3a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte3a_fld_s;
+} ii_iprte3a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte4a_u {
+ uint64_t ii_iprte4a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte4a_fld_s;
+} ii_iprte4a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte5a_u {
+ uint64_t ii_iprte5a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte5a_fld_s;
+} ii_iprte5a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte6a_u {
+ uint64_t ii_iprte6a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte6a_fld_s;
+} ii_iprte6a_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte7a_u {
+ uint64_t ii_iprte7a_regval;
+ struct {
+ uint64_t i_rsvd_1 : 54;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } ii_iprte7a_fld_s;
+} ii_iprte7a_u_t;
+
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+
+typedef union ii_iprte0b_u {
+ uint64_t ii_iprte0b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte0b_fld_s;
+} ii_iprte0b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte1b_u {
+ uint64_t ii_iprte1b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte1b_fld_s;
+} ii_iprte1b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte2b_u {
+ uint64_t ii_iprte2b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte2b_fld_s;
+} ii_iprte2b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte3b_u {
+ uint64_t ii_iprte3b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte3b_fld_s;
+} ii_iprte3b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte4b_u {
+ uint64_t ii_iprte4b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte4b_fld_s;
+} ii_iprte4b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte5b_u {
+ uint64_t ii_iprte5b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte5b_fld_s;
+} ii_iprte5b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte6b_u {
+ uint64_t ii_iprte6b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte6b_fld_s;
+} ii_iprte6b_u_t;
+
+
+/************************************************************************
+ * *
+ * There are 8 instances of this register. This register contains *
+ * the information that the II has to remember once it has launched a *
+ * PIO Read operation. The contents are used to form the correct *
+ * Router Network packet and direct the Crosstalk reply to the *
+ * appropriate processor. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iprte7b_u {
+ uint64_t ii_iprte7b_regval;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_address : 47;
+ uint64_t i_init : 3;
+ uint64_t i_source : 11;
+ } ii_iprte7b_fld_s;
+} ii_iprte7b_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: SHub II contains a feature, absent from the Hub, *
+ * which automatically cleans up after a Read Response timeout, *
+ * including deallocation of the IPRTE and recovery of IBuf space. *
+ * This register is retained in SHub for backward compatibility. *
+ * A write to this register causes an entry from the table of *
+ * outstanding PIO Read Requests to be freed and returned to the *
+ * stack of free entries. This register is used in handling the *
+ * timeout errors that result in a PIO Reply never returning from *
+ * Crosstalk. *
+ * Note that this register does not affect the contents of the IPRTE *
+ * registers. The Valid bits in those registers have to be *
+ * specifically turned off by software. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipdr_u {
+ uint64_t ii_ipdr_regval;
+ struct {
+ uint64_t i_te : 3;
+ uint64_t i_rsvd_1 : 1;
+ uint64_t i_pnd : 1;
+ uint64_t i_init_rpcnt : 1;
+ uint64_t i_rsvd : 58;
+ } ii_ipdr_fld_s;
+} ii_ipdr_u_t;
+
+
+/************************************************************************
+ * *
+ * A write to this register causes a CRB entry to be returned to the *
+ * queue of free CRBs. The entry should have previously been cleared *
+ * (mark bit) via backdoor access to the pertinent CRB entry. This *
+ * register is used in the last step of handling the errors that are *
+ * captured and marked in CRB entries. Briefly: 1) first error for *
+ * DMA write from a particular device, and first error for a *
+ * particular BTE stream, lead to a marked CRB entry and a processor *
+ * interrupt, 2) software reads the error information captured in the *
+ * CRB entry, and presumably takes some corrective action, 3) *
+ * software clears the mark bit, and finally 4) software writes to *
+ * the ICDR register to return the CRB entry to the list of free CRB *
+ * entries. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icdr_u {
+ uint64_t ii_icdr_regval;
+ struct {
+ uint64_t i_crb_num : 4;
+ uint64_t i_pnd : 1;
+ uint64_t i_rsvd : 59;
+ } ii_icdr_fld_s;
+} ii_icdr_u_t;
+
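+/*
+ * Sketch of step 4 from the description above: returning a previously
+ * unmarked CRB entry to the free queue, then waiting for the pending
+ * bit to drop. Illustrative only; the caller supplies the register
+ * address.
+ */
+static inline void ii_icdr_free_crb(volatile uint64_t *icdr, int crb_num)
+{
+ ii_icdr_u_t req;
+
+ req.ii_icdr_regval = 0;
+ req.ii_icdr_fld_s.i_crb_num = crb_num; /* entry to free */
+ *icdr = req.ii_icdr_regval;
+ do {
+  req.ii_icdr_regval = *icdr; /* re-read until PND clears */
+ } while (req.ii_icdr_fld_s.i_pnd);
+}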
+
+/************************************************************************
+ * *
+ * This register provides debug access to two FIFOs inside of II. *
+ * Both IOQ_MAX* fields of this register contain the instantaneous *
+ * depth (in units of the number of available entries) of the *
+ * associated IOQ FIFO. A read of this register will return the *
+ * number of free entries on each FIFO at the time of the read. So *
+ * when a FIFO is idle, the associated field contains the maximum *
+ * depth of the FIFO. This register is writable for debug reasons *
+ * and is intended to be written with the maximum desired FIFO depth *
+ * while the FIFO is idle. Software must assure that II is idle when *
+ * this register is written. If there are any active entries in any *
+ * of these FIFOs when this register is written, the results are *
+ * undefined. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ifdr_u {
+ uint64_t ii_ifdr_regval;
+ struct {
+ uint64_t i_ioq_max_rq : 7;
+ uint64_t i_set_ioq_rq : 1;
+ uint64_t i_ioq_max_rp : 7;
+ uint64_t i_set_ioq_rp : 1;
+ uint64_t i_rsvd : 48;
+ } ii_ifdr_fld_s;
+} ii_ifdr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the II to become sluggish in removing *
+ * messages from its inbound queue (IIQ). This will cause messages to *
+ * back up in either virtual channel. Disabling the "molasses" mode *
+ * subsequently allows the II to be tested under stress. In the *
+ * sluggish ("Molasses") mode, the localized effects of congestion *
+ * can be observed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iiap_u {
+ uint64_t ii_iiap_regval;
+ struct {
+ uint64_t i_rq_mls : 6;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_rp_mls : 6;
+ uint64_t i_rsvd : 50;
+ } ii_iiap_fld_s;
+} ii_iiap_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows several parameters of CRB operation to be *
+ * set. Note that writing to this register can have catastrophic side *
+ * effects, if the CRB is not quiescent, i.e. if the CRB is *
+ * processing protocol messages when the write occurs. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icmr_u {
+ uint64_t ii_icmr_regval;
+ struct {
+ uint64_t i_sp_msg : 1;
+ uint64_t i_rd_hdr : 1;
+ uint64_t i_rsvd_4 : 2;
+ uint64_t i_c_cnt : 4;
+ uint64_t i_rsvd_3 : 4;
+ uint64_t i_clr_rqpd : 1;
+ uint64_t i_clr_rppd : 1;
+ uint64_t i_rsvd_2 : 2;
+ uint64_t i_fc_cnt : 4;
+ uint64_t i_crb_vld : 15;
+ uint64_t i_crb_mark : 15;
+ uint64_t i_rsvd_1 : 2;
+ uint64_t i_precise : 1;
+ uint64_t i_rsvd : 11;
+ } ii_icmr_fld_s;
+} ii_icmr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows control of the table portion of the CRB *
+ * logic via software. Control operations from this register have *
+ * priority over all incoming Crosstalk or BTE requests. *
+ * *
+ ************************************************************************/
+
+typedef union ii_iccr_u {
+ uint64_t ii_iccr_regval;
+ struct {
+ uint64_t i_crb_num : 4;
+ uint64_t i_rsvd_1 : 4;
+ uint64_t i_cmd : 8;
+ uint64_t i_pending : 1;
+ uint64_t i_rsvd : 47;
+ } ii_iccr_fld_s;
+} ii_iccr_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the maximum timeout value to be programmed. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icto_u {
+ uint64_t ii_icto_regval;
+ struct {
+ uint64_t i_timeout : 8;
+ uint64_t i_rsvd : 56;
+ } ii_icto_fld_s;
+} ii_icto_u_t;
+
+
+/************************************************************************
+ * *
+ * This register allows the timeout prescalar to be programmed. An *
+ * internal counter is associated with this register. When the *
+ * internal counter reaches the value of the PRESCALE field, the *
+ * timer registers in all valid CRBs are incremented (CRBx_D[TIMEOUT] *
+ * field). The internal counter resets to zero, and then continues *
+ * counting. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ictp_u {
+ uint64_t ii_ictp_regval;
+ struct {
+ uint64_t i_prescale : 24;
+ uint64_t i_rsvd : 40;
+ } ii_ictp_fld_s;
+} ii_ictp_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * The CRB Entry registers can be conceptualized as rows and *
+ * columns. Each row contains the five registers required for a *
+ * single CRB Entry. The first doubleword *
+ * (column) for each entry is labeled A, and the second doubleword *
+ * (higher address) is labeled B, the third doubleword is labeled C, *
+ * the fourth doubleword is labeled D and the fifth doubleword is *
+ * labeled E. All CRB entries have their addresses on a quarter *
+ * cacheline aligned boundary. *
+ * Upon reset, only the following fields are initialized: valid *
+ * (VLD), priority count, timeout, timeout valid, and context valid. *
+ * All other bits should be cleared by software before use (after *
+ * recovering any potential error state from before the reset). *
+ * The following five tables summarize the format for the five *
+ * registers that are used for each ICRB# Entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_a_u {
+ uint64_t ii_icrb0_a_regval;
+ struct {
+ uint64_t ia_iow : 1;
+ uint64_t ia_vld : 1;
+ uint64_t ia_addr : 47;
+ uint64_t ia_tnum : 5;
+ uint64_t ia_sidn : 4;
+ uint64_t ia_rsvd : 6;
+ } ii_icrb0_a_fld_s;
+} ii_icrb0_a_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_b_u {
+ uint64_t ii_icrb0_b_regval;
+ struct {
+ uint64_t ib_xt_err : 1;
+ uint64_t ib_mark : 1;
+ uint64_t ib_ln_uce : 1;
+ uint64_t ib_errcode : 3;
+ uint64_t ib_error : 1;
+ uint64_t ib_stall__bte_1 : 1;
+ uint64_t ib_stall__bte_0 : 1;
+ uint64_t ib_stall__intr : 1;
+ uint64_t ib_stall_ib : 1;
+ uint64_t ib_intvn : 1;
+ uint64_t ib_wb : 1;
+ uint64_t ib_hold : 1;
+ uint64_t ib_ack : 1;
+ uint64_t ib_resp : 1;
+ uint64_t ib_ack_cnt : 11;
+ uint64_t ib_rsvd : 7;
+ uint64_t ib_exc : 5;
+ uint64_t ib_init : 3;
+ uint64_t ib_imsg : 8;
+ uint64_t ib_imsgtype : 2;
+ uint64_t ib_use_old : 1;
+ uint64_t ib_rsvd_1 : 11;
+ } ii_icrb0_b_fld_s;
+} ii_icrb0_b_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_c_u {
+ uint64_t ii_icrb0_c_regval;
+ struct {
+ uint64_t ic_source : 15;
+ uint64_t ic_size : 2;
+ uint64_t ic_ct : 1;
+ uint64_t ic_bte_num : 1;
+ uint64_t ic_gbr : 1;
+ uint64_t ic_resprqd : 1;
+ uint64_t ic_bo : 1;
+ uint64_t ic_suppl : 15;
+ uint64_t ic_rsvd : 27;
+ } ii_icrb0_c_fld_s;
+} ii_icrb0_c_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_d_u {
+ uint64_t ii_icrb0_d_regval;
+ struct {
+ uint64_t id_pa_be : 43;
+ uint64_t id_bte_op : 1;
+ uint64_t id_pr_psc : 4;
+ uint64_t id_pr_cnt : 4;
+ uint64_t id_sleep : 1;
+ uint64_t id_rsvd : 11;
+ } ii_icrb0_d_fld_s;
+} ii_icrb0_d_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: There are 15 CRB Entries (ICRB0 to ICRBE) that are *
+ * used for Crosstalk operations (both cacheline and partial *
+ * operations) or BTE/IO. Because the CRB entries are very wide, five *
+ * registers (_A to _E) are required to read and write each entry. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icrb0_e_u {
+ uint64_t ii_icrb0_e_regval;
+ struct {
+ uint64_t ie_timeout : 8;
+ uint64_t ie_context : 15;
+ uint64_t ie_rsvd : 1;
+ uint64_t ie_tvld : 1;
+ uint64_t ie_cvld : 1;
+ uint64_t ie_rsvd_0 : 38;
+ } ii_icrb0_e_fld_s;
+} ii_icrb0_e_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the lower 64 bits of the header of the *
+ * spurious message captured by II. Valid when the SP_MSG bit in ICMR *
+ * register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsml_u {
+ uint64_t ii_icsml_regval;
+ struct {
+ uint64_t i_tt_addr : 47;
+ uint64_t i_newsuppl_ex : 14;
+ uint64_t i_reserved : 2;
+ uint64_t i_overflow : 1;
+ } ii_icsml_fld_s;
+} ii_icsml_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the middle 64 bits of the header of the *
+ * spurious message captured by II. Valid when the SP_MSG bit in ICMR *
+ * register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsmm_u {
+ uint64_t ii_icsmm_regval;
+ struct {
+ uint64_t i_tt_ack_cnt : 11;
+ uint64_t i_reserved : 53;
+ } ii_icsmm_fld_s;
+} ii_icsmm_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the microscopic state, all the inputs to *
+ * the protocol table, captured with the spurious message. Valid when *
+ * the SP_MSG bit in the ICMR register is set. *
+ * *
+ ************************************************************************/
+
+typedef union ii_icsmh_u {
+ uint64_t ii_icsmh_regval;
+ struct {
+ uint64_t i_tt_vld : 1;
+ uint64_t i_xerr : 1;
+ uint64_t i_ft_cwact_o : 1;
+ uint64_t i_ft_wact_o : 1;
+ uint64_t i_ft_active_o : 1;
+ uint64_t i_sync : 1;
+ uint64_t i_mnusg : 1;
+ uint64_t i_mnusz : 1;
+ uint64_t i_plusz : 1;
+ uint64_t i_plusg : 1;
+ uint64_t i_tt_exc : 5;
+ uint64_t i_tt_wb : 1;
+ uint64_t i_tt_hold : 1;
+ uint64_t i_tt_ack : 1;
+ uint64_t i_tt_resp : 1;
+ uint64_t i_tt_intvn : 1;
+ uint64_t i_g_stall_bte1 : 1;
+ uint64_t i_g_stall_bte0 : 1;
+ uint64_t i_g_stall_il : 1;
+ uint64_t i_g_stall_ib : 1;
+ uint64_t i_tt_imsg : 8;
+ uint64_t i_tt_imsgtype : 2;
+ uint64_t i_tt_use_old : 1;
+ uint64_t i_tt_respreqd : 1;
+ uint64_t i_tt_bte_num : 1;
+ uint64_t i_cbn : 1;
+ uint64_t i_match : 1;
+ uint64_t i_rpcnt_lt_34 : 1;
+ uint64_t i_rpcnt_ge_34 : 1;
+ uint64_t i_rpcnt_lt_18 : 1;
+ uint64_t i_rpcnt_ge_18 : 1;
+ uint64_t i_rpcnt_lt_2 : 1;
+ uint64_t i_rpcnt_ge_2 : 1;
+ uint64_t i_rqcnt_lt_18 : 1;
+ uint64_t i_rqcnt_ge_18 : 1;
+ uint64_t i_rqcnt_lt_2 : 1;
+ uint64_t i_rqcnt_ge_2 : 1;
+ uint64_t i_tt_device : 7;
+ uint64_t i_tt_init : 3;
+ uint64_t i_reserved : 5;
+ } ii_icsmh_fld_s;
+} ii_icsmh_u_t;
+
+
+/************************************************************************
+ * *
+ * The Shub DEBUG unit provides a 3-bit selection signal to the *
+ * II core and a 3-bit selection signal to the fsbclk domain in the II *
+ * wrapper. *
+ * *
+ ************************************************************************/
+
+typedef union ii_idbss_u {
+ uint64_t ii_idbss_regval;
+ struct {
+ uint64_t i_iioclk_core_submenu : 3;
+ uint64_t i_rsvd : 5;
+ uint64_t i_fsbclk_wrapper_submenu : 3;
+ uint64_t i_rsvd_1 : 5;
+ uint64_t i_iioclk_menu : 5;
+ uint64_t i_rsvd_2 : 43;
+ } ii_idbss_fld_s;
+} ii_idbss_u_t;
+
+
+/************************************************************************
+ * *
+ * Description: This register is used to set up the length for a *
+ * transfer and then to monitor the progress of that transfer. This *
+ * register needs to be initialized before a transfer is started. A *
+ * legitimate write to this register will set the Busy bit, clear the *
+ * Error bit, and initialize the length to the value desired. *
+ * While the transfer is in progress, hardware will decrement the *
+ * length field with each successful block that is copied. Once the *
+ * transfer completes, hardware will clear the Busy bit. The length *
+ * field will also contain the number of cache lines left to be *
+ * transferred. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibls0_u {
+ uint64_t ii_ibls0_regval;
+ struct {
+ uint64_t i_length : 16;
+ uint64_t i_error : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_busy : 1;
+ uint64_t i_rsvd : 43;
+ } ii_ibls0_fld_s;
+} ii_ibls0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibsa0_u {
+ uint64_t ii_ibsa0_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 42;
+ uint64_t i_rsvd : 15;
+ } ii_ibsa0_fld_s;
+} ii_ibsa0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibda0_u {
+ uint64_t ii_ibda0_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 42;
+ uint64_t i_rsvd : 15;
+ } ii_ibda0_fld_s;
+} ii_ibda0_u_t;
+
+
+/************************************************************************
+ * *
+ * Writing to this register sets up the attributes of the transfer *
+ * and initiates the transfer operation. Reading this register has *
+ * the side effect of terminating any transfer in progress. Note: *
+ * stopping a transfer midstream could have an adverse impact on the *
+ * other BTE. If a BTE stream has to be stopped (due to error *
+ * handling for example), both BTE streams should be stopped and *
+ * their transfers discarded. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibct0_u {
+ uint64_t ii_ibct0_regval;
+ struct {
+ uint64_t i_zerofill : 1;
+ uint64_t i_rsvd_2 : 3;
+ uint64_t i_notify : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_poison : 1;
+ uint64_t i_rsvd : 55;
+ } ii_ibct0_fld_s;
+} ii_ibct0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the address to which the WINV is sent. *
+ * This address has to be cache line aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibna0_u {
+ uint64_t ii_ibna0_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 42;
+ uint64_t i_rsvd : 15;
+ } ii_ibna0_fld_s;
+} ii_ibna0_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the programmable level as well as the node *
+ * ID and PI unit of the processor to which the interrupt will be *
+ * sent. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibia0_u {
+ uint64_t ii_ibia0_regval;
+ struct {
+ uint64_t i_rsvd_2 : 1;
+ uint64_t i_node_id : 11;
+ uint64_t i_rsvd_1 : 4;
+ uint64_t i_level : 7;
+ uint64_t i_rsvd : 41;
+ } ii_ibia0_fld_s;
+} ii_ibia0_u_t;
+
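+/*
+ * Putting the BTE 0 registers above together: an illustrative
+ * transfer-start sequence under the ordering the register
+ * descriptions imply (length and addresses first, IBCT last, since
+ * the IBCT write initiates the transfer). Register addresses are
+ * passed in rather than assuming a particular accessor.
+ */
+static inline void ii_bte0_start(volatile uint64_t *ibls,
+    volatile uint64_t *ibsa,
+    volatile uint64_t *ibda,
+    volatile uint64_t *ibct,
+    uint64_t src, uint64_t dst, uint64_t cachelines)
+{
+ ii_ibct0_u_t ct;
+
+ *ibls = cachelines; /* sets BUSY, clears ERROR */
+ *ibsa = src; /* cacheline-aligned source */
+ *ibda = dst; /* cacheline-aligned destination */
+ ct.ii_ibct0_regval = 0;
+ ct.ii_ibct0_fld_s.i_notify = 1; /* ask for completion notification */
+ *ibct = ct.ii_ibct0_regval; /* this write starts the transfer */
+}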
+
+/************************************************************************
+ * *
+ * Description: This register is used to set up the length for a *
+ * transfer and then to monitor the progress of that transfer. This *
+ * register needs to be initialized before a transfer is started. A *
+ * legitimate write to this register will set the Busy bit, clear the *
+ * Error bit, and initialize the length to the value desired. *
+ * While the transfer is in progress, hardware will decrement the *
+ * length field with each successful block that is copied. Once the *
+ * transfer completes, hardware will clear the Busy bit. The length *
+ * field will also contain the number of cache lines left to be *
+ * transferred. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibls1_u {
+ uint64_t ii_ibls1_regval;
+ struct {
+ uint64_t i_length : 16;
+ uint64_t i_error : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_busy : 1;
+ uint64_t i_rsvd : 43;
+ } ii_ibls1_fld_s;
+} ii_ibls1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibsa1_u {
+ uint64_t ii_ibsa1_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 33;
+ uint64_t i_rsvd : 24;
+ } ii_ibsa1_fld_s;
+} ii_ibsa1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register should be loaded before a transfer is started. The *
+ * address to be loaded in bits 39:0 is the 40-bit TRex+ physical *
+ * address as described in Section 1.3, Figure 2 and Figure 3. Since *
+ * the bottom 7 bits of the address are always taken to be zero, BTE *
+ * transfers are always cacheline-aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibda1_u {
+ uint64_t ii_ibda1_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 33;
+ uint64_t i_rsvd : 24;
+ } ii_ibda1_fld_s;
+} ii_ibda1_u_t;
+
+
+/************************************************************************
+ * *
+ * Writing to this register sets up the attributes of the transfer *
+ * and initiates the transfer operation. Reading this register has *
+ * the side effect of terminating any transfer in progress. Note: *
+ * stopping a transfer midstream could have an adverse impact on the *
+ * other BTE. If a BTE stream has to be stopped (due to error *
+ * handling for example), both BTE streams should be stopped and *
+ * their transfers discarded. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibct1_u {
+ uint64_t ii_ibct1_regval;
+ struct {
+ uint64_t i_zerofill : 1;
+ uint64_t i_rsvd_2 : 3;
+ uint64_t i_notify : 1;
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_poison : 1;
+ uint64_t i_rsvd : 55;
+ } ii_ibct1_fld_s;
+} ii_ibct1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the address to which the WINV is sent. *
+ * This address has to be cache line aligned. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibna1_u {
+ uint64_t ii_ibna1_regval;
+ struct {
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_addr : 33;
+ uint64_t i_rsvd : 24;
+ } ii_ibna1_fld_s;
+} ii_ibna1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register contains the programmable level as well as the node *
+ * ID and PI unit of the processor to which the interrupt will be *
+ * sent. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ibia1_u {
+ uint64_t ii_ibia1_regval;
+ struct {
+ uint64_t i_pi_id : 1;
+ uint64_t i_node_id : 8;
+ uint64_t i_rsvd_1 : 7;
+ uint64_t i_level : 7;
+ uint64_t i_rsvd : 41;
+ } ii_ibia1_fld_s;
+} ii_ibia1_u_t;
+
+
+/************************************************************************
+ * *
+ * This register defines the resources that feed information into *
+ * the two performance counters located in the IO Performance *
+ * Profiling Register. There are 17 different quantities that can be *
+ * measured. Given these 17 different options, the two performance *
+ * counters have 15 of them in common; menu selections 0 through 0xE *
+ * are identical for each performance counter. As for the other two *
+ * options, one is available from one performance counter and the *
+ * other is available from the other performance counter. Hence, the *
+ * II supports all 17*16=272 possible combinations of quantities to *
+ * measure. *
+ * *
+ ************************************************************************/
+
+typedef union ii_ipcr_u {
+ uint64_t ii_ipcr_regval;
+ struct {
+ uint64_t i_ippr0_c : 4;
+ uint64_t i_ippr1_c : 4;
+ uint64_t i_icct : 8;
+ uint64_t i_rsvd : 48;
+ } ii_ipcr_fld_s;
+} ii_ipcr_u_t;
+
+
+/************************************************************************
+ * *
+ * *
+ * *
+ ************************************************************************/
+
+typedef union ii_ippr_u {
+ uint64_t ii_ippr_regval;
+ struct {
+ uint64_t i_ippr0 : 32;
+ uint64_t i_ippr1 : 32;
+ } ii_ippr_fld_s;
+} ii_ippr_u_t;
+
+
+
+/**************************************************************************
+ * *
+ * The following defines, which were not formed into structures, are *
+ * probably identical to other registers; the name of the matching *
+ * register is provided against each of them. This information *
+ * needs to be checked carefully. *
+ * *
+ * IIO_ICRB1_A IIO_ICRB0_A *
+ * IIO_ICRB1_B IIO_ICRB0_B *
+ * IIO_ICRB1_C IIO_ICRB0_C *
+ * IIO_ICRB1_D IIO_ICRB0_D *
+ * IIO_ICRB1_E IIO_ICRB0_E *
+ * IIO_ICRB2_A IIO_ICRB0_A *
+ * IIO_ICRB2_B IIO_ICRB0_B *
+ * IIO_ICRB2_C IIO_ICRB0_C *
+ * IIO_ICRB2_D IIO_ICRB0_D *
+ * IIO_ICRB2_E IIO_ICRB0_E *
+ * IIO_ICRB3_A IIO_ICRB0_A *
+ * IIO_ICRB3_B IIO_ICRB0_B *
+ * IIO_ICRB3_C IIO_ICRB0_C *
+ * IIO_ICRB3_D IIO_ICRB0_D *
+ * IIO_ICRB3_E IIO_ICRB0_E *
+ * IIO_ICRB4_A IIO_ICRB0_A *
+ * IIO_ICRB4_B IIO_ICRB0_B *
+ * IIO_ICRB4_C IIO_ICRB0_C *
+ * IIO_ICRB4_D IIO_ICRB0_D *
+ * IIO_ICRB4_E IIO_ICRB0_E *
+ * IIO_ICRB5_A IIO_ICRB0_A *
+ * IIO_ICRB5_B IIO_ICRB0_B *
+ * IIO_ICRB5_C IIO_ICRB0_C *
+ * IIO_ICRB5_D IIO_ICRB0_D *
+ * IIO_ICRB5_E IIO_ICRB0_E *
+ * IIO_ICRB6_A IIO_ICRB0_A *
+ * IIO_ICRB6_B IIO_ICRB0_B *
+ * IIO_ICRB6_C IIO_ICRB0_C *
+ * IIO_ICRB6_D IIO_ICRB0_D *
+ * IIO_ICRB6_E IIO_ICRB0_E *
+ * IIO_ICRB7_A IIO_ICRB0_A *
+ * IIO_ICRB7_B IIO_ICRB0_B *
+ * IIO_ICRB7_C IIO_ICRB0_C *
+ * IIO_ICRB7_D IIO_ICRB0_D *
+ * IIO_ICRB7_E IIO_ICRB0_E *
+ * IIO_ICRB8_A IIO_ICRB0_A *
+ * IIO_ICRB8_B IIO_ICRB0_B *
+ * IIO_ICRB8_C IIO_ICRB0_C *
+ * IIO_ICRB8_D IIO_ICRB0_D *
+ * IIO_ICRB8_E IIO_ICRB0_E *
+ * IIO_ICRB9_A IIO_ICRB0_A *
+ * IIO_ICRB9_B IIO_ICRB0_B *
+ * IIO_ICRB9_C IIO_ICRB0_C *
+ * IIO_ICRB9_D IIO_ICRB0_D *
+ * IIO_ICRB9_E IIO_ICRB0_E *
+ * IIO_ICRBA_A IIO_ICRB0_A *
+ * IIO_ICRBA_B IIO_ICRB0_B *
+ * IIO_ICRBA_C IIO_ICRB0_C *
+ * IIO_ICRBA_D IIO_ICRB0_D *
+ * IIO_ICRBA_E IIO_ICRB0_E *
+ * IIO_ICRBB_A IIO_ICRB0_A *
+ * IIO_ICRBB_B IIO_ICRB0_B *
+ * IIO_ICRBB_C IIO_ICRB0_C *
+ * IIO_ICRBB_D IIO_ICRB0_D *
+ * IIO_ICRBB_E IIO_ICRB0_E *
+ * IIO_ICRBC_A IIO_ICRB0_A *
+ * IIO_ICRBC_B IIO_ICRB0_B *
+ * IIO_ICRBC_C IIO_ICRB0_C *
+ * IIO_ICRBC_D IIO_ICRB0_D *
+ * IIO_ICRBC_E IIO_ICRB0_E *
+ * IIO_ICRBD_A IIO_ICRB0_A *
+ * IIO_ICRBD_B IIO_ICRB0_B *
+ * IIO_ICRBD_C IIO_ICRB0_C *
+ * IIO_ICRBD_D IIO_ICRB0_D *
+ * IIO_ICRBD_E IIO_ICRB0_E *
+ * IIO_ICRBE_A IIO_ICRB0_A *
+ * IIO_ICRBE_B IIO_ICRB0_B *
+ * IIO_ICRBE_C IIO_ICRB0_C *
+ * IIO_ICRBE_D IIO_ICRB0_D *
+ * IIO_ICRBE_E IIO_ICRB0_E *
+ * *
+ **************************************************************************/
+
+
+/*
+ * Slightly friendlier names for some common registers.
+ */
+#define IIO_WIDGET IIO_WID /* Widget identification */
+#define IIO_WIDGET_STAT IIO_WSTAT /* Widget status register */
+#define IIO_WIDGET_CTRL IIO_WCR /* Widget control register */
+#define IIO_PROTECT IIO_ILAPR /* IO interface protection */
+#define IIO_PROTECT_OVRRD IIO_ILAPO /* IO protect override */
+#define IIO_OUTWIDGET_ACCESS IIO_IOWA /* Outbound widget access */
+#define IIO_INWIDGET_ACCESS IIO_IIWA /* Inbound widget access */
+#define IIO_INDEV_ERR_MASK IIO_IIDEM /* Inbound device error mask */
+#define IIO_LLP_CSR IIO_ILCSR /* LLP control and status */
+#define IIO_LLP_LOG IIO_ILLR /* LLP log */
+#define IIO_XTALKCC_TOUT IIO_IXCC /* Xtalk credit count timeout*/
+#define IIO_XTALKTT_TOUT IIO_IXTT /* Xtalk tail timeout */
+#define IIO_IO_ERR_CLR IIO_IECLR /* IO error clear */
+#define IIO_IGFX_0 IIO_IGFX0
+#define IIO_IGFX_1 IIO_IGFX1
+#define IIO_IBCT_0 IIO_IBCT0
+#define IIO_IBCT_1 IIO_IBCT1
+#define IIO_IBLS_0 IIO_IBLS0
+#define IIO_IBLS_1 IIO_IBLS1
+#define IIO_IBSA_0 IIO_IBSA0
+#define IIO_IBSA_1 IIO_IBSA1
+#define IIO_IBDA_0 IIO_IBDA0
+#define IIO_IBDA_1 IIO_IBDA1
+#define IIO_IBNA_0 IIO_IBNA0
+#define IIO_IBNA_1 IIO_IBNA1
+#define IIO_IBIA_0 IIO_IBIA0
+#define IIO_IBIA_1 IIO_IBIA1
+#define IIO_IOPRB_0 IIO_IPRB0
+
+#define IIO_PRTE_A(_x) (IIO_IPRTE0_A + (8 * (_x)))
+#define IIO_PRTE_B(_x) (IIO_IPRTE0_B + (8 * (_x)))
+#define IIO_NUM_PRTES 8 /* Total number of PRTEs (PIO Read Table Entries) */
+#define IIO_WIDPRTE_A(x) IIO_PRTE_A(((x) - 8)) /* widget ID to its PRTE num */
+#define IIO_WIDPRTE_B(x) IIO_PRTE_B(((x) - 8)) /* widget ID to its PRTE num */
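+
+/*
+ * Worked example: widget IDs 8..0xF map to PRTE numbers 0..7, so
+ * IIO_WIDPRTE_A(8) == IIO_PRTE_A(0) == IIO_IPRTE0_A and
+ * IIO_WIDPRTE_A(0xF) == IIO_PRTE_A(7) == IIO_IPRTE0_A + 56.
+ */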
+
+#define IIO_NUM_IPRBS (9)
+
+#define IIO_LLP_CSR_IS_UP 0x00002000
+#define IIO_LLP_CSR_LLP_STAT_MASK 0x00003000
+#define IIO_LLP_CSR_LLP_STAT_SHFT 12
+
+#define IIO_LLP_CB_MAX 0xffff /* in ILLR CB_CNT, Max Check Bit errors */
+#define IIO_LLP_SN_MAX 0xffff /* in ILLR SN_CNT, Max Sequence Number errors */
+
+/* key to IIO_PROTECT_OVRRD */
+#define IIO_PROTECT_OVRRD_KEY 0x53474972756c6573ull /* "SGIrules" */
+
+/* BTE register names */
+#define IIO_BTE_STAT_0 IIO_IBLS_0 /* Also BTE length/status 0 */
+#define IIO_BTE_SRC_0 IIO_IBSA_0 /* Also BTE source address 0 */
+#define IIO_BTE_DEST_0 IIO_IBDA_0 /* Also BTE dest. address 0 */
+#define IIO_BTE_CTRL_0 IIO_IBCT_0 /* Also BTE control/terminate 0 */
+#define IIO_BTE_NOTIFY_0 IIO_IBNA_0 /* Also BTE notification 0 */
+#define IIO_BTE_INT_0 IIO_IBIA_0 /* Also BTE interrupt 0 */
+#define IIO_BTE_OFF_0 0 /* Base offset from BTE 0 regs. */
+#define IIO_BTE_OFF_1 (IIO_IBLS_1 - IIO_IBLS_0) /* Offset from base to BTE 1 */
+
+/* BTE register offsets from base */
+#define BTEOFF_STAT 0
+#define BTEOFF_SRC (IIO_BTE_SRC_0 - IIO_BTE_STAT_0)
+#define BTEOFF_DEST (IIO_BTE_DEST_0 - IIO_BTE_STAT_0)
+#define BTEOFF_CTRL (IIO_BTE_CTRL_0 - IIO_BTE_STAT_0)
+#define BTEOFF_NOTIFY (IIO_BTE_NOTIFY_0 - IIO_BTE_STAT_0)
+#define BTEOFF_INT (IIO_BTE_INT_0 - IIO_BTE_STAT_0)
+
+
+/* names used in shub diags */
+#define IIO_BASE_BTE0 IIO_IBLS_0
+#define IIO_BASE_BTE1 IIO_IBLS_1
+
+/*
+ * Macro which takes the widget number, and returns the
+ * IO PRB address of that widget.
+ * value _x is expected to be a widget number in the range
+ * 0, 8 - 0xF
+ */
+#define IIO_IOPRB(_x) (IIO_IOPRB_0 + ( ( (_x) < HUB_WIDGET_ID_MIN ? \
+ (_x) : \
+ (_x) - (HUB_WIDGET_ID_MIN-1)) << 3) )
+
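+/*
+ * Worked example (assuming HUB_WIDGET_ID_MIN is 8): IIO_IOPRB(0)
+ * evaluates to IIO_IOPRB_0 (PRB_0), IIO_IOPRB(8) to IIO_IOPRB_0 + 8
+ * (PRB_8), and IIO_IOPRB(0xF) to IIO_IOPRB_0 + 64 (PRB_F).
+ */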
+
+/* GFX Flow Control Node/Widget Register */
+#define IIO_IGFX_W_NUM_BITS 4 /* size of widget num field */
+#define IIO_IGFX_W_NUM_MASK ((1<<IIO_IGFX_W_NUM_BITS)-1)
+#define IIO_IGFX_W_NUM_SHIFT 0
+#define IIO_IGFX_PI_NUM_BITS 1 /* size of PI num field */
+#define IIO_IGFX_PI_NUM_MASK ((1<<IIO_IGFX_PI_NUM_BITS)-1)
+#define IIO_IGFX_PI_NUM_SHIFT 4
+#define IIO_IGFX_N_NUM_BITS 8 /* size of node num field */
+#define IIO_IGFX_N_NUM_MASK ((1<<IIO_IGFX_N_NUM_BITS)-1)
+#define IIO_IGFX_N_NUM_SHIFT 5
+#define IIO_IGFX_P_NUM_BITS 1 /* size of processor num field */
+#define IIO_IGFX_P_NUM_MASK ((1<<IIO_IGFX_P_NUM_BITS)-1)
+#define IIO_IGFX_P_NUM_SHIFT 16
+#define IIO_IGFX_INIT(widget, pi, node, cpu) (\
+ (((widget) & IIO_IGFX_W_NUM_MASK) << IIO_IGFX_W_NUM_SHIFT) | \
+ (((pi) & IIO_IGFX_PI_NUM_MASK)<< IIO_IGFX_PI_NUM_SHIFT)| \
+ (((node) & IIO_IGFX_N_NUM_MASK) << IIO_IGFX_N_NUM_SHIFT) | \
+ (((cpu) & IIO_IGFX_P_NUM_MASK) << IIO_IGFX_P_NUM_SHIFT))
+
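+/*
+ * Usage sketch (the field values are arbitrary examples):
+ *
+ * uint64_t igfx = IIO_IGFX_INIT(9, 0, 3, 1);
+ *
+ * encodes "widget 9, PI 0, node 3, processor 1" into one register
+ * value.
+ */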
+
+/* Scratch registers (all bits available) */
+#define IIO_SCRATCH_REG0 IIO_ISCR0
+#define IIO_SCRATCH_REG1 IIO_ISCR1
+#define IIO_SCRATCH_MASK 0xffffffffffffffffUL
+
+#define IIO_SCRATCH_BIT0_0 0x0000000000000001UL
+#define IIO_SCRATCH_BIT0_1 0x0000000000000002UL
+#define IIO_SCRATCH_BIT0_2 0x0000000000000004UL
+#define IIO_SCRATCH_BIT0_3 0x0000000000000008UL
+#define IIO_SCRATCH_BIT0_4 0x0000000000000010UL
+#define IIO_SCRATCH_BIT0_5 0x0000000000000020UL
+#define IIO_SCRATCH_BIT0_6 0x0000000000000040UL
+#define IIO_SCRATCH_BIT0_7 0x0000000000000080UL
+#define IIO_SCRATCH_BIT0_8 0x0000000000000100UL
+#define IIO_SCRATCH_BIT0_9 0x0000000000000200UL
+#define IIO_SCRATCH_BIT0_A 0x0000000000000400UL
+
+#define IIO_SCRATCH_BIT1_0 0x0000000000000001UL
+#define IIO_SCRATCH_BIT1_1 0x0000000000000002UL
+/* IO Translation Table Entries */
+#define IIO_NUM_ITTES 7 /* ITTEs numbered 0..6 */
+ /* Hw manuals number them 1..7! */
+/*
+ * IIO_IMEM Register fields.
+ */
+#define IIO_IMEM_W0ESD 0x1UL /* Widget 0 shut down due to error */
+#define IIO_IMEM_B0ESD (1UL << 4) /* BTE 0 shut down due to error */
+#define IIO_IMEM_B1ESD (1UL << 8) /* BTE 1 Shut down due to error */
+
+/*
+ * As a permanent workaround for a bug in the PI side of the shub, we've
+ * redefined big window 7 as small window 0.
+ XXX does this still apply for SN1??
+ */
+#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1)
+
+/*
+ * Use the top big window as a surrogate for the first small window
+ */
+#define SWIN0_BIGWIN HUB_NUM_BIG_WINDOW
+
+#define ILCSR_WARM_RESET 0x100
+
+/*
+ * CRB manipulation macros
+ * The CRB macros are slightly complicated, since there are up to
+ * four registers associated with each CRB entry.
+ */
+#define IIO_NUM_CRBS 15 /* Number of CRBs */
+#define IIO_NUM_PC_CRBS 4 /* Number of partial cache CRBs */
+#define IIO_ICRB_OFFSET 8
+#define IIO_ICRB_0 IIO_ICRB0_A
+#define IIO_ICRB_ADDR_SHFT 2 /* Shift to get proper address */
+/* XXX - This is now tuneable:
+ #define IIO_FIRST_PC_ENTRY 12
+ */
+
+#define IIO_ICRB_A(_x) ((u64)(IIO_ICRB_0 + (6 * IIO_ICRB_OFFSET * (_x))))
+#define IIO_ICRB_B(_x) ((u64)((char *)IIO_ICRB_A(_x) + 1*IIO_ICRB_OFFSET))
+#define IIO_ICRB_C(_x) ((u64)((char *)IIO_ICRB_A(_x) + 2*IIO_ICRB_OFFSET))
+#define IIO_ICRB_D(_x) ((u64)((char *)IIO_ICRB_A(_x) + 3*IIO_ICRB_OFFSET))
+#define IIO_ICRB_E(_x) ((u64)((char *)IIO_ICRB_A(_x) + 4*IIO_ICRB_OFFSET))
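+
+/*
+ * Worked example: with IIO_ICRB_OFFSET == 8, consecutive CRB entries
+ * are 6*8 == 48 bytes apart, so IIO_ICRB_A(1) == IIO_ICRB_0 + 48 and
+ * IIO_ICRB_E(1) == IIO_ICRB_0 + 48 + 32.
+ */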
+
+#define TNUM_TO_WIDGET_DEV(_tnum) (_tnum & 0x7)
+
+/*
+ * values for "ecode" field
+ */
+#define IIO_ICRB_ECODE_DERR 0 /* Directory error due to IIO access */
+#define IIO_ICRB_ECODE_PERR 1 /* Poison error on IO access */
+#define IIO_ICRB_ECODE_WERR 2 /* Write error by IIO access
+ * e.g. WINV to a Read only line. */
+#define IIO_ICRB_ECODE_AERR 3 /* Access error caused by IIO access */
+#define IIO_ICRB_ECODE_PWERR 4 /* Error on partial write */
+#define IIO_ICRB_ECODE_PRERR 5 /* Error on partial read */
+#define IIO_ICRB_ECODE_TOUT 6 /* CRB timeout before deallocating */
+#define IIO_ICRB_ECODE_XTERR 7 /* Incoming xtalk pkt had error bit */
+
+/*
+ * Values for field imsgtype
+ */
+#define IIO_ICRB_IMSGT_XTALK 0 /* Incoming message from Xtalk */
+#define IIO_ICRB_IMSGT_BTE 1 /* Incoming message from BTE */
+#define IIO_ICRB_IMSGT_SN1NET 2 /* Incoming message from SN1 net */
+#define IIO_ICRB_IMSGT_CRB 3 /* Incoming message from CRB ??? */
+
+/*
+ * values for field initiator.
+ */
+#define IIO_ICRB_INIT_XTALK 0 /* Message originated in xtalk */
+#define IIO_ICRB_INIT_BTE0 0x1 /* Message originated in BTE 0 */
+#define IIO_ICRB_INIT_SN1NET 0x2 /* Message originated in SN1net */
+#define IIO_ICRB_INIT_CRB 0x3 /* Message originated in CRB ? */
+#define IIO_ICRB_INIT_BTE1 0x5 /* Message originated in BTE 1 */
+
+/*
+ * Number of credits Hub widget has while sending req/response to
+ * xbow.
+ * Value of 3 is required by Xbow 1.1
+ * We may be able to increase this to 4 with Xbow 1.2.
+ */
+#define HUBII_XBOW_CREDIT 3
+#define HUBII_XBOW_REV2_CREDIT 4
+
+/*
+ * Number of credits that xtalk devices should use when communicating
+ * with a SHub (depth of SHub's queue).
+ */
+#define HUB_CREDIT 4
+
+/*
+ * Some IIO_PRB fields
+ */
+#define IIO_PRB_MULTI_ERR (1LL << 63)
+#define IIO_PRB_SPUR_RD (1LL << 51)
+#define IIO_PRB_SPUR_WR (1LL << 50)
+#define IIO_PRB_RD_TO (1LL << 49)
+#define IIO_PRB_ERROR (1LL << 48)
+
+/*************************************************************************
+
+ Some of the IIO field masks and shifts are defined here.
+ This is in order to maintain compatibility in SN0 and SN1 code
+
+**************************************************************************/
+
+/*
+ * ICMR register fields
+ * (Note: the IIO_ICMR_P_CNT and IIO_ICMR_PC_VLD from Hub are not
+ * present in SHub)
+ */
+
+#define IIO_ICMR_CRB_VLD_SHFT 20
+#define IIO_ICMR_CRB_VLD_MASK (0x7fffUL << IIO_ICMR_CRB_VLD_SHFT)
+
+#define IIO_ICMR_FC_CNT_SHFT 16
+#define IIO_ICMR_FC_CNT_MASK (0xf << IIO_ICMR_FC_CNT_SHFT)
+
+#define IIO_ICMR_C_CNT_SHFT 4
+#define IIO_ICMR_C_CNT_MASK (0xf << IIO_ICMR_C_CNT_SHFT)
+
+#define IIO_ICMR_PRECISE (1UL << 52)
+#define IIO_ICMR_CLR_RPPD (1UL << 13)
+#define IIO_ICMR_CLR_RQPD (1UL << 12)
+
+/*
+ * IIO PIO Deallocation register field masks : (IIO_IPDR)
+ XXX present but not needed in bedrock? See the manual.
+ */
+#define IIO_IPDR_PND (1 << 4)
+
+/*
+ * IIO CRB deallocation register field masks: (IIO_ICDR)
+ */
+#define IIO_ICDR_PND (1 << 4)
+
+/*
+ * IO BTE Length/Status (IIO_IBLS) register bit field definitions
+ */
+#define IBLS_BUSY (0x1UL << 20)
+#define IBLS_ERROR_SHFT 16
+#define IBLS_ERROR (0x1UL << IBLS_ERROR_SHFT)
+#define IBLS_LENGTH_MASK 0xffff
+
+/*
+ * IO BTE Control/Terminate register (IBCT) register bit field definitions
+ */
+#define IBCT_POISON (0x1UL << 8)
+#define IBCT_NOTIFY (0x1UL << 4)
+#define IBCT_ZFIL_MODE (0x1UL << 0)
+
+/*
+ * IIO Incoming Error Packet Header (IIO_IIEPH1/IIO_IIEPH2)
+ */
+#define IIEPH1_VALID (1UL << 44)
+#define IIEPH1_OVERRUN (1UL << 40)
+#define IIEPH1_ERR_TYPE_SHFT 32
+#define IIEPH1_ERR_TYPE_MASK 0xf
+#define IIEPH1_SOURCE_SHFT 20
+#define IIEPH1_SOURCE_MASK 11
+#define IIEPH1_SUPPL_SHFT 8
+#define IIEPH1_SUPPL_MASK 11
+#define IIEPH1_CMD_SHFT 0
+#define IIEPH1_CMD_MASK 7
+
+#define IIEPH2_TAIL (1UL << 40)
+#define IIEPH2_ADDRESS_SHFT 0
+#define IIEPH2_ADDRESS_MASK 38
+
+#define IIEPH1_ERR_SHORT_REQ 2
+#define IIEPH1_ERR_SHORT_REPLY 3
+#define IIEPH1_ERR_LONG_REQ 4
+#define IIEPH1_ERR_LONG_REPLY 5
+
+/*
+ * IO Error Clear register bit field definitions
+ */
+#define IECLR_PI1_FWD_INT (1UL << 31) /* clear PI1_FORWARD_INT in iidsr */
+#define IECLR_PI0_FWD_INT (1UL << 30) /* clear PI0_FORWARD_INT in iidsr */
+#define IECLR_SPUR_RD_HDR (1UL << 29) /* clear valid bit in ixss reg */
+#define IECLR_BTE1 (1UL << 18) /* clear bte error 1 */
+#define IECLR_BTE0 (1UL << 17) /* clear bte error 0 */
+#define IECLR_CRAZY (1UL << 16) /* clear crazy bit in wstat reg */
+#define IECLR_PRB_F (1UL << 15) /* clear err bit in PRB_F reg */
+#define IECLR_PRB_E (1UL << 14) /* clear err bit in PRB_E reg */
+#define IECLR_PRB_D (1UL << 13) /* clear err bit in PRB_D reg */
+#define IECLR_PRB_C (1UL << 12) /* clear err bit in PRB_C reg */
+#define IECLR_PRB_B (1UL << 11) /* clear err bit in PRB_B reg */
+#define IECLR_PRB_A (1UL << 10) /* clear err bit in PRB_A reg */
+#define IECLR_PRB_9 (1UL << 9) /* clear err bit in PRB_9 reg */
+#define IECLR_PRB_8 (1UL << 8) /* clear err bit in PRB_8 reg */
+#define IECLR_PRB_0 (1UL << 0) /* clear err bit in PRB_0 reg */
+
+/*
+ * IIO CRB control register Fields: IIO_ICCR
+ */
+#define IIO_ICCR_PENDING (0x10000)
+#define IIO_ICCR_CMD_MASK (0xFF)
+#define IIO_ICCR_CMD_SHFT (7)
+#define IIO_ICCR_CMD_NOP (0x0) /* No Op */
+#define IIO_ICCR_CMD_WAKE (0x100) /* Reactivate CRB entry and process */
+#define IIO_ICCR_CMD_TIMEOUT (0x200) /* Make CRB timeout & mark invalid */
+#define IIO_ICCR_CMD_EJECT (0x400) /* Contents of entry written to memory
+ * via a WB
+ */
+#define IIO_ICCR_CMD_FLUSH (0x800)
+
+/*
+ *
+ * CRB Register description.
+ *
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING * WARNING
+ *
+ * Many of the fields in CRB are status bits used by hardware
+ * for implementation of the protocol. It's very dangerous to
+ * mess around with the CRB registers.
+ *
+ * It's OK to read the CRB registers and try to make sense out of the
+ * fields in CRB.
+ *
+ * Updating a CRB requires all activity in the Hub IIO to be quiesced;
+ * otherwise, a write to a CRB could corrupt other CRB entries.
+ * CRBs are here only as a back-door peek at the shub IIO's status.
+ * Quiescing implies no DMAs and no PIOs, either directly from the
+ * cpu or from sn0net, which is not something that can be done
+ * easily. So, AVOID updating CRBs.
+ */
+
+/*
+ * Easy access macros for CRBs, all 5 registers (A-E)
+ */
+typedef ii_icrb0_a_u_t icrba_t;
+#define a_sidn ii_icrb0_a_fld_s.ia_sidn
+#define a_tnum ii_icrb0_a_fld_s.ia_tnum
+#define a_addr ii_icrb0_a_fld_s.ia_addr
+#define a_valid ii_icrb0_a_fld_s.ia_vld
+#define a_iow ii_icrb0_a_fld_s.ia_iow
+#define a_regvalue ii_icrb0_a_regval
+
+typedef ii_icrb0_b_u_t icrbb_t;
+#define b_use_old ii_icrb0_b_fld_s.ib_use_old
+#define b_imsgtype ii_icrb0_b_fld_s.ib_imsgtype
+#define b_imsg ii_icrb0_b_fld_s.ib_imsg
+#define b_initiator ii_icrb0_b_fld_s.ib_init
+#define b_exc ii_icrb0_b_fld_s.ib_exc
+#define b_ackcnt ii_icrb0_b_fld_s.ib_ack_cnt
+#define b_resp ii_icrb0_b_fld_s.ib_resp
+#define b_ack ii_icrb0_b_fld_s.ib_ack
+#define b_hold ii_icrb0_b_fld_s.ib_hold
+#define b_wb ii_icrb0_b_fld_s.ib_wb
+#define b_intvn ii_icrb0_b_fld_s.ib_intvn
+#define b_stall_ib ii_icrb0_b_fld_s.ib_stall_ib
+#define b_stall_int ii_icrb0_b_fld_s.ib_stall__intr
+#define b_stall_bte_0 ii_icrb0_b_fld_s.ib_stall__bte_0
+#define b_stall_bte_1 ii_icrb0_b_fld_s.ib_stall__bte_1
+#define b_error ii_icrb0_b_fld_s.ib_error
+#define b_ecode ii_icrb0_b_fld_s.ib_errcode
+#define b_lnetuce ii_icrb0_b_fld_s.ib_ln_uce
+#define b_mark ii_icrb0_b_fld_s.ib_mark
+#define b_xerr ii_icrb0_b_fld_s.ib_xt_err
+#define b_regvalue ii_icrb0_b_regval
+
+typedef ii_icrb0_c_u_t icrbc_t;
+#define c_suppl ii_icrb0_c_fld_s.ic_suppl
+#define c_barrop ii_icrb0_c_fld_s.ic_bo
+#define c_doresp ii_icrb0_c_fld_s.ic_resprqd
+#define c_gbr ii_icrb0_c_fld_s.ic_gbr
+#define c_btenum ii_icrb0_c_fld_s.ic_bte_num
+#define c_cohtrans ii_icrb0_c_fld_s.ic_ct
+#define c_xtsize ii_icrb0_c_fld_s.ic_size
+#define c_source ii_icrb0_c_fld_s.ic_source
+#define c_regvalue ii_icrb0_c_regval
+
+
+typedef ii_icrb0_d_u_t icrbd_t;
+#define d_sleep ii_icrb0_d_fld_s.id_sleep
+#define d_pricnt ii_icrb0_d_fld_s.id_pr_cnt
+#define d_pripsc ii_icrb0_d_fld_s.id_pr_psc
+#define d_bteop ii_icrb0_d_fld_s.id_bte_op
+#define d_bteaddr ii_icrb0_d_fld_s.id_pa_be /* ic_pa_be fld has 2 names*/
+#define d_benable ii_icrb0_d_fld_s.id_pa_be /* ic_pa_be fld has 2 names*/
+#define d_regvalue ii_icrb0_d_regval
+
+typedef ii_icrb0_e_u_t icrbe_t;
+#define icrbe_ctxtvld ii_icrb0_e_fld_s.ie_cvld
+#define icrbe_toutvld ii_icrb0_e_fld_s.ie_tvld
+#define icrbe_context ii_icrb0_e_fld_s.ie_context
+#define icrbe_timeout ii_icrb0_e_fld_s.ie_timeout
+#define e_regvalue ii_icrb0_e_regval
+
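+/*
+ * Read-only usage sketch, consistent with the WARNING above (peek,
+ * never poke). "nasid" and "crbnum" are assumed to come from the
+ * caller; REMOTE_HUB_L is the usual remote-hub MMR load accessor:
+ *
+ *	icrba_t icrba;
+ *	icrbb_t icrbb;
+ *
+ *	icrba.a_regvalue = REMOTE_HUB_L(nasid, IIO_ICRB_A(crbnum));
+ *	icrbb.b_regvalue = REMOTE_HUB_L(nasid, IIO_ICRB_B(crbnum));
+ *	if (icrba.a_valid && icrbb.b_error)
+ *		... decode icrbb.b_ecode against IIO_ICRB_ECODE_* ...
+ */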
+
+/* Number of widgets supported by shub */
+#define HUB_NUM_WIDGET 9
+#define HUB_WIDGET_ID_MIN 0x8
+#define HUB_WIDGET_ID_MAX 0xf
+
+#define HUB_WIDGET_PART_NUM 0xc120
+#define MAX_HUBS_PER_XBOW 2
+
+/* A few more #defines for backwards compatibility */
+#define iprb_t ii_iprb0_u_t
+#define iprb_regval ii_iprb0_regval
+#define iprb_mult_err ii_iprb0_fld_s.i_mult_err
+#define iprb_spur_rd ii_iprb0_fld_s.i_spur_rd
+#define iprb_spur_wr ii_iprb0_fld_s.i_spur_wr
+#define iprb_rd_to ii_iprb0_fld_s.i_rd_to
+#define iprb_ovflow ii_iprb0_fld_s.i_of_cnt
+#define iprb_error ii_iprb0_fld_s.i_error
+#define iprb_ff ii_iprb0_fld_s.i_f
+#define iprb_mode ii_iprb0_fld_s.i_m
+#define iprb_bnakctr ii_iprb0_fld_s.i_nb
+#define iprb_anakctr ii_iprb0_fld_s.i_na
+#define iprb_xtalkctr ii_iprb0_fld_s.i_c
+
+#define LNK_STAT_WORKING 0x2 /* LLP is working */
+
+#define IIO_WSTAT_ECRAZY (1ULL << 32) /* Hub gone crazy */
+#define IIO_WSTAT_TXRETRY (1ULL << 9) /* Hub Tx Retry timeout */
+#define IIO_WSTAT_TXRETRY_MASK (0x7F) /* should be 0xFF?? */
+#define IIO_WSTAT_TXRETRY_SHFT (16)
+#define IIO_WSTAT_TXRETRY_CNT(w) (((w) >> IIO_WSTAT_TXRETRY_SHFT) & \
+ IIO_WSTAT_TXRETRY_MASK)
+
+/* Number of II perf. counters we can multiplex at once */
+
+#define IO_PERF_SETS 32
+
+/* Bit for the widget in inbound access register */
+#define IIO_IIWA_WIDGET(_w) ((uint64_t)(1ULL << _w))
+/* Bit for the widget in outbound access register */
+#define IIO_IOWA_WIDGET(_w) ((uint64_t)(1ULL << _w))
+
+/* NOTE: The following define assumes that we are going to get
+ * widget numbers from 8 thru F and the device numbers within
+ * widget from 0 thru 7.
+ */
+#define IIO_IIDEM_WIDGETDEV_MASK(w, d) ((uint64_t)(1ULL << (8 * ((w) - 8) + (d))))
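+
+/*
+ * Worked example: widget 0xa, device 3 occupies bit 8 * (0xa - 8) + 3,
+ * so IIO_IIDEM_WIDGETDEV_MASK(0xa, 3) == 1UL << 19.
+ */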
+
+/* IO Interrupt Destination Register */
+#define IIO_IIDSR_SENT_SHIFT 28
+#define IIO_IIDSR_SENT_MASK 0x30000000
+#define IIO_IIDSR_ENB_SHIFT 24
+#define IIO_IIDSR_ENB_MASK 0x01000000
+#define IIO_IIDSR_NODE_SHIFT 9
+#define IIO_IIDSR_NODE_MASK 0x000ff700
+#define IIO_IIDSR_PI_ID_SHIFT 8
+#define IIO_IIDSR_PI_ID_MASK 0x00000100
+#define IIO_IIDSR_LVL_SHIFT 0
+#define IIO_IIDSR_LVL_MASK 0x000000ff
+
+/* Xtalk timeout threshold register (IIO_IXTT) */
+#define IXTT_RRSP_TO_SHFT 55 /* read response timeout */
+#define IXTT_RRSP_TO_MASK (0x1FULL << IXTT_RRSP_TO_SHFT)
+#define IXTT_RRSP_PS_SHFT 32 /* read response TO prescaler */
+#define IXTT_RRSP_PS_MASK (0x7FFFFFULL << IXTT_RRSP_PS_SHFT)
+#define IXTT_TAIL_TO_SHFT 0 /* tail timeout counter threshold */
+#define IXTT_TAIL_TO_MASK (0x3FFFFFFULL << IXTT_TAIL_TO_SHFT)
+
+/*
+ * The IO LLP control status register and widget control register
+ */
+
+typedef union hubii_wcr_u {
+ uint64_t wcr_reg_value;
+ struct {
+ uint64_t wcr_widget_id: 4, /* Widget ID */
+ wcr_tag_mode: 1, /* Tag mode */
+ wcr_rsvd1: 8, /* Reserved */
+ wcr_xbar_crd: 3, /* LLP crossbar credit */
+ wcr_f_bad_pkt: 1, /* Force bad llp pkt enable */
+ wcr_dir_con: 1, /* widget direct connect */
+ wcr_e_thresh: 5, /* elasticity threshold */
+ wcr_rsvd: 41; /* unused */
+ } wcr_fields_s;
+} hubii_wcr_t;
+
+#define iwcr_dir_con wcr_fields_s.wcr_dir_con
+
+/*
+ * The structures below are defined to extract and modify the II
+ * performance registers.
+ */
+
+/*
+ * io_perf_sel allows the caller to specify what tests will be
+ * performed.
+ */
+
+typedef union io_perf_sel {
+ uint64_t perf_sel_reg;
+ struct {
+ uint64_t perf_ippr0 : 4,
+ perf_ippr1 : 4,
+ perf_icct : 8,
+ perf_rsvd : 48;
+ } perf_sel_bits;
+} io_perf_sel_t;
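+
+/*
+ * Programming sketch (illustrative only; "event0"/"event1" are
+ * placeholder values, and the hardware event encodings are not
+ * spelled out here):
+ *
+ *	io_perf_sel_t sel;
+ *
+ *	sel.perf_sel_reg = 0;
+ *	sel.perf_sel_bits.perf_ippr0 = event0;
+ *	sel.perf_sel_bits.perf_ippr1 = event1;
+ *
+ * and the resulting sel.perf_sel_reg is written to the II performance
+ * selection MMR with REMOTE_HUB_S().
+ */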
+
+/*
+ * io_perf_cnt is used to extract the count from the shub registers.
+ * Due to hardware problems there is only one counter, not two.
+ */
+
+typedef union io_perf_cnt {
+ uint64_t perf_cnt;
+ struct {
+ uint64_t perf_cnt : 20,
+ perf_rsvd2 : 12,
+ perf_rsvd1 : 32;
+ } perf_cnt_bits;
+
+} io_perf_cnt_t;
+
+typedef union iprte_a {
+ uint64_t entry;
+ struct {
+ uint64_t i_rsvd_1 : 3;
+ uint64_t i_addr : 38;
+ uint64_t i_init : 3;
+ uint64_t i_source : 8;
+ uint64_t i_rsvd : 2;
+ uint64_t i_widget : 4;
+ uint64_t i_to_cnt : 5;
+ uint64_t i_vld : 1;
+ } iprte_fields;
+} iprte_a_t;
+
+#endif /* _ASM_IA64_SN_SHUBIO_H */
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_TIO_H
+#define _ASM_IA64_SN_TIO_H
+
+#define TIO_MMR_ADDR_MOD
+
+#define TIO_NODE_ID TIO_MMR_ADDR_MOD(0x0000000090060e80)
+
+#define TIO_ITTE_BASE 0xb0008800 /* base of translation table entries */
+#define TIO_ITTE(bigwin) (TIO_ITTE_BASE + 8*(bigwin))
+
+#define TIO_ITTE_OFFSET_BITS 8 /* size of offset field */
+#define TIO_ITTE_OFFSET_MASK ((1<<TIO_ITTE_OFFSET_BITS)-1)
+#define TIO_ITTE_OFFSET_SHIFT 0
+
+#define TIO_ITTE_WIDGET_BITS 2 /* size of widget field */
+#define TIO_ITTE_WIDGET_MASK ((1<<TIO_ITTE_WIDGET_BITS)-1)
+#define TIO_ITTE_WIDGET_SHIFT 12
+#define TIO_ITTE_VALID_MASK 0x1
+#define TIO_ITTE_VALID_SHIFT 16
+
+
+#define TIO_ITTE_PUT(nasid, bigwin, widget, addr, valid) \
+ REMOTE_HUB_S((nasid), TIO_ITTE(bigwin), \
+ (((((addr) >> TIO_BWIN_SIZE_BITS) & \
+ TIO_ITTE_OFFSET_MASK) << TIO_ITTE_OFFSET_SHIFT) | \
+ (((widget) & TIO_ITTE_WIDGET_MASK) << TIO_ITTE_WIDGET_SHIFT)) | \
+ (( (valid) & TIO_ITTE_VALID_MASK) << TIO_ITTE_VALID_SHIFT))
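+
+/*
+ * Packing sketch: TIO_ITTE_PUT() stores, into ITTE "bigwin" on the
+ * given nasid, an entry laid out as
+ *
+ *	bit 16      : valid
+ *	bits 13..12 : widget (2 bits)
+ *	bits 7..0   : (addr >> TIO_BWIN_SIZE_BITS) & 0xff
+ *
+ * TIO_BWIN_SIZE_BITS is assumed to be defined with the address-space
+ * constants elsewhere; it is not defined in this header.
+ */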
+
+#endif /* _ASM_IA64_SN_TIO_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_HUBDEV_H
+#define _ASM_IA64_SN_XTALK_HUBDEV_H
+
+#define HUB_WIDGET_ID_MAX 0xf
+#define DEV_PER_WIDGET (2*2*8)
+#define IIO_ITTE_WIDGET_BITS 4 /* size of widget field */
+#define IIO_ITTE_WIDGET_MASK ((1<<IIO_ITTE_WIDGET_BITS)-1)
+#define IIO_ITTE_WIDGET_SHIFT 8
+
+/*
+ * Use the top big window as a surrogate for the first small window
+ */
+#define SWIN0_BIGWIN HUB_NUM_BIG_WINDOW
+#define IIO_NUM_ITTES 7
+#define HUB_NUM_BIG_WINDOW (IIO_NUM_ITTES - 1)
+
+struct sn_flush_device_list {
+ int sfdl_bus;
+ int sfdl_slot;
+ int sfdl_pin;
+ struct bar_list {
+ unsigned long start;
+ unsigned long end;
+ } sfdl_bar_list[6];
+ unsigned long sfdl_force_int_addr;
+ unsigned long sfdl_flush_value;
+ volatile unsigned long *sfdl_flush_addr;
+ uint64_t sfdl_persistent_busnum;
+ struct pcibus_info *sfdl_pcibus_info;
+ spinlock_t sfdl_flush_lock;
+};
+
+/*
+ * **widget_p - Used as an array[wid_num][device] of sn_flush_device_list.
+ */
+struct sn_flush_nasid_entry {
+ struct sn_flush_device_list **widget_p; /* Used as an array of wid_num */
+ uint64_t iio_itte[8];
+};
+
+struct hubdev_info {
+ geoid_t hdi_geoid;
+ short hdi_nasid;
+ short hdi_peer_nasid; /* Dual Porting Peer */
+
+ struct sn_flush_nasid_entry hdi_flush_nasid_list;
+ struct xwidget_info hdi_xwidget_info[HUB_WIDGET_ID_MAX + 1];
+
+
+ void *hdi_nodepda;
+ void *hdi_node_vertex;
+ void *hdi_xtalk_vertex;
+};
+
+extern void hubdev_init_node(nodepda_t *, cnodeid_t);
+extern void hub_error_init(struct hubdev_info *);
+extern void ice_error_init(struct hubdev_info *);
+
+
+#endif /* _ASM_IA64_SN_XTALK_HUBDEV_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XBOW_H
+#define _ASM_IA64_SN_XTALK_XBOW_H
+
+#define XBOW_PORT_8 0x8
+#define XBOW_PORT_C 0xc
+#define XBOW_PORT_F 0xf
+
+#define MAX_XBOW_PORTS 8 /* number of ports on xbow chip */
+#define BASE_XBOW_PORT XBOW_PORT_8 /* Lowest external port */
+
+#define XBOW_CREDIT 4
+
+#define MAX_XBOW_NAME 16
+
+/* Register set for each xbow link */
+typedef volatile struct xb_linkregs_s {
+/*
+ * we access these through synergy unswizzled space, so the address
+ * gets twiddled (i.e. references to 0x4 actually go to 0x0 and vv.)
+ * That's why we put the register first and filler second.
+ */
+ uint32_t link_ibf;
+ uint32_t filler0; /* filler for proper alignment */
+ uint32_t link_control;
+ uint32_t filler1;
+ uint32_t link_status;
+ uint32_t filler2;
+ uint32_t link_arb_upper;
+ uint32_t filler3;
+ uint32_t link_arb_lower;
+ uint32_t filler4;
+ uint32_t link_status_clr;
+ uint32_t filler5;
+ uint32_t link_reset;
+ uint32_t filler6;
+ uint32_t link_aux_status;
+ uint32_t filler7;
+} xb_linkregs_t;
+
+typedef volatile struct xbow_s {
+ /* standard widget configuration 0x000000-0x000057 */
+ struct widget_cfg xb_widget; /* 0x000000 */
+
+ /* helper fieldnames for accessing bridge widget */
+
+#define xb_wid_id xb_widget.w_id
+#define xb_wid_stat xb_widget.w_status
+#define xb_wid_err_upper xb_widget.w_err_upper_addr
+#define xb_wid_err_lower xb_widget.w_err_lower_addr
+#define xb_wid_control xb_widget.w_control
+#define xb_wid_req_timeout xb_widget.w_req_timeout
+#define xb_wid_int_upper xb_widget.w_intdest_upper_addr
+#define xb_wid_int_lower xb_widget.w_intdest_lower_addr
+#define xb_wid_err_cmdword xb_widget.w_err_cmd_word
+#define xb_wid_llp xb_widget.w_llp_cfg
+#define xb_wid_stat_clr xb_widget.w_tflush
+
+/*
+ * we access these through synergy unswizzled space, so the address
+ * gets twiddled (i.e. references to 0x4 actually go to 0x0 and vv.)
+ * That's why we put the register first and filler second.
+ */
+ /* xbow-specific widget configuration 0x000058-0x0000FF */
+ uint32_t xb_wid_arb_reload; /* 0x00005C */
+ uint32_t _pad_000058;
+ uint32_t xb_perf_ctr_a; /* 0x000064 */
+ uint32_t _pad_000060;
+ uint32_t xb_perf_ctr_b; /* 0x00006c */
+ uint32_t _pad_000068;
+ uint32_t xb_nic; /* 0x000074 */
+ uint32_t _pad_000070;
+
+ /* Xbridge only */
+ uint32_t xb_w0_rst_fnc; /* 0x00007C */
+ uint32_t _pad_000078;
+ uint32_t xb_l8_rst_fnc; /* 0x000084 */
+ uint32_t _pad_000080;
+ uint32_t xb_l9_rst_fnc; /* 0x00008c */
+ uint32_t _pad_000088;
+ uint32_t xb_la_rst_fnc; /* 0x000094 */
+ uint32_t _pad_000090;
+ uint32_t xb_lb_rst_fnc; /* 0x00009c */
+ uint32_t _pad_000098;
+ uint32_t xb_lc_rst_fnc; /* 0x0000a4 */
+ uint32_t _pad_0000a0;
+ uint32_t xb_ld_rst_fnc; /* 0x0000ac */
+ uint32_t _pad_0000a8;
+ uint32_t xb_le_rst_fnc; /* 0x0000b4 */
+ uint32_t _pad_0000b0;
+ uint32_t xb_lf_rst_fnc; /* 0x0000bc */
+ uint32_t _pad_0000b8;
+ uint32_t xb_lock; /* 0x0000c4 */
+ uint32_t _pad_0000c0;
+ uint32_t xb_lock_clr; /* 0x0000cc */
+ uint32_t _pad_0000c8;
+ /* end of Xbridge only */
+ uint32_t _pad_0000d0[12];
+
+ /* Link Specific Registers, port 8..15 0x000100-0x000300 */
+ xb_linkregs_t xb_link_raw[MAX_XBOW_PORTS];
+#define xb_link(p) xb_link_raw[(p) & (MAX_XBOW_PORTS - 1)]
+
+} xbow_t;
+
+#define XB_FLAGS_EXISTS 0x1 /* device exists */
+#define XB_FLAGS_MASTER 0x2
+#define XB_FLAGS_SLAVE 0x0
+#define XB_FLAGS_GBR 0x4
+#define XB_FLAGS_16BIT 0x8
+#define XB_FLAGS_8BIT 0x0
+
+/* is widget port number valid? (based on version 7.0 of xbow spec) */
+#define XBOW_WIDGET_IS_VALID(wid) ((wid) >= XBOW_PORT_8 && (wid) <= XBOW_PORT_F)
+
+/* whether to use upper or lower arbitration register, given source widget id */
+#define XBOW_ARB_IS_UPPER(wid) ((wid) >= XBOW_PORT_8 && (wid) <= XBOW_PORT_B)
+#define XBOW_ARB_IS_LOWER(wid) ((wid) >= XBOW_PORT_C && (wid) <= XBOW_PORT_F)
+
+/* offset of arbitration register, given source widget id */
+#define XBOW_ARB_OFF(wid) (XBOW_ARB_IS_UPPER(wid) ? 0x1c : 0x24)
+
+#define XBOW_WID_ID WIDGET_ID
+#define XBOW_WID_STAT WIDGET_STATUS
+#define XBOW_WID_ERR_UPPER WIDGET_ERR_UPPER_ADDR
+#define XBOW_WID_ERR_LOWER WIDGET_ERR_LOWER_ADDR
+#define XBOW_WID_CONTROL WIDGET_CONTROL
+#define XBOW_WID_REQ_TO WIDGET_REQ_TIMEOUT
+#define XBOW_WID_INT_UPPER WIDGET_INTDEST_UPPER_ADDR
+#define XBOW_WID_INT_LOWER WIDGET_INTDEST_LOWER_ADDR
+#define XBOW_WID_ERR_CMDWORD WIDGET_ERR_CMD_WORD
+#define XBOW_WID_LLP WIDGET_LLP_CFG
+#define XBOW_WID_STAT_CLR WIDGET_TFLUSH
+#define XBOW_WID_ARB_RELOAD 0x5c
+#define XBOW_WID_PERF_CTR_A 0x64
+#define XBOW_WID_PERF_CTR_B 0x6c
+#define XBOW_WID_NIC 0x74
+
+/* Xbridge only */
+#define XBOW_W0_RST_FNC 0x00007C
+#define XBOW_L8_RST_FNC 0x000084
+#define XBOW_L9_RST_FNC 0x00008c
+#define XBOW_LA_RST_FNC 0x000094
+#define XBOW_LB_RST_FNC 0x00009c
+#define XBOW_LC_RST_FNC 0x0000a4
+#define XBOW_LD_RST_FNC 0x0000ac
+#define XBOW_LE_RST_FNC 0x0000b4
+#define XBOW_LF_RST_FNC 0x0000bc
+#define XBOW_RESET_FENCE(x) (((x) > 7 && (x) < 16) ? \
+ (XBOW_W0_RST_FNC + ((x) - 7) * 8) : \
+ ((x) == 0) ? XBOW_W0_RST_FNC : 0)
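+
+/*
+ * Worked example: XBOW_RESET_FENCE(8) == 0x7c + 1 * 8 == 0x84
+ * (XBOW_L8_RST_FNC) and XBOW_RESET_FENCE(0xf) == 0x7c + 8 * 8 == 0xbc
+ * (XBOW_LF_RST_FNC); widget 0 maps to XBOW_W0_RST_FNC, anything else
+ * yields 0.
+ */
+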
+#define XBOW_LOCK 0x0000c4
+#define XBOW_LOCK_CLR 0x0000cc
+/* End of Xbridge only */
+
+/* used only in ide, but defined here within the reserved portion */
+/* of the widget0 address space (before 0xf4) */
+#define XBOW_WID_UNDEF 0xe4
+
+/* xbow link register set base, legal value for x is 0x8..0xf */
+#define XB_LINK_BASE 0x100
+#define XB_LINK_OFFSET 0x40
+#define XB_LINK_REG_BASE(x) (XB_LINK_BASE + ((x) & (MAX_XBOW_PORTS - 1)) * XB_LINK_OFFSET)
+
+#define XB_LINK_IBUF_FLUSH(x) (XB_LINK_REG_BASE(x) + 0x4)
+#define XB_LINK_CTRL(x) (XB_LINK_REG_BASE(x) + 0xc)
+#define XB_LINK_STATUS(x) (XB_LINK_REG_BASE(x) + 0x14)
+#define XB_LINK_ARB_UPPER(x) (XB_LINK_REG_BASE(x) + 0x1c)
+#define XB_LINK_ARB_LOWER(x) (XB_LINK_REG_BASE(x) + 0x24)
+#define XB_LINK_STATUS_CLR(x) (XB_LINK_REG_BASE(x) + 0x2c)
+#define XB_LINK_RESET(x) (XB_LINK_REG_BASE(x) + 0x34)
+#define XB_LINK_AUX_STATUS(x) (XB_LINK_REG_BASE(x) + 0x3c)
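+
+/*
+ * Worked example: port 0xa maps to link register set 2 (0xa & 7), so
+ *
+ *	XB_LINK_REG_BASE(0xa) == 0x100 + 2 * 0x40 == 0x180
+ *	XB_LINK_CTRL(0xa)     == 0x18c
+ *	XB_LINK_STATUS(0xa)   == 0x194
+ */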
+
+/* link_control(x) */
+#define XB_CTRL_LINKALIVE_IE 0x80000000 /* link comes alive */
+ /* reserved: 0x40000000 */
+#define XB_CTRL_PERF_CTR_MODE_MSK 0x30000000 /* perf counter mode */
+#define XB_CTRL_IBUF_LEVEL_MSK 0x0e000000 /* input packet buffer level */
+#define XB_CTRL_8BIT_MODE 0x01000000 /* force link into 8 bit mode */
+#define XB_CTRL_BAD_LLP_PKT 0x00800000 /* force bad LLP packet */
+#define XB_CTRL_WIDGET_CR_MSK 0x007c0000 /* LLP widget credit mask */
+#define XB_CTRL_WIDGET_CR_SHFT 18 /* LLP widget credit shift */
+#define XB_CTRL_ILLEGAL_DST_IE 0x00020000 /* illegal destination */
+#define XB_CTRL_OALLOC_IBUF_IE 0x00010000 /* overallocated input buffer */
+ /* reserved: 0x0000fe00 */
+#define XB_CTRL_BNDWDTH_ALLOC_IE 0x00000100 /* bandwidth alloc */
+#define XB_CTRL_RCV_CNT_OFLOW_IE 0x00000080 /* rcv retry overflow */
+#define XB_CTRL_XMT_CNT_OFLOW_IE 0x00000040 /* xmt retry overflow */
+#define XB_CTRL_XMT_MAX_RTRY_IE 0x00000020 /* max transmit retry */
+#define XB_CTRL_RCV_IE 0x00000010 /* receive */
+#define XB_CTRL_XMT_RTRY_IE 0x00000008 /* transmit retry */
+ /* reserved: 0x00000004 */
+#define XB_CTRL_MAXREQ_TOUT_IE 0x00000002 /* maximum request timeout */
+#define XB_CTRL_SRC_TOUT_IE 0x00000001 /* source timeout */
+
+/* link_status(x) */
+#define XB_STAT_LINKALIVE XB_CTRL_LINKALIVE_IE
+ /* reserved: 0x7ff80000 */
+#define XB_STAT_MULTI_ERR 0x00040000 /* multi error */
+#define XB_STAT_ILLEGAL_DST_ERR XB_CTRL_ILLEGAL_DST_IE
+#define XB_STAT_OALLOC_IBUF_ERR XB_CTRL_OALLOC_IBUF_IE
+#define XB_STAT_BNDWDTH_ALLOC_ID_MSK 0x0000ff00 /* port bitmask */
+#define XB_STAT_RCV_CNT_OFLOW_ERR XB_CTRL_RCV_CNT_OFLOW_IE
+#define XB_STAT_XMT_CNT_OFLOW_ERR XB_CTRL_XMT_CNT_OFLOW_IE
+#define XB_STAT_XMT_MAX_RTRY_ERR XB_CTRL_XMT_MAX_RTRY_IE
+#define XB_STAT_RCV_ERR XB_CTRL_RCV_IE
+#define XB_STAT_XMT_RTRY_ERR XB_CTRL_XMT_RTRY_IE
+ /* reserved: 0x00000004 */
+#define XB_STAT_MAXREQ_TOUT_ERR XB_CTRL_MAXREQ_TOUT_IE
+#define XB_STAT_SRC_TOUT_ERR XB_CTRL_SRC_TOUT_IE
+
+/* link_aux_status(x) */
+#define XB_AUX_STAT_RCV_CNT 0xff000000
+#define XB_AUX_STAT_XMT_CNT 0x00ff0000
+#define XB_AUX_STAT_TOUT_DST 0x0000ff00
+#define XB_AUX_LINKFAIL_RST_BAD 0x00000040
+#define XB_AUX_STAT_PRESENT 0x00000020
+#define XB_AUX_STAT_PORT_WIDTH 0x00000010
+ /* reserved: 0x0000000f */
+
+/*
+ * link_arb_upper/link_arb_lower(x), (reg) should be the link_arb_upper
+ * register if (x) is 0x8..0xb, link_arb_lower if (x) is 0xc..0xf
+ */
+#define XB_ARB_GBR_MSK 0x1f
+#define XB_ARB_RR_MSK 0x7
+#define XB_ARB_GBR_SHFT(x) (((x) & 0x3) * 8)
+#define XB_ARB_RR_SHFT(x) (((x) & 0x3) * 8 + 5)
+#define XB_ARB_GBR_CNT(reg,x) ((reg) >> XB_ARB_GBR_SHFT(x) & XB_ARB_GBR_MSK)
+#define XB_ARB_RR_CNT(reg,x) ((reg) >> XB_ARB_RR_SHFT(x) & XB_ARB_RR_MSK)
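+
+/*
+ * Worked example: widget 0xd arbitrates via link_arb_lower (it falls
+ * in the 0xc..0xf range) and its fields sit at byte 1 of that
+ * register:
+ *
+ *	XB_ARB_GBR_SHFT(0xd)     == 8
+ *	XB_ARB_RR_SHFT(0xd)      == 13
+ *	XB_ARB_GBR_CNT(reg, 0xd) == ((reg) >> 8) & 0x1f
+ */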
+
+/* XBOW_WID_STAT */
+#define XB_WID_STAT_LINK_INTR_SHFT (24)
+#define XB_WID_STAT_LINK_INTR_MASK (0xFF << XB_WID_STAT_LINK_INTR_SHFT)
+#define XB_WID_STAT_LINK_INTR(x) (0x1 << (((x)&7) + XB_WID_STAT_LINK_INTR_SHFT))
+#define XB_WID_STAT_WIDGET0_INTR 0x00800000
+#define XB_WID_STAT_SRCID_MASK 0x000003c0 /* Xbridge only */
+#define XB_WID_STAT_REG_ACC_ERR 0x00000020
+#define XB_WID_STAT_RECV_TOUT 0x00000010 /* Xbridge only */
+#define XB_WID_STAT_ARB_TOUT 0x00000008 /* Xbridge only */
+#define XB_WID_STAT_XTALK_ERR 0x00000004
+#define XB_WID_STAT_DST_TOUT 0x00000002 /* Xbridge only */
+#define XB_WID_STAT_MULTI_ERR 0x00000001
+
+#define XB_WID_STAT_SRCID_SHFT 6
+
+/* XBOW_WID_CONTROL */
+#define XB_WID_CTRL_REG_ACC_IE XB_WID_STAT_REG_ACC_ERR
+#define XB_WID_CTRL_RECV_TOUT XB_WID_STAT_RECV_TOUT
+#define XB_WID_CTRL_ARB_TOUT XB_WID_STAT_ARB_TOUT
+#define XB_WID_CTRL_XTALK_IE XB_WID_STAT_XTALK_ERR
+
+/* XBOW_WID_INT_UPPER */
+/* defined in xwidget.h for WIDGET_INTDEST_UPPER_ADDR */
+
+/* XBOW WIDGET part number, in the ID register */
+#define XBOW_WIDGET_PART_NUM 0x0 /* crossbow */
+#define XXBOW_WIDGET_PART_NUM 0xd000 /* Xbridge */
+#define XBOW_WIDGET_MFGR_NUM 0x0
+#define XXBOW_WIDGET_MFGR_NUM 0x0
+#define PXBOW_WIDGET_PART_NUM 0xd100 /* PIC */
+
+#define XBOW_REV_1_0 0x1 /* xbow rev 1.0 is "1" */
+#define XBOW_REV_1_1 0x2 /* xbow rev 1.1 is "2" */
+#define XBOW_REV_1_2 0x3 /* xbow rev 1.2 is "3" */
+#define XBOW_REV_1_3 0x4 /* xbow rev 1.3 is "4" */
+#define XBOW_REV_2_0 0x5 /* xbow rev 2.0 is "5" */
+
+#define XXBOW_PART_REV_1_0 (XXBOW_WIDGET_PART_NUM << 4 | 0x1 )
+#define XXBOW_PART_REV_2_0 (XXBOW_WIDGET_PART_NUM << 4 | 0x2 )
+
+/* XBOW_WID_ARB_RELOAD */
+#define XBOW_WID_ARB_RELOAD_INT 0x3f /* GBR reload interval */
+
+#define IS_XBRIDGE_XBOW(wid) \
+ (XWIDGET_PART_NUM(wid) == XXBOW_WIDGET_PART_NUM && \
+ XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
+
+#define IS_PIC_XBOW(wid) \
+ (XWIDGET_PART_NUM(wid) == PXBOW_WIDGET_PART_NUM && \
+ XWIDGET_MFG_NUM(wid) == XXBOW_WIDGET_MFGR_NUM)
+
+#define XBOW_WAR_ENABLED(pv, widid) ((1 << XWIDGET_REV_NUM(widid)) & pv)
+
+#endif /* _ASM_IA64_SN_XTALK_XBOW_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ */
+#ifndef _ASM_IA64_SN_XTALK_XWIDGET_H
+#define _ASM_IA64_SN_XTALK_XWIDGET_H
+
+/* WIDGET_ID */
+#define WIDGET_REV_NUM 0xf0000000
+#define WIDGET_PART_NUM 0x0ffff000
+#define WIDGET_MFG_NUM 0x00000ffe
+#define WIDGET_REV_NUM_SHFT 28
+#define WIDGET_PART_NUM_SHFT 12
+#define WIDGET_MFG_NUM_SHFT 1
+
+#define XWIDGET_PART_NUM(widgetid) (((widgetid) & WIDGET_PART_NUM) >> WIDGET_PART_NUM_SHFT)
+#define XWIDGET_REV_NUM(widgetid) (((widgetid) & WIDGET_REV_NUM) >> WIDGET_REV_NUM_SHFT)
+#define XWIDGET_MFG_NUM(widgetid) (((widgetid) & WIDGET_MFG_NUM) >> WIDGET_MFG_NUM_SHFT)
+#define XWIDGET_PART_REV_NUM(widgetid) ((XWIDGET_PART_NUM(widgetid) << 4) | \
+ XWIDGET_REV_NUM(widgetid))
+#define XWIDGET_PART_REV_NUM_REV(partrev) (partrev & 0xf)
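+
+/*
+ * Decode sketch: a raw Xbridge widget ID of 0x2d000000 (rev 2, part
+ * 0xd000, mfg 0 -- a made-up but well-formed value) decodes as
+ *
+ *	XWIDGET_PART_NUM(0x2d000000)     == 0xd000
+ *	XWIDGET_REV_NUM(0x2d000000)      == 0x2
+ *	XWIDGET_MFG_NUM(0x2d000000)      == 0x0
+ *	XWIDGET_PART_REV_NUM(0x2d000000) == 0xd0002
+ */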
+
+/* widget configuration registers */
+struct widget_cfg{
+ uint32_t w_id; /* 0x04 */
+ uint32_t w_pad_0; /* 0x00 */
+ uint32_t w_status; /* 0x0c */
+ uint32_t w_pad_1; /* 0x08 */
+ uint32_t w_err_upper_addr; /* 0x14 */
+ uint32_t w_pad_2; /* 0x10 */
+ uint32_t w_err_lower_addr; /* 0x1c */
+ uint32_t w_pad_3; /* 0x18 */
+ uint32_t w_control; /* 0x24 */
+ uint32_t w_pad_4; /* 0x20 */
+ uint32_t w_req_timeout; /* 0x2c */
+ uint32_t w_pad_5; /* 0x28 */
+ uint32_t w_intdest_upper_addr; /* 0x34 */
+ uint32_t w_pad_6; /* 0x30 */
+ uint32_t w_intdest_lower_addr; /* 0x3c */
+ uint32_t w_pad_7; /* 0x38 */
+ uint32_t w_err_cmd_word; /* 0x44 */
+ uint32_t w_pad_8; /* 0x40 */
+ uint32_t w_llp_cfg; /* 0x4c */
+ uint32_t w_pad_9; /* 0x48 */
+ uint32_t w_tflush; /* 0x54 */
+ uint32_t w_pad_10; /* 0x50 */
+};
+
+/*
+ * Crosstalk Widget Hardware Identification, as defined in the Crosstalk spec.
+ */
+struct xwidget_hwid{
+ int mfg_num;
+ int rev_num;
+ int part_num;
+};
+
+struct xwidget_info{
+
+ struct xwidget_hwid xwi_hwid; /* Widget Identification */
+ char xwi_masterxid; /* Hub's Widget Port Number */
+ void *xwi_hubinfo; /* Hub's provider private info */
+ uint64_t *xwi_hub_provider; /* prom provider functions */
+ void *xwi_vertex;
+};
+
+#endif /* _ASM_IA64_SN_XTALK_XWIDGET_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sn_sal.h>
+#include "ioerror.h"
+#include <asm/sn/addrs.h>
+#include "shubio.h"
+#include <asm/sn/geo.h>
+#include "xtalk/xwidgetdev.h"
+#include "xtalk/hubdev.h"
+#include <asm/sn/bte.h>
+
+/*
+ * BTE error handling is done in two parts. The first part captures
+ * any CRB-related errors. Since there can be multiple CRBs per
+ * interface and multiple interfaces active, we need to wait until
+ * all active CRBs are completed. This is the first job of the
+ * second-part error handler. When all BTE-related CRBs are cleanly
+ * completed, it resets the interfaces and gets them ready for new
+ * transfers to be queued.
+ */
+
+void bte_error_handler(unsigned long);
+
+/*
+ * Wait until all BTE related CRBs are completed
+ * and then reset the interfaces.
+ */
+void bte_error_handler(unsigned long _nodepda)
+{
+ struct nodepda_s *err_nodepda = (struct nodepda_s *)_nodepda;
+ spinlock_t *recovery_lock = &err_nodepda->bte_recovery_lock;
+ struct timer_list *recovery_timer = &err_nodepda->bte_recovery_timer;
+ nasid_t nasid;
+ int i;
+ int valid_crbs;
+ unsigned long irq_flags;
+ volatile u64 *notify;
+ bte_result_t bh_error;
+ ii_imem_u_t imem; /* II IMEM Register */
+ ii_icrb0_d_u_t icrbd; /* II CRB Register D */
+ ii_ibcr_u_t ibcr;
+ ii_icmr_u_t icmr;
+
+ BTE_PRINTK(("bte_error_handler(%p) - %d\n", err_nodepda,
+ smp_processor_id()));
+
+ spin_lock_irqsave(recovery_lock, irq_flags);
+
+ if ((err_nodepda->bte_if[0].bh_error == BTE_SUCCESS) &&
+ (err_nodepda->bte_if[1].bh_error == BTE_SUCCESS)) {
+ BTE_PRINTK(("eh:%p:%d Nothing to do.\n", err_nodepda,
+ smp_processor_id()));
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+ return;
+ }
+ /*
+ * Lock all interfaces on this node to prevent new transfers
+ * from being queued.
+ */
+ for (i = 0; i < BTES_PER_NODE; i++) {
+ if (err_nodepda->bte_if[i].cleanup_active) {
+ continue;
+ }
+ spin_lock(&err_nodepda->bte_if[i].spinlock);
+ BTE_PRINTK(("eh:%p:%d locked %d\n", err_nodepda,
+ smp_processor_id(), i));
+ err_nodepda->bte_if[i].cleanup_active = 1;
+ }
+
+ /* Determine information about our hub */
+ nasid = cnodeid_to_nasid(err_nodepda->bte_if[0].bte_cnode);
+
+ /*
+ * A BTE transfer can use multiple CRBs. We need to make sure
+ * that all the BTE CRBs are complete (or timed out) before
+ * attempting to clean up the error. Resetting the BTE while
+ * there are still BTE CRBs active will hang the BTE.
+ * We should look at all the CRBs to see if they are allocated
+ * to the BTE and see if they are still active. When none
+ * are active, we can continue with the cleanup.
+ *
+ * We also want to make sure that the local NI port is up.
+ * When a router resets the NI port can go down, while it
+ * goes through the LLP handshake, but then comes back up.
+ */
+ icmr.ii_icmr_regval = REMOTE_HUB_L(nasid, IIO_ICMR);
+ if (icmr.ii_icmr_fld_s.i_crb_mark != 0) {
+ /*
+ * There are errors which still need to be cleaned up by
+ * hubiio_crb_error_handler
+ */
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
+ BTE_PRINTK(("eh:%p:%d Marked Giving up\n", err_nodepda,
+ smp_processor_id()));
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+ return;
+ }
+ if (icmr.ii_icmr_fld_s.i_crb_vld != 0) {
+
+ valid_crbs = icmr.ii_icmr_fld_s.i_crb_vld;
+
+ for (i = 0; i < IIO_NUM_CRBS; i++) {
+ if (!((1 << i) & valid_crbs)) {
+ /* This crb was not marked as valid, ignore */
+ continue;
+ }
+ icrbd.ii_icrb0_d_regval =
+ REMOTE_HUB_L(nasid, IIO_ICRB_D(i));
+ if (icrbd.d_bteop) {
+ mod_timer(recovery_timer, jiffies + (HZ * 5));
+ BTE_PRINTK(("eh:%p:%d Valid %d, Giving up\n",
+ err_nodepda, smp_processor_id(),
+ i));
+ spin_unlock_irqrestore(recovery_lock,
+ irq_flags);
+ return;
+ }
+ }
+ }
+
+ BTE_PRINTK(("eh:%p:%d Cleaning up\n", err_nodepda, smp_processor_id()));
+ /* Reenable both bte interfaces */
+ imem.ii_imem_regval = REMOTE_HUB_L(nasid, IIO_IMEM);
+ imem.ii_imem_fld_s.i_b0_esd = imem.ii_imem_fld_s.i_b1_esd = 1;
+ REMOTE_HUB_S(nasid, IIO_IMEM, imem.ii_imem_regval);
+
+ /* Reinitialize both BTE state machines. */
+ ibcr.ii_ibcr_regval = REMOTE_HUB_L(nasid, IIO_IBCR);
+ ibcr.ii_ibcr_fld_s.i_soft_reset = 1;
+ REMOTE_HUB_S(nasid, IIO_IBCR, ibcr.ii_ibcr_regval);
+
+ for (i = 0; i < BTES_PER_NODE; i++) {
+ bh_error = err_nodepda->bte_if[i].bh_error;
+ if (bh_error != BTE_SUCCESS) {
+ /* There is an error which needs to be notified */
+ notify = err_nodepda->bte_if[i].most_rcnt_na;
+ BTE_PRINTK(("cnode %d bte %d error=0x%lx\n",
+ err_nodepda->bte_if[i].bte_cnode,
+ err_nodepda->bte_if[i].bte_num,
+ IBLS_ERROR | (u64) bh_error));
+ *notify = IBLS_ERROR | bh_error;
+ err_nodepda->bte_if[i].bh_error = BTE_SUCCESS;
+ }
+
+ err_nodepda->bte_if[i].cleanup_active = 0;
+ BTE_PRINTK(("eh:%p:%d Unlocked %d\n", err_nodepda,
+ smp_processor_id(), i));
+ spin_unlock(&err_nodepda->bte_if[i].spinlock);
+ }
+
+ del_timer(recovery_timer);
+
+ spin_unlock_irqrestore(recovery_lock, irq_flags);
+}
+
+/*
+ * First part error handler. This is called whenever any error CRB interrupt
+ * is generated by the II.
+ */
+void
+bte_crb_error_handler(cnodeid_t cnode, int btenum,
+ int crbnum, ioerror_t * ioe, int bteop)
+{
+ struct bteinfo_s *bte;
+
+
+ bte = &(NODEPDA(cnode)->bte_if[btenum]);
+
+ /*
+ * The caller has already figured out the error type, we save that
+ * in the bte handle structure for the thread exercising the
+ * interface to consume.
+ */
+ bte->bh_error = ioe->ie_errortype + BTEFAIL_OFFSET;
+ bte->bte_error_count++;
+
+ BTE_PRINTK(("Got an error on cnode %d bte %d: HW error type 0x%x\n",
+ bte->bte_cnode, bte->bte_num, ioe->ie_errortype));
+ bte_error_handler((unsigned long) NODEPDA(cnode));
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000,2002-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <asm/delay.h>
+#include <asm/sn/sn_sal.h>
+#include "ioerror.h"
+#include <asm/sn/addrs.h>
+#include "shubio.h"
+#include <asm/sn/geo.h>
+#include "xtalk/xwidgetdev.h"
+#include "xtalk/hubdev.h"
+#include <asm/sn/bte.h>
+
+void hubiio_crb_error_handler(struct hubdev_info *hubdev_info);
+extern void bte_crb_error_handler(cnodeid_t, int, int, ioerror_t *,
+ int);
+static irqreturn_t hub_eint_handler(int irq, void *arg, struct pt_regs *ep)
+{
+ struct hubdev_info *hubdev_info;
+ struct ia64_sal_retval ret_stuff;
+ nasid_t nasid;
+
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+ hubdev_info = (struct hubdev_info *)arg;
+ nasid = hubdev_info->hdi_nasid;
+ SAL_CALL_NOLOCK(ret_stuff, SN_SAL_HUB_ERROR_INTERRUPT,
+ (u64) nasid, 0, 0, 0, 0, 0, 0);
+
+ if ((int)ret_stuff.v0)
+ panic("hubii_eint_handler(): Fatal TIO Error");
+
+ if (!(nasid & 1)) /* Not a TIO, handle CRB errors */
+ (void)hubiio_crb_error_handler(hubdev_info);
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * Free the hub CRB "crbnum" which encountered an error.
+ * Assumption is, error handling was successfully done,
+ * and we now want to return the CRB back to Hub for normal usage.
+ *
+ * In order to free the CRB, all that's needed is to de-allocate it
+ *
+ * Assumption:
+ * No other processor is mucking around with the hub control register.
+ * So, upper layer has to single thread this.
+ */
+void hubiio_crb_free(struct hubdev_info *hubdev_info, int crbnum)
+{
+ ii_icrb0_b_u_t icrbb;
+
+ /*
+ * The hardware does NOT clear the mark bit, so it must get cleared
+ * here to be sure the error is not processed twice.
+ */
+ icrbb.ii_icrb0_b_regval = REMOTE_HUB_L(hubdev_info->hdi_nasid,
+ IIO_ICRB_B(crbnum));
+ icrbb.b_mark = 0;
+ REMOTE_HUB_S(hubdev_info->hdi_nasid, IIO_ICRB_B(crbnum),
+ icrbb.ii_icrb0_b_regval);
+ /*
+ * Deallocate the register and wait till the hub indicates it's done.
+ */
+ REMOTE_HUB_S(hubdev_info->hdi_nasid, IIO_ICDR, (IIO_ICDR_PND | crbnum));
+ while (REMOTE_HUB_L(hubdev_info->hdi_nasid, IIO_ICDR) & IIO_ICDR_PND)
+ udelay(1);
+
+}
+
+/*
+ * hubiio_crb_error_handler
+ *
+ * This routine gets invoked when a hub gets an error
+ * interrupt. So, the routine is running in interrupt context
+ * at error interrupt level.
+ * Action:
+ * It's responsible for identifying ALL the CRBs that are marked
+ * with error, and process them.
+ *
+ * If you find the CRB that's marked with error, map this to the
+ * reason it caused error, and invoke appropriate error handler.
+ *
+ * XXX Be aware of the information in the context register.
+ *
+ * NOTE:
+ * Use REMOTE_HUB_* macro instead of LOCAL_HUB_* so that the interrupt
+ * handler can be run on any node. (not necessarily the node
+ * corresponding to the hub that encountered error).
+ */
+
+void hubiio_crb_error_handler(struct hubdev_info *hubdev_info)
+{
+ nasid_t nasid;
+ ii_icrb0_a_u_t icrba; /* II CRB Register A */
+ ii_icrb0_b_u_t icrbb; /* II CRB Register B */
+ ii_icrb0_c_u_t icrbc; /* II CRB Register C */
+ ii_icrb0_d_u_t icrbd; /* II CRB Register D */
+ ii_icrb0_e_u_t icrbe; /* II CRB Register E */
+ int i;
+ int num_errors = 0; /* Num of errors handled */
+ ioerror_t ioerror;
+
+ nasid = hubdev_info->hdi_nasid;
+
+ /*
+ * XXX - Add locking for any recovery actions
+ */
+ /*
+ * Scan through all CRBs in the Hub, and handle the errors
+ * in any of the CRBs marked.
+ */
+ for (i = 0; i < IIO_NUM_CRBS; i++) {
+ /* Check this crb entry to see if it is in error. */
+ icrbb.ii_icrb0_b_regval = REMOTE_HUB_L(nasid, IIO_ICRB_B(i));
+
+ if (icrbb.b_mark == 0) {
+ continue;
+ }
+
+ icrba.ii_icrb0_a_regval = REMOTE_HUB_L(nasid, IIO_ICRB_A(i));
+
+ IOERROR_INIT(&ioerror);
+
+ /* read other CRB error registers. */
+ icrbc.ii_icrb0_c_regval = REMOTE_HUB_L(nasid, IIO_ICRB_C(i));
+ icrbd.ii_icrb0_d_regval = REMOTE_HUB_L(nasid, IIO_ICRB_D(i));
+ icrbe.ii_icrb0_e_regval = REMOTE_HUB_L(nasid, IIO_ICRB_E(i));
+
+ IOERROR_SETVALUE(&ioerror, errortype, icrbb.b_ecode);
+
+ /* Check if this error is due to BTE operation,
+ * and handle it separately.
+ */
+ if (icrbd.d_bteop ||
+ ((icrbb.b_initiator == IIO_ICRB_INIT_BTE0 ||
+ icrbb.b_initiator == IIO_ICRB_INIT_BTE1) &&
+ (icrbb.b_imsgtype == IIO_ICRB_IMSGT_BTE ||
+ icrbb.b_imsgtype == IIO_ICRB_IMSGT_SN1NET))) {
+
+ int bte_num;
+
+ if (icrbd.d_bteop)
+ bte_num = icrbc.c_btenum;
+ else /* b_initiator bit 2 gives BTE number */
+ bte_num = (icrbb.b_initiator & 0x4) >> 2;
+
+ hubiio_crb_free(hubdev_info, i);
+
+ bte_crb_error_handler(nasid_to_cnodeid(nasid), bte_num,
+ i, &ioerror, icrbd.d_bteop);
+ num_errors++;
+ continue;
+ }
+ }
+}
+
+/*
+ * Function : hub_error_init
+ * Purpose : initialize the error handling requirements for a given hub.
+ * Parameters : cnode, the compact nodeid.
+ * Assumptions : Called only once per hub, either by a local cpu or by a
+ * remote cpu when this hub is headless (cpuless).
+ * Returns : None
+ */
+void hub_error_init(struct hubdev_info *hubdev_info)
+{
+ if (request_irq(SGI_II_ERROR, (void *)hub_eint_handler, SA_SHIRQ,
+ "SN_hub_error", (void *)hubdev_info))
+ printk("hub_error_init: Failed to request_irq for 0x%p\n",
+ hubdev_info);
+ return;
+}
+
+
+/*
+ * Function : ice_error_init
+ * Purpose : initialize the error handling requirements for a given tio.
+ * Parameters : cnode, the compact nodeid.
+ * Assumptions : Called only once per tio.
+ * Returns : None
+ */
+void ice_error_init(struct hubdev_info *hubdev_info)
+{
+ if (request_irq
+ (SGI_TIO_ERROR, (void *)hub_eint_handler, SA_SHIRQ, "SN_TIO_error",
+ (void *)hubdev_info))
+ printk("ice_error_init: request_irq() error hubdev_info 0x%p\n",
+ hubdev_info);
+ return;
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/bootmem.h>
+#include <asm/sn/types.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/addrs.h>
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/pcibr_provider.h"
+#include "xtalk/xwidgetdev.h"
+#include <asm/sn/geo.h>
+#include "xtalk/hubdev.h"
+#include <asm/sn/io.h>
+#include <asm/sn/simulator.h>
+
+char master_baseio_wid;
+nasid_t master_nasid = INVALID_NASID; /* Partition Master */
+
+struct slab_info {
+ struct hubdev_info hubdev;
+};
+
+struct brick {
+ moduleid_t id; /* Module ID of this module */
+ struct slab_info slab_info[MAX_SLABS + 1];
+};
+
+int sn_ioif_inited = 0; /* SN I/O infrastructure initialized? */
+
+/*
+ * Retrieve the DMA Flush List given a nasid. This list is needed
+ * to implement the WAR - Flush DMA data on PIO Reads.
+ */
+static inline uint64_t
+sal_get_widget_dmaflush_list(u64 nasid, u64 widget_num, u64 address)
+{
+
+ struct ia64_sal_retval ret_stuff;
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ SAL_CALL_NOLOCK(ret_stuff,
+ (u64) SN_SAL_IOIF_GET_WIDGET_DMAFLUSH_LIST,
+ (u64) nasid, (u64) widget_num, (u64) address, 0, 0, 0,
+ 0);
+ return ret_stuff.v0;
+
+}
+
+/*
+ * Retrieve the hub device info structure for the given nasid.
+ */
+static inline uint64_t sal_get_hubdev_info(u64 handle, u64 address)
+{
+
+ struct ia64_sal_retval ret_stuff;
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ SAL_CALL_NOLOCK(ret_stuff,
+ (u64) SN_SAL_IOIF_GET_HUBDEV_INFO,
+ (u64) handle, (u64) address, 0, 0, 0, 0, 0);
+ return ret_stuff.v0;
+}
+
+/*
+ * Retrieve the pci bus information given the bus number.
+ */
+static inline uint64_t sal_get_pcibus_info(u64 segment, u64 busnum, u64 address)
+{
+
+ struct ia64_sal_retval ret_stuff;
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ SAL_CALL_NOLOCK(ret_stuff,
+ (u64) SN_SAL_IOIF_GET_PCIBUS_INFO,
+ (u64) segment, (u64) busnum, (u64) address, 0, 0, 0, 0);
+ return ret_stuff.v0;
+}
+
+/*
+ * Retrieve the pci device information given the bus and device|function number.
+ */
+static inline uint64_t
+sal_get_pcidev_info(u64 segment, u64 bus_number, u64 devfn, u64 pci_dev,
+ u64 sn_irq_info)
+{
+ struct ia64_sal_retval ret_stuff;
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ SAL_CALL_NOLOCK(ret_stuff,
+ (u64) SN_SAL_IOIF_GET_PCIDEV_INFO,
+ (u64) segment, (u64) bus_number, (u64) devfn,
+ (u64) pci_dev,
+ sn_irq_info, 0, 0);
+ return ret_stuff.v0;
+}
+
+/*
+ * sn_alloc_pci_sysdata() - This routine allocates a pci controller
+ * which is expected as the pci_dev and pci_bus sysdata by the Linux
+ * PCI infrastructure.
+ */
+static inline struct pci_controller *sn_alloc_pci_sysdata(void)
+{
+ struct pci_controller *pci_sysdata;
+
+ pci_sysdata = kmalloc(sizeof(*pci_sysdata), GFP_KERNEL);
+ if (!pci_sysdata)
+ BUG();
+
+ memset(pci_sysdata, 0, sizeof(*pci_sysdata));
+ return pci_sysdata;
+}
+
+/*
+ * sn_fixup_ionodes() - This routine initializes the HUB data structure for
+ * each node in the system.
+ */
+static void sn_fixup_ionodes(void)
+{
+
+ struct sn_flush_device_list *sn_flush_device_list;
+ struct hubdev_info *hubdev;
+ uint64_t status;
+ uint64_t nasid;
+ int i, widget;
+
+ for (i = 0; i < numionodes; i++) {
+ hubdev = (struct hubdev_info *)(NODEPDA(i)->pdinfo);
+ nasid = cnodeid_to_nasid(i);
+ status = sal_get_hubdev_info(nasid, (uint64_t) __pa(hubdev));
+ if (status)
+ continue;
+
+ for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++)
+ hubdev->hdi_xwidget_info[widget].xwi_hubinfo = hubdev;
+
+ if (!hubdev->hdi_flush_nasid_list.widget_p)
+ continue;
+
+ hubdev->hdi_flush_nasid_list.widget_p =
+ kmalloc((HUB_WIDGET_ID_MAX + 1) *
+ sizeof(struct sn_flush_device_list *), GFP_KERNEL);
+
+ memset(hubdev->hdi_flush_nasid_list.widget_p, 0x0,
+ (HUB_WIDGET_ID_MAX + 1) *
+ sizeof(struct sn_flush_device_list *));
+
+ for (widget = 0; widget <= HUB_WIDGET_ID_MAX; widget++) {
+ sn_flush_device_list = kmalloc(DEV_PER_WIDGET *
+ sizeof(struct
+ sn_flush_device_list),
+ GFP_KERNEL);
+ memset(sn_flush_device_list, 0x0,
+ DEV_PER_WIDGET *
+ sizeof(struct sn_flush_device_list));
+
+ status =
+ sal_get_widget_dmaflush_list(nasid, widget,
+ (uint64_t)
+ __pa
+ (sn_flush_device_list));
+ if (status) {
+ kfree(sn_flush_device_list);
+ continue;
+ }
+
+ hubdev->hdi_flush_nasid_list.widget_p[widget] =
+ sn_flush_device_list;
+ }
+
+ if (!(i & 1))
+ hub_error_init(hubdev);
+ else
+ ice_error_init(hubdev);
+ }
+
+}
+
+/*
+ * sn_pci_fixup_slot() - This routine sets up a slot's resources
+ * consistent with the Linux PCI abstraction layer. Resources acquired
+ * from our PCI provider include PIO maps to BAR space and interrupt
+ * objects.
+ */
+static void sn_pci_fixup_slot(struct pci_dev *dev)
+{
+ int idx;
+ int segment = 0;
+ uint64_t size;
+ struct sn_irq_info *sn_irq_info;
+ struct pci_dev *host_pci_dev;
+ int status = 0;
+
+ SN_PCIDEV_INFO(dev) = kmalloc(sizeof(struct pcidev_info), GFP_KERNEL);
+ if (!SN_PCIDEV_INFO(dev))
+ BUG(); /* Cannot afford to run out of memory */
+ memset(SN_PCIDEV_INFO(dev), 0, sizeof(struct pcidev_info));
+
+ sn_irq_info = kmalloc(sizeof(struct sn_irq_info), GFP_KERNEL);
+ if (!sn_irq_info)
+ BUG(); /* Cannot afford to run out of memory */
+ memset(sn_irq_info, 0, sizeof(struct sn_irq_info));
+
+ /* Call to retrieve pci device information needed by kernel. */
+ status = sal_get_pcidev_info((u64) segment, (u64) dev->bus->number,
+ dev->devfn,
+ (u64) __pa(SN_PCIDEV_INFO(dev)),
+ (u64) __pa(sn_irq_info));
+ if (status)
+ BUG(); /* Cannot get platform pci device information */
+
+ /* Copy over PIO Mapped Addresses */
+ for (idx = 0; idx <= PCI_ROM_RESOURCE; idx++) {
+ unsigned long start, end, addr;
+
+ if (!SN_PCIDEV_INFO(dev)->pdi_pio_mapped_addr[idx])
+ continue;
+
+ start = dev->resource[idx].start;
+ end = dev->resource[idx].end;
+ size = end - start;
+ addr = SN_PCIDEV_INFO(dev)->pdi_pio_mapped_addr[idx];
+ addr = ((addr << 4) >> 4) | __IA64_UNCACHED_OFFSET;
+ dev->resource[idx].start = addr;
+ dev->resource[idx].end = addr + size;
+ if (dev->resource[idx].flags & IORESOURCE_IO)
+ dev->resource[idx].parent = &ioport_resource;
+ else
+ dev->resource[idx].parent = &iomem_resource;
+ }
+
+ /* set up host bus linkages */
+ host_pci_dev =
+ pci_find_slot(SN_PCIDEV_INFO(dev)->pdi_slot_host_handle >> 32,
+ SN_PCIDEV_INFO(dev)->
+ pdi_slot_host_handle & 0xffffffff);
+ SN_PCIDEV_INFO(dev)->pdi_host_pcidev_info =
+ SN_PCIDEV_INFO(host_pci_dev);
+ SN_PCIDEV_INFO(dev)->pdi_linux_pcidev = dev;
+ SN_PCIDEV_INFO(dev)->pdi_pcibus_info = SN_PCIBUS_BUSSOFT(dev->bus);
+
+ /* Only set up IRQ stuff if this device has a host bus context */
+ if (SN_PCIDEV_BUSSOFT(dev) && sn_irq_info->irq_irq) {
+ SN_PCIDEV_INFO(dev)->pdi_sn_irq_info = sn_irq_info;
+ dev->irq = SN_PCIDEV_INFO(dev)->pdi_sn_irq_info->irq_irq;
+ sn_irq_fixup(dev, sn_irq_info);
+ }
+}
+
+/*
+ * sn_pci_controller_fixup() - This routine sets up a bus's resources
+ * consistent with the Linux PCI abstraction layer.
+ */
+static void sn_pci_controller_fixup(int segment, int busnum)
+{
+ int status = 0;
+ int nasid, cnode;
+ struct pci_bus *bus;
+ struct pci_controller *controller;
+ struct pcibus_bussoft *prom_bussoft_ptr;
+ struct hubdev_info *hubdev_info;
+ void *provider_soft;
+
+ status =
+ sal_get_pcibus_info((u64) segment, (u64) busnum,
+ (u64) ia64_tpa(&prom_bussoft_ptr));
+ if (status > 0) {
+ return; /* bus # does not exist */
+ }
+
+ prom_bussoft_ptr = __va(prom_bussoft_ptr);
+ controller = sn_alloc_pci_sysdata();
+ /* a NULL controller is BUG()'d in sn_alloc_pci_sysdata */
+
+ bus = pci_scan_bus(busnum, &pci_root_ops, controller);
+ if (bus == NULL) {
+ return; /* error, or bus already scanned */
+ }
+
+ /*
+ * Per-provider fixup. Copies the contents from prom to local
+ * area and links SN_PCIBUS_BUSSOFT().
+ *
+ * Note: Provider is responsible for ensuring that prom_bussoft_ptr
+ * represents an asic-type that it can handle.
+ */
+
+ if (prom_bussoft_ptr->bs_asic_type == PCIIO_ASIC_TYPE_PPB) {
+ return; /* no further fixup necessary */
+ }
+
+ provider_soft = pcibr_bus_fixup(prom_bussoft_ptr);
+ if (provider_soft == NULL) {
+ return; /* fixup failed or not applicable */
+ }
+
+ /*
+ * Generic bus fixup goes here. Don't reference prom_bussoft_ptr
+ * after this point.
+ */
+
+ PCI_CONTROLLER(bus) = controller;
+ SN_PCIBUS_BUSSOFT(bus) = provider_soft;
+
+ nasid = NASID_GET(SN_PCIBUS_BUSSOFT(bus)->bs_base);
+ cnode = nasid_to_cnodeid(nasid);
+ hubdev_info = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
+ SN_PCIBUS_BUSSOFT(bus)->bs_xwidget_info =
+ &(hubdev_info->hdi_xwidget_info[SN_PCIBUS_BUSSOFT(bus)->bs_xid]);
+}
+
+/*
+ * Ugly hack to get PCI setup until we have a proper ACPI namespace.
+ */
+
+#define PCI_BUSES_TO_SCAN 256
+
+static int __init sn_pci_init(void)
+{
+ int i = 0;
+ struct pci_dev *pci_dev = NULL;
+ extern void sn_init_cpei_timer(void);
+#ifdef CONFIG_PROC_FS
+ extern void register_sn_procfs(void);
+#endif
+
+ if (!ia64_platform_is("sn2") || IS_RUNNING_ON_SIMULATOR())
+ return 0;
+
+ /*
+ * This is needed to avoid bounce limit checks in the blk layer
+ */
+ ia64_max_iommu_merge_mask = ~PAGE_MASK;
+ sn_fixup_ionodes();
+ sn_irq = kmalloc(sizeof(struct sn_irq_info *) * NR_IRQS, GFP_KERNEL);
+ if (!sn_irq)
+ BUG(); /* Cannot afford to run out of memory. */
+ memset(sn_irq, 0, sizeof(struct sn_irq_info *) * NR_IRQS);
+
+ sn_init_cpei_timer();
+
+#ifdef CONFIG_PROC_FS
+ register_sn_procfs();
+#endif
+
+ for (i = 0; i < PCI_BUSES_TO_SCAN; i++) {
+ sn_pci_controller_fixup(0, i);
+ }
+
+ /*
+ * Generic Linux PCI Layer has created the pci_bus and pci_dev
+ * structures - time for us to add our SN platform-specific
+ * information.
+ */
+
+ while ((pci_dev =
+ pci_find_device(PCI_ANY_ID, PCI_ANY_ID, pci_dev)) != NULL) {
+ sn_pci_fixup_slot(pci_dev);
+ }
+
+ sn_ioif_inited = 1; /* sn I/O infrastructure now initialized */
+
+ return 0;
+}
+
+/*
+ * hubdev_init_node() - Creates the HUB data structure and links it to its
+ * own NODE-specific data area.
+ */
+void hubdev_init_node(nodepda_t * npda, cnodeid_t node)
+{
+
+ struct hubdev_info *hubdev_info;
+
+ if (node >= numnodes) /* Headless/memless IO nodes */
+ hubdev_info =
+ (struct hubdev_info *)alloc_bootmem_node(NODE_DATA(0),
+ sizeof(struct
+ hubdev_info));
+ else
+ hubdev_info =
+ (struct hubdev_info *)alloc_bootmem_node(NODE_DATA(node),
+ sizeof(struct
+ hubdev_info));
+ npda->pdinfo = (void *)hubdev_info;
+
+}
+
+geoid_t
+cnodeid_get_geoid(cnodeid_t cnode)
+{
+
+ struct hubdev_info *hubdev;
+
+ hubdev = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
+ return hubdev->hdi_geoid;
+
+}
+
+subsys_initcall(sn_pci_init);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000-2003 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/module.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+#include <asm/sn/nodepda.h>
+#include <asm/sn/simulator.h>
+#include <asm/sn/pda.h>
+#include <asm/sn/sn_cpuid.h>
+#include <asm/sn/shub_mmr.h>
+
+/**
+ * sn_io_addr - convert an in/out port to an i/o address
+ * @port: port to convert
+ *
+ * Legacy in/out instructions are converted to ld/st instructions
+ * on IA64. This routine will convert a port number into a valid
+ * SN i/o address. Used by sn_in*() and sn_out*().
+ */
+void *sn_io_addr(unsigned long port)
+{
+ if (!IS_RUNNING_ON_SIMULATOR()) {
+ /* On sn2, legacy I/O ports don't point at anything */
+ if (port < (64 * 1024))
+ return NULL;
+ return ((void *)(port | __IA64_UNCACHED_OFFSET));
+ } else {
+ /* but the simulator uses them... */
+ unsigned long io_base;
+ unsigned long addr;
+
+ /*
+ * word align port, but need more than 10 bits
+ * for accessing registers in bedrock local block
+ * (so we don't do port&0xfff)
+ */
+ if ((port >= 0x1f0 && port <= 0x1f7) ||
+ port == 0x3f6 || port == 0x3f7) {
+ io_base = (0xc000000fcc000000UL |
+ ((unsigned long)get_nasid() << 38));
+ addr = io_base | ((port >> 2) << 12) | (port & 0xfff);
+ } else {
+ addr = __ia64_get_io_port_base() | ((port >> 2) << 2);
+ }
+ return (void *)addr;
+ }
+}
+
+EXPORT_SYMBOL(sn_io_addr);
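+
+/*
+ * Usage sketch (hypothetical caller): the sn_in*() accessors are
+ * expected to do something along these lines,
+ *
+ *	volatile u8 *addr = (volatile u8 *)sn_io_addr(port);
+ *
+ *	if (addr)
+ *		val = *addr;
+ *
+ * where a NULL return means the port maps to nothing on real sn2
+ * hardware.
+ */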
+
+/**
+ * __sn_mmiowb - I/O space memory barrier
+ *
+ * See include/asm-ia64/io.h and Documentation/DocBook/deviceiobook.tmpl
+ * for details.
+ *
+ * On SN2, we wait for the PIO_WRITE_STATUS SHub register to clear.
+ * See PV 871084 for details about the WAR about zero value.
+ *
+ */
+void __sn_mmiowb(void)
+{
+ while ((((volatile unsigned long)(*pda->pio_write_status_addr)) &
+ SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK) !=
+ SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK)
+ cpu_relax();
+}
+
+EXPORT_SYMBOL(__sn_mmiowb);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992 - 1997, 2000-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/ctype.h>
+#include <linux/string.h>
+#include <linux/kernel.h>
+#include <asm/sn/types.h>
+#include <asm/sn/module.h>
+#include <asm/sn/l1.h>
+
+char brick_types[MAX_BRICK_TYPES + 1] = "cri.xdpn%#=vo^kjbf890123456789...";
+/*
+ * Format a module id for printing.
+ *
+ * There are three possible formats:
+ *
+ * MODULE_FORMAT_BRIEF is the brief 6-character format, including
+ * the actual brick-type as recorded in the
+ * moduleid_t, eg. 002c15 for a C-brick, or
+ * 101#17 for a PX-brick.
+ *
+ * MODULE_FORMAT_LONG is the hwgraph format, eg. rack/002/bay/15
+ * or rack/101/bay/17 (note that the brick
+ * type does not appear in this format).
+ *
+ * MODULE_FORMAT_LCD is like MODULE_FORMAT_BRIEF, except that it
+ * ensures that the module id provided appears
+ * exactly as it would on the LCD display of
+ * the corresponding brick, eg. still 002c15
+ * for a C-brick, but 101p17 for a PX-brick.
+ *
+ * maule (9/13/04): Removed top-level check for (fmt == MODULE_FORMAT_LCD)
+ * making MODULE_FORMAT_LCD equivalent to MODULE_FORMAT_BRIEF. It was
+ * decided that all callers should assume the returned string should be what
+ * is displayed on the brick L1 LCD.
+ */
+void
+format_module_id(char *buffer, moduleid_t m, int fmt)
+{
+ int rack, position;
+ unsigned char brickchar;
+
+ rack = MODULE_GET_RACK(m);
+ brickchar = MODULE_GET_BTCHAR(m);
+
+ /* Be sure we use the same brick type character as displayed
+ * on the brick's LCD
+ */
+ switch (brickchar)
+ {
+ case L1_BRICKTYPE_GA:
+ case L1_BRICKTYPE_OPUS_TIO:
+ brickchar = L1_BRICKTYPE_C;
+ break;
+
+ case L1_BRICKTYPE_PX:
+ case L1_BRICKTYPE_PE:
+ case L1_BRICKTYPE_PA:
+ case L1_BRICKTYPE_SA: /* we can move this to the "I's" later
+ * if that makes more sense
+ */
+ brickchar = L1_BRICKTYPE_P;
+ break;
+
+ case L1_BRICKTYPE_IX:
+ case L1_BRICKTYPE_IA:
+
+ brickchar = L1_BRICKTYPE_I;
+ break;
+ }
+
+ position = MODULE_GET_BPOS(m);
+
+ if ((fmt == MODULE_FORMAT_BRIEF) || (fmt == MODULE_FORMAT_LCD)) {
+ /* Brief module number format, eg. 002c15 */
+
+ /* Decompress the rack number */
+ *buffer++ = '0' + RACK_GET_CLASS(rack);
+ *buffer++ = '0' + RACK_GET_GROUP(rack);
+ *buffer++ = '0' + RACK_GET_NUM(rack);
+
+ /* Add the brick type */
+ *buffer++ = brickchar;
+ }
+ else if (fmt == MODULE_FORMAT_LONG) {
+ /* Fuller hwgraph format, eg. rack/002/bay/15 */
+
+ strcpy(buffer, "rack" "/"); buffer += strlen(buffer);
+
+ *buffer++ = '0' + RACK_GET_CLASS(rack);
+ *buffer++ = '0' + RACK_GET_GROUP(rack);
+ *buffer++ = '0' + RACK_GET_NUM(rack);
+
+ strcpy(buffer, "/" "bay" "/"); buffer += strlen(buffer);
+ }
+
+ /* Add the bay position, using at least two digits */
+ if (position < 10)
+ *buffer++ = '0';
+ sprintf(buffer, "%d", position);
+
+}
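+
+/*
+ * Usage sketch (hypothetical moduleid value): for a moduleid_t m that
+ * decodes to rack 002, brick type 'c' and bay position 15,
+ *
+ *	char buf[16];
+ *
+ *	format_module_id(buf, m, MODULE_FORMAT_BRIEF);	yields "002c15"
+ *	format_module_id(buf, m, MODULE_FORMAT_LONG);	yields "rack/002/bay/15"
+ *
+ * The 16-byte buffer is sized for the long format plus terminator.
+ */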
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn pci general routines.
+
+obj-y := pci_dma.o pcibr/
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2000,2002-2004 Silicon Graphics, Inc. All rights reserved.
+ *
+ * Routines for PCI DMA mapping. See Documentation/DMA-mapping.txt for
+ * a description of how these routines should be used.
+ */
+
+#include <linux/module.h>
+#include <asm/sn/sn_sal.h>
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/pcibr_provider.h"
+
+void sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+ int direction);
+
+/**
+ * sn_pci_alloc_consistent - allocate memory for coherent DMA
+ * @hwdev: device to allocate for
+ * @size: size of the region
+ * @dma_handle: DMA (bus) address
+ *
+ * pci_alloc_consistent() returns a pointer to a memory region suitable for
+ * coherent DMA traffic to/from a PCI device. On SN platforms, this means
+ * that @dma_handle will have the %PCIIO_DMA_CMD flag set.
+ *
+ * This interface is usually used for "command" streams (e.g. the command
+ * queue for a SCSI controller). See Documentation/DMA-mapping.txt for
+ * more information.
+ *
+ * Also known as platform_pci_alloc_consistent() by the IA64 machvec code.
+ */
+void *sn_pci_alloc_consistent(struct pci_dev *hwdev, size_t size,
+ dma_addr_t * dma_handle)
+{
+ void *cpuaddr;
+ unsigned long phys_addr;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ if (bussoft == NULL) {
+ return NULL;
+ }
+
+ if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
+ return NULL; /* unsupported asic type */
+ }
+
+ /*
+ * Allocate the memory.
+ * FIXME: We should be doing alloc_pages_node for the node closest
+ * to the PCI device.
+ */
+ if (!(cpuaddr = (void *)__get_free_pages(GFP_ATOMIC, get_order(size))))
+ return NULL;
+
+ memset(cpuaddr, 0x0, size);
+
+ /* physical addr. of the memory we just got */
+ phys_addr = __pa(cpuaddr);
+
+ /*
+ * 64 bit address translations should never fail.
+ * 32 bit translations can fail if there are insufficient mapping
+ * resources.
+ */
+
+ *dma_handle = pcibr_dma_map(pcidev_info, phys_addr, size, SN_PCIDMA_CONSISTENT);
+ if (!*dma_handle) {
+ printk(KERN_ERR
+ "sn_pci_alloc_consistent(): failed *dma_handle = 0x%lx hwdev->dev.coherent_dma_mask = 0x%lx \n",
+ *dma_handle, hwdev->dev.coherent_dma_mask);
+ free_pages((unsigned long)cpuaddr, get_order(size));
+ return NULL;
+ }
+
+ return cpuaddr;
+}
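+
+/*
+ * Typical use (a hypothetical driver fragment; RING_BYTES and the
+ * register layout are illustrative, not part of this interface):
+ *
+ * dma_addr_t ring_dma;
+ * void *ring = sn_pci_alloc_consistent(pdev, RING_BYTES, &ring_dma);
+ *
+ * if (ring == NULL)
+ * return -ENOMEM;
+ * writeq(ring_dma, &regs->ring_base);
+ */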
+
+/**
+ * sn_pci_free_consistent - free memory associated with coherent DMAable region
+ * @hwdev: device to free for
+ * @size: size to free
+ * @vaddr: kernel virtual address to free
+ * @dma_handle: DMA address associated with this region
+ *
+ * Frees the memory allocated by pci_alloc_consistent(). Also known
+ * as platform_pci_free_consistent() by the IA64 machvec code.
+ */
+void
+sn_pci_free_consistent(struct pci_dev *hwdev, size_t size, void *vaddr,
+ dma_addr_t dma_handle)
+{
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ if (! bussoft) {
+ return;
+ }
+
+ pcibr_dma_unmap(pcidev_info, dma_handle, 0);
+ free_pages((unsigned long)vaddr, get_order(size));
+}
+
+/**
+ * sn_pci_map_sg - map a scatter-gather list for DMA
+ * @hwdev: device to map for
+ * @sg: scatterlist to map
+ * @nents: number of entries
+ * @direction: direction of the DMA transaction
+ *
+ * Maps each entry of @sg for DMA. Also known as platform_pci_map_sg by the
+ * IA64 machvec code.
+ */
+int
+sn_pci_map_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+ int direction)
+{
+
+ int i;
+ unsigned long phys_addr;
+ struct scatterlist *saved_sg = sg;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ /* can't go anywhere w/o a direction in life */
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ if (! bussoft) {
+ return 0;
+ }
+
+ /* SN cannot support DMA addresses smaller than 32 bits. */
+ if (hwdev->dma_mask < 0x7fffffff)
+ return 0;
+
+ /*
+ * Setup a DMA address for each entry in the
+ * scatterlist.
+ */
+ for (i = 0; i < nents; i++, sg++) {
+ phys_addr =
+ __pa((unsigned long)page_address(sg->page) + sg->offset);
+ sg->dma_address = pcibr_dma_map(pcidev_info, phys_addr, sg->length, 0);
+
+ if (!sg->dma_address) {
+ printk(KERN_ERR "sn_pci_map_sg: Unable to allocate "
+ "anymore page map entries.\n");
+ /*
+ * We will need to free all previously allocated entries.
+ */
+ if (i > 0) {
+ sn_pci_unmap_sg(hwdev, saved_sg, i, direction);
+ }
+ return 0;
+ }
+
+ sg->dma_length = sg->length;
+ }
+
+ return nents;
+
+}
+
+/**
+ * sn_pci_unmap_sg - unmap a scatter-gather list
+ * @hwdev: device to unmap
+ * @sg: scatterlist to unmap
+ * @nents: number of scatterlist entries
+ * @direction: DMA direction
+ *
+ * Unmap a set of streaming mode DMA translations. Again, the CPU read
+ * rules concerning calls here are the same as for sn_pci_unmap_single()
+ * below. Also known as platform_pci_unmap_sg() by the IA64 machvec code.
+ */
+void
+sn_pci_unmap_sg(struct pci_dev *hwdev, struct scatterlist *sg, int nents,
+ int direction)
+{
+ int i;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ /* can't go anywhere w/o a direction in life */
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ if (! bussoft) {
+ return;
+ }
+
+ for (i = 0; i < nents; i++, sg++) {
+ pcibr_dma_unmap(pcidev_info, sg->dma_address, direction);
+ sg->dma_address = (dma_addr_t) NULL;
+ sg->dma_length = 0;
+ }
+}
+
+/**
+ * sn_pci_map_single - map a single region for DMA
+ * @hwdev: device to map for
+ * @ptr: kernel virtual address of the region to map
+ * @size: size of the region
+ * @direction: DMA direction
+ *
+ * Map the region pointed to by @ptr for DMA and return the
+ * DMA address. Also known as platform_pci_map_single() by
+ * the IA64 machvec code.
+ *
+ * We map this to the one step pcibr_dmamap_trans interface rather than
+ * the two step pcibr_dmamap_alloc/pcibr_dmamap_addr because we have
+ * no way of saving the dmamap handle from the alloc to later free
+ * (which is pretty much unacceptable).
+ *
+ * TODO: simplify our interface;
+ * get rid of dev_desc and vhdl (seems redundant given a pci_dev);
+ * figure out how to save dmamap handle so can use two step.
+ */
+dma_addr_t
+sn_pci_map_single(struct pci_dev *hwdev, void *ptr, size_t size, int direction)
+{
+ dma_addr_t dma_addr;
+ unsigned long phys_addr;
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ if (bussoft == NULL) {
+ return 0;
+ }
+
+ if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
+ return 0; /* unsupported asic type */
+ }
+
+ /* SN cannot support DMA addresses smaller than 32 bits. */
+ if (hwdev->dma_mask < 0x7fffffff)
+ return 0;
+
+ /*
+ * Call our dmamap interface
+ */
+
+ phys_addr = __pa(ptr);
+ dma_addr = pcibr_dma_map(pcidev_info, phys_addr, size, 0);
+ if (!dma_addr) {
+ printk(KERN_ERR "pci_map_single: Unable to allocate anymore "
+ "page map entries.\n");
+ return 0;
+ }
+ return dma_addr;
+}
+
+/**
+ * sn_pci_unmap_single - unmap a single region used for DMA
+ * @hwdev: device to unmap
+ * @dma_addr: DMA address to unmap
+ * @size: size of region
+ * @direction: DMA direction
+ *
+ * Unmaps the region pointed to by @dma_addr. Also known as
+ * platform_pci_unmap_single() by the IA64 machvec code. (DMA on SN
+ * platforms is coherent, so there is no sync work to do here.)
+ */
+void
+sn_pci_unmap_single(struct pci_dev *hwdev, dma_addr_t dma_addr, size_t size,
+ int direction)
+{
+ struct pcidev_info *pcidev_info = SN_PCIDEV_INFO(hwdev);
+ struct pcibus_bussoft *bussoft = SN_PCIDEV_BUSSOFT(hwdev);
+
+ if (direction == PCI_DMA_NONE)
+ BUG();
+
+ if (bussoft == NULL) {
+ return;
+ }
+
+ if (! IS_PCI_BRIDGE_ASIC(bussoft->bs_asic_type)) {
+ return; /* unsupported asic type */
+ }
+
+ pcibr_dma_unmap(pcidev_info, dma_addr, direction);
+}
+
+/**
+ * sn_dma_supported - test a DMA mask
+ * @hwdev: device to test
+ * @mask: DMA mask to test
+ *
+ * Return whether the given PCI device DMA address mask can be supported
+ * properly. For example, if your device can only drive the low 24-bits
+ * during PCI bus mastering, then you would pass 0x00ffffff as the mask to
+ * this function. Of course, SN only supports devices that have 32 or more
+ * address bits when using the PMU. We could theoretically support <32 bit
+ * cards using direct mapping, but we'll worry about that later--on the off
+ * chance that someone actually wants to use such a card.
+ */
+int sn_pci_dma_supported(struct pci_dev *hwdev, u64 mask)
+{
+ if (mask < 0x7fffffff)
+ return 0;
+ return 1;
+}
+
+/*
+ * New generic DMA routines just wrap sn2 PCI routines until we
+ * support other bus types (if ever).
+ */
+
+int sn_dma_supported(struct device *dev, u64 mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_dma_supported(to_pci_dev(dev), mask);
+}
+
+EXPORT_SYMBOL(sn_dma_supported);
+
+int sn_dma_set_mask(struct device *dev, u64 dma_mask)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ if (!sn_dma_supported(dev, dma_mask))
+ return 0;
+
+ *dev->dma_mask = dma_mask;
+ return 1;
+}
+
+EXPORT_SYMBOL(sn_dma_set_mask);
+
+void *sn_dma_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t * dma_handle, int flag)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_alloc_consistent(to_pci_dev(dev), size, dma_handle);
+}
+
+EXPORT_SYMBOL(sn_dma_alloc_coherent);
+
+void
+sn_dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
+ dma_addr_t dma_handle)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_free_consistent(to_pci_dev(dev), size, cpu_addr, dma_handle);
+}
+
+EXPORT_SYMBOL(sn_dma_free_coherent);
+
+dma_addr_t
+sn_dma_map_single(struct device *dev, void *cpu_addr, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_map_single(to_pci_dev(dev), cpu_addr, size,
+ (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_map_single);
+
+void
+sn_dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_unmap_single(to_pci_dev(dev), dma_addr, size, (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_unmap_single);
+
+dma_addr_t
+sn_dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size, int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return pci_map_page(to_pci_dev(dev), page, offset, size,
+ (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_map_page);
+
+void
+sn_dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ pci_unmap_page(to_pci_dev(dev), dma_address, size, (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_unmap_page);
+
+int
+sn_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ return sn_pci_map_sg(to_pci_dev(dev), sg, nents, (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_map_sg);
+
+void
+sn_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nhwentries,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+
+ sn_pci_unmap_sg(to_pci_dev(dev), sg, nhwentries, (int)direction);
+}
+
+EXPORT_SYMBOL(sn_dma_unmap_sg);
+
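+/*
+ * sn_dma_sync_*(): DMA on SN platforms is coherent, so syncing a region
+ * into the coherence domain requires no work and the four routines
+ * below are intentionally empty.
+ */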
+void
+sn_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
+ size_t size, int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+}
+
+EXPORT_SYMBOL(sn_dma_sync_single_for_cpu);
+
+void
+sn_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
+ size_t size, int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+}
+
+EXPORT_SYMBOL(sn_dma_sync_single_for_device);
+
+void
+sn_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems,
+ int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+}
+
+EXPORT_SYMBOL(sn_dma_sync_sg_for_cpu);
+
+void
+sn_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
+ int nelems, int direction)
+{
+ BUG_ON(dev->bus != &pci_bus_type);
+}
+
+int sn_dma_mapping_error(dma_addr_t dma_addr)
+{
+ return 0;
+}
+
+EXPORT_SYMBOL(sn_dma_sync_sg_for_device);
+EXPORT_SYMBOL(sn_pci_unmap_single);
+EXPORT_SYMBOL(sn_pci_map_single);
+EXPORT_SYMBOL(sn_pci_map_sg);
+EXPORT_SYMBOL(sn_pci_unmap_sg);
+EXPORT_SYMBOL(sn_pci_alloc_consistent);
+EXPORT_SYMBOL(sn_pci_free_consistent);
+EXPORT_SYMBOL(sn_pci_dma_supported);
+EXPORT_SYMBOL(sn_dma_mapping_error);
--- /dev/null
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002-2004 Silicon Graphics, Inc. All Rights Reserved.
+#
+# Makefile for the sn2 io routines.
+
+obj-y += pcibr_dma.o pcibr_reg.o \
+ pcibr_ate.o pcibr_provider.o
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <asm/sn/sn_sal.h>
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/pcibr_provider.h"
+
+int pcibr_invalidate_ate = 0; /* by default don't invalidate ATE on free */
+
+/*
+ * mark_ate: Mark the ate as either free or inuse.
+ */
+static void mark_ate(struct ate_resource *ate_resource, int start, int number,
+ uint64_t value)
+{
+
+ uint64_t *ate = ate_resource->ate;
+ int index;
+ int length = 0;
+
+ for (index = start; length < number; index++, length++)
+ ate[index] = value;
+
+}
+
+/*
+ * find_free_ate: Find the first free ate index starting from the given
+ * index for the desired consecutive count.
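+ *
+ * Illustration (hypothetical contents): if ate[] held
+ * {used, free, free, used, free, free, free} and count == 3, the scan
+ * would reject the two-entry free run at index 1 and return index 4.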
+ */
+static int find_free_ate(struct ate_resource *ate_resource, int start,
+ int count)
+{
+
+ uint64_t *ate = ate_resource->ate;
+ int index;
+ int start_free;
+
+ for (index = start; index < ate_resource->num_ate;) {
+ if (!ate[index]) {
+ int i;
+ int free;
+ free = 0;
+ start_free = index; /* Found start free ate */
+ for (i = start_free; i < ate_resource->num_ate; i++) {
+ if (!ate[i]) { /* This is free */
+ if (++free == count)
+ return start_free;
+ } else {
+ index = i + 1;
+ break;
+ }
+ }
+ } else
+ index++; /* Try next ate */
+ }
+
+ return -1;
+}
+
+/*
+ * free_ate_resource: Free the requested number of ATEs.
+ */
+static inline void free_ate_resource(struct ate_resource *ate_resource,
+ int start)
+{
+
+ mark_ate(ate_resource, start, ate_resource->ate[start], 0);
+ if ((ate_resource->lowest_free_index > start) ||
+ (ate_resource->lowest_free_index < 0))
+ ate_resource->lowest_free_index = start;
+
+}
+
+/*
+ * alloc_ate_resource: Allocate the requested number of ATEs.
+ */
+static inline int alloc_ate_resource(struct ate_resource *ate_resource,
+ int ate_needed)
+{
+
+ int start_index;
+
+ /*
+ * Check for ate exhaustion.
+ */
+ if (ate_resource->lowest_free_index < 0)
+ return -1;
+
+ /*
+ * Find the required number of free consecutive ATEs.
+ */
+ start_index =
+ find_free_ate(ate_resource, ate_resource->lowest_free_index,
+ ate_needed);
+ if (start_index >= 0)
+ mark_ate(ate_resource, start_index, ate_needed, ate_needed);
+
+ ate_resource->lowest_free_index =
+ find_free_ate(ate_resource, ate_resource->lowest_free_index, 1);
+
+ return start_index;
+}
+
+/*
+ * Allocate "count" contiguous Bridge Address Translation Entries
+ * on the specified bridge to be used for PCI to XTALK mappings.
+ * Indices in rm map range from 1..num_entries. Indices returned
+ * to caller range from 0..num_entries-1.
+ *
+ * Return the start index on success, -1 on failure.
+ */
+int pcibr_ate_alloc(struct pcibus_info *pcibus_info, int count)
+{
+ int status = 0;
+ uint64_t flag;
+
+ flag = pcibr_lock(pcibus_info);
+ status = alloc_ate_resource(&pcibus_info->pbi_int_ate_resource, count);
+
+ if (status < 0) {
+ /* Failed to allocate */
+ pcibr_unlock(pcibus_info, flag);
+ return -1;
+ }
+
+ pcibr_unlock(pcibus_info, flag);
+
+ return status;
+}
+
+/*
+ * Return a pointer to the Address Translation Entry at the given index,
+ * using either the Bridge internal maps or the external map RAM, as
+ * appropriate.
+ */
+static inline uint64_t *pcibr_ate_addr(struct pcibus_info *pcibus_info,
+ int ate_index)
+{
+ if (ate_index < pcibus_info->pbi_int_ate_size) {
+ return pcireg_int_ate_addr(pcibus_info, ate_index);
+ }
+ panic("pcibr_ate_addr: invalid ate_index 0x%x", ate_index);
+}
+
+/*
+ * Update the ate.
+ */
+inline void
+ate_write(struct pcibus_info *pcibus_info, int ate_index, int count,
+ volatile uint64_t ate)
+{
+ while (count-- > 0) {
+ if (ate_index < pcibus_info->pbi_int_ate_size) {
+ pcireg_int_ate_set(pcibus_info, ate_index, ate);
+ } else {
+ panic("ate_write: invalid ate_index 0x%x", ate_index);
+ }
+ ate_index++;
+ ate += IOPGSIZE;
+ }
+
+ pcireg_tflush_get(pcibus_info); /* wait until Bridge PIO complete */
+}
+
+void pcibr_ate_free(struct pcibus_info *pcibus_info, int index)
+{
+
+ volatile uint64_t ate;
+ int count;
+ uint64_t flags;
+
+ if (pcibr_invalidate_ate) {
+ /* For debugging purposes, clear the valid bit in the ATE */
+ ate = *pcibr_ate_addr(pcibus_info, index);
+ count = pcibus_info->pbi_int_ate_resource.ate[index];
+ ate_write(pcibus_info, index, count, (ate & ~PCI32_ATE_V));
+ }
+
+ flags = pcibr_lock(pcibus_info);
+ free_ate_resource(&pcibus_info->pbi_int_ate_resource, index);
+ pcibr_unlock(pcibus_info, flags);
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <asm/sn/sn_sal.h>
+#include <asm/sn/geo.h>
+#include "xtalk/xwidgetdev.h"
+#include "xtalk/hubdev.h"
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/tiocp.h"
+#include "pci/pic.h"
+#include "pci/pcibr_provider.h"
+#include "pci/tiocp.h"
+#include "tio.h"
+#include <asm/sn/addrs.h>
+
+extern int sn_ioif_inited;
+
+/* =====================================================================
+ * DMA MANAGEMENT
+ *
+ * The Bridge ASIC provides three methods of doing DMA: via a "direct map"
+ * register available in 32-bit PCI space (which selects a contiguous 2G
+ * address space on some other widget), via "direct" addressing via 64-bit
+ * PCI space (all destination information comes from the PCI address,
+ * including transfer attributes), and via a "mapped" region that allows
+ * a bunch of different small mappings to be established with the PMU.
+ *
+ * For efficiency, we prefer to use the 32-bit direct mapping facility,
+ * since it requires no resource allocations. The advantage of using the
+ * PMU over the 64-bit direct is that single-cycle PCI addressing can be
+ * used; the advantage of using 64-bit direct over PMU addressing is that
+ * we do not have to allocate entries in the PMU.
+ */
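+
+/*
+ * A sketch of the resulting policy (implemented by pcibr_dma_map() in
+ * this file; consistent allocations use BAR rather than PREF
+ * attributes):
+ *
+ * if (device is 64-bit DMA capable) use 64-bit direct (no resources)
+ * else if (target fits the 2G window) use the 32-bit direct map
+ * else allocate PMU ATEs (mapped)
+ */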
+
+static uint64_t
+pcibr_dmamap_ate32(struct pcidev_info *info,
+ uint64_t paddr, size_t req_size, uint64_t flags)
+{
+
+ struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
+ struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
+ pdi_pcibus_info;
+ uint8_t internal_device = (PCI_SLOT(pcidev_info->pdi_host_pcidev_info->
+ pdi_linux_pcidev->devfn)) - 1;
+ int ate_count;
+ int ate_index;
+ uint64_t ate_flags = flags | PCI32_ATE_V;
+ uint64_t ate;
+ uint64_t pci_addr;
+ uint64_t xio_addr;
+ uint64_t offset;
+
+ /* PIC in PCI-X mode does not support 32-bit PageMap mode */
+ if (IS_PIC_SOFT(pcibus_info) && IS_PCIX(pcibus_info)) {
+ return 0;
+ }
+
+ /* Calculate the number of ATEs needed. */
+ if (!(MINIMAL_ATE_FLAG(paddr, req_size))) {
+ ate_count = IOPG((IOPGSIZE - 1) /* worst case start offset */
+ +req_size /* max mapping bytes */
+ - 1) + 1; /* round UP */
+ } else { /* assume requested target is page aligned */
+ ate_count = IOPG(req_size /* max mapping bytes */
+ - 1) + 1; /* round UP */
+ }
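+
+ /*
+ * Example (assuming a 16KB bridge IOPGSIZE): an unaligned 20KB
+ * request can straddle three I/O pages in the worst case, so
+ * IOPG(16K-1 + 20K-1) + 1 == 3 ATEs are reserved for it.
+ */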
+
+ /* Get the number of ATEs required. */
+ ate_index = pcibr_ate_alloc(pcibus_info, ate_count);
+ if (ate_index < 0)
+ return 0;
+
+ /* In PCI-X mode, Prefetch not supported */
+ if (IS_PCIX(pcibus_info))
+ ate_flags &= ~(PCI32_ATE_PREF);
+
+ xio_addr =
+ IS_PIC_SOFT(pcibus_info) ? PHYS_TO_DMA(paddr) :
+ PHYS_TO_TIODMA(paddr);
+ offset = IOPGOFF(xio_addr);
+ ate = ate_flags | (xio_addr - offset);
+
+ /* If PIC, put the targetid in the ATE */
+ if (IS_PIC_SOFT(pcibus_info)) {
+ ate |= (pcibus_info->pbi_hub_xid << PIC_ATE_TARGETID_SHFT);
+ }
+ ate_write(pcibus_info, ate_index, ate_count, ate);
+
+ /*
+ * Set up the DMA mapped Address.
+ */
+ pci_addr = PCI32_MAPPED_BASE + offset + IOPGSIZE * ate_index;
+
+ /*
+ * If swap was set in device in pcibr_endian_set()
+ * we need to turn swapping on.
+ */
+ if (pcibus_info->pbi_devreg[internal_device] & PCIBR_DEV_SWAP_DIR)
+ ATE_SWAP_ON(pci_addr);
+
+ return pci_addr;
+}
+
+static uint64_t
+pcibr_dmatrans_direct64(struct pcidev_info * info, uint64_t paddr,
+ uint64_t dma_attributes)
+{
+ struct pcibus_info *pcibus_info = (struct pcibus_info *)
+ ((info->pdi_host_pcidev_info)->pdi_pcibus_info);
+ uint64_t pci_addr;
+
+ /* Translate to Crosstalk View of Physical Address */
+ pci_addr = (IS_PIC_SOFT(pcibus_info) ? PHYS_TO_DMA(paddr) :
+ PHYS_TO_TIODMA(paddr)) | dma_attributes;
+
+ /* Handle Bus mode */
+ if (IS_PCIX(pcibus_info))
+ pci_addr &= ~PCI64_ATTR_PREF;
+
+ /* Handle Bridge Chipset differences */
+ if (IS_PIC_SOFT(pcibus_info)) {
+ pci_addr |=
+ ((uint64_t) pcibus_info->
+ pbi_hub_xid << PIC_PCI64_ATTR_TARG_SHFT);
+ } else
+ pci_addr |= TIOCP_PCI64_CMDTYPE_MEM;
+
+ /* If PCI mode, func zero uses VCHAN0, every other func uses VCHAN1 */
+ if (!IS_PCIX(pcibus_info) && PCI_FUNC(info->pdi_linux_pcidev->devfn))
+ pci_addr |= PCI64_ATTR_VIRTUAL;
+
+ return pci_addr;
+
+}
+
+static uint64_t
+pcibr_dmatrans_direct32(struct pcidev_info * info,
+ uint64_t paddr, size_t req_size, uint64_t flags)
+{
+
+ struct pcidev_info *pcidev_info = info->pdi_host_pcidev_info;
+ struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
+ pdi_pcibus_info;
+ uint64_t xio_addr;
+
+ uint64_t xio_base;
+ uint64_t offset;
+ uint64_t endoff;
+
+ if (IS_PCIX(pcibus_info)) {
+ return 0;
+ }
+
+ xio_addr = IS_PIC_SOFT(pcibus_info) ? PHYS_TO_DMA(paddr) :
+ PHYS_TO_TIODMA(paddr);
+
+ xio_base = pcibus_info->pbi_dir_xbase;
+ offset = xio_addr - xio_base;
+ endoff = req_size + offset;
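+
+ /*
+ * The direct window covers 2GB of crosstalk space starting at
+ * pbi_dir_xbase; reject requests that start below the window or
+ * run past its 2GB end.
+ */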
+ if ((req_size > (1ULL << 31)) || /* Too Big */
+ (xio_addr < xio_base) || /* Out of range for mappings */
+ (endoff > (1ULL << 31))) { /* Too Big */
+ return 0;
+ }
+
+ return PCI32_DIRECT_BASE | offset;
+
+}
+
+/*
+ * Wrapper routine for freeing DMA maps.
+ * DMA mappings for Direct 64 and 32 do not have any DMA maps.
+ */
+void
+pcibr_dma_unmap(struct pcidev_info *pcidev_info, dma_addr_t dma_handle,
+ int direction)
+{
+ struct pcibus_info *pcibus_info = (struct pcibus_info *)pcidev_info->
+ pdi_pcibus_info;
+
+ if (IS_PCI32_MAPPED(dma_handle)) {
+ int ate_index;
+
+ ate_index =
+ IOPG((ATE_SWAP_OFF(dma_handle) - PCI32_MAPPED_BASE));
+ pcibr_ate_free(pcibus_info, ate_index);
+ }
+}
+
+/*
+ * On SN systems there is a race condition between a PIO read response and
+ * DMAs. In rare cases, the read response may beat the DMA, causing the
+ * driver to think that data in memory is complete and meaningful. This code
+ * eliminates that race. This routine is called by the PIO read routines
+ * after doing the read. For PIC this routine then forces a fake interrupt
+ * on another line, which is logically associated with the slot that the PIO
+ * is addressed to. It then spins while watching the memory location that
+ * the interrupt is targetted to. When the interrupt response arrives, we
+ * are sure that the DMA has landed in memory and it is safe for the driver
+ * to proceed. For TIOCP use the Device(x) Write Request Buffer Flush
+ * Bridge register since it ensures the data has entered the coherence domain,
+ * unlike the PIC Device(x) Write Request Buffer Flush register.
+ */
+
+void sn_dma_flush(uint64_t addr)
+{
+ nasid_t nasid;
+ int is_tio;
+ int wid_num;
+ int i, j;
+ int bwin;
+ uint64_t flags;
+ struct hubdev_info *hubinfo;
+ volatile struct sn_flush_device_list *p;
+ struct sn_flush_nasid_entry *flush_nasid_list;
+
+ if (!sn_ioif_inited)
+ return;
+
+ nasid = NASID_GET(addr);
+ if (-1 == nasid_to_cnodeid(nasid))
+ return;
+
+ hubinfo = (NODEPDA(nasid_to_cnodeid(nasid)))->pdinfo;
+
+ if (!hubinfo) {
+ BUG();
+ }
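+ /* The low bit of the nasid distinguishes TIO from hub widgets */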
+ is_tio = (nasid & 1);
+ if (is_tio) {
+ wid_num = TIO_SWIN_WIDGETNUM(addr);
+ bwin = TIO_BWIN_WINDOWNUM(addr);
+ } else {
+ wid_num = SWIN_WIDGETNUM(addr);
+ bwin = BWIN_WINDOWNUM(addr);
+ }
+
+ flush_nasid_list = &hubinfo->hdi_flush_nasid_list;
+ if (flush_nasid_list->widget_p == NULL)
+ return;
+ if (bwin > 0) {
+ uint64_t itte = flush_nasid_list->iio_itte[bwin];
+
+ if (is_tio) {
+ wid_num = (itte >> TIO_ITTE_WIDGET_SHIFT) &
+ TIO_ITTE_WIDGET_MASK;
+ } else {
+ wid_num = (itte >> IIO_ITTE_WIDGET_SHIFT) &
+ IIO_ITTE_WIDGET_MASK;
+ }
+ }
+ if (flush_nasid_list->widget_p == NULL)
+ return;
+ if (flush_nasid_list->widget_p[wid_num] == NULL)
+ return;
+ p = &flush_nasid_list->widget_p[wid_num][0];
+
+ /* find a matching BAR */
+ for (i = 0; i < DEV_PER_WIDGET; i++) {
+ for (j = 0; j < PCI_ROM_RESOURCE; j++) {
+ if (p->sfdl_bar_list[j].start == 0)
+ break;
+ if (addr >= p->sfdl_bar_list[j].start
+ && addr <= p->sfdl_bar_list[j].end)
+ break;
+ }
+ if (j < PCI_ROM_RESOURCE && p->sfdl_bar_list[j].start != 0)
+ break;
+ p++;
+ }
+
+ /* if no matching BAR, return without doing anything. */
+ if (i == DEV_PER_WIDGET)
+ return;
+
+ /*
+ * For TIOCP use the Device(x) Write Request Buffer Flush Bridge
+ * register since it ensures the data has entered the coherence
+ * domain, unlike PIC
+ */
+ if (is_tio) {
+ uint32_t tio_id = REMOTE_HUB_L(nasid, TIO_NODE_ID);
+ uint32_t revnum = XWIDGET_PART_REV_NUM(tio_id);
+
+ /* TIOCP BRINGUP WAR (PV907516): Don't write buffer flush reg */
+ if ((1 << XWIDGET_PART_REV_NUM_REV(revnum)) & PV907516) {
+ return;
+ } else {
+ pcireg_wrb_flush_get(p->sfdl_pcibus_info,
+ (p->sfdl_slot - 1));
+ }
+ } else {
+ spin_lock_irqsave(&((struct sn_flush_device_list *)p)->
+ sfdl_flush_lock, flags);
+
+ p->sfdl_flush_value = 0;
+
+ /* force an interrupt. */
+ *(volatile uint32_t *)(p->sfdl_force_int_addr) = 1;
+
+ /* wait for the interrupt to come back. */
+ while (*(p->sfdl_flush_addr) != 0x10f) ;
+
+ /* okay, everything is synched up. */
+ spin_unlock_irqrestore((spinlock_t *)&p->sfdl_flush_lock, flags);
+ }
+ return;
+}
+
+/*
+ * Wrapper DMA interface. Called from pci_dma.c routines.
+ */
+
+uint64_t
+pcibr_dma_map(struct pcidev_info * pcidev_info, unsigned long phys_addr,
+ size_t size, unsigned int flags)
+{
+ dma_addr_t dma_handle;
+ struct pci_dev *pcidev = pcidev_info->pdi_linux_pcidev;
+
+ if (flags & SN_PCIDMA_CONSISTENT) {
+ /* sn_pci_alloc_consistent interfaces */
+ if (pcidev->dev.coherent_dma_mask == ~0UL) {
+ dma_handle =
+ pcibr_dmatrans_direct64(pcidev_info, phys_addr,
+ PCI64_ATTR_BAR);
+ } else {
+ dma_handle =
+ (dma_addr_t) pcibr_dmamap_ate32(pcidev_info,
+ phys_addr, size,
+ PCI32_ATE_BAR);
+ }
+ } else {
+ /* map_sg/map_single interfaces */
+
+ /* SN cannot support DMA addresses smaller than 32 bits. */
+ if (pcidev->dma_mask < 0x7fffffff) {
+ return 0;
+ }
+
+ if (pcidev->dma_mask == ~0UL) {
+ /*
+ * Handle the most common case: 64 bit cards. This
+ * call should always succeed.
+ */
+
+ dma_handle =
+ pcibr_dmatrans_direct64(pcidev_info, phys_addr,
+ PCI64_ATTR_PREF);
+ } else {
+ /* Handle 32-63 bit cards via direct mapping */
+ dma_handle =
+ pcibr_dmatrans_direct32(pcidev_info, phys_addr,
+ size, 0);
+ if (!dma_handle) {
+ /*
+ * It is a 32 bit card and we cannot do direct mapping,
+ * so we use an ATE.
+ */
+
+ dma_handle =
+ pcibr_dmamap_ate32(pcidev_info, phys_addr,
+ size, PCI32_ATE_PREF);
+ }
+ }
+ }
+
+ return dma_handle;
+}
+
+EXPORT_SYMBOL(sn_dma_flush);
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <asm/sn/sn_sal.h>
+#include "xtalk/xwidgetdev.h"
+#include <asm/sn/geo.h>
+#include "xtalk/hubdev.h"
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/pcibr_provider.h"
+#include <asm/sn/addrs.h>
+
+
+static int sal_pcibr_error_interrupt(struct pcibus_info *soft)
+{
+ struct ia64_sal_retval ret_stuff;
+ uint64_t busnum;
+ int segment;
+ ret_stuff.status = 0;
+ ret_stuff.v0 = 0;
+
+ segment = 0;
+ busnum = soft->pbi_buscommon.bs_persist_busnum;
+ SAL_CALL_NOLOCK(ret_stuff,
+ (u64) SN_SAL_IOIF_ERROR_INTERRUPT,
+ (u64) segment, (u64) busnum, 0, 0, 0, 0, 0);
+
+ return (int)ret_stuff.v0;
+}
+
+/*
+ * PCI Bridge Error interrupt handler. Gets invoked whenever a PCI
+ * bridge sends an error interrupt.
+ */
+static irqreturn_t
+pcibr_error_intr_handler(int irq, void *arg, struct pt_regs *regs)
+{
+ struct pcibus_info *soft = (struct pcibus_info *)arg;
+
+ if (sal_pcibr_error_interrupt(soft) < 0) {
+ panic("pcibr_error_intr_handler(): Fatal Bridge Error");
+ }
+ return IRQ_HANDLED;
+}
+
+void *
+pcibr_bus_fixup(struct pcibus_bussoft *prom_bussoft)
+{
+ int nasid, cnode, j;
+ struct hubdev_info *hubdev_info;
+ struct pcibus_info *soft;
+ struct sn_flush_device_list *sn_flush_device_list;
+
+ if (! IS_PCI_BRIDGE_ASIC(prom_bussoft->bs_asic_type)) {
+ return NULL;
+ }
+
+ /*
+ * Allocate kernel bus soft and copy from prom.
+ */
+
+ soft = kmalloc(sizeof(struct pcibus_info), GFP_KERNEL);
+ if (!soft) {
+ return NULL;
+ }
+
+ memcpy(soft, prom_bussoft, sizeof(struct pcibus_info));
+ soft->pbi_buscommon.bs_base =
+ (((u64) soft->pbi_buscommon.
+ bs_base << 4) >> 4) | __IA64_UNCACHED_OFFSET;
+
+ spin_lock_init(&soft->pbi_lock);
+
+ /*
+ * register the bridge's error interrupt handler
+ */
+ if (request_irq(SGI_PCIBR_ERROR, (void *)pcibr_error_intr_handler,
+ SA_SHIRQ, "PCIBR error", (void *)(soft))) {
+ printk(KERN_WARNING
+ "pcibr cannot allocate interrupt for error handler\n");
+ }
+
+ /*
+ * Update the Bridge with the "kernel" pagesize
+ */
+ if (PAGE_SIZE < 16384) {
+ pcireg_control_bit_clr(soft, PCIBR_CTRL_PAGE_SIZE);
+ } else {
+ pcireg_control_bit_set(soft, PCIBR_CTRL_PAGE_SIZE);
+ }
+
+ nasid = NASID_GET(soft->pbi_buscommon.bs_base);
+ cnode = nasid_to_cnodeid(nasid);
+ hubdev_info = (struct hubdev_info *)(NODEPDA(cnode)->pdinfo);
+
+ if (hubdev_info->hdi_flush_nasid_list.widget_p) {
+ sn_flush_device_list = hubdev_info->hdi_flush_nasid_list.
+ widget_p[(int)soft->pbi_buscommon.bs_xid];
+ if (sn_flush_device_list) {
+ for (j = 0; j < DEV_PER_WIDGET;
+ j++, sn_flush_device_list++) {
+ if (sn_flush_device_list->sfdl_slot == -1)
+ continue;
+ if (sn_flush_device_list->
+ sfdl_persistent_busnum ==
+ soft->pbi_buscommon.bs_persist_busnum)
+ sn_flush_device_list->sfdl_pcibus_info =
+ soft;
+ }
+ }
+ }
+
+ /* Setup the PMU ATE map */
+ soft->pbi_int_ate_resource.lowest_free_index = 0;
+ soft->pbi_int_ate_resource.ate =
+ kmalloc(soft->pbi_int_ate_size * sizeof(uint64_t), GFP_KERNEL);
+ if (!soft->pbi_int_ate_resource.ate) {
+ kfree(soft);
+ return NULL;
+ }
+ memset(soft->pbi_int_ate_resource.ate, 0,
+ (soft->pbi_int_ate_size * sizeof(uint64_t)));
+
+ return soft;
+}
+
+void pcibr_force_interrupt(struct sn_irq_info *sn_irq_info)
+{
+ struct pcidev_info *pcidev_info;
+ struct pcibus_info *pcibus_info;
+ int bit = sn_irq_info->irq_int_bit;
+
+ pcidev_info = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
+ if (pcidev_info) {
+ pcibus_info =
+ (struct pcibus_info *)pcidev_info->pdi_host_pcidev_info->
+ pdi_pcibus_info;
+ pcireg_force_intr_set(pcibus_info, bit);
+ }
+}
+
+void pcibr_change_devices_irq(struct sn_irq_info *sn_irq_info)
+{
+ struct pcidev_info *pcidev_info;
+ struct pcibus_info *pcibus_info;
+ int bit = sn_irq_info->irq_int_bit;
+ uint64_t xtalk_addr = sn_irq_info->irq_xtalkaddr;
+
+ pcidev_info = (struct pcidev_info *)sn_irq_info->irq_pciioinfo;
+ if (pcidev_info) {
+ pcibus_info =
+ (struct pcibus_info *)pcidev_info->pdi_host_pcidev_info->
+ pdi_pcibus_info;
+
+ /* Disable the device's IRQ */
+ pcireg_intr_enable_bit_clr(pcibus_info, bit);
+
+ /* Change the device's IRQ */
+ pcireg_intr_addr_addr_set(pcibus_info, bit, xtalk_addr);
+
+ /* Re-enable the device's IRQ */
+ pcireg_intr_enable_bit_set(pcibus_info, bit);
+
+ pcibr_force_interrupt(sn_irq_info);
+ }
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#include <linux/types.h>
+#include <linux/interrupt.h>
+#include "pci/pcibus_provider_defs.h"
+#include "pci/pcidev.h"
+#include "pci/tiocp.h"
+#include "pci/pic.h"
+#include "pci/pcibr_provider.h"
+
+union br_ptr {
+ struct tiocp tio;
+ struct pic pic;
+};
+
+/*
+ * Control Register Access -- Read/Write 0000_0020
+ */
+void pcireg_control_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_control &= ~bits;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_wid_control &= ~bits;
+ break;
+ default:
+ panic
+ ("pcireg_control_bit_clr: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+void pcireg_control_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_control |= bits;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_wid_control |= bits;
+ break;
+ default:
+ panic
+ ("pcireg_control_bit_set: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+/*
+ * PCI/PCIX Target Flush Register Access -- Read Only 0000_0050
+ */
+uint64_t pcireg_tflush_get(struct pcibus_info *pcibus_info)
+{
+ union br_ptr *ptr;
+ uint64_t ret = 0;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ret = ptr->tio.cp_tflush;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ret = ptr->pic.p_wid_tflush;
+ break;
+ default:
+ panic
+ ("pcireg_tflush_get: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+
+ /* Read of the Target Flush should always return zero */
+ if (ret != 0)
+ panic("pcireg_tflush_get:Target Flush failed\n");
+
+ return ret;
+}
+
+/*
+ * Interrupt Status Register Access -- Read Only 0000_0100
+ */
+uint64_t pcireg_intr_status_get(struct pcibus_info * pcibus_info)
+{
+ union br_ptr *ptr;
+ uint64_t ret = 0;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ret = ptr->tio.cp_int_status;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ret = ptr->pic.p_int_status;
+ break;
+ default:
+ panic
+ ("pcireg_intr_status_get: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+ return ret;
+}
+
+/*
+ * Interrupt Enable Register Access -- Read/Write 0000_0108
+ */
+void pcireg_intr_enable_bit_clr(struct pcibus_info *pcibus_info, uint64_t bits)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_int_enable &= ~bits;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_int_enable &= ~bits;
+ break;
+ default:
+ panic
+ ("pcireg_intr_enable_bit_clr: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+void pcireg_intr_enable_bit_set(struct pcibus_info *pcibus_info, uint64_t bits)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_int_enable |= bits;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_int_enable |= bits;
+ break;
+ default:
+ panic
+ ("pcireg_intr_enable_bit_set: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+/*
+ * Intr Host Address Register (int_addr) -- Read/Write 0000_0130 - 0000_0168
+ */
+void pcireg_intr_addr_addr_set(struct pcibus_info *pcibus_info, int int_n,
+ uint64_t addr)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_int_addr[int_n] &= ~TIOCP_HOST_INTR_ADDR;
+ ptr->tio.cp_int_addr[int_n] |=
+ (addr & TIOCP_HOST_INTR_ADDR);
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_int_addr[int_n] &= ~PIC_HOST_INTR_ADDR;
+ ptr->pic.p_int_addr[int_n] |=
+ (addr & PIC_HOST_INTR_ADDR);
+ break;
+ default:
+ panic
+ ("pcireg_intr_addr_addr_get: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+/*
+ * Force Interrupt Register Access -- Write Only 0000_01C0 - 0000_01F8
+ */
+void pcireg_force_intr_set(struct pcibus_info *pcibus_info, int int_n)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_force_pin[int_n] = 1;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_force_pin[int_n] = 1;
+ break;
+ default:
+ panic
+ ("pcireg_force_intr_set: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+/*
+ * Device(x) Write Buffer Flush Reg Access -- Read Only 0000_0240 - 0000_0258
+ */
+uint64_t pcireg_wrb_flush_get(struct pcibus_info *pcibus_info, int device)
+{
+ union br_ptr *ptr;
+ uint64_t ret = 0;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ret = ptr->tio.cp_wr_req_buf[device];
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ret = ptr->pic.p_wr_req_buf[device];
+ break;
+ default:
+ panic("pcireg_wrb_flush_get: unknown bridgetype bridge 0x%p", (void *)ptr);
+ }
+
+ }
+ /* Read of the Write Buffer Flush should always return zero */
+ return ret;
+}
+
+void pcireg_int_ate_set(struct pcibus_info *pcibus_info, int ate_index,
+ uint64_t val)
+{
+ union br_ptr *ptr;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ptr->tio.cp_int_ate_ram[ate_index] = (uint64_t) val;
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ptr->pic.p_int_ate_ram[ate_index] = (uint64_t) val;
+ break;
+ default:
+ panic
+ ("pcireg_int_ate_set: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+}
+
+uint64_t *pcireg_int_ate_addr(struct pcibus_info *pcibus_info, int ate_index)
+{
+ union br_ptr *ptr;
+ uint64_t *ret = NULL;
+
+ if (pcibus_info) {
+ ptr = (union br_ptr *)pcibus_info->pbi_buscommon.bs_base;
+ switch (pcibus_info->pbi_bridge_type) {
+ case PCIBR_BRIDGETYPE_TIOCP:
+ ret = (uint64_t *)&ptr->tio.cp_int_ate_ram[ate_index];
+ break;
+ case PCIBR_BRIDGETYPE_PIC:
+ ret = (uint64_t *)&ptr->pic.p_int_ate_ram[ate_index];
+ break;
+ default:
+ panic
+ ("pcireg_int_ate_addr: unknown bridgetype bridge 0x%p",
+ (void *)ptr);
+ }
+ }
+ return ret;
+}
--- /dev/null
+/*
+ * arch/mips/au1000/common/cputable.c
+ *
+ * Copyright (C) 2004 Dan Malek (dan@embeddededge.com)
+ * Copied from PowerPC and updated for Alchemy Au1xxx processors.
+ *
+ * Copyright (C) 2001 Ben. Herrenschmidt (benh@kernel.crashing.org)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/string.h>
+#include <linux/sched.h>
+#include <linux/threads.h>
+#include <linux/init.h>
+#include <asm/mach-au1x00/au1000.h>
+
+struct cpu_spec* cur_cpu_spec[NR_CPUS];
+
+/* With some thought, we can probably use the mask to reduce the
+ * size of the table.
+ */
+struct cpu_spec cpu_specs[] = {
+ { 0xffffffff, 0x00030100, "Au1000 DA", 1, 0 },
+ { 0xffffffff, 0x00030201, "Au1000 HA", 1, 0 },
+ { 0xffffffff, 0x00030202, "Au1000 HB", 1, 0 },
+ { 0xffffffff, 0x00030203, "Au1000 HC", 1, 1 },
+ { 0xffffffff, 0x00030204, "Au1000 HD", 1, 1 },
+ { 0xffffffff, 0x01030200, "Au1500 AB", 1, 1 },
+ { 0xffffffff, 0x01030201, "Au1500 AC", 0, 1 },
+ { 0xffffffff, 0x01030202, "Au1500 AD", 0, 1 },
+ { 0xffffffff, 0x02030200, "Au1100 AB", 1, 1 },
+ { 0xffffffff, 0x02030201, "Au1100 BA", 1, 1 },
+ { 0xffffffff, 0x02030202, "Au1100 BC", 1, 1 },
+ { 0xffffffff, 0x02030203, "Au1100 BD", 0, 1 },
+ { 0xffffffff, 0x02030204, "Au1100 BE", 0, 1 },
+ { 0xffffffff, 0x03030200, "Au1550 AA", 0, 1 },
+ { 0xffffffff, 0x04030200, "Au1200 AA", 0, 1 },
+ { 0x00000000, 0x00000000, "Unknown Au1xxx", 1, 0 },
+};
+
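+/*
+ * Match the running CPU's PRId against the table above. Entries are
+ * ordered most-specific first and the table ends with a mask of zero,
+ * which matches any PRId, so the scan below always terminates; e.g. a
+ * PRId of 0x01030202 stops at the "Au1500 AD" entry.
+ */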
+void
+set_cpuspec(void)
+{
+ struct cpu_spec *sp;
+ u32 prid;
+
+ prid = read_c0_prid();
+ sp = cpu_specs;
+ while ((prid & sp->prid_mask) != sp->prid_value)
+ sp++;
+ cur_cpu_spec[0] = sp;
+}
--- /dev/null
+#
+# Automatically generated make config: don't edit
+# Linux kernel version: 2.6.10-rc2
+# Sun Nov 21 14:12:04 2004
+#
+CONFIG_MIPS=y
+CONFIG_MIPS64=y
+CONFIG_64BIT=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_LOCALVERSION=""
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+# CONFIG_HOTPLUG is not set
+CONFIG_KOBJECT_UEVENT=y
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+# CONFIG_KALLSYMS_EXTRA_PASS is not set
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+CONFIG_SHMEM=y
+CONFIG_CC_ALIGN_FUNCTIONS=0
+CONFIG_CC_ALIGN_LABELS=0
+CONFIG_CC_ALIGN_LOOPS=0
+CONFIG_CC_ALIGN_JUMPS=0
+# CONFIG_TINY_SHMEM is not set
+
+#
+# Loadable module support
+#
+# CONFIG_MODULES is not set
+
+#
+# Machine selection
+#
+# CONFIG_MACH_JAZZ is not set
+# CONFIG_MACH_VR41XX is not set
+# CONFIG_MIPS_COBALT is not set
+# CONFIG_MACH_DECSTATION is not set
+# CONFIG_MIPS_EV64120 is not set
+# CONFIG_MIPS_EV96100 is not set
+# CONFIG_MIPS_IVR is not set
+# CONFIG_LASAT is not set
+# CONFIG_MIPS_ITE8172 is not set
+# CONFIG_MIPS_ATLAS is not set
+# CONFIG_MIPS_MALTA is not set
+# CONFIG_MIPS_SEAD is not set
+# CONFIG_MOMENCO_OCELOT is not set
+CONFIG_MOMENCO_OCELOT_G=y
+# CONFIG_MOMENCO_OCELOT_C is not set
+# CONFIG_MOMENCO_OCELOT_3 is not set
+# CONFIG_MOMENCO_JAGUAR_ATX is not set
+# CONFIG_PMC_YOSEMITE is not set
+# CONFIG_DDB5074 is not set
+# CONFIG_DDB5476 is not set
+# CONFIG_DDB5477 is not set
+# CONFIG_NEC_OSPREY is not set
+# CONFIG_SGI_IP22 is not set
+# CONFIG_SGI_IP27 is not set
+# CONFIG_SGI_IP32 is not set
+# CONFIG_SIBYTE_SB1xxx_SOC is not set
+# CONFIG_SNI_RM200_PCI is not set
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+CONFIG_HAVE_DEC_LOCK=y
+CONFIG_DMA_NONCOHERENT=y
+# CONFIG_CPU_LITTLE_ENDIAN is not set
+CONFIG_IRQ_CPU=y
+CONFIG_IRQ_CPU_RM7K=y
+CONFIG_PCI_MARVELL=y
+CONFIG_SWAP_IO_SPACE=y
+# CONFIG_SYSCLK_75 is not set
+# CONFIG_SYSCLK_83 is not set
+CONFIG_SYSCLK_100=y
+CONFIG_MIPS_L1_CACHE_SHIFT=5
+# CONFIG_FB is not set
+
+#
+# CPU selection
+#
+# CONFIG_CPU_MIPS32 is not set
+# CONFIG_CPU_MIPS64 is not set
+# CONFIG_CPU_R3000 is not set
+# CONFIG_CPU_TX39XX is not set
+# CONFIG_CPU_VR41XX is not set
+# CONFIG_CPU_R4300 is not set
+# CONFIG_CPU_R4X00 is not set
+# CONFIG_CPU_TX49XX is not set
+# CONFIG_CPU_R5000 is not set
+# CONFIG_CPU_R5432 is not set
+# CONFIG_CPU_R6000 is not set
+# CONFIG_CPU_NEVADA is not set
+# CONFIG_CPU_R8000 is not set
+# CONFIG_CPU_R10000 is not set
+CONFIG_CPU_RM7000=y
+# CONFIG_CPU_RM9000 is not set
+# CONFIG_CPU_SB1 is not set
+CONFIG_PAGE_SIZE_4KB=y
+# CONFIG_PAGE_SIZE_8KB is not set
+# CONFIG_PAGE_SIZE_16KB is not set
+# CONFIG_PAGE_SIZE_64KB is not set
+CONFIG_BOARD_SCACHE=y
+CONFIG_RM7000_CPU_SCACHE=y
+CONFIG_CPU_HAS_PREFETCH=y
+CONFIG_CPU_HAS_LLSC=y
+CONFIG_CPU_HAS_LLDSCD=y
+CONFIG_CPU_HAS_SYNC=y
+# CONFIG_PREEMPT is not set
+
+#
+# Bus options (PCI, PCMCIA, EISA, ISA, TC)
+#
+CONFIG_HW_HAS_PCI=y
+CONFIG_PCI=y
+CONFIG_PCI_LEGACY_PROC=y
+CONFIG_PCI_NAMES=y
+CONFIG_MMU=y
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_MISC is not set
+# CONFIG_BUILD_ELF64 is not set
+CONFIG_MIPS32_COMPAT=y
+CONFIG_COMPAT=y
+CONFIG_MIPS32_O32=y
+CONFIG_MIPS32_N32=y
+CONFIG_BINFMT_ELF32=y
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+CONFIG_STANDALONE=y
+CONFIG_PREVENT_FIRMWARE_BUILD=y
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_SX8 is not set
+# CONFIG_BLK_DEV_RAM is not set
+CONFIG_INITRAMFS_SOURCE=""
+CONFIG_CDROM_PKTCDVD=y
+CONFIG_CDROM_PKTCDVD_BUFFERS=8
+# CONFIG_CDROM_PKTCDVD_WCACHE is not set
+
+#
+# IO Schedulers
+#
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+# CONFIG_IDE is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+# CONFIG_PACKET is not set
+CONFIG_NETLINK_DEV=y
+CONFIG_UNIX=y
+CONFIG_NET_KEY=y
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+# CONFIG_IP_PNP_BOOTP is not set
+# CONFIG_IP_PNP_RARP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+CONFIG_INET_TUNNEL=y
+CONFIG_IP_TCPDIAG=y
+# CONFIG_IP_TCPDIAG_IPV6 is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+CONFIG_XFRM=y
+CONFIG_XFRM_USER=y
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+# CONFIG_NET_CLS_ROUTE is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+# CONFIG_ETHERTAP is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+CONFIG_GALILEO_64240_ETH=y
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_HP100 is not set
+# CONFIG_NET_PCI is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+# CONFIG_NET_RADIO is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+CONFIG_INPUT=y
+
+#
+# Userland interfaces
+#
+CONFIG_INPUT_MOUSEDEV=y
+CONFIG_INPUT_MOUSEDEV_PSAUX=y
+CONFIG_INPUT_MOUSEDEV_SCREEN_X=1024
+CONFIG_INPUT_MOUSEDEV_SCREEN_Y=768
+# CONFIG_INPUT_JOYDEV is not set
+# CONFIG_INPUT_TSDEV is not set
+# CONFIG_INPUT_EVDEV is not set
+# CONFIG_INPUT_EVBUG is not set
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+CONFIG_SERIO=y
+# CONFIG_SERIO_I8042 is not set
+CONFIG_SERIO_SERPORT=y
+# CONFIG_SERIO_CT82C710 is not set
+# CONFIG_SERIO_PCIPS2 is not set
+CONFIG_SERIO_RAW=y
+
+#
+# Input Device Drivers
+#
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_INPUT_JOYSTICK is not set
+# CONFIG_INPUT_TOUCHSCREEN is not set
+# CONFIG_INPUT_MISC is not set
+
+#
+# Character devices
+#
+CONFIG_VT=y
+CONFIG_VT_CONSOLE=y
+CONFIG_HW_CONSOLE=y
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=4
+# CONFIG_SERIAL_8250_EXTENDED is not set
+
+#
+# Non-8250 serial port support
+#
+CONFIG_SERIAL_CORE=y
+CONFIG_SERIAL_CORE_CONSOLE=y
+CONFIG_UNIX98_PTYS=y
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Dallas's 1-wire bus
+#
+# CONFIG_W1 is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+
+#
+# Console display driver support
+#
+# CONFIG_VGA_CONSOLE is not set
+CONFIG_DUMMY_CONSOLE=y
+
+#
+# Sound
+#
+# CONFIG_SOUND is not set
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+CONFIG_USB_ARCH_HAS_HCD=y
+CONFIG_USB_ARCH_HAS_OHCI=y
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+# CONFIG_MINIX_FS is not set
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+CONFIG_DNOTIFY=y
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+# CONFIG_MSDOS_FS is not set
+# CONFIG_VFAT_FS is not set
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+CONFIG_DEVPTS_FS_XATTR=y
+CONFIG_DEVPTS_FS_SECURITY=y
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+CONFIG_NFS_FS=y
+# CONFIG_NFS_V3 is not set
+# CONFIG_NFS_V4 is not set
+# CONFIG_NFS_DIRECTIO is not set
+CONFIG_NFSD=y
+# CONFIG_NFSD_V3 is not set
+# CONFIG_NFSD_TCP is not set
+CONFIG_ROOT_NFS=y
+CONFIG_LOCKD=y
+CONFIG_EXPORTFS=y
+CONFIG_SUNRPC=y
+# CONFIG_RPCSEC_GSS_KRB5 is not set
+# CONFIG_RPCSEC_GSS_SPKM3 is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+# CONFIG_NLS is not set
+
+#
+# Kernel hacking
+#
+# CONFIG_DEBUG_KERNEL is not set
+CONFIG_CROSSCOMPILE=y
+CONFIG_CMDLINE=""
+
+#
+# Security options
+#
+CONFIG_KEYS=y
+CONFIG_KEYS_DEBUG_PROC_KEYS=y
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+# CONFIG_CRC_CCITT is not set
+# CONFIG_CRC32 is not set
+# CONFIG_LIBCRC32C is not set
+CONFIG_GENERIC_HARDIRQS=y
+CONFIG_GENERIC_IRQ_PROBE=y
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ * Copyright (C) 1997, 1998, 1999, 2000 Ralf Baechle ralf@gnu.org
+ * Carsten Langgaard, carstenl@mips.com
+ * Copyright (C) 2002 MIPS Technologies, Inc. All rights reserved.
+ */
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+
+#include <asm/cpu.h>
+#include <asm/bootinfo.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+
+extern void build_tlb_refill_handler(void);
+
+#define TFP_TLB_SIZE 384
+#define TFP_TLB_SET_SHIFT 7
+
+/* CP0 hazard avoidance. */
+#define BARRIER __asm__ __volatile__(".set noreorder\n\t" \
+ "nop; nop; nop; nop; nop; nop;\n\t" \
+ ".set reorder\n\t")
+
+void local_flush_tlb_all(void)
+{
+ unsigned long flags;
+ unsigned long old_ctx;
+ int entry;
+
+ local_irq_save(flags);
+ /* Save old context and create impossible VPN2 value */
+ old_ctx = read_c0_entryhi();
+ write_c0_entrylo(0);
+
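+ /*
+ * Invalidate every entry: entry >> TFP_TLB_SET_SHIFT selects the
+ * TLB set, and the unique CKSEG0-based EntryHi value keeps the
+ * flushed entries from aliasing one another.
+ */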
+ for (entry = 0; entry < TFP_TLB_SIZE; entry++) {
+ write_c0_tlbset(entry >> TFP_TLB_SET_SHIFT);
+ write_c0_vaddr(entry << PAGE_SHIFT);
+ write_c0_entryhi(CKSEG0 + (entry << (PAGE_SHIFT + 1)));
+ mtc0_tlbw_hazard();
+ tlb_write();
+ }
+ tlbw_use_hazard();
+ write_c0_entryhi(old_ctx);
+ local_irq_restore(flags);
+}
+
+void local_flush_tlb_mm(struct mm_struct *mm)
+{
+ int cpu = smp_processor_id();
+
+ if (cpu_context(cpu, mm) != 0)
+ drop_mmu_context(mm,cpu);
+}
+
+void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ struct mm_struct *mm = vma->vm_mm;
+ int cpu = smp_processor_id();
+ unsigned long flags;
+ int oldpid, newpid, size;
+
+ if (!cpu_context(cpu, mm))
+ return;
+
+ size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ size = (size + 1) >> 1;
+
+ local_irq_save(flags);
+
+ if (size > TFP_TLB_SIZE / 2) {
+ drop_mmu_context(mm, cpu);
+ goto out_restore;
+ }
+
+ oldpid = read_c0_entryhi();
+ newpid = cpu_asid(cpu, mm);
+
+ write_c0_entrylo(0);
+
+ start &= PAGE_MASK;
+ end += (PAGE_SIZE - 1);
+ end &= PAGE_MASK;
+ while (start < end) {
+ signed long idx;
+
+ write_c0_vaddr(start);
+ write_c0_entryhi(start);
+ start += PAGE_SIZE;
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ continue;
+
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+ }
+ write_c0_entryhi(oldpid);
+
+out_restore:
+ local_irq_restore(flags);
+}
+
+/* Usable for KV1 addresses only! */
+void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+ unsigned long flags;
+ int size;
+
+ size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+ size = (size + 1) >> 1;
+
+ if (size > TFP_TLB_SIZE / 2) {
+ local_flush_tlb_all();
+ return;
+ }
+
+ local_irq_save(flags);
+
+ write_c0_entrylo(0);
+
+ start &= PAGE_MASK;
+ end += (PAGE_SIZE - 1);
+ end &= PAGE_MASK;
+ while (start < end) {
+ signed long idx;
+
+ write_c0_vaddr(start);
+ write_c0_entryhi(start);
+ start += PAGE_SIZE;
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ continue;
+
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+ }
+
+ local_irq_restore(flags);
+}
+
+void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ int cpu = smp_processor_id();
+ unsigned long flags;
+ int oldpid, newpid;
+ signed long idx;
+
+ if (!cpu_context(cpu, vma->vm_mm))
+ return;
+
+ newpid = cpu_asid(cpu, vma->vm_mm);
+ page &= PAGE_MASK;
+ local_irq_save(flags);
+ oldpid = read_c0_entryhi();
+ write_c0_vaddr(page);
+ write_c0_entryhi(newpid);
+ tlb_probe();
+ idx = read_c0_tlbset();
+ if (idx < 0)
+ goto finish;
+
+ write_c0_entrylo(0);
+ write_c0_entryhi(CKSEG0 + (idx << (PAGE_SHIFT + 1)));
+ tlb_write();
+
+finish:
+ write_c0_entryhi(oldpid);
+ local_irq_restore(flags);
+}
+
+/*
+ * We will need multiple versions of update_mmu_cache(), one that just
+ * updates the TLB with the new pte(s), and another which also checks
+ * for the R4k "end of page" hardware bug and works around it.
+ */
+void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
+{
+ unsigned long flags;
+ pgd_t *pgdp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+ int pid;
+
+ /*
+	 * Handle the debugger faulting in pages for the debuggee.
+ */
+ if (current->active_mm != vma->vm_mm)
+ return;
+
+ pid = read_c0_entryhi() & ASID_MASK;
+
+ local_irq_save(flags);
+ address &= PAGE_MASK;
+ write_c0_vaddr(address);
+ write_c0_entryhi(pid);
+ pgdp = pgd_offset(vma->vm_mm, address);
+ pmdp = pmd_offset(pgdp, address);
+ ptep = pte_offset_map(pmdp, address);
+ tlb_probe();
+
+ write_c0_entrylo(pte_val(*ptep++) >> 6);
+ tlb_write();
+
+ write_c0_entryhi(pid);
+ local_irq_restore(flags);
+}
+
+static void __init probe_tlb(unsigned long config)
+{
+	struct cpuinfo_mips *c = &current_cpu_data;
+
+ c->tlbsize = 3 * 128; /* 3 sets each 128 entries */
+}
+
+void __init tlb_init(void)
+{
+ unsigned int config = read_c0_config();
+ unsigned long status;
+
+ probe_tlb(config);
+
+ status = read_c0_status();
+ status &= ~(ST0_UPS | ST0_KPS);
+#ifdef CONFIG_PAGE_SIZE_4KB
+ status |= (TFP_PAGESIZE_4K << 32) | (TFP_PAGESIZE_4K << 36);
+#elif defined(CONFIG_PAGE_SIZE_8KB)
+ status |= (TFP_PAGESIZE_8K << 32) | (TFP_PAGESIZE_8K << 36);
+#elif defined(CONFIG_PAGE_SIZE_16KB)
+ status |= (TFP_PAGESIZE_16K << 32) | (TFP_PAGESIZE_16K << 36);
+#elif defined(CONFIG_PAGE_SIZE_64KB)
+ status |= (TFP_PAGESIZE_64K << 32) | (TFP_PAGESIZE_64K << 36);
+#endif
+ write_c0_status(status);
+
+ write_c0_wired(0);
+
+ local_flush_tlb_all();
+
+ build_tlb_refill_handler();
+}
--- /dev/null
+/*
+ * TLB exception handling code for R2000/R3000.
+ *
+ * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse
+ *
+ * Multi-CPU abstraction reworking:
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ *
+ * Further modifications to make this work:
+ * Copyright (c) 1998 Harald Koerfgen
+ * Copyright (c) 1998, 1999 Gleb Raiko & Vladimir Roganov
+ * Copyright (c) 2001 Ralf Baechle
+ * Copyright (c) 2001 MIPS Technologies, Inc.
+ */
+#include <linux/init.h>
+#include <asm/asm.h>
+#include <asm/cachectl.h>
+#include <asm/fpregdef.h>
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/pgtable-bits.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+
+#define TLB_OPTIMIZE /* If you are paranoid, disable this. */
+
+ /* ABUSE of CPP macros 101. */
+
+ /* After this macro runs, the pte faulted on is
+ * in register PTE, a ptr into the table in which
+ * the pte belongs is in PTR.
+ */
+#define LOAD_PTE(pte, ptr) \
+ mfc0 pte, CP0_BADVADDR; \
+ lw ptr, pgd_current; \
+ srl pte, pte, 22; \
+ sll pte, pte, 2; \
+ addu ptr, ptr, pte; \
+ mfc0 pte, CP0_CONTEXT; \
+ lw ptr, (ptr); \
+ andi pte, pte, 0xffc; \
+ addu ptr, ptr, pte; \
+ lw pte, (ptr); \
+ nop;
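+
+	/* The macro above walks the two-level table: BADVADDR >> 22
+	 * indexes the pgd, while the BadVPN bits the CPU latched into
+	 * CP0_CONTEXT (masked with 0xffc) index the pte page.
+	 */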
+
+	/* This reloads the pte from the page table at PTR into
+	 * ENTRYLO0.  The R3000 has a single EntryLo register, so
+	 * there is no even/odd pair and no scratch register needed.
+	 */
+#define PTE_RELOAD(ptr) \
+ lw ptr, (ptr) ; \
+ nop ; \
+ mtc0 ptr, CP0_ENTRYLO0; \
+ nop;
+
+#define DO_FAULT(write) \
+ .set noat; \
+ .set macro; \
+ SAVE_ALL; \
+ mfc0 a2, CP0_BADVADDR; \
+ KMODE; \
+ .set at; \
+ move a0, sp; \
+ jal do_page_fault; \
+ li a1, write; \
+ j ret_from_exception; \
+ nop; \
+ .set noat; \
+ .set nomacro;
+
+	/* Check if the PTE is present; if not, jump to LABEL.
+	 * PTR points to the page table where this PTE is located;
+	 * when the macro is done executing, PTE will be restored
+	 * with its original value.
+	 */
+#define PTE_PRESENT(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ bnez pte, label; \
+ .set push; \
+ .set reorder; \
+ lw pte, (ptr); \
+ .set pop;
+
+ /* Make PTE valid, store result in PTR. */
+#define PTE_MAKEVALID(pte, ptr) \
+ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \
+ sw pte, (ptr);
+
+	/* Check if the PTE can be written to; if not, branch to LABEL.
+	 * Regardless, restore PTE with the value from PTR when done.
+	 */
+#define PTE_WRITABLE(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ bnez pte, label; \
+ .set push; \
+ .set reorder; \
+ lw pte, (ptr); \
+ .set pop;
+
+
+ /* Make PTE writable, update software status bits as well,
+ * then store at PTR.
+ */
+#define PTE_MAKEWRITE(pte, ptr) \
+ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \
+ _PAGE_VALID | _PAGE_DIRTY); \
+ sw pte, (ptr);
+
+/*
+ * The index register may have the probe-fail bit set: kseg2
+ * accesses trap directly instead of going through the refill
+ * handler, so tlbp may not find a matching entry.  In that case
+ * write a random slot with tlbwr instead of using tlbwi.
+ */
+#define TLB_WRITE(reg) \
+ mfc0 reg, CP0_INDEX; \
+ nop; \
+ bltz reg, 1f; \
+ nop; \
+ tlbwi; \
+ j 2f; \
+ nop; \
+1: tlbwr; \
+2:
+
+#define RET(reg) \
+ mfc0 reg, CP0_EPC; \
+ nop; \
+ jr reg; \
+ rfe
+
+ .set noreorder
+
+ .align 5
+NESTED(handle_tlbl, PT_SIZE, sp)
+ .set noat
+
+#ifdef TLB_OPTIMIZE
+ /* Test present bit in entry. */
+ LOAD_PTE(k0, k1)
+ tlbp
+ PTE_PRESENT(k0, k1, nopage_tlbl)
+ PTE_MAKEVALID(k0, k1)
+ PTE_RELOAD(k1)
+ TLB_WRITE(k0)
+ RET(k0)
+nopage_tlbl:
+#endif
+
+ DO_FAULT(0)
+END(handle_tlbl)
+
+NESTED(handle_tlbs, PT_SIZE, sp)
+ .set noat
+
+#ifdef TLB_OPTIMIZE
+ LOAD_PTE(k0, k1)
+ tlbp # find faulting entry
+ PTE_WRITABLE(k0, k1, nopage_tlbs)
+ PTE_MAKEWRITE(k0, k1)
+ PTE_RELOAD(k1)
+ TLB_WRITE(k0)
+ RET(k0)
+nopage_tlbs:
+#endif
+
+ DO_FAULT(1)
+END(handle_tlbs)
+
+ .align 5
+NESTED(handle_mod, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ LOAD_PTE(k0, k1)
+ tlbp # find faulting entry
+ andi k0, k0, _PAGE_WRITE
+ beqz k0, nowrite_mod
+ .set push
+ .set reorder
+ lw k0, (k1)
+ .set pop
+
+ /* Present and writable bits set, set accessed and dirty bits. */
+ PTE_MAKEWRITE(k0, k1)
+
+ /* Now reload the entry into the tlb. */
+ PTE_RELOAD(k1)
+ tlbwi
+ RET(k0)
+#endif
+
+nowrite_mod:
+ DO_FAULT(1)
+END(handle_mod)
--- /dev/null
+/*
+ * TLB exception handling code for r4k.
+ *
+ * Copyright (C) 1994, 1995, 1996 by Ralf Baechle and Andreas Busse
+ *
+ * Multi-cpu abstraction and reworking:
+ * Copyright (C) 1996 David S. Miller (dm@engr.sgi.com)
+ *
+ * Carsten Langgaard, carstenl@mips.com
+ * Copyright (C) 2000 MIPS Technologies, Inc. All rights reserved.
+ */
+#include <linux/init.h>
+#include <linux/config.h>
+
+#include <asm/asm.h>
+#include <asm/offset.h>
+#include <asm/cachectl.h>
+#include <asm/fpregdef.h>
+#include <asm/mipsregs.h>
+#include <asm/page.h>
+#include <asm/pgtable-bits.h>
+#include <asm/regdef.h>
+#include <asm/stackframe.h>
+#include <asm/war.h>
+
+#define TLB_OPTIMIZE /* If you are paranoid, disable this. */
+
+#ifdef CONFIG_64BIT_PHYS_ADDR
+#define PTE_L ld
+#define PTE_S sd
+#define PTE_SRL dsrl
+#define P_MTC0 dmtc0
+#define PTE_SIZE 8
+#define PTEP_INDX_MSK 0xff0
+#define PTE_INDX_MSK 0xff8
+#define PTE_INDX_SHIFT 9
+#else
+#define PTE_L lw
+#define PTE_S sw
+#define PTE_SRL srl
+#define P_MTC0 mtc0
+#define PTE_SIZE 4
+#define PTEP_INDX_MSK 0xff8
+#define PTE_INDX_MSK 0xffc
+#define PTE_INDX_SHIFT 10
+#endif
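+
+/*
+ * With 64-bit physical addresses each pte is 8 bytes, so the pte
+ * load/store instructions, the even/odd pair masks and the index
+ * shift above all change size accordingly.
+ */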
+
+/*
+ * ABUSE of CPP macros 101.
+ *
+ * After this macro runs, the pte faulted on is
+ * in register PTE, a ptr into the table in which
+ * the pte belongs is in PTR.
+ */
+
+#ifdef CONFIG_SMP
+#define GET_PGD(scratch, ptr) \
+ mfc0 ptr, CP0_CONTEXT; \
+ la scratch, pgd_current;\
+ srl ptr, 23; \
+ sll ptr, 2; \
+ addu ptr, scratch, ptr; \
+ lw ptr, (ptr);
+#else
+#define GET_PGD(scratch, ptr) \
+ lw ptr, pgd_current;
+#endif
+
+#define LOAD_PTE(pte, ptr) \
+ GET_PGD(pte, ptr) \
+ mfc0 pte, CP0_BADVADDR; \
+ srl pte, pte, _PGDIR_SHIFT; \
+ sll pte, pte, 2; \
+ addu ptr, ptr, pte; \
+ mfc0 pte, CP0_BADVADDR; \
+ lw ptr, (ptr); \
+ srl pte, pte, PTE_INDX_SHIFT; \
+ and pte, pte, PTE_INDX_MSK; \
+ addu ptr, ptr, pte; \
+ PTE_L pte, (ptr);
+
+ /* This places the even/odd pte pair in the page
+ * table at PTR into ENTRYLO0 and ENTRYLO1 using
+ * TMP as a scratch register.
+ */
+#define PTE_RELOAD(ptr, tmp) \
+ ori ptr, ptr, PTE_SIZE; \
+ xori ptr, ptr, PTE_SIZE; \
+ PTE_L tmp, PTE_SIZE(ptr); \
+ PTE_L ptr, 0(ptr); \
+ PTE_SRL tmp, tmp, 6; \
+ P_MTC0 tmp, CP0_ENTRYLO1; \
+ PTE_SRL ptr, ptr, 6; \
+ P_MTC0 ptr, CP0_ENTRYLO0;
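+
+	/* The ori/xori pair at the top of PTE_RELOAD clears the
+	 * PTE_SIZE bit in PTR, aligning it to the even pte so both
+	 * halves of the even/odd pair can be loaded.
+	 */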
+
+#define DO_FAULT(write) \
+ .set noat; \
+ SAVE_ALL; \
+ mfc0 a2, CP0_BADVADDR; \
+ KMODE; \
+ .set at; \
+ move a0, sp; \
+ jal do_page_fault; \
+ li a1, write; \
+ j ret_from_exception; \
+ nop; \
+ .set noat;
+
+	/* Check if the PTE is present; if not, jump to LABEL.
+	 * PTR points to the page table where this PTE is located;
+	 * when the macro is done executing, PTE will be restored
+	 * with its original value.
+	 */
+#define PTE_PRESENT(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_READ); \
+ bnez pte, label; \
+ PTE_L pte, (ptr);
+
+ /* Make PTE valid, store result in PTR. */
+#define PTE_MAKEVALID(pte, ptr) \
+ ori pte, pte, (_PAGE_VALID | _PAGE_ACCESSED); \
+ PTE_S pte, (ptr);
+
+	/* Check if the PTE can be written to; if not, branch to LABEL.
+	 * Regardless, restore PTE with the value from PTR when done.
+	 */
+#define PTE_WRITABLE(pte, ptr, label) \
+ andi pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ xori pte, pte, (_PAGE_PRESENT | _PAGE_WRITE); \
+ bnez pte, label; \
+ PTE_L pte, (ptr);
+
+ /* Make PTE writable, update software status bits as well,
+ * then store at PTR.
+ */
+#define PTE_MAKEWRITE(pte, ptr) \
+ ori pte, pte, (_PAGE_ACCESSED | _PAGE_MODIFIED | \
+ _PAGE_VALID | _PAGE_DIRTY); \
+ PTE_S pte, (ptr);
+
+
+ .set noreorder
+
+/*
+ * From the IDT errata for the QED RM5230 (Nevada), processor revision 1.0:
+ * 2. A timing hazard exists for the TLBP instruction.
+ *
+ * stalling_instruction
+ * TLBP
+ *
+ * The JTLB is being read for the TLBP throughout the stall generated by the
+ * previous instruction. This is not really correct as the stalling instruction
+ * can modify the address used to access the JTLB. The failure symptom is that
+ * the TLBP instruction will use an address created for the stalling instruction
+ * and not the address held in C0_ENHI and thus report the wrong results.
+ *
+ * The software work-around is to not allow the instruction preceding the TLBP
+ * to stall - make it an NOP or some other instruction guaranteed not to stall.
+ *
+ * Errata 2 will not be fixed. This errata is also on the R5000.
+ *
+ * As if we MIPS hackers wouldn't know how to nop pipelines happy ...
+ */
+#define R5K_HAZARD nop
+
+	/*
+	 * Note that on many R4k variants, TLB probes cannot be executed
+	 * from the instruction cache or you get bogus results.
+	 */
+ .align 5
+ NESTED(handle_tlbl, PT_SIZE, sp)
+ .set noat
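+	/*
+	 * BCM1250 M3 workaround: if the page of C0_BADVADDR does not
+	 * match the VPN2 in C0_ENTRYHI, the exception is spurious and
+	 * we simply return from it.
+	 */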
+#if BCM1250_M3_WAR
+ mfc0 k0, CP0_BADVADDR
+ mfc0 k1, CP0_ENTRYHI
+ xor k0, k1
+ srl k0, k0, PAGE_SHIFT+1
+ beqz k0, 1f
+ nop
+ .set mips3
+ eret
+ .set mips0
+1:
+#endif
+invalid_tlbl:
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ /* Test present bit in entry. */
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp
+ PTE_PRESENT(k0, k1, nopage_tlbl)
+ PTE_MAKEVALID(k0, k1)
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ nop
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nopage_tlbl:
+ DO_FAULT(0)
+ END(handle_tlbl)
+
+ .align 5
+ NESTED(handle_tlbs, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ li k0,0
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp # find faulting entry
+ PTE_WRITABLE(k0, k1, nopage_tlbs)
+ PTE_MAKEWRITE(k0, k1)
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ nop
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nopage_tlbs:
+ DO_FAULT(1)
+ END(handle_tlbs)
+
+ .align 5
+ NESTED(handle_mod, PT_SIZE, sp)
+ .set noat
+#ifdef TLB_OPTIMIZE
+ .set mips3
+ LOAD_PTE(k0, k1)
+ R5K_HAZARD
+ tlbp # find faulting entry
+ andi k0, k0, _PAGE_WRITE
+ beqz k0, nowrite_mod
+ PTE_L k0, (k1)
+
+ /* Present and writable bits set, set accessed and dirty bits. */
+ PTE_MAKEWRITE(k0, k1)
+
+ /* Now reload the entry into the tlb. */
+ PTE_RELOAD(k1, k0)
+ mtc0_tlbw_hazard
+ tlbwi
+ nop
+ tlbw_eret_hazard
+ .set mips3
+ eret
+ .set mips0
+#endif
+
+nowrite_mod:
+ DO_FAULT(1)
+ END(handle_mod)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Marvell MV64340 interrupt fixup code.
+ *
+ * Marvell wants an NDA for their docs so this was written without
+ * documentation. You've been warned.
+ *
+ * Copyright (C) 2004 Ralf Baechle (ralf@linux-mips.org)
+ */
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/mipsregs.h>
+
+/*
+ * WARNING: Example of how _NOT_ to do it.
+ */
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1)
+ return 3; /* PCI-X A */
+ if (bus == 0 && slot == 2)
+ return 4; /* PCI-X B */
+ if (bus == 1 && slot == 1)
+ return 5; /* PCI A */
+ if (bus == 1 && slot == 2)
+ return 6; /* PCI B */
+
+	return 0;
+}
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * fixup-mpc30x.c, The Victor MP-C303/304 specific PCI fixups.
+ *
+ * Copyright (C) 2002,2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/vr41xx/mpc30x.h>
+#include <asm/vr41xx/vrc4173.h>
+
+static const int internal_func_irqs[] __initdata = {
+ VRC4173_CASCADE_IRQ,
+ VRC4173_AC97_IRQ,
+ VRC4173_USB_IRQ,
+};
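+
+/*
+ * Slot 30 carries the multi-function VRC4173 companion chip, so its
+ * interrupt is chosen by PCI function number rather than by slot.
+ */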
+
+static char irq_tab_mpc30x[] __initdata = {
+ [12] = VRC4173_PCMCIA1_IRQ,
+ [13] = VRC4173_PCMCIA2_IRQ,
+ [29] = MQ200_IRQ,
+};
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ if (slot == 30)
+ return internal_func_irqs[PCI_FUNC(dev->devfn)];
+
+ return irq_tab_mpc30x[slot];
+}
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * Copyright 2002 Momentum Computer Inc.
+ * Author: Matthew Dharm <mdharm@momenco.com>
+ *
+ * Based on work for the Linux port to the Ocelot board, which is
+ * Copyright 2001 MontaVista Software Inc.
+ * Author: Jun Sun, jsun@mvista.com or jsun@junsun.net
+ *
+ * arch/mips/momentum/ocelot_g/pci.c
+ * Board-specific PCI routines for mv64340 controller.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1)
+ return 2; /* PCI-X A */
+ if (bus == 1 && slot == 1)
+ return 12; /* PCI-X B */
+ if (bus == 1 && slot == 2)
+ return 4; /* PCI B */
+
+	return 0;
+}
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * Copyright (C) 2004 Ralf Baechle (ralf@linux-mips.org)
+ */
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int bus = dev->bus->number;
+
+ if (bus == 0 && slot == 1) /* Intel 82543 Gigabit MAC */
+ return 2; /* irq_nr is 2 for INT0 */
+
+ if (bus == 0 && slot == 2) /* Intel 82543 Gigabit MAC */
+ return 3; /* irq_nr is 3 for INT1 */
+
+ if (bus == 1 && slot == 3) /* Intel 21555 bridge */
+		return 5;	/* irq_nr is 5 for INT6 */
+
+ if (bus == 1 && slot == 4) /* PMC Slot */
+ return 9; /* irq_nr is 9 for INT7 */
+
+ return -1;
+}
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * fixup-tb0219.c, The TANBAC TB0219 specific PCI fixups.
+ *
+ * Copyright (C) 2003 Megasolution Inc. <matsu@megasolution.jp>
+ * Copyright (C) 2004 Yoichi Yuasa <yuasa@hh.iij4u.or.jp>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+#include <linux/init.h>
+#include <linux/pci.h>
+
+#include <asm/vr41xx/tb0219.h>
+
+int __init pcibios_map_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int irq = -1;
+
+ switch (slot) {
+ case 12:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT1_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT1_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT1_IRQ;
+ break;
+ case 13:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT2_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT2_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT2_IRQ;
+ break;
+ case 14:
+ vr41xx_set_irq_trigger(TB0219_PCI_SLOT3_PIN,
+ TRIGGER_LEVEL,
+ SIGNAL_THROUGH);
+ vr41xx_set_irq_level(TB0219_PCI_SLOT3_PIN,
+ LEVEL_LOW);
+ irq = TB0219_PCI_SLOT3_IRQ;
+ break;
+ default:
+ break;
+ }
+
+ return irq;
+}
+
+/* Do platform specific device initialization at pci_enable_device() time */
+int pcibios_plat_dev_init(struct pci_dev *dev)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle (ralf@linux-mips.org)
+ */
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/pci.h>
+#include <asm/gt64240.h>
+
+extern struct pci_ops titan_pci_ops;
+
+static struct resource py_mem_resource = {
+ "Titan PCI MEM", 0xe0000000UL, 0xe3ffffffUL, IORESOURCE_MEM
+};
+
+/*
+ * PMON really reserves 16MB of I/O port space but that's stupid, nothing
+ * needs that much since allocations are limited to 256 bytes per device
+ * anyway. So we just claim 64kB here.
+ */
+#define TITAN_IO_SIZE 0x0000ffffUL
+
+static struct resource py_io_resource = {
+ "Titan IO MEM", 0x00001000UL, TITAN_IO_SIZE - 1, IORESOURCE_IO,
+};
+
+static struct pci_controller py_controller = {
+ .pci_ops = &titan_pci_ops,
+ .mem_resource = &py_mem_resource,
+ .mem_offset = 0x00000000UL,
+ .io_resource = &py_io_resource,
+ .io_offset = 0x00000000UL
+};
+
+static char ioremap_failed[] __initdata = "Could not ioremap I/O port range";
+
+static int __init pmc_yosemite_setup(void)
+{
+ unsigned long io_v_base;
+
+ io_v_base = (unsigned long) ioremap(0xe0000000UL,TITAN_IO_SIZE);
+ if (!io_v_base)
+ panic(ioremap_failed);
+
+ set_io_port_base(io_v_base);
+
+ ioport_resource.end = TITAN_IO_SIZE - 1;
+
+ register_pci_controller(&py_controller);
+
+ return 0;
+}
+
+arch_initcall(pmc_yosemite_setup);
--- /dev/null
+/*
+ * Copyright 2003 PMC-Sierra
+ * Author: Manish Lachwani (lachwani@pmc-sierra.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED
+ * WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN
+ * NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+ * INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+ * NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF
+ * USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ * ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+ * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write to the Free Software Foundation, Inc.,
+ * 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+/*
+ * Support for KGDB on the Yosemite board. A single serial port is
+ * shared between KGDB and the console, since the second serial port
+ * seems to have a problem. A single IRQ is allocated for both
+ * ports, so the interrupt routing code needs to figure out whether
+ * an interrupt came from channel A or B.
+ */
+
+#include <linux/config.h>
+
+#ifdef CONFIG_KGDB
+#include <asm/serial.h>
+
+/*
+ * Baud rate, parity, data and stop bit settings for the serial
+ * port on the Yosemite. The early printk patch has already been
+ * applied, so we should be all set to go.
+ */
+#define YOSEMITE_BAUD_2400 2400
+#define YOSEMITE_BAUD_4800 4800
+#define YOSEMITE_BAUD_9600 9600
+#define YOSEMITE_BAUD_19200 19200
+#define YOSEMITE_BAUD_38400 38400
+#define YOSEMITE_BAUD_57600 57600
+#define YOSEMITE_BAUD_115200 115200
+
+#define YOSEMITE_PARITY_NONE 0
+#define YOSEMITE_PARITY_ODD 0x08
+#define YOSEMITE_PARITY_EVEN 0x18
+#define YOSEMITE_PARITY_MARK 0x28
+#define YOSEMITE_PARITY_SPACE 0x38
+
+#define YOSEMITE_DATA_5BIT 0x0
+#define YOSEMITE_DATA_6BIT 0x1
+#define YOSEMITE_DATA_7BIT 0x2
+#define YOSEMITE_DATA_8BIT 0x3
+
+#define YOSEMITE_STOP_1BIT 0x0
+#define YOSEMITE_STOP_2BIT 0x4
+
+/* This is crucial: the UART registers are one byte apart, so every
+ * register offset below scales by SERIAL_REG_OFS. */
+#define SERIAL_REG_OFS 0x1
+
+#define SERIAL_RCV_BUFFER 0x0
+#define SERIAL_TRANS_HOLD 0x0
+#define SERIAL_SEND_BUFFER 0x0
+#define SERIAL_INTR_ENABLE (1 * SERIAL_REG_OFS)
+#define SERIAL_INTR_ID (2 * SERIAL_REG_OFS)
+#define SERIAL_DATA_FORMAT (3 * SERIAL_REG_OFS)
+#define SERIAL_LINE_CONTROL (3 * SERIAL_REG_OFS)
+#define SERIAL_MODEM_CONTROL (4 * SERIAL_REG_OFS)
+#define SERIAL_RS232_OUTPUT (4 * SERIAL_REG_OFS)
+#define SERIAL_LINE_STATUS (5 * SERIAL_REG_OFS)
+#define SERIAL_MODEM_STATUS (6 * SERIAL_REG_OFS)
+#define SERIAL_RS232_INPUT (6 * SERIAL_REG_OFS)
+#define SERIAL_SCRATCH_PAD (7 * SERIAL_REG_OFS)
+
+#define SERIAL_DIVISOR_LSB (0 * SERIAL_REG_OFS)
+#define SERIAL_DIVISOR_MSB (1 * SERIAL_REG_OFS)
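+
+/*
+ * The divisor latch registers share offsets 0 and 1 with the data
+ * and interrupt-enable registers; as on any 16550-style UART they
+ * are visible only while DLAB (bit 7 of the line control register,
+ * set via the 0x80 writes below) is enabled.
+ */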
+
+/*
+ * Functions to READ and WRITE to serial port 0
+ */
+#define SERIAL_READ(ofs) (*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE + ofs)))
+
+#define SERIAL_WRITE(ofs, val) ((*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE + ofs))) = val)
+
+/*
+ * Functions to READ and WRITE to serial port 1
+ */
+#define SERIAL_READ_1(ofs) (*((volatile unsigned char*) \
+				(TITAN_SERIAL_BASE_1 + ofs)))
+
+#define SERIAL_WRITE_1(ofs, val) ((*((volatile unsigned char*) \
+ (TITAN_SERIAL_BASE_1 + ofs))) = val)
+
+/*
+ * Second serial port initialization
+ */
+void init_second_port(void)
+{
+ /* Disable Interrupts */
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x0);
+ SERIAL_WRITE_1(SERIAL_INTR_ENABLE, 0x0);
+
+ {
+ unsigned int divisor;
+
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x80);
+ divisor = TITAN_SERIAL_BASE_BAUD / YOSEMITE_BAUD_115200;
+ SERIAL_WRITE_1(SERIAL_DIVISOR_LSB, divisor & 0xff);
+
+ SERIAL_WRITE_1(SERIAL_DIVISOR_MSB,
+ (divisor & 0xff00) >> 8);
+ SERIAL_WRITE_1(SERIAL_LINE_CONTROL, 0x0);
+ }
+
+ SERIAL_WRITE_1(SERIAL_DATA_FORMAT, YOSEMITE_DATA_8BIT |
+ YOSEMITE_PARITY_NONE | YOSEMITE_STOP_1BIT);
+
+ /* Enable Interrupts */
+ SERIAL_WRITE_1(SERIAL_INTR_ENABLE, 0xf);
+}
+
+/* Initialize the serial port for KGDB debugging */
+void debugInit(unsigned int baud, unsigned char data, unsigned char parity,
+ unsigned char stop)
+{
+ /* Disable Interrupts */
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x0);
+ SERIAL_WRITE(SERIAL_INTR_ENABLE, 0x0);
+
+ {
+ unsigned int divisor;
+
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x80);
+
+ divisor = TITAN_SERIAL_BASE_BAUD / baud;
+ SERIAL_WRITE(SERIAL_DIVISOR_LSB, divisor & 0xff);
+
+ SERIAL_WRITE(SERIAL_DIVISOR_MSB, (divisor & 0xff00) >> 8);
+ SERIAL_WRITE(SERIAL_LINE_CONTROL, 0x0);
+ }
+
+ SERIAL_WRITE(SERIAL_DATA_FORMAT, data | parity | stop);
+}
+
+static int remoteDebugInitialized = 0;
+
+unsigned char getDebugChar(void)
+{
+ if (!remoteDebugInitialized) {
+ remoteDebugInitialized = 1;
+ debugInit(YOSEMITE_BAUD_115200,
+ YOSEMITE_DATA_8BIT,
+ YOSEMITE_PARITY_NONE, YOSEMITE_STOP_1BIT);
+ }
+
+ while ((SERIAL_READ(SERIAL_LINE_STATUS) & 0x1) == 0);
+ return SERIAL_READ(SERIAL_RCV_BUFFER);
+}
+
+int putDebugChar(unsigned char byte)
+{
+ if (!remoteDebugInitialized) {
+ remoteDebugInitialized = 1;
+ debugInit(YOSEMITE_BAUD_115200,
+ YOSEMITE_DATA_8BIT,
+ YOSEMITE_PARITY_NONE, YOSEMITE_STOP_1BIT);
+ }
+
+ while ((SERIAL_READ(SERIAL_LINE_STATUS) & 0x20) == 0);
+ SERIAL_WRITE(SERIAL_SEND_BUFFER, byte);
+
+ return 1;
+}
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2001, 2002, 2004 Ralf Baechle
+ */
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/termios.h>
+#include <linux/sched.h>
+#include <linux/tty.h>
+
+#include <linux/serial.h>
+#include <linux/serial_core.h>
+#include <asm/serial.h>
+#include <asm/io.h>
+
+/* SUPERIO uart register map */
+struct yo_uartregs {
+ union {
+ volatile u8 rbr; /* read only, DLAB == 0 */
+ volatile u8 thr; /* write only, DLAB == 0 */
+ volatile u8 dll; /* DLAB == 1 */
+ } u1;
+ union {
+ volatile u8 ier; /* DLAB == 0 */
+ volatile u8 dlm; /* DLAB == 1 */
+ } u2;
+ union {
+ volatile u8 iir; /* read only */
+ volatile u8 fcr; /* write only */
+ } u3;
+ volatile u8 iu_lcr;
+ volatile u8 iu_mcr;
+ volatile u8 iu_lsr;
+ volatile u8 iu_msr;
+ volatile u8 iu_scr;
+};
+
+#define iu_rbr u1.rbr
+#define iu_thr u1.thr
+#define iu_dll u1.dll
+#define iu_ier u2.ier
+#define iu_dlm u2.dlm
+#define iu_iir u3.iir
+#define iu_fcr u3.fcr
+
+#define IO_BASE_64 0x9000000000000000ULL
+
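+/*
+ * The SuperIO UART sits above the 32-bit physical address range, so
+ * these helpers temporarily enable 64-bit kernel addressing (ST0_KX),
+ * form an XKPHYS uncached address and access it with 64-bit loads.
+ * The sll $0 nops let the status register change settle.
+ */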
+static unsigned char readb_outer_space(unsigned long phys)
+{
+ unsigned long long vaddr = IO_BASE_64 | phys;
+ unsigned char res;
+ unsigned int sr;
+
+ sr = read_c0_status();
+ write_c0_status((sr | ST0_KX) & ~ ST0_IE);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ __asm__ __volatile__ (
+ " .set mips3 \n"
+ " ld %0, (%0) \n"
+ " lbu %0, (%0) \n"
+ " .set mips0 \n"
+ : "=r" (res)
+ : "0" (&vaddr));
+
+ write_c0_status(sr);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ return res;
+}
+
+static void writeb_outer_space(unsigned long phys, unsigned char c)
+{
+ unsigned long long vaddr = IO_BASE_64 | phys;
+ unsigned long tmp;
+ unsigned int sr;
+
+ sr = read_c0_status();
+ write_c0_status((sr | ST0_KX) & ~ ST0_IE);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+
+ __asm__ __volatile__ (
+ " .set mips3 \n"
+ " ld %0, (%1) \n"
+ " sb %2, (%0) \n"
+ " .set mips0 \n"
+ : "=r" (tmp)
+ : "r" (&vaddr), "r" (c));
+
+ write_c0_status(sr);
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+ __asm__("sll $0, $0, 2\n");
+}
+
+void prom_putchar(char c)
+{
+ unsigned long lsr = 0xfd000008UL + offsetof(struct yo_uartregs, iu_lsr);
+ unsigned long thr = 0xfd000008UL + offsetof(struct yo_uartregs, iu_thr);
+
+ while ((readb_outer_space(lsr) & 0x20) == 0);
+ writeb_outer_space(thr, c);
+}
+
+char __init prom_getchar(void)
+{
+ return 0;
+}
--- /dev/null
+/*
+ * Kernel unwinding support
+ *
+ * (c) 2002-2004 Randolph Chung <tausq@debian.org>
+ *
+ * Derived partially from the IA64 implementation. The PA-RISC
+ * Runtime Architecture Document is also a useful reference to
+ * understand what is happening here
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/kallsyms.h>
+
+#include <asm/uaccess.h>
+#include <asm/assembly.h>
+
+#include <asm/unwind.h>
+
+/* #define DEBUG 1 */
+#ifdef DEBUG
+#define dbg(x...) printk(x)
+#else
+#define dbg(x...)
+#endif
+
+extern struct unwind_table_entry __start___unwind[];
+extern struct unwind_table_entry __stop___unwind[];
+
+static spinlock_t unwind_lock = SPIN_LOCK_UNLOCKED;
+/*
+ * the kernel unwind block is not dynamically allocated so that
+ * we can call unwind_init as early in the bootup process as
+ * possible (before the slab allocator is initialized)
+ */
+static struct unwind_table kernel_unwind_table;
+static struct unwind_table *unwind_tables, *unwind_tables_end;
+
+
+static inline const struct unwind_table_entry *
+find_unwind_entry_in_table(const struct unwind_table *table, unsigned long addr)
+{
+ const struct unwind_table_entry *e = NULL;
+ unsigned long lo, hi, mid;
+
+ lo = 0;
+ hi = table->length - 1;
+
+ while (lo <= hi) {
+ mid = (hi - lo) / 2 + lo;
+ e = &table->table[mid];
+ if (addr < e->region_start)
+ hi = mid - 1;
+ else if (addr > e->region_end)
+ lo = mid + 1;
+ else
+ return e;
+ }
+
+ return NULL;
+}
+
+static const struct unwind_table_entry *
+find_unwind_entry(unsigned long addr)
+{
+ struct unwind_table *table = unwind_tables;
+ const struct unwind_table_entry *e = NULL;
+
+ if (addr >= kernel_unwind_table.start &&
+ addr <= kernel_unwind_table.end)
+ e = find_unwind_entry_in_table(&kernel_unwind_table, addr);
+ else
+ for (; table; table = table->next) {
+ if (addr >= table->start &&
+ addr <= table->end)
+ e = find_unwind_entry_in_table(table, addr);
+ if (e)
+ break;
+ }
+
+ return e;
+}
+
+static void
+unwind_table_init(struct unwind_table *table, const char *name,
+ unsigned long base_addr, unsigned long gp,
+ void *table_start, void *table_end)
+{
+ struct unwind_table_entry *start = table_start;
+ struct unwind_table_entry *end =
+ (struct unwind_table_entry *)table_end - 1;
+
+ table->name = name;
+ table->base_addr = base_addr;
+ table->gp = gp;
+ table->start = base_addr + start->region_start;
+ table->end = base_addr + end->region_end;
+ table->table = (struct unwind_table_entry *)table_start;
+ table->length = end - start + 1;
+ table->next = NULL;
+
+ for (; start <= end; start++) {
+ if (start < end &&
+ start->region_end > (start+1)->region_start) {
+ printk("WARNING: Out of order unwind entry! %p and %p\n", start, start+1);
+ }
+
+ start->region_start += base_addr;
+ start->region_end += base_addr;
+ }
+}
+
+void *
+unwind_table_add(const char *name, unsigned long base_addr,
+ unsigned long gp,
+ void *start, void *end)
+{
+ struct unwind_table *table;
+ unsigned long flags;
+
+ table = kmalloc(sizeof(struct unwind_table), GFP_USER);
+ if (table == NULL)
+ return NULL;
+ unwind_table_init(table, name, base_addr, gp, start, end);
+ spin_lock_irqsave(&unwind_lock, flags);
+ if (unwind_tables)
+ {
+ unwind_tables_end->next = table;
+ unwind_tables_end = table;
+ }
+ else
+ {
+ unwind_tables = unwind_tables_end = table;
+ }
+ spin_unlock_irqrestore(&unwind_lock, flags);
+
+ return table;
+}
+
+/* Import the kernel's own unwind info; runs early as an initcall */
+static int unwind_init(void)
+{
+ long start, stop;
+ register unsigned long gp __asm__ ("r27");
+
+ start = (long)&__start___unwind[0];
+ stop = (long)&__stop___unwind[0];
+
+ printk("unwind_init: start = 0x%lx, end = 0x%lx, entries = %lu\n",
+ start, stop,
+ (stop - start) / sizeof(struct unwind_table_entry));
+
+ unwind_table_init(&kernel_unwind_table, "kernel", KERNEL_START,
+ gp,
+ &__start___unwind[0], &__stop___unwind[0]);
+#if 0
+ {
+ int i;
+ for (i = 0; i < 10; i++)
+ {
+ printk("region 0x%x-0x%x\n",
+ __start___unwind[i].region_start,
+ __start___unwind[i].region_end);
+ }
+ }
+#endif
+ return 0;
+}
+
+static void unwind_frame_regs(struct unwind_frame_info *info)
+{
+ const struct unwind_table_entry *e;
+ unsigned long npc;
+ unsigned int insn;
+ long frame_size = 0;
+ int looking_for_rp, rpoffset = 0;
+
+ e = find_unwind_entry(info->ip);
+ if (e == NULL) {
+ unsigned long sp;
+ extern char _stext[], _etext[];
+
+ dbg("Cannot find unwind entry for 0x%lx; forced unwinding\n", info->ip);
+
+#ifdef CONFIG_KALLSYMS
+ /* Handle some frequent special cases.... */
+ {
+ char symname[KSYM_NAME_LEN+1];
+ char *modname;
+ unsigned long symsize, offset;
+
+ kallsyms_lookup(info->ip, &symsize, &offset,
+ &modname, symname);
+
+ dbg("info->ip = 0x%lx, name = %s\n", info->ip, symname);
+
+ if (strcmp(symname, "_switch_to_ret") == 0) {
+ info->prev_sp = info->sp - CALLEE_SAVE_FRAME_SIZE;
+ info->prev_ip = *(unsigned long *)(info->prev_sp - RP_OFFSET);
+ dbg("_switch_to_ret @ %lx - setting "
+ "prev_sp=%lx prev_ip=%lx\n",
+ info->ip, info->prev_sp,
+ info->prev_ip);
+ return;
+ } else if (strcmp(symname, "ret_from_kernel_thread") == 0 ||
+ strcmp(symname, "syscall_exit") == 0) {
+ info->prev_ip = info->prev_sp = 0;
+ return;
+ }
+ }
+#endif
+
+ /* Since we are doing the unwinding blind, we don't know if
+ we are adjusting the stack correctly or extracting the rp
+ correctly. The rp is checked to see if it belongs to the
+ kernel text section, if not we assume we don't have a
+ correct stack frame and we continue to unwind the stack.
+ This is not quite correct, and will fail for loadable
+ modules. */
+ sp = info->sp & ~63;
+ do {
+ unsigned long tmp;
+
+ info->prev_sp = sp - 64;
+ info->prev_ip = 0;
+ if (get_user(tmp, (unsigned long *)(info->prev_sp - RP_OFFSET)))
+ break;
+ info->prev_ip = tmp;
+ sp = info->prev_sp;
+ } while (info->prev_ip < (unsigned long)_stext ||
+ info->prev_ip > (unsigned long)_etext);
+
+ info->rp = 0;
+
+ dbg("analyzing func @ %lx with no unwind info, setting "
+ "prev_sp=%lx prev_ip=%lx\n", info->ip,
+ info->prev_sp, info->prev_ip);
+ } else {
+ dbg("e->start = 0x%x, e->end = 0x%x, Save_SP = %d, "
+ "Save_RP = %d size = %u\n", e->region_start,
+ e->region_end, e->Save_SP, e->Save_RP,
+ e->Total_frame_size);
+
+ looking_for_rp = e->Save_RP;
+
+ for (npc = e->region_start;
+ (frame_size < (e->Total_frame_size << 3) ||
+ looking_for_rp) &&
+ npc < info->ip;
+ npc += 4) {
+
+ insn = *(unsigned int *)npc;
+
+ if ((insn & 0xffffc000) == 0x37de0000 ||
+ (insn & 0xffe00000) == 0x6fc00000) {
+ /* ldo X(sp), sp, or stwm X,D(sp) */
+ frame_size += (insn & 0x1 ? -1 << 13 : 0) |
+ ((insn & 0x3fff) >> 1);
+ dbg("analyzing func @ %lx, insn=%08x @ "
+ "%lx, frame_size = %ld\n", info->ip,
+ insn, npc, frame_size);
+ } else if ((insn & 0xffe00008) == 0x73c00008) {
+ /* std,ma X,D(sp) */
+ frame_size += (insn & 0x1 ? -1 << 13 : 0) |
+ (((insn >> 4) & 0x3ff) << 3);
+ dbg("analyzing func @ %lx, insn=%08x @ "
+ "%lx, frame_size = %ld\n", info->ip,
+ insn, npc, frame_size);
+ } else if (insn == 0x6bc23fd9) {
+ /* stw rp,-20(sp) */
+ rpoffset = 20;
+ looking_for_rp = 0;
+ dbg("analyzing func @ %lx, insn=stw rp,"
+ "-20(sp) @ %lx\n", info->ip, npc);
+ } else if (insn == 0x0fc212c1) {
+ /* std rp,-16(sr0,sp) */
+ rpoffset = 16;
+ looking_for_rp = 0;
+ dbg("analyzing func @ %lx, insn=std rp,"
+ "-16(sp) @ %lx\n", info->ip, npc);
+ }
+ }
+
+ info->prev_sp = info->sp - frame_size;
+ if (rpoffset)
+ info->rp = *(unsigned long *)(info->prev_sp - rpoffset);
+ info->prev_ip = info->rp;
+ info->rp = 0;
+
+ dbg("analyzing func @ %lx, setting prev_sp=%lx "
+ "prev_ip=%lx npc=%lx\n", info->ip, info->prev_sp,
+ info->prev_ip, npc);
+ }
+}
+
+void unwind_frame_init(struct unwind_frame_info *info, struct task_struct *t,
+ unsigned long sp, unsigned long ip, unsigned long rp)
+{
+ memset(info, 0, sizeof(struct unwind_frame_info));
+ info->t = t;
+ info->sp = sp;
+ info->ip = ip;
+ info->rp = rp;
+
+ dbg("(%d) Start unwind from sp=%08lx ip=%08lx\n",
+ t ? (int)t->pid : -1, info->sp, info->ip);
+}
+
+void unwind_frame_init_from_blocked_task(struct unwind_frame_info *info, struct task_struct *t)
+{
+ struct pt_regs *regs = &t->thread.regs;
+ unwind_frame_init(info, t, regs->ksp, regs->kpc, 0);
+}
+
+void unwind_frame_init_running(struct unwind_frame_info *info, struct pt_regs *regs)
+{
+ unwind_frame_init(info, current, regs->gr[30], regs->iaoq[0],
+ regs->gr[2]);
+}
+
+int unwind_once(struct unwind_frame_info *next_frame)
+{
+ unwind_frame_regs(next_frame);
+
+ if (next_frame->prev_sp == 0 ||
+ next_frame->prev_ip == 0)
+ return -1;
+
+ next_frame->sp = next_frame->prev_sp;
+ next_frame->ip = next_frame->prev_ip;
+ next_frame->prev_sp = 0;
+ next_frame->prev_ip = 0;
+
+ dbg("(%d) Continue unwind to sp=%08lx ip=%08lx\n",
+ next_frame->t ? (int)next_frame->t->pid : -1,
+ next_frame->sp, next_frame->ip);
+
+ return 0;
+}
+
+int unwind_to_user(struct unwind_frame_info *info)
+{
+ int ret;
+
+ do {
+ ret = unwind_once(info);
+ } while (!ret && !(info->ip & 3));
+
+ return ret;
+}
+
+module_init(unwind_init);
--- /dev/null
+/*
+ * arch/ppc/boot/simple/mpc52xx_tty.c
+ *
+ * Minimal serial functions needed to send messages out a MPC52xx
+ * Programmable Serial Controller (PSC).
+ *
+ * Author: Dale Farnsworth <dfarnsworth@mvista.com>
+ *
+ * 2003-2004 (c) MontaVista, Software, Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is licensed
+ * "as is" without any warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <asm/uaccess.h>
+#include <asm/mpc52xx.h>
+#include <asm/mpc52xx_psc.h>
+#include <asm/serial.h>
+#include <asm/io.h>
+#include <asm/time.h>
+
+#if MPC52xx_PF_CONSOLE_PORT == 0
+#define MPC52xx_CONSOLE MPC52xx_PSC1
+#define MPC52xx_PSC_CONFIG_SHIFT 0
+#elif MPC52xx_PF_CONSOLE_PORT == 1
+#define MPC52xx_CONSOLE MPC52xx_PSC2
+#define MPC52xx_PSC_CONFIG_SHIFT 4
+#elif MPC52xx_PF_CONSOLE_PORT == 2
+#define MPC52xx_CONSOLE MPC52xx_PSC3
+#define MPC52xx_PSC_CONFIG_SHIFT 8
+#else
+#error "MPC52xx_PF_CONSOLE_PORT not defined"
+#endif
+
+static struct mpc52xx_psc *psc = (struct mpc52xx_psc *)MPC52xx_CONSOLE;
+
+/* The time base (like the decrementer) counts at the system bus
+ * clock frequency divided by four.  The most accurate reference is
+ * the RTC, so we count time-base ticks during one RTC tick -- sped
+ * up to 1/64 second below -- and scale by 4 * 64 (the << 8) to get
+ * the XLB bus clock frequency.
+ */
+int
+mpc52xx_ipbfreq(void)
+{
+ struct mpc52xx_rtc *rtc = (struct mpc52xx_rtc*)MPC52xx_RTC;
+ struct mpc52xx_cdm *cdm = (struct mpc52xx_cdm*)MPC52xx_CDM;
+ int current_time, previous_time;
+ int tbl_start, tbl_end;
+ int xlbfreq, ipbfreq;
+
+ out_be32(&rtc->dividers, 0x8f1f0000); /* Set RTC 64x faster */
+ previous_time = in_be32(&rtc->time);
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_start = get_tbl();
+ previous_time = current_time;
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_end = get_tbl();
+ out_be32(&rtc->dividers, 0xffff0000); /* Restore RTC */
+
+ xlbfreq = (tbl_end - tbl_start) << 8;
+ ipbfreq = (in_8(&cdm->ipb_clk_sel) & 1) ? xlbfreq / 2 : xlbfreq;
+
+ return ipbfreq;
+}
+
+unsigned long
+serial_init(int ignored, void *ignored2)
+{
+ struct mpc52xx_gpio *gpio = (struct mpc52xx_gpio *)MPC52xx_GPIO;
+ int divisor;
+ int mode1;
+ int mode2;
+ u32 val32;
+
+ static int been_here = 0;
+
+ if (been_here)
+ return 0;
+
+ been_here = 1;
+
+ val32 = in_be32(&gpio->port_config);
+ val32 &= ~(0x7 << MPC52xx_PSC_CONFIG_SHIFT);
+ val32 |= MPC52xx_GPIO_PSC_CONFIG_UART_WITHOUT_CD
+ << MPC52xx_PSC_CONFIG_SHIFT;
+ out_be32(&gpio->port_config, val32);
+
+ out_8(&psc->command, MPC52xx_PSC_RST_TX
+ | MPC52xx_PSC_RX_DISABLE | MPC52xx_PSC_TX_ENABLE);
+ out_8(&psc->command, MPC52xx_PSC_RST_RX);
+
+ out_be32(&psc->sicr, 0x0);
+ out_be16(&psc->mpc52xx_psc_clock_select, 0xdd00);
+ out_be16(&psc->tfalarm, 0xf8);
+
+ out_8(&psc->command, MPC52xx_PSC_SEL_MODE_REG_1
+ | MPC52xx_PSC_RX_ENABLE
+ | MPC52xx_PSC_TX_ENABLE);
+
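+	/* divisor = ipb_freq / (baud * 32), rounded to nearest:
+	 * dividing by (baud * 16) and then halving via the +1 >> 1
+	 * rounds the final result. */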
+ divisor = ((mpc52xx_ipbfreq()
+ / (CONFIG_SERIAL_MPC52xx_CONSOLE_BAUD * 16)) + 1) >> 1;
+
+ mode1 = MPC52xx_PSC_MODE_8_BITS | MPC52xx_PSC_MODE_PARNONE
+ | MPC52xx_PSC_MODE_ERR;
+ mode2 = MPC52xx_PSC_MODE_ONE_STOP;
+
+ out_8(&psc->ctur, divisor>>8);
+ out_8(&psc->ctlr, divisor);
+ out_8(&psc->command, MPC52xx_PSC_SEL_MODE_REG_1);
+ out_8(&psc->mode, mode1);
+ out_8(&psc->mode, mode2);
+
+ return 0; /* ignored */
+}
+
+void
+serial_putc(void *ignored, const char c)
+{
+ serial_init(0, 0);
+
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP)) ;
+ out_8(&psc->mpc52xx_psc_buffer_8, c);
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_TXEMP)) ;
+}
+
+char
+serial_getc(void *ignored)
+{
+ while (!(in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_RXRDY)) ;
+
+ return in_8(&psc->mpc52xx_psc_buffer_8);
+}
+
+int
+serial_tstc(void *ignored)
+{
+ return (in_be16(&psc->mpc52xx_psc_status) & MPC52xx_PSC_SR_RXRDY) != 0;
+}
--- /dev/null
+/*
+ * PowerPC version derived from arch/arm/mm/consistent.c
+ * Copyright (C) 2001 Dan Malek (dmalek@jlc.net)
+ *
+ * Copyright (C) 2000 Russell King
+ *
+ * Consistent memory allocators. Used for DMA devices that want to
+ * share uncached memory with the processor core. The function return
+ * is the virtual address and 'dma_handle' is the physical address.
+ * Mostly stolen from the ARM port, with some changes for PowerPC.
+ * -- Dan
+ *
+ * Reorganized to get rid of the arch-specific consistent_* functions
+ * and provide non-coherent implementations for the DMA API. -Matt
+ *
+ * Added in_interrupt() safe dma_alloc_coherent()/dma_free_coherent()
+ * implementation. This is pulled straight from ARM and barely
+ * modified. -Matt
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/stddef.h>
+#include <linux/vmalloc.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/bootmem.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/hardirq.h>
+
+#include <asm/pgalloc.h>
+#include <asm/prom.h>
+#include <asm/io.h>
+#include <asm/mmu_context.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/uaccess.h>
+#include <asm/smp.h>
+#include <asm/machdep.h>
+
+int map_page(unsigned long va, phys_addr_t pa, int flags);
+
+#include <asm/tlbflush.h>
+
+/*
+ * This address range defaults to a value that is safe for all
+ * platforms which currently set CONFIG_NOT_COHERENT_CACHE. It
+ * can be further configured for specific applications under
+ * the "Advanced Setup" menu. -Matt
+ */
+#define CONSISTENT_BASE (CONFIG_CONSISTENT_START)
+#define CONSISTENT_END (CONFIG_CONSISTENT_START + CONFIG_CONSISTENT_SIZE)
+#define CONSISTENT_OFFSET(x) (((unsigned long)(x) - CONSISTENT_BASE) >> PAGE_SHIFT)
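+
+/*
+ * CONSISTENT_OFFSET() turns a virtual address inside the consistent
+ * region into an index into the consistent_pte table below.
+ */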
+
+/*
+ * This is the page table (2MB) covering uncached, DMA consistent allocations
+ */
+static pte_t *consistent_pte;
+static spinlock_t consistent_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * VM region handling support.
+ *
+ * This should become something generic, handling VM region allocations for
+ * vmalloc and similar (ioremap, module space, etc).
+ *
+ * I envisage vmalloc()'s supporting vm_struct becoming:
+ *
+ * struct vm_struct {
+ * struct vm_region region;
+ * unsigned long flags;
+ * struct page **pages;
+ * unsigned int nr_pages;
+ * unsigned long phys_addr;
+ * };
+ *
+ * get_vm_area() would then call vm_region_alloc with an appropriate
+ * struct vm_region head (eg):
+ *
+ * struct vm_region vmalloc_head = {
+ * .vm_list = LIST_HEAD_INIT(vmalloc_head.vm_list),
+ * .vm_start = VMALLOC_START,
+ * .vm_end = VMALLOC_END,
+ * };
+ *
+ * However, vmalloc_head.vm_start is variable (typically, it is dependent on
+ * the amount of RAM found at boot time.) I would imagine that get_vm_area()
+ * would have to initialise this each time prior to calling vm_region_alloc().
+ */
+struct vm_region {
+ struct list_head vm_list;
+ unsigned long vm_start;
+ unsigned long vm_end;
+};
+
+static struct vm_region consistent_head = {
+ .vm_list = LIST_HEAD_INIT(consistent_head.vm_list),
+ .vm_start = CONSISTENT_BASE,
+ .vm_end = CONSISTENT_END,
+};
+
+static struct vm_region *
+vm_region_alloc(struct vm_region *head, size_t size, int gfp)
+{
+ unsigned long addr = head->vm_start, end = head->vm_end - size;
+ unsigned long flags;
+ struct vm_region *c, *new;
+
+ new = kmalloc(sizeof(struct vm_region), gfp);
+ if (!new)
+ goto out;
+
+ spin_lock_irqsave(&consistent_lock, flags);
+
+ list_for_each_entry(c, &head->vm_list, vm_list) {
+ if ((addr + size) < addr)
+ goto nospc;
+ if ((addr + size) <= c->vm_start)
+ goto found;
+ addr = c->vm_end;
+ if (addr > end)
+ goto nospc;
+ }
+
+ found:
+ /*
+ * Insert this entry _before_ the one we found.
+ */
+ list_add_tail(&new->vm_list, &c->vm_list);
+ new->vm_start = addr;
+ new->vm_end = addr + size;
+
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ return new;
+
+ nospc:
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ kfree(new);
+ out:
+ return NULL;
+}
+
+static struct vm_region *vm_region_find(struct vm_region *head, unsigned long addr)
+{
+ struct vm_region *c;
+
+ list_for_each_entry(c, &head->vm_list, vm_list) {
+ if (c->vm_start == addr)
+ goto out;
+ }
+ c = NULL;
+ out:
+ return c;
+}
+
+/*
+ * Allocate DMA-coherent memory space and return both the kernel remapped
+ * virtual and bus address for that space.
+ */
+void *
+__dma_alloc_coherent(size_t size, dma_addr_t *handle, int gfp)
+{
+ struct page *page;
+ struct vm_region *c;
+ unsigned long order;
+ u64 mask = 0x00ffffff, limit; /* ISA default */
+
+ if (!consistent_pte) {
+ printk(KERN_ERR "%s: not initialised\n", __func__);
+ dump_stack();
+ return NULL;
+ }
+
+ size = PAGE_ALIGN(size);
+ limit = (mask + 1) & ~mask;
+ if ((limit && size >= limit) || size >= (CONSISTENT_END - CONSISTENT_BASE)) {
+ printk(KERN_WARNING "coherent allocation too big (requested %#x mask %#Lx)\n",
+ size, mask);
+ return NULL;
+ }
+
+ order = get_order(size);
+
+ if (mask != 0xffffffff)
+ gfp |= GFP_DMA;
+
+ page = alloc_pages(gfp, order);
+ if (!page)
+ goto no_page;
+
+ /*
+ * Invalidate any data that might be lurking in the
+ * kernel direct-mapped region for device DMA.
+ */
+ {
+ unsigned long kaddr = (unsigned long)page_address(page);
+ memset(page_address(page), 0, size);
+ flush_dcache_range(kaddr, kaddr + size);
+ }
+
+ /*
+ * Allocate a virtual address in the consistent mapping region.
+ */
+ c = vm_region_alloc(&consistent_head, size,
+ gfp & ~(__GFP_DMA | __GFP_HIGHMEM));
+ if (c) {
+ pte_t *pte = consistent_pte + CONSISTENT_OFFSET(c->vm_start);
+ struct page *end = page + (1 << order);
+
+ /*
+ * Set the "dma handle"
+ */
+ *handle = page_to_bus(page);
+
+ do {
+ BUG_ON(!pte_none(*pte));
+
+ set_page_count(page, 1);
+ SetPageReserved(page);
+ set_pte(pte, mk_pte(page, pgprot_noncached(PAGE_KERNEL)));
+ page++;
+ pte++;
+ } while (size -= PAGE_SIZE);
+
+ /*
+ * Free the otherwise unused pages.
+ */
+ while (page < end) {
+ set_page_count(page, 1);
+ __free_page(page);
+ page++;
+ }
+
+ return (void *)c->vm_start;
+ }
+
+ if (page)
+ __free_pages(page, order);
+ no_page:
+ return NULL;
+}
+EXPORT_SYMBOL(__dma_alloc_coherent);
+
+/*
+ * free a page as defined by the above mapping.
+ */
+void __dma_free_coherent(size_t size, void *vaddr)
+{
+ struct vm_region *c;
+ unsigned long flags;
+ pte_t *ptep;
+
+ size = PAGE_ALIGN(size);
+
+ spin_lock_irqsave(&consistent_lock, flags);
+
+ c = vm_region_find(&consistent_head, (unsigned long)vaddr);
+ if (!c)
+ goto no_area;
+
+ if ((c->vm_end - c->vm_start) != size) {
+ printk(KERN_ERR "%s: freeing wrong coherent size (%ld != %d)\n",
+ __func__, c->vm_end - c->vm_start, size);
+ dump_stack();
+ size = c->vm_end - c->vm_start;
+ }
+
+ ptep = consistent_pte + CONSISTENT_OFFSET(c->vm_start);
+ do {
+ pte_t pte = ptep_get_and_clear(ptep);
+ unsigned long pfn;
+
+ ptep++;
+
+ if (!pte_none(pte) && pte_present(pte)) {
+ pfn = pte_pfn(pte);
+
+ if (pfn_valid(pfn)) {
+ struct page *page = pfn_to_page(pfn);
+ ClearPageReserved(page);
+
+ __free_page(page);
+ continue;
+ }
+ }
+
+ printk(KERN_CRIT "%s: bad page in kernel page table\n",
+ __func__);
+ } while (size -= PAGE_SIZE);
+
+ flush_tlb_kernel_range(c->vm_start, c->vm_end);
+
+ list_del(&c->vm_list);
+
+ spin_unlock_irqrestore(&consistent_lock, flags);
+
+ kfree(c);
+ return;
+
+ no_area:
+ spin_unlock_irqrestore(&consistent_lock, flags);
+ printk(KERN_ERR "%s: trying to free invalid coherent area: %p\n",
+ __func__, vaddr);
+ dump_stack();
+}
+EXPORT_SYMBOL(__dma_free_coherent);
+
+/*
+ * Initialise the consistent memory allocation.
+ */
+static int __init dma_alloc_init(void)
+{
+ pgd_t *pgd;
+ pmd_t *pmd;
+ pte_t *pte;
+ int ret = 0;
+
+ spin_lock(&init_mm.page_table_lock);
+
+ do {
+ pgd = pgd_offset(&init_mm, CONSISTENT_BASE);
+ pmd = pmd_alloc(&init_mm, pgd, CONSISTENT_BASE);
+ if (!pmd) {
+ printk(KERN_ERR "%s: no pmd tables\n", __func__);
+ ret = -ENOMEM;
+ break;
+ }
+ WARN_ON(!pmd_none(*pmd));
+
+ pte = pte_alloc_kernel(&init_mm, pmd, CONSISTENT_BASE);
+ if (!pte) {
+ printk(KERN_ERR "%s: no pte tables\n", __func__);
+ ret = -ENOMEM;
+ break;
+ }
+
+ consistent_pte = pte;
+ } while (0);
+
+ spin_unlock(&init_mm.page_table_lock);
+
+ return ret;
+}
+
+core_initcall(dma_alloc_init);
+
+/*
+ * make an area consistent.
+ */
+void __dma_sync(void *vaddr, size_t size, int direction)
+{
+ unsigned long start = (unsigned long)vaddr;
+ unsigned long end = start + size;
+
+ switch (direction) {
+ case DMA_NONE:
+ BUG();
+ case DMA_FROM_DEVICE: /* invalidate only */
+ invalidate_dcache_range(start, end);
+ break;
+ case DMA_TO_DEVICE: /* writeback only */
+ clean_dcache_range(start, end);
+ break;
+ case DMA_BIDIRECTIONAL: /* writeback and invalidate */
+ flush_dcache_range(start, end);
+ break;
+ }
+}
+EXPORT_SYMBOL(__dma_sync);
+
+#ifdef CONFIG_HIGHMEM
+/*
+ * __dma_sync_page() implementation for systems using highmem.
+ * In this case, each page of a buffer must be kmapped/kunmapped
+ * in order to have a virtual address for __dma_sync(). This must
+ * not sleep so kmap_atomic()/kunmap_atomic() are used.
+ *
+ * Note: yes, it is possible and correct to have a buffer extend
+ * beyond the first page.
+ */
+static inline void __dma_sync_page_highmem(struct page *page,
+ unsigned long offset, size_t size, int direction)
+{
+	size_t seg_size = min((size_t)(PAGE_SIZE - offset), size);
+ size_t cur_size = seg_size;
+ unsigned long flags, start, seg_offset = offset;
+ int nr_segs = PAGE_ALIGN(size + (PAGE_SIZE - offset))/PAGE_SIZE;
+ int seg_nr = 0;
+
+ local_irq_save(flags);
+
+ do {
+ start = (unsigned long)kmap_atomic(page + seg_nr,
+ KM_PPC_SYNC_PAGE) + seg_offset;
+
+ /* Sync this buffer segment */
+ __dma_sync((void *)start, seg_size, direction);
+ kunmap_atomic((void *)start, KM_PPC_SYNC_PAGE);
+ seg_nr++;
+
+ /* Calculate next buffer segment size */
+ seg_size = min((size_t)PAGE_SIZE, size - cur_size);
+
+ /* Add the segment size to our running total */
+ cur_size += seg_size;
+ seg_offset = 0;
+ } while (seg_nr < nr_segs);
+
+ local_irq_restore(flags);
+}
+#endif /* CONFIG_HIGHMEM */
+
+/*
+ * __dma_sync_page makes memory consistent. identical to __dma_sync, but
+ * takes a struct page instead of a virtual address
+ */
+void __dma_sync_page(struct page *page, unsigned long offset,
+ size_t size, int direction)
+{
+#ifdef CONFIG_HIGHMEM
+ __dma_sync_page_highmem(page, offset, size, direction);
+#else
+ unsigned long start = (unsigned long)page_address(page) + offset;
+ __dma_sync((void *)start, size, direction);
+#endif
+}
+EXPORT_SYMBOL(__dma_sync_page);
--- /dev/null
+/*
+ * arch/ppc/kernel/head_e500.S
+ *
+ * Kernel execution entry point code.
+ *
+ * Copyright (c) 1995-1996 Gary Thomas <gdt@linuxppc.org>
+ * Initial PowerPC version.
+ * Copyright (c) 1996 Cort Dougan <cort@cs.nmt.edu>
+ * Rewritten for PReP
+ * Copyright (c) 1996 Paul Mackerras <paulus@cs.anu.edu.au>
+ * Low-level exception handlers, MMU support, and rewrite.
+ * Copyright (c) 1997 Dan Malek <dmalek@jlc.net>
+ * PowerPC 8xx modifications.
+ * Copyright (c) 1998-1999 TiVo, Inc.
+ * PowerPC 403GCX modifications.
+ * Copyright (c) 1999 Grant Erickson <grant@lcse.umn.edu>
+ * PowerPC 403GCX/405GP modifications.
+ * Copyright 2000 MontaVista Software Inc.
+ * PPC405 modifications
+ * PowerPC 403GCX/405GP modifications.
+ * Author: MontaVista Software, Inc.
+ * frank_rowand@mvista.com or source@mvista.com
+ * debbie_chu@mvista.com
+ * Copyright 2002-2004 MontaVista Software, Inc.
+ * PowerPC 44x support, Matt Porter <mporter@kernel.crashing.org>
+ * Copyright 2004 Freescale Semiconductor, Inc
+ * PowerPC e500 modifications, Kumar Gala <kumar.gala@freescale.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/mmu.h>
+#include <asm/pgtable.h>
+#include <asm/cputable.h>
+#include <asm/thread_info.h>
+#include <asm/ppc_asm.h>
+#include <asm/offsets.h>
+#include "head_booke.h"
+
+/* As with the other PowerPC ports, it is expected that when code
+ * execution begins here, the following registers contain valid, yet
+ * optional, information:
+ *
+ * r3 - Board info structure pointer (DRAM, frequency, MAC address, etc.)
+ * r4 - Starting address of the init RAM disk
+ * r5 - Ending address of the init RAM disk
+ * r6 - Start of kernel command line string (e.g. "mem=128")
+ * r7 - End of kernel command line string
+ *
+ */
+ .text
+_GLOBAL(_stext)
+_GLOBAL(_start)
+ /*
+ * Reserve a word at a fixed location to store the address
+ * of abatron_pteptrs
+ */
+ nop
+/*
+ * Save parameters we are passed
+ */
+ mr r31,r3
+ mr r30,r4
+ mr r29,r5
+ mr r28,r6
+ mr r27,r7
+ li r24,0 /* CPU number */
+
+/* We try not to make any assumptions about how the boot loader
+ * set up or used the TLBs. We invalidate all mappings from the
+ * boot loader and load a single entry in TLB1[0] to map the
+ * first 16M of kernel memory. Any boot info passed from the
+ * bootloader needs to live in this first 16M.
+ *
+ * Requirement on bootloader:
+ * - The page we're executing in needs to reside in TLB1 and
+ * have IPROT=1. If not an invalidate broadcast could
+ * evict the entry we're currently executing in.
+ *
+ * r3 = Index of TLB1 we're executing in
+ * r4 = Current MSR[IS]
+ * r5 = Index of TLB1 temp mapping
+ *
+ * Later in mapin_ram we will correctly map lowmem, and resize TLB1[0]
+ * if needed
+ */
+
+/* 1. Find the index of the entry we're executing in */
+ bl invstr /* Find our address */
+invstr: mflr r6 /* Make it accessible */
+ mfmsr r7
+ rlwinm r4,r7,27,31,31 /* extract MSR[IS] */
+ mfspr r7, SPRN_PID0
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* search MSR[IS], SPID=PID0 */
+ mfspr r7,SPRN_MAS1
+ andis. r7,r7,MAS1_VALID@h
+ bne match_TLB
+ mfspr r7,SPRN_PID1
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* search MSR[IS], SPID=PID1 */
+ mfspr r7,SPRN_MAS1
+ andis. r7,r7,MAS1_VALID@h
+ bne match_TLB
+ mfspr r7, SPRN_PID2
+ slwi r7,r7,16
+ or r7,r7,r4
+ mtspr SPRN_MAS6,r7
+ tlbsx 0,r6 /* Fall through, we had to match */
+match_TLB:
+ mfspr r7,SPRN_MAS0
+ rlwinm r3,r7,16,28,31 /* Extract MAS0(Entry) */
+
+ mfspr r7,SPRN_MAS1 /* Ensure IPROT is set */
+ oris r7,r7,MAS1_IPROT@h
+ mtspr SPRN_MAS1,r7
+ tlbwe
+
+/* 2. Invalidate all entries except the entry we're executing in */
+ mfspr r9,SPRN_TLB1CFG
+ andi. r9,r9,0xfff
+ li r6,0 /* Set Entry counter to 0 */
+1: lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r6,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r6) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ mfspr r7,SPRN_MAS1
+ rlwinm r7,r7,0,2,31 /* Clear MAS1 Valid and IPROT */
+ cmpw r3,r6
+ beq skpinv /* Don't update the current execution TLB */
+ mtspr SPRN_MAS1,r7
+ tlbwe
+ isync
+skpinv: addi r6,r6,1 /* Increment */
+ cmpw r6,r9 /* Are we done? */
+ bne 1b /* If not, repeat */
+
+ /* Invalidate TLB0 */
+ li r6,0x04
+ tlbivax 0,r6
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ /* Invalidate TLB1 */
+ li r6,0x0c
+ tlbivax 0,r6
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+/* 3. Setup a temp mapping and jump to it */
+ andi. r5, r3, 0x1 /* Pick a temp entry: r5 = (r3 & 1) + 1 is non-zero and != r3 */
+ addi r5, r5, 0x1
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+
+ /* Just modify the entry ID and EPN for the temp mapping */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ mtspr SPRN_MAS0,r7
+ xori r6,r4,1 /* Setup TMP mapping in the other Address space */
+ slwi r6,r6,12
+ oris r6,r6,(MAS1_VALID|MAS1_IPROT)@h
+ ori r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_4K))@l
+ mtspr SPRN_MAS1,r6
+ mfspr r6,SPRN_MAS2
+ li r7,0 /* temp EPN = 0 */
+ rlwimi r7,r6,0,20,31
+ mtspr SPRN_MAS2,r7
+ tlbwe
+
+ xori r6,r4,1
+ slwi r6,r6,5 /* setup new context with other address space */
+ bl 1f /* Find our address */
+1: mflr r9
+ rlwimi r7,r9,0,20,31
+ addi r7,r7,24
+ mtspr SRR0,r7
+ mtspr SRR1,r6
+ rfi
+
+/* 4. Clear out PIDs & Search info */
+ li r6,0
+ mtspr SPRN_PID0,r6
+ mtspr SPRN_PID1,r6
+ mtspr SPRN_PID2,r6
+ mtspr SPRN_MAS6,r6
+
+/* 5. Invalidate mapping we started in */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r3,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r3) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ li r6,0
+ mtspr SPRN_MAS1,r6
+ tlbwe
+ /* Invalidate TLB1 */
+ li r9,0x0c
+ tlbivax 0,r9
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+/* 6. Setup KERNELBASE mapping in TLB1[0] */
+ lis r6,0x1000 /* Set MAS0(TLBSEL) = TLB1(1), ESEL = 0 */
+ mtspr SPRN_MAS0,r6
+ lis r6,(MAS1_VALID|MAS1_IPROT)@h
+ ori r6,r6,(MAS1_TSIZE(BOOKE_PAGESZ_16M))@l
+ mtspr SPRN_MAS1,r6
+ li r7,0
+ lis r6,KERNELBASE@h
+ ori r6,r6,KERNELBASE@l
+ rlwimi r6,r7,0,20,31
+ mtspr SPRN_MAS2,r6
+ li r7,(MAS3_SX|MAS3_SW|MAS3_SR)
+ mtspr SPRN_MAS3,r7
+ tlbwe
+
+/* 7. Jump to KERNELBASE mapping */
+ li r7,0
+ bl 1f /* Find our address */
+1: mflr r9
+ rlwimi r6,r9,0,20,31
+ addi r6,r6,24
+ mtspr SRR0,r6
+ mtspr SRR1,r7
+ rfi /* start execution out of TLB1[0] entry */
+
+/* 8. Clear out the temp mapping */
+ lis r7,0x1000 /* Set MAS0(TLBSEL) = 1 */
+ rlwimi r7,r5,16,12,15 /* Setup MAS0 = TLBSEL | ESEL(r5) */
+ mtspr SPRN_MAS0,r7
+ tlbre
+ li r8,0 /* Clear MAS1 (VALID off) to invalidate the entry */
+ mtspr SPRN_MAS1,r8
+ tlbwe
+ /* Invalidate TLB1 */
+ li r9,0x0c
+ tlbivax 0,r9
+#ifdef CONFIG_SMP
+ tlbsync
+#endif
+ msync
+
+ /* Establish the interrupt vector offsets */
+ SET_IVOR(0, CriticalInput);
+ SET_IVOR(1, MachineCheck);
+ SET_IVOR(2, DataStorage);
+ SET_IVOR(3, InstructionStorage);
+ SET_IVOR(4, ExternalInput);
+ SET_IVOR(5, Alignment);
+ SET_IVOR(6, Program);
+ SET_IVOR(7, FloatingPointUnavailable);
+ SET_IVOR(8, SystemCall);
+ SET_IVOR(9, AuxillaryProcessorUnavailable);
+ SET_IVOR(10, Decrementer);
+ SET_IVOR(11, FixedIntervalTimer);
+ SET_IVOR(12, WatchdogTimer);
+ SET_IVOR(13, DataTLBError);
+ SET_IVOR(14, InstructionTLBError);
+ SET_IVOR(15, Debug);
+ SET_IVOR(32, SPEUnavailable);
+ SET_IVOR(33, SPEFloatingPointData);
+ SET_IVOR(34, SPEFloatingPointRound);
+ SET_IVOR(35, PerformanceMonitor);
+
+ /* Establish the interrupt vector base */
+ lis r4,interrupt_base@h /* IVPR only uses the high 16-bits */
+ mtspr SPRN_IVPR,r4
+
+ /* Setup the defaults for TLB entries */
+ li r2,MAS4_TSIZED(BOOKE_PAGESZ_4K)
+ mtspr SPRN_MAS4, r2
+
+#if 0
+ /* Enable DOZE */
+ mfspr r2,SPRN_HID0
+ oris r2,r2,HID0_DOZE@h
+ mtspr SPRN_HID0, r2
+#endif
+
+ /*
+ * This is where the main kernel code starts.
+ */
+
+ /* ptr to current */
+ lis r2,init_task@h
+ ori r2,r2,init_task@l
+
+ /* ptr to current thread */
+ addi r4,r2,THREAD /* init task's THREAD */
+ mtspr SPRG3,r4
+
+ /* stack */
+ lis r1,init_thread_union@h
+ ori r1,r1,init_thread_union@l
+ li r0,0
+ stwu r0,THREAD_SIZE-STACK_FRAME_OVERHEAD(r1)
+
+ bl early_init
+
+ mfspr r3,SPRN_TLB1CFG
+ andi. r3,r3,0xfff
+ lis r4,num_tlbcam_entries@ha
+ stw r3,num_tlbcam_entries@l(r4)
+/*
+ * Decide what sort of machine this is and initialize the MMU.
+ */
+ mr r3,r31
+ mr r4,r30
+ mr r5,r29
+ mr r6,r28
+ mr r7,r27
+ bl machine_init
+ bl MMU_init
+
+ /* Setup PTE pointers for the Abatron bdiGDB */
+ lis r6, swapper_pg_dir@h
+ ori r6, r6, swapper_pg_dir@l
+ lis r5, abatron_pteptrs@h
+ ori r5, r5, abatron_pteptrs@l
+ lis r4, KERNELBASE@h
+ ori r4, r4, KERNELBASE@l
+ stw r5, 0(r4) /* Save abatron_pteptrs at a fixed location */
+ stw r6, 0(r5)
+
+ /* Let's move on */
+ lis r4,start_kernel@h
+ ori r4,r4,start_kernel@l
+ lis r3,MSR_KERNEL@h
+ ori r3,r3,MSR_KERNEL@l
+ mtspr SRR0,r4
+ mtspr SRR1,r3
+ rfi /* change context and jump to start_kernel */
+
+/*
+ * Interrupt vector entry code
+ *
+ * The Book E MMUs are always on so we don't need to handle
+ * interrupts in real mode as with previous PPC processors. In
+ * this case we handle interrupts in the kernel virtual address
+ * space.
+ *
+ * Interrupt vectors are dynamically placed relative to the
+ * interrupt prefix as determined by the address of interrupt_base.
+ * The interrupt vectors offsets are programmed using the labels
+ * for each interrupt vector entry.
+ *
+ * Interrupt vectors must be aligned on a 16 byte boundary.
+ * We align on a 32 byte cache line boundary for good measure.
+ */
+
+interrupt_base:
+ /* Critical Input Interrupt */
+ CRITICAL_EXCEPTION(0x0100, CriticalInput, UnknownException)
+
+ /* Machine Check Interrupt */
+ MCHECK_EXCEPTION(0x0200, MachineCheck, MachineCheckException)
+
+ /* Data Storage Interrupt */
+ START_EXCEPTION(DataStorage)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+
+ /*
+ * Check if it was a store fault, if not then bail
+ * because a user tried to access a kernel or
+ * read-protected page. Otherwise, get the
+ * offending address and handle it.
+ */
+ mfspr r10, SPRN_ESR
+ andis. r10, r10, ESR_ST@h
+ beq 2f
+
+ mfspr r10, SPRN_DEAR /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 0, r10, r11
+ bge 2f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+
+ /* Are _PAGE_USER & _PAGE_RW set & _PAGE_HWWRITE not? */
+ andi. r13, r11, _PAGE_RW|_PAGE_USER|_PAGE_HWWRITE
+ cmpwi 0, r13, _PAGE_RW|_PAGE_USER
+ bne 2f /* Bail if not */
+
+ /* Update 'changed'. */
+ ori r11, r11, _PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_HWWRITE
+ stw r11, 0(r12) /* Update Linux page table */
+
+ /* MAS2 is not updated: the entry already exists in the TLB; this
+ * fault was taken to detect the state transition (e.g. COW -> DIRTY)
+ */
+ lis r12, MAS3_RPN@h
+ ori r12, r12, _PAGE_HWEXEC | MAS3_RPN@l
+ and r11, r11, r12
+ rlwimi r11, r11, 31, 27, 27 /* SX <- _PAGE_HWEXEC */
+ ori r11, r11, (MAS3_UW|MAS3_SW|MAS3_UR|MAS3_SR)@l /* set static perms */
+
+ /* update search PID in MAS6, AS = 0 */
+ mfspr r12, SPRN_PID0
+ slwi r12, r12, 16
+ mtspr SPRN_MAS6, r12
+
+ /* find the TLB index that caused the fault. It has to be here. */
+ tlbsx 0, r10
+
+ mtspr SPRN_MAS3,r11
+ tlbwe
+
+ /* Done...restore registers and get out of here. */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ rfi /* Force context change */
+
+2:
+ /*
+ * The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b data_access
+
+ /* Instruction Storage Interrupt */
+ START_EXCEPTION(InstructionStorage)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r5,SPRN_ESR /* Grab the ESR and save it */
+ stw r5,_ESR(r11)
+ mr r4,r12 /* Pass SRR0 as arg2 */
+ li r5,0 /* Pass zero as arg3 */
+ EXC_XFER_EE_LITE(0x0400, handle_page_fault)
+
+ /* External Input Interrupt */
+ EXCEPTION(0x0500, ExternalInput, do_IRQ, EXC_XFER_LITE)
+
+ /* Alignment Interrupt */
+ START_EXCEPTION(Alignment)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r4,SPRN_DEAR /* Grab the DEAR and save it */
+ stw r4,_DEAR(r11)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE(0x0600, AlignmentException)
+
+ /* Program Interrupt */
+ START_EXCEPTION(Program)
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r4,SPRN_ESR /* Grab the ESR and save it */
+ stw r4,_ESR(r11)
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_STD(0x0700, ProgramCheckException)
+
+ /* Floating Point Unavailable Interrupt */
+ EXCEPTION(0x0800, FloatingPointUnavailable, UnknownException, EXC_XFER_EE)
+
+ /* System Call Interrupt */
+ START_EXCEPTION(SystemCall)
+ NORMAL_EXCEPTION_PROLOG
+ EXC_XFER_EE_LITE(0x0c00, DoSyscall)
+
+ /* Auxiliary Processor Unavailable Interrupt */
+ EXCEPTION(0x2900, AuxillaryProcessorUnavailable, UnknownException, EXC_XFER_EE)
+
+ /* Decrementer Interrupt */
+ START_EXCEPTION(Decrementer)
+ NORMAL_EXCEPTION_PROLOG
+ lis r0,TSR_DIS@h /* Setup the DEC interrupt mask */
+ mtspr SPRN_TSR,r0 /* Clear the DEC interrupt */
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_LITE(0x0900, timer_interrupt)
+
+ /* Fixed Interval Timer Interrupt */
+ /* TODO: Add FIT support */
+ EXCEPTION(0x3100, FixedIntervalTimer, UnknownException, EXC_XFER_EE)
+
+ /* Watchdog Timer Interrupt */
+ /* TODO: Add watchdog support */
+ CRITICAL_EXCEPTION(0x3200, WatchdogTimer, UnknownException)
+
+ /* Data TLB Error Interrupt */
+ START_EXCEPTION(DataTLBError)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+ mfspr r10, SPRN_DEAR /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 5, r10, r11
+ blt 5, 3f
+ lis r11, swapper_pg_dir@h
+ ori r11, r11, swapper_pg_dir@l
+
+ mfspr r12,SPRN_MAS1 /* Set TID to 0 */
+ li r13,MAS1_TID@l
+ andc r12,r12,r13
+ mtspr SPRN_MAS1,r12
+
+ b 4f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+ andi. r13, r11, _PAGE_PRESENT
+ beq 2f
+
+ ori r11, r11, _PAGE_ACCESSED
+ stw r11, 0(r12)
+
+ /* Jump to common tlb load */
+ b finish_tlb_load
+2:
+ /* The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b data_access
+
+ /* Instruction TLB Error Interrupt */
+ /*
+ * Nearly the same as above, except we get our
+ * information from different registers and bailout
+ * to a different point.
+ */
+ START_EXCEPTION(InstructionTLBError)
+ mtspr SPRG0, r10 /* Save some working registers */
+ mtspr SPRG1, r11
+ mtspr SPRG4W, r12
+ mtspr SPRG5W, r13
+ mfcr r11
+ mtspr SPRG7W, r11
+ mfspr r10, SRR0 /* Get faulting address */
+
+ /* If we are faulting a kernel address, we have to use the
+ * kernel page tables.
+ */
+ lis r11, TASK_SIZE@h
+ ori r11, r11, TASK_SIZE@l
+ cmplw 5, r10, r11
+ blt 5, 3f
+ lis r11, swapper_pg_dir@h
+ ori r11, r11, swapper_pg_dir@l
+
+ mfspr r12,SPRN_MAS1 /* Set TID to 0 */
+ li r13,MAS1_TID@l
+ andc r12,r12,r13
+ mtspr SPRN_MAS1,r12
+
+ b 4f
+
+ /* Get the PGD for the current thread */
+3:
+ mfspr r11,SPRG3
+ lwz r11,PGDIR(r11)
+
+4:
+ rlwimi r11, r10, 12, 20, 29 /* Create L1 (pgdir/pmd) address */
+ lwz r11, 0(r11) /* Get L1 entry */
+ rlwinm. r12, r11, 0, 0, 19 /* Extract L2 (pte) base address */
+ beq 2f /* Bail if no table */
+
+ rlwimi r12, r10, 22, 20, 29 /* Compute PTE address */
+ lwz r11, 0(r12) /* Get Linux PTE */
+ andi. r13, r11, _PAGE_PRESENT
+ beq 2f
+
+ ori r11, r11, _PAGE_ACCESSED
+ stw r11, 0(r12)
+
+ /* Jump to common TLB load point */
+ b finish_tlb_load
+
+2:
+ /* The bailout. Restore registers to pre-exception conditions
+ * and call the heavyweights to help us out.
+ */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ b InstructionStorage
+
+#ifdef CONFIG_SPE
+ /* SPE Unavailable */
+ START_EXCEPTION(SPEUnavailable)
+ NORMAL_EXCEPTION_PROLOG
+ bne load_up_spe
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE_LITE(0x2010, KernelSPE)
+#else
+ EXCEPTION(0x2020, SPEUnavailable, UnknownException, EXC_XFER_EE)
+#endif /* CONFIG_SPE */
+
+ /* SPE Floating Point Data */
+#ifdef CONFIG_SPE
+ EXCEPTION(0x2030, SPEFloatingPointData, SPEFloatingPointException, EXC_XFER_EE);
+#else
+ EXCEPTION(0x2040, SPEFloatingPointData, UnknownException, EXC_XFER_EE)
+#endif /* CONFIG_SPE */
+
+ /* SPE Floating Point Round */
+ EXCEPTION(0x2050, SPEFloatingPointRound, UnknownException, EXC_XFER_EE)
+
+ /* Performance Monitor */
+ EXCEPTION(0x2060, PerformanceMonitor, UnknownException, EXC_XFER_EE)
+
+ /* Debug Interrupt */
+ DEBUG_EXCEPTION
+
+/*
+ * Local functions
+ */
+ /*
+ * Data TLB exceptions will bail out to this point
+ * if they can't resolve the lightweight TLB fault.
+ */
+data_access:
+ NORMAL_EXCEPTION_PROLOG
+ mfspr r5,SPRN_ESR /* Grab the ESR, save it, pass arg3 */
+ stw r5,_ESR(r11)
+ mfspr r4,SPRN_DEAR /* Grab the DEAR, save it, pass arg2 */
+ andis. r10,r5,(ESR_ILK|ESR_DLK)@h
+ bne 1f
+ EXC_XFER_EE_LITE(0x0300, handle_page_fault)
+1:
+ addi r3,r1,STACK_FRAME_OVERHEAD
+ EXC_XFER_EE_LITE(0x0300, CacheLockingException)
+
+/*
+ * Both the instruction and data TLB miss get to this
+ * point to load the TLB.
+ * r10 - EA of fault
+ * r11 - TLB (info from Linux PTE)
+ * r12, r13 - available to use
+ * CR5 - results of addr < TASK_SIZE
+ * MAS0, MAS1 - loaded with proper value when we get here
+ * MAS2, MAS3 - will need additional info from Linux PTE
+ * Upon exit, we reload everything and RFI.
+ */
+finish_tlb_load:
+ /*
+ * We set execute, because we don't have the granularity to
+ * properly set this at the page level (Linux problem).
+ * Many of these bits are software only. Bits we don't set
+ * here we (properly should) assume have the appropriate value.
+ */
+
+ mfspr r12, SPRN_MAS2
+ rlwimi r12, r11, 26, 27, 31 /* extract WIMGE from pte */
+ mtspr SPRN_MAS2, r12
+
+ bge 5, 1f
+
+ /* addr < TASK_SIZE: user page, set user permissions */
+ li r10, (MAS3_UX | MAS3_UW | MAS3_UR)
+ andi. r13, r11, (_PAGE_USER | _PAGE_HWWRITE | _PAGE_HWEXEC)
+ andi. r12, r11, _PAGE_USER /* Test for _PAGE_USER */
+ iseleq r12, 0, r10
+ and r10, r12, r13
+ srwi r12, r10, 1
+ or r12, r12, r10 /* Copy user perms into supervisor */
+ b 2f
+
+ /* addr >= TASK_SIZE: kernel page, set supervisor permissions */
+1: rlwinm r12, r11, 31, 29, 29 /* Extract _PAGE_HWWRITE into SW */
+ ori r12, r12, (MAS3_SX | MAS3_SR)
+
+2: rlwimi r11, r12, 0, 20, 31 /* Extract RPN from PTE and merge with perms */
+ mtspr SPRN_MAS3, r11
+ tlbwe
+
+ /* Done...restore registers and get out of here. */
+ mfspr r11, SPRG7R
+ mtcr r11
+ mfspr r13, SPRG5R
+ mfspr r12, SPRG4R
+ mfspr r11, SPRG1
+ mfspr r10, SPRG0
+ rfi /* Force context change */
+
+#ifdef CONFIG_SPE
+/* Note that the SPE support is closely modeled after the AltiVec
+ * support. Changes to one are likely to be applicable to the
+ * other! */
+load_up_spe:
+/*
+ * Disable SPE for the task which had SPE previously,
+ * and save its SPE registers in its thread_struct.
+ * Enables SPE for use in the kernel on return.
+ * On SMP we know the SPE units are free, since we give it up every
+ * switch. -- Kumar
+ */
+ mfmsr r5
+ oris r5,r5,MSR_SPE@h
+ mtmsr r5 /* enable use of SPE now */
+ isync
+/*
+ * For SMP, we don't do lazy SPE switching because it just gets too
+ * horrendously complex, especially when a task switches from one CPU
+ * to another. Instead we call giveup_spe in switch_to.
+ */
+#ifndef CONFIG_SMP
+ lis r3,last_task_used_spe@ha
+ lwz r4,last_task_used_spe@l(r3)
+ cmpi 0,r4,0
+ beq 1f
+ addi r4,r4,THREAD /* want THREAD of last_task_used_spe */
+ SAVE_32EVR(0,r10,r4)
+ evxor evr10, evr10, evr10 /* clear out evr10 */
+ evmwumiaa evr10, evr10, evr10 /* evr10 <- ACC = 0 * 0 + ACC */
+ li r5,THREAD_ACC
+ evstddx evr10, r4, r5 /* save off accumulator */
+ lwz r5,PT_REGS(r4)
+ lwz r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+ lis r10,MSR_SPE@h
+ andc r4,r4,r10 /* disable SPE for previous task */
+ stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+1:
+#endif /* CONFIG_SMP */
+ /* enable use of SPE after return */
+ oris r9,r9,MSR_SPE@h
+ mfspr r5,SPRG3 /* current task's THREAD (phys) */
+ li r4,1
+ li r10,THREAD_ACC
+ stw r4,THREAD_USED_SPE(r5)
+ evlddx evr4,r10,r5
+ evmra evr4,evr4
+ REST_32EVR(0,r10,r5)
+#ifndef CONFIG_SMP
+ subi r4,r5,THREAD
+ stw r4,last_task_used_spe@l(r3)
+#endif /* CONFIG_SMP */
+ /* restore registers and return */
+2: REST_4GPRS(3, r11)
+ lwz r10,_CCR(r11)
+ REST_GPR(1, r11)
+ mtcr r10
+ lwz r10,_LINK(r11)
+ mtlr r10
+ REST_GPR(10, r11)
+ mtspr SRR1,r9
+ mtspr SRR0,r12
+ REST_GPR(9, r11)
+ REST_GPR(12, r11)
+ lwz r11,GPR11(r11)
+ SYNC
+ rfi
+
+
+
+/*
+ * SPE unavailable trap from kernel - print a message, but let
+ * the task use SPE in the kernel until it returns to user mode.
+ */
+KernelSPE:
+ lwz r3,_MSR(r1)
+ oris r3,r3,MSR_SPE@h
+ stw r3,_MSR(r1) /* enable use of SPE after return */
+ lis r3,87f@h
+ ori r3,r3,87f@l
+ mr r4,r2 /* current */
+ lwz r5,_NIP(r1)
+ bl printk
+ b ret_from_except
+87: .string "SPE used in kernel (task=%p, pc=%x) \n"
+ .align 4,0
+
+#endif /* CONFIG_SPE */
+
+/*
+ * Global functions
+ */
+
+/*
+ * extern void loadcam_entry(unsigned int index)
+ *
+ * Load the TLBCAM[index] entry into the L2 CAM MMU
+ */
+_GLOBAL(loadcam_entry)
+ lis r4,TLBCAM@ha
+ addi r4,r4,TLBCAM@l
+ mulli r5,r3,20
+ add r3,r5,r4
+ lwz r4,0(r3)
+ mtspr SPRN_MAS0,r4
+ lwz r4,4(r3)
+ mtspr SPRN_MAS1,r4
+ lwz r4,8(r3)
+ mtspr SPRN_MAS2,r4
+ lwz r4,12(r3)
+ mtspr SPRN_MAS3,r4
+ tlbwe
+ isync
+ blr
+
+/*
+ * extern void giveup_altivec(struct task_struct *prev)
+ *
+ * The e500 core does not have an AltiVec unit.
+ */
+_GLOBAL(giveup_altivec)
+ blr
+
+#ifdef CONFIG_SPE
+/*
+ * extern void giveup_spe(struct task_struct *prev)
+ *
+ */
+_GLOBAL(giveup_spe)
+ mfmsr r5
+ oris r5,r5,MSR_SPE@h
+ SYNC
+ mtmsr r5 /* enable use of SPE now */
+ isync
+ cmpi 0,r3,0
+ beqlr- /* if no previous owner, done */
+ addi r3,r3,THREAD /* want THREAD of task */
+ lwz r5,PT_REGS(r3)
+ cmpi 0,r5,0
+ SAVE_32EVR(0, r4, r3)
+ evxor evr6, evr6, evr6 /* clear out evr6 */
+ evmwumiaa evr6, evr6, evr6 /* evr6 <- ACC = 0 * 0 + ACC */
+ li r4,THREAD_ACC
+ evstddx evr6, r4, r3 /* save off accumulator */
+ mfspr r6,SPRN_SPEFSCR
+ stw r6,THREAD_SPEFSCR(r3) /* save spefscr register value */
+ beq 1f
+ lwz r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+ lis r3,MSR_SPE@h
+ andc r4,r4,r3 /* disable SPE for previous task */
+ stw r4,_MSR-STACK_FRAME_OVERHEAD(r5)
+1:
+#ifndef CONFIG_SMP
+ li r5,0
+ lis r4,last_task_used_spe@ha
+ stw r5,last_task_used_spe@l(r4)
+#endif /* CONFIG_SMP */
+ blr
+#endif /* CONFIG_SPE */
+
+/*
+ * extern void giveup_fpu(struct task_struct *prev)
+ *
+ * The e500 core does not have an FPU.
+ */
+_GLOBAL(giveup_fpu)
+ blr
+
+/*
+ * extern void abort(void)
+ *
+ * At present, this routine just applies a system reset.
+ */
+_GLOBAL(abort)
+ li r13,0
+ mtspr SPRN_DBCR0,r13 /* disable all debug events */
+ mfmsr r13
+ ori r13,r13,MSR_DE@l /* Enable Debug Events */
+ mtmsr r13
+ mfspr r13,SPRN_DBCR0
+ lis r13,(DBCR0_IDM|DBCR0_RST_CHIP)@h
+ mtspr SPRN_DBCR0,r13
+
+_GLOBAL(set_context)
+
+#ifdef CONFIG_BDI_SWITCH
+ /* Context switch the PTE pointer for the Abatron BDI2000.
+ * The PGDIR is the second parameter.
+ */
+ lis r5, abatron_pteptrs@h
+ ori r5, r5, abatron_pteptrs@l
+ stw r4, 0x4(r5)
+#endif
+ mtspr SPRN_PID,r3
+ isync /* Force context change */
+ blr
+
+/*
+ * We put a few things here that have to be page-aligned. This stuff
+ * goes at the beginning of the data segment, which is page-aligned.
+ */
+ .data
+_GLOBAL(sdata)
+_GLOBAL(empty_zero_page)
+ .space 4096
+_GLOBAL(swapper_pg_dir)
+ .space 4096
+
+ .section .bss
+/* Stack for handling critical exceptions from kernel mode */
+critical_stack_bottom:
+ .space 4096
+critical_stack_top:
+ .previous
+
+/* Stack for handling machine check exceptions from kernel mode */
+mcheck_stack_bottom:
+ .space 4096
+mcheck_stack_top:
+ .previous
+
+/*
+ * This area is used for temporarily saving registers during the
+ * critical and machine check exception prologs. It must always
+ * follow the page aligned allocations, so it starts on a page
+ * boundary, ensuring that all crit_save areas are in a single
+ * page.
+ */
+
+/* crit_save */
+_GLOBAL(crit_save)
+ .space 4
+_GLOBAL(crit_r10)
+ .space 4
+_GLOBAL(crit_r11)
+ .space 4
+_GLOBAL(crit_sprg0)
+ .space 4
+_GLOBAL(crit_sprg1)
+ .space 4
+_GLOBAL(crit_sprg4)
+ .space 4
+_GLOBAL(crit_sprg5)
+ .space 4
+_GLOBAL(crit_sprg7)
+ .space 4
+_GLOBAL(crit_pid)
+ .space 4
+_GLOBAL(crit_srr0)
+ .space 4
+_GLOBAL(crit_srr1)
+ .space 4
+
+/* mcheck_save */
+_GLOBAL(mcheck_save)
+ .space 4
+_GLOBAL(mcheck_r10)
+ .space 4
+_GLOBAL(mcheck_r11)
+ .space 4
+_GLOBAL(mcheck_sprg0)
+ .space 4
+_GLOBAL(mcheck_sprg1)
+ .space 4
+_GLOBAL(mcheck_sprg4)
+ .space 4
+_GLOBAL(mcheck_sprg5)
+ .space 4
+_GLOBAL(mcheck_sprg7)
+ .space 4
+_GLOBAL(mcheck_pid)
+ .space 4
+_GLOBAL(mcheck_srr0)
+ .space 4
+_GLOBAL(mcheck_srr1)
+ .space 4
+_GLOBAL(mcheck_csrr0)
+ .space 4
+_GLOBAL(mcheck_csrr1)
+ .space 4
+
+/*
+ * This space gets a copy of optional info passed to us by the bootstrap
+ * which is used to pass parameters into the kernel like root=/dev/sda1, etc.
+ */
+_GLOBAL(cmd_line)
+ .space 512
+
+/*
+ * Room for two PTE pointers, usually the kernel and current user pointers
+ * to their respective root page table.
+ */
+abatron_pteptrs:
+ .space 8
+
+
--- /dev/null
+#include <asm/ppc_asm.h>
+#include <asm/processor.h>
+
+/*
+ * The routines below are in assembler so we can closely control the
+ * usage of floating-point registers. These routines must be called
+ * with preempt disabled.
+ */
+ .data
+fpzero:
+ .long 0
+fpone:
+ .long 0x3f800000 /* 1.0 in single-precision FP */
+fphalf:
+ .long 0x3f000000 /* 0.5 in single-precision FP */
+
+ .text
+/*
+ * Internal routine to enable floating point and set FPSCR to 0.
+ * Don't call it from C; it doesn't use the normal calling convention.
+ */
+fpenable:
+ mfmsr r10
+ ori r11,r10,MSR_FP
+ mtmsr r11
+ isync
+ stfd fr0,24(r1)
+ stfd fr1,16(r1)
+ stfd fr31,8(r1)
+ lis r11,fpzero@ha
+ mffs fr31
+ lfs fr1,fpzero@l(r11)
+ mtfsf 0xff,fr1
+ blr
+
+fpdisable:
+ mtfsf 0xff,fr31
+ lfd fr31,8(r1)
+ lfd fr1,16(r1)
+ lfd fr0,24(r1)
+ mtmsr r10
+ isync
+ blr
+
+/*
+ * Vector add, floating point.
+ */
+ .globl vaddfp
+vaddfp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fadds fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector subtract, floating point.
+ */
+ .globl vsubfp
+vsubfp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fsubs fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector multiply and add, floating point.
+ */
+ .globl vmaddfp
+vmaddfp:
+ stwu r1,-48(r1)
+ mflr r0
+ stw r0,52(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fmadds fr0,fr0,fr2,fr1
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,52(r1)
+ mtlr r0
+ addi r1,r1,48
+ blr
+
+/*
+ * Vector negative multiply and subtract, floating point.
+ */
+ .globl vnmsubfp
+vnmsubfp:
+ stwu r1,-48(r1)
+ mflr r0
+ stw r0,52(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fnmsubs fr0,fr0,fr2,fr1
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,52(r1)
+ mtlr r0
+ addi r1,r1,48
+ blr
+
+/*
+ * Vector reciprocal estimate. We just compute 1.0/x.
+ * r3 -> destination, r4 -> source.
+ */
+ .globl vrefp
+vrefp:
+ stwu r1,-32(r1)
+ mflr r0
+ stw r0,36(r1)
+ bl fpenable
+ lis r9,fpone@ha
+ li r0,4
+ lfs fr1,fpone@l(r9)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ fdivs fr0,fr1,fr0
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ bl fpdisable
+ lwz r0,36(r1)
+ mtlr r0
+ addi r1,r1,32
+ blr
+
+/*
+ * Vector reciprocal square-root estimate, floating point.
+ * We use the frsqrte instruction for the initial estimate followed
+ * by 2 iterations of Newton-Raphson to get sufficient accuracy.
+ * r3 -> destination, r4 -> source.
+ */
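+/*
+ * Derivation sketch: Newton-Raphson on f(r) = 1/r^2 - s converges to
+ * r = 1/sqrt(s) using the update
+ *
+ *	r' = r - f(r)/f'(r) = r + 0.5 * r * (1 - s * r * r)
+ *
+ * which is the fmuls/fnmsubs/fmadds sequence repeated twice in the
+ * loop body below.
+ */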
+ .globl vrsqrtefp
+vrsqrtefp:
+ stwu r1,-64(r1) /* 64-byte frame: fr2-fr5 saved at offsets 32..56 */
+ mflr r0
+ stw r0,68(r1)
+ bl fpenable
+ stfd fr2,32(r1)
+ stfd fr3,40(r1)
+ stfd fr4,48(r1)
+ stfd fr5,56(r1)
+ lis r9,fpone@ha
+ lis r8,fphalf@ha
+ li r0,4
+ lfs fr4,fpone@l(r9)
+ lfs fr5,fphalf@l(r8)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ frsqrte fr1,fr0 /* r = frsqrte(s) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ stfsx fr1,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ lfd fr5,56(r1)
+ lfd fr4,48(r1)
+ lfd fr3,40(r1)
+ lfd fr2,32(r1)
+ bl fpdisable
+ lwz r0,68(r1)
+ mtlr r0
+ addi r1,r1,64
+ blr
--- /dev/null
+/*
+ * arch/ppc/syslib/rheap.c
+ *
+ * A Remote Heap. Remote means that we don't touch the memory that the
+ * heap points to. Normal heap implementations use the memory they manage
+ * to place their list. We cannot do that because the memory we manage may
+ * have special properties, for example it may be uncacheable or of a
+ * different endianness.
+ *
+ * Author: Pantelis Antoniou <panto@intracom.gr>
+ *
+ * 2004 (c) INTRACOM S.A. Greece. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+
+#include <asm/rheap.h>
+
+/*
+ * Fixup a list_head, needed when copying lists. If the pointers fall
+ * between s and e, apply the delta. This assumes that
+ * sizeof(struct list_head *) == sizeof(unsigned long *).
+ */
+static inline void fixup(unsigned long s, unsigned long e, int d,
+ struct list_head *l)
+{
+ unsigned long *pp;
+
+ pp = (unsigned long *)&l->next;
+ if (*pp >= s && *pp < e)
+ *pp += d;
+
+ pp = (unsigned long *)&l->prev;
+ if (*pp >= s && *pp < e)
+ *pp += d;
+}
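+
+/*
+ * Worked example for fixup(): if the block array moves from 0x1000 to
+ * 0x2800 during a grow (d = +0x1800), a list pointer holding 0x1040
+ * (inside the old array) is rewritten to 0x2840, while pointers that
+ * do not fall inside [s, e) are left untouched.
+ */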
+
+/* Grow the allocated blocks */
+static int grow(rh_info_t * info, int max_blocks)
+{
+ rh_block_t *block, *blk;
+ int i, new_blocks;
+ int delta;
+ unsigned long blks, blke;
+
+ if (max_blocks <= info->max_blocks)
+ return -EINVAL;
+
+ new_blocks = max_blocks - info->max_blocks;
+
+ block = kmalloc(sizeof(rh_block_t) * max_blocks, GFP_KERNEL);
+ if (block == NULL)
+ return -ENOMEM;
+
+ if (info->max_blocks > 0) {
+
+ /* copy old block area */
+ memcpy(block, info->block,
+ sizeof(rh_block_t) * info->max_blocks);
+
+ delta = (char *)block - (char *)info->block;
+
+ /* and fixup list pointers */
+ blks = (unsigned long)info->block;
+ blke = (unsigned long)(info->block + info->max_blocks);
+
+ for (i = 0, blk = block; i < info->max_blocks; i++, blk++)
+ fixup(blks, blke, delta, &blk->list);
+
+ fixup(blks, blke, delta, &info->empty_list);
+ fixup(blks, blke, delta, &info->free_list);
+ fixup(blks, blke, delta, &info->taken_list);
+
+ /* free the old allocated memory */
+ if ((info->flags & RHIF_STATIC_BLOCK) == 0)
+ kfree(info->block);
+ }
+
+ info->block = block;
+ info->empty_slots += new_blocks;
+ info->max_blocks = max_blocks;
+ info->flags &= ~RHIF_STATIC_BLOCK;
+
+ /* add the new slots to the empty list; they begin where the old
+ * area ended, at index max_blocks - new_blocks */
+ for (i = 0, blk = block + info->max_blocks - new_blocks;
+ i < new_blocks; i++, blk++)
+ list_add(&blk->list, &info->empty_list);
+
+ return 0;
+}
+
+/*
+ * Ensure that at least the required number of empty slots is available.
+ * If this function causes the block area to grow, any pointers
+ * previously obtained into the block area become invalid!
+ */
+static int assure_empty(rh_info_t * info, int slots)
+{
+ int max_blocks;
+
+ /* This function is not meant to be used to grow uncontrollably */
+ if (slots >= 4)
+ return -EINVAL;
+
+ /* Enough space */
+ if (info->empty_slots >= slots)
+ return 0;
+
+ /* Next 16 sized block */
+ max_blocks = ((info->max_blocks + slots) + 15) & ~15;
+
+ return grow(info, max_blocks);
+}
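+
+/*
+ * Worked example for assure_empty(): with max_blocks = 17 and
+ * slots = 2 the target is ((17 + 2) + 15) & ~15 = 32, so the block
+ * area always grows to a multiple of 16 slots rather than one slot
+ * at a time.
+ */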
+
+static rh_block_t *get_slot(rh_info_t * info)
+{
+ rh_block_t *blk;
+
+ /* If no more free slots, and failure to extend. */
+ /* XXX: You should have called assure_empty before */
+ if (info->empty_slots == 0) {
+ printk(KERN_ERR "rh: out of slots; crash is imminent.\n");
+ return NULL;
+ }
+
+ /* Get empty slot to use */
+ blk = list_entry(info->empty_list.next, rh_block_t, list);
+ list_del_init(&blk->list);
+ info->empty_slots--;
+
+ /* Initialize */
+ blk->start = NULL;
+ blk->size = 0;
+ blk->owner = NULL;
+
+ return blk;
+}
+
+static inline void release_slot(rh_info_t * info, rh_block_t * blk)
+{
+ list_add(&blk->list, &info->empty_list);
+ info->empty_slots++;
+}
+
+static void attach_free_block(rh_info_t * info, rh_block_t * blkn)
+{
+ rh_block_t *blk;
+ rh_block_t *before;
+ rh_block_t *after;
+ rh_block_t *next;
+ int size;
+ unsigned long s, e, bs, be;
+ struct list_head *l;
+
+ /* We assume that they are aligned properly */
+ size = blkn->size;
+ s = (unsigned long)blkn->start;
+ e = s + size;
+
+ /* Find the blocks immediately before and after the given one
+ * (if any) */
+ before = NULL;
+ after = NULL;
+ next = NULL;
+
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+
+ bs = (unsigned long)blk->start;
+ be = bs + blk->size;
+
+ if (next == NULL && s >= bs)
+ next = blk;
+
+ if (be == s)
+ before = blk;
+
+ if (e == bs)
+ after = blk;
+
+ /* If both are not null, break now */
+ if (before != NULL && after != NULL)
+ break;
+ }
+
+ /* Now check if they are really adjacent */
+ if (before != NULL && s != (unsigned long)before->start + before->size)
+ before = NULL;
+
+ if (after != NULL && e != (unsigned long)after->start)
+ after = NULL;
+
+ /* No coalescing; list insert and return */
+ if (before == NULL && after == NULL) {
+
+ if (next != NULL)
+ list_add(&blkn->list, &next->list);
+ else
+ list_add(&blkn->list, &info->free_list);
+
+ return;
+ }
+
+ /* We don't need it anymore */
+ release_slot(info, blkn);
+
+ /* Grow the before block */
+ if (before != NULL && after == NULL) {
+ before->size += size;
+ return;
+ }
+
+ /* Grow the after block backwards */
+ if (before == NULL && after != NULL) {
+ after->start = (int8_t *)after->start - size;
+ after->size += size;
+ return;
+ }
+
+ /* Grow the before block, and release the after block */
+ before->size += size + after->size;
+ list_del(&after->list);
+ release_slot(info, after);
+}
+
+static void attach_taken_block(rh_info_t * info, rh_block_t * blkn)
+{
+ rh_block_t *blk;
+ struct list_head *l;
+
+ /* Find the block immediately before the given one (if any) */
+ list_for_each(l, &info->taken_list) {
+ blk = list_entry(l, rh_block_t, list);
+ if (blk->start > blkn->start) {
+ list_add_tail(&blkn->list, &blk->list);
+ return;
+ }
+ }
+
+ list_add_tail(&blkn->list, &info->taken_list);
+}
+
+/*
+ * Create a remote heap dynamically. Note that no memory for the blocks
+ * is allocated here; it is allocated on demand at the first attach or
+ * allocation.
+ */
+rh_info_t *rh_create(unsigned int alignment)
+{
+ rh_info_t *info;
+
+ /* Alignment must be a power of two */
+ if ((alignment & (alignment - 1)) != 0)
+ return ERR_PTR(-EINVAL);
+
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (info == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ info->alignment = alignment;
+
+ /* Initially everything is empty */
+ info->block = NULL;
+ info->max_blocks = 0;
+ info->empty_slots = 0;
+ info->flags = 0;
+
+ INIT_LIST_HEAD(&info->empty_list);
+ INIT_LIST_HEAD(&info->free_list);
+ INIT_LIST_HEAD(&info->taken_list);
+
+ return info;
+}
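+
+/*
+ * Illustrative usage sketch (not part of this patch; error checking
+ * with IS_ERR() is omitted, and "base"/"len" stand for the caller's
+ * managed region):
+ *
+ *	rh_info_t *rh = rh_create(16);		alignment of 16 bytes
+ *	rh_attach_region(rh, base, len);	hand the region to the heap
+ *	void *p = rh_alloc(rh, 256, "mydrv");	carve out 256 bytes
+ *	rh_free(rh, p);				give them back
+ *	rh_destroy(rh);
+ */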
+
+/*
+ * Destroy a dynamically created remote heap. Deallocate only if the areas
+ * are not static
+ */
+void rh_destroy(rh_info_t * info)
+{
+ if ((info->flags & RHIF_STATIC_BLOCK) == 0 && info->block != NULL)
+ kfree(info->block);
+
+ if ((info->flags & RHIF_STATIC_INFO) == 0)
+ kfree(info);
+}
+
+/*
+ * Initialize in place a remote heap info block. This is needed to support
+ * operation very early in the startup of the kernel, when it is not yet safe
+ * to call kmalloc.
+ */
+void rh_init(rh_info_t * info, unsigned int alignment, int max_blocks,
+ rh_block_t * block)
+{
+ int i;
+ rh_block_t *blk;
+
+ /* Alignment must be a power of two */
+ if ((alignment & (alignment - 1)) != 0)
+ return;
+
+ info->alignment = alignment;
+
+ /* Initially everything is empty */
+ info->block = block;
+ info->max_blocks = max_blocks;
+ info->empty_slots = max_blocks;
+ info->flags = RHIF_STATIC_INFO | RHIF_STATIC_BLOCK;
+
+ INIT_LIST_HEAD(&info->empty_list);
+ INIT_LIST_HEAD(&info->free_list);
+ INIT_LIST_HEAD(&info->taken_list);
+
+ /* Add all slots to the empty list */
+ for (i = 0, blk = block; i < max_blocks; i++, blk++)
+ list_add(&blk->list, &info->empty_list);
+}
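+
+/*
+ * Illustrative early-boot sketch (hypothetical names): with static
+ * storage nothing below needs kmalloc, so this is safe before the
+ * allocators are up:
+ *
+ *	static rh_block_t early_blocks[16];
+ *	static rh_info_t early_rh;
+ *
+ *	rh_init(&early_rh, 32, 16, early_blocks);
+ *	rh_attach_region(&early_rh, (void *)base, len);
+ */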
+
+/* Attach a free memory region; adjacent free regions are coalesced */
+int rh_attach_region(rh_info_t * info, void *start, int size)
+{
+ rh_block_t *blk;
+ unsigned long s, e, m;
+ int r;
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ /* Take final values */
+ start = (void *)s;
+ size = (int)(e - s);
+
+ /* Grow the blocks, if needed */
+ r = assure_empty(info, 1);
+ if (r < 0)
+ return r;
+
+ blk = get_slot(info);
+ blk->start = start;
+ blk->size = size;
+ blk->owner = NULL;
+
+ attach_free_block(info, blk);
+
+ return 0;
+}
+
+/* Detach the given address range; splits a free block if needed. */
+void *rh_detach_region(rh_info_t * info, void *start, int size)
+{
+ struct list_head *l;
+ rh_block_t *blk, *newblk;
+ unsigned long s, e, m, bs, be;
+
+ /* Validate size */
+ if (size <= 0)
+ return ERR_PTR(-EINVAL);
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ if (assure_empty(info, 1) < 0)
+ return ERR_PTR(-ENOMEM);
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ /* The range must lie entirely inside one free block */
+ bs = (unsigned long)blk->start;
+ be = (unsigned long)blk->start + blk->size;
+ if (s >= bs && e <= be)
+ break;
+ blk = NULL;
+ }
+
+ if (blk == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ /* Perfect fit */
+ if (bs == s && be == e) {
+ /* Delete from free list, release slot */
+ list_del(&blk->list);
+ release_slot(info, blk);
+ return (void *)s;
+ }
+
+ /* blk still in free list, with updated start and/or size */
+ if (bs == s || be == e) {
+ if (bs == s)
+ blk->start = (int8_t *)blk->start + size;
+ blk->size -= size;
+
+ } else {
+ /* The front free fragment */
+ blk->size = s - bs;
+
+ /* the back free fragment */
+ newblk = get_slot(info);
+ newblk->start = (void *)e;
+ newblk->size = be - e;
+
+ list_add(&newblk->list, &blk->list);
+ }
+
+ return (void *)s;
+}
+
+void *rh_alloc(rh_info_t * info, int size, const char *owner)
+{
+ struct list_head *l;
+ rh_block_t *blk;
+ rh_block_t *newblk;
+ void *start;
+
+ /* Validate size */
+ if (size <= 0)
+ return ERR_PTR(-EINVAL);
+
+ /* Align to configured alignment */
+ size = (size + (info->alignment - 1)) & ~(info->alignment - 1);
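+	/* e.g. with alignment 16, a request for 20 bytes is rounded up to 32 */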
+
+ if (assure_empty(info, 1) < 0)
+ return ERR_PTR(-ENOMEM);
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ if (size <= blk->size)
+ break;
+ blk = NULL;
+ }
+
+ if (blk == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ /* Just fits */
+ if (blk->size == size) {
+ /* Move from free list to taken list */
+ list_del(&blk->list);
+ blk->owner = owner;
+ start = blk->start;
+
+ attach_taken_block(info, blk);
+
+ return start;
+ }
+
+ newblk = get_slot(info);
+ newblk->start = blk->start;
+ newblk->size = size;
+ newblk->owner = owner;
+
+ /* blk still in free list, with updated start, size */
+ blk->start = (int8_t *)blk->start + size;
+ blk->size -= size;
+
+ start = newblk->start;
+
+ attach_taken_block(info, newblk);
+
+ return start;
+}
+
+/* Allocate at precisely the given address */
+void *rh_alloc_fixed(rh_info_t * info, void *start, int size, const char *owner)
+{
+ struct list_head *l;
+ rh_block_t *blk, *newblk1, *newblk2;
+ unsigned long s, e, m, bs, be;
+
+ /* Validate size */
+ if (size <= 0)
+ return ERR_PTR(-EINVAL);
+
+ /* The region must be aligned */
+ s = (unsigned long)start;
+ e = s + size;
+ m = info->alignment - 1;
+
+ /* Round start up */
+ s = (s + m) & ~m;
+
+ /* Round end down */
+ e = e & ~m;
+
+ if (assure_empty(info, 2) < 0)
+ return ERR_PTR(-ENOMEM);
+
+ blk = NULL;
+ list_for_each(l, &info->free_list) {
+ blk = list_entry(l, rh_block_t, list);
+ /* The range must lie entirely inside one free block */
+ bs = (unsigned long)blk->start;
+ be = (unsigned long)blk->start + blk->size;
+ if (s >= bs && e <= be)
+ break;
+ blk = NULL; /* no match yet; don't keep a stale pointer */
+ }
+
+ if (blk == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ /* Perfect fit */
+ if (bs == s && be == e) {
+ /* Move from free list to taken list */
+ list_del(&blk->list);
+ blk->owner = owner;
+
+ start = blk->start;
+ attach_taken_block(info, blk);
+
+ return start;
+
+ }
+
+ /* blk still in free list, with updated start and/or size */
+ if (bs == s || be == e) {
+ if (bs == s)
+ blk->start = (int8_t *)blk->start + size;
+ blk->size -= size;
+
+ } else {
+ /* The front free fragment */
+ blk->size = s - bs;
+
+ /* The back free fragment */
+ newblk2 = get_slot(info);
+ newblk2->start = (void *)e;
+ newblk2->size = be - e;
+
+ list_add(&newblk2->list, &blk->list);
+ }
+
+ newblk1 = get_slot(info);
+ newblk1->start = (void *)s;
+ newblk1->size = e - s;
+ newblk1->owner = owner;
+
+ start = newblk1->start;
+ attach_taken_block(info, newblk1);
+
+ return start;
+}
+
+int rh_free(rh_info_t * info, void *start)
+{
+ rh_block_t *blk, *blk2;
+ struct list_head *l;
+ int size;
+
+ /* Linear search for block */
+ blk = NULL;
+ list_for_each(l, &info->taken_list) {
+ blk2 = list_entry(l, rh_block_t, list);
+ if (start < blk2->start)
+ break;
+ blk = blk2;
+ }
+
+ if (blk == NULL || start > (blk->start + blk->size))
+ return -EINVAL;
+
+ /* Remove from taken list */
+ list_del(&blk->list);
+
+ /* Get size of freed block */
+ size = blk->size;
+ attach_free_block(info, blk);
+
+ return size;
+}
+
+int rh_get_stats(rh_info_t * info, int what, int max_stats, rh_stats_t * stats)
+{
+ rh_block_t *blk;
+ struct list_head *l;
+ struct list_head *h;
+ int nr;
+
+ switch (what) {
+
+ case RHGS_FREE:
+ h = &info->free_list;
+ break;
+
+ case RHGS_TAKEN:
+ h = &info->taken_list;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ /* Linear search for block */
+ nr = 0;
+ list_for_each(l, h) {
+ blk = list_entry(l, rh_block_t, list);
+ if (stats != NULL && nr < max_stats) {
+ stats->start = blk->start;
+ stats->size = blk->size;
+ stats->owner = blk->owner;
+ stats++;
+ }
+ nr++;
+ }
+
+ return nr;
+}
+
+int rh_set_owner(rh_info_t * info, void *start, const char *owner)
+{
+ rh_block_t *blk, *blk2;
+ struct list_head *l;
+ int size;
+
+ /* Linear search for block */
+ blk = NULL;
+ list_for_each(l, &info->taken_list) {
+ blk2 = list_entry(l, rh_block_t, list);
+ if (start < blk2->start)
+ break;
+ blk = blk2;
+ }
+
+ if (blk == NULL || start > (blk->start + blk->size))
+ return -EINVAL;
+
+ blk->owner = owner;
+ size = blk->size;
+
+ return size;
+}
+
+void rh_dump(rh_info_t * info)
+{
+ static rh_stats_t st[32]; /* XXX maximum 32 blocks */
+ int maxnr;
+ int i, nr;
+
+ maxnr = sizeof(st) / sizeof(st[0]);
+
+ printk(KERN_INFO
+ "info @0x%p (%d slots empty / %d max)\n",
+ info, info->empty_slots, info->max_blocks);
+
+ printk(KERN_INFO " Free:\n");
+ nr = rh_get_stats(info, RHGS_FREE, maxnr, st);
+ if (nr > maxnr)
+ nr = maxnr;
+ for (i = 0; i < nr; i++)
+ printk(KERN_INFO
+ " 0x%p-0x%p (%u)\n",
+ st[i].start, (int8_t *) st[i].start + st[i].size,
+ st[i].size);
+ printk(KERN_INFO "\n");
+
+ printk(KERN_INFO " Taken:\n");
+ nr = rh_get_stats(info, RHGS_TAKEN, maxnr, st);
+ if (nr > maxnr)
+ nr = maxnr;
+ for (i = 0; i < nr; i++)
+ printk(KERN_INFO
+ " 0x%p-0x%p (%u) %s\n",
+ st[i].start, (int8_t *) st[i].start + st[i].size,
+ st[i].size, st[i].owner != NULL ? st[i].owner : "");
+ printk(KERN_INFO "\n");
+}
+
+void rh_dump_blk(rh_info_t * info, rh_block_t * blk)
+{
+ printk(KERN_INFO
+ "blk @0x%p: 0x%p-0x%p (%u)\n",
+ blk, blk->start, (int8_t *) blk->start + blk->size, blk->size);
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8540_ads.c
+ *
+ * MPC8540ADS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/initrd.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON
+ | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON
+ | GFAR_HAS_PHY_INTR | GFAR_HAS_COALESCE),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_fec_def = {
+ .interruptTransmit = MPC85xx_IRQ_FEC,
+ .interruptError = MPC85xx_IRQ_FEC,
+ .interruptReceive = MPC85xx_IRQ_FEC,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = 0,
+ .phyid = 3,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+mpc8540ads_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8540ads_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+#ifdef CONFIG_SERIAL_8250
+ mpc85xx_early_serial_map();
+#endif
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the TLB entry we stole earlier; the serial ports
+ * should be properly mapped by now */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 2);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet2addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+	 * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ {
+ bd_t *binfo = (bd_t *) __res;
+
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
+ binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+ }
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc8540ads_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc85xx_ads_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8540ads_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8555.c
+ *
+ * MPC8555 I/O descriptions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <asm/mpc85xx.h>
+#include <asm/ocp.h>
+
+/* These should be defined in platform code */
+extern struct ocp_gfar_data mpc85xx_tsec1_def;
+extern struct ocp_gfar_data mpc85xx_tsec2_def;
+extern struct ocp_fs_i2c_data mpc85xx_i2c1_def;
+
+/* We use offsets for paddr since we do not know at compile time
+ * what CCSRBAR is; platform code should fix this up in
+ * setup_arch().
+ *
+ * Only the first IRQ is listed even if a device has
+ * multiple interrupt lines associated with it.
+ */
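+/*
+ * For instance, the ADS board files in this patch do the fixup in
+ * setup_arch() with:
+ *
+ *	ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+ *
+ * walking every OCP device and rebasing its paddr on the runtime
+ * CCSRBAR (bi_immr_base).
+ */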
+struct ocp_def core_ocp[] = {
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = MPC85xx_IIC1_OFFSET,
+ .irq = MPC85xx_IRQ_IIC1,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_i2c1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 0,
+ .paddr = MPC85xx_UART0_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_16550,
+ .index = 1,
+ .paddr = MPC85xx_UART1_OFFSET,
+ .irq = MPC85xx_IRQ_DUART,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 0,
+ .paddr = MPC85xx_ENET1_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC1_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec1_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_GFAR,
+ .index = 1,
+ .paddr = MPC85xx_ENET2_OFFSET,
+ .irq = MPC85xx_IRQ_TSEC2_TX,
+ .pm = OCP_CPM_NA,
+ .additions = &mpc85xx_tsec2_def,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_DMA,
+ .index = 0,
+ .paddr = MPC85xx_DMA_OFFSET,
+ .irq = MPC85xx_IRQ_DMA0,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_SEC2,
+ .index = 0,
+ .paddr = MPC85xx_SEC2_OFFSET,
+ .irq = MPC85xx_IRQ_SEC2,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PERFMON,
+ .index = 0,
+ .paddr = MPC85xx_PERFMON_OFFSET,
+ .irq = MPC85xx_IRQ_PERFMON,
+ .pm = OCP_CPM_NA,
+ },
+ { .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc8560_ads.c
+ *
+ * MPC8560ADS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/initrd.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <asm/cpm2.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/cpm2_pic.h>
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+extern void cpm2_reset(void);
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON | GFAR_HAS_COALESCE
+ | GFAR_HAS_PHY_INTR),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR
+ | GFAR_HAS_RMON | GFAR_HAS_COALESCE
+ | GFAR_HAS_PHY_INTR),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+
+static void __init
+mpc8560ads_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ cpm2_reset();
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8560ads_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+static irqreturn_t cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
+{
+ while ((irq = cpm2_get_irq(regs)) >= 0)
+ __do_IRQ(irq, regs);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction cpm2_irqaction = {
+ .handler = cpm2_cascade,
+ .flags = SA_INTERRUPT,
+ .mask = CPU_MASK_NONE,
+ .name = "cpm2_cascade",
+};
+
+static void __init
+mpc8560_ads_init_IRQ(void)
+{
+ int i;
+ volatile cpm2_map_t *immap = cpm2_immr;
+
+ /* Setup OpenPIC */
+ mpc85xx_ads_init_IRQ();
+
+ /* disable all CPM interrupts */
+ immap->im_intctl.ic_simrh = 0x0;
+ immap->im_intctl.ic_simrl = 0x0;
+
+ for (i = CPM_IRQ_OFFSET; i < (NR_CPM_INTS + CPM_IRQ_OFFSET); i++)
+ irq_desc[i].handler = &cpm2_pic;
+
+ /* Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ immap->im_intctl.ic_sicr = 0;
+ immap->im_intctl.ic_scprrh = 0x05309770;
+ immap->im_intctl.ic_scprrl = 0x05309770;
+
+ setup_irq(MPC85xx_IRQ_CPM, &cpm2_irqaction);
+
+ return;
+}
+
+
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+	 * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+
+ }
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc8560ads_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_ads_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc8560_ads_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc8560ads_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/mpc85xx_ads_common.c
+ *
+ * MPC85xx ADS board common routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/ocp.h>
+
+#include <mm/mmu_decl.h>
+
+#include <platforms/85xx/mpc85xx_ads_common.h>
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+unsigned char __res[sizeof (bd_t)];
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char mpc85xx_ads_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+ 0x0, /* External 0: */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 1: PCI slot 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI slot 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI slot 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 4: PCI slot 3 */
+#else
+ 0x0, /* External 1: */
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+ 0x0, /* External 4: */
+#endif
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PHY */
+ 0x0, /* External 6: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 7: PHY */
+ 0x0, /* External 8: */
+ 0x0, /* External 9: */
+ 0x0, /* External 10: */
+ 0x0, /* External 11: */
+};
+
+/* ************************************************************************ */
+int
+mpc85xx_ads_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Freescale Semiconductor\n");
+
+ switch (svid & 0xffff0000) {
+ case SVR_8540:
+ seq_printf(m, "Machine\t\t: mpc8540ads\n");
+ break;
+ case SVR_8560:
+ seq_printf(m, "Machine\t\t: mpc8560ads\n");
+ break;
+ default:
+ seq_printf(m, "Machine\t\t: unknown\n");
+ break;
+ }
+ seq_printf(m, "clock\t\t: %dMHz\n", freq / 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+void __init
+mpc85xx_ads_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr =
+ binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = mpc85xx_ads_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (mpc85xx_ads_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /* Start the OpenPIC interrupts at an offset to leave space
+ * for cascaded interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
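+ /* Hypothetical illustration: if MPC85xx_OPENPIC_IRQ_OFFSET were 16,
+ * internal source 26 (the DUART) would surface as Linux IRQ 42,
+ * leaving the low IRQ numbers free for the cascaded CPM2 sources. */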
+
+ return;
+}
+
+#ifdef CONFIG_PCI
+/*
+ * interrupt routing
+ */
+
+int
+mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * This is a little evil, but it works around the fact
+ * that revA boards have IDSEL starting at 18
+ * while other (older) boards start at 12.
+ *
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 2 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 5 */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 12 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 15 */
+ {0, 0, 0, 0}, /* -- */
+ {0, 0, 0, 0}, /* -- */
+ {PIRQA, PIRQB, PIRQC, PIRQD}, /* IDSEL 18 */
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA}, /* IDSEL 21 */
+ };
+
+ const long min_idsel = 2, max_idsel = 21, irqs_per_slot = 4;
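+ /* Illustrative note: PCI_IRQ_TABLE_LOOKUP resolves to
+ * pci_irq_table[idsel - min_idsel][pin - 1]; e.g. a device at
+ * IDSEL 13 asserting INTA# (pin 1) maps to PIRQD above. */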
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
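+/* The check below excludes the host bridge itself (bus 0, slot 0)
+ * from PCI config-space probing. */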
+int
+mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+
+#endif /* CONFIG_PCI */
--- /dev/null
+/*
+ * arch/ppc/platform/85xx/mpc85xx_cds_common.c
+ *
+ * MPC85xx CDS board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+#include <linux/root_dev.h>
+#include <linux/initrd.h>
+#include <linux/tty.h>
+#include <linux/serial_core.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/immap_cpm2.h>
+#include <asm/ocp.h>
+#include <asm/kgdb.h>
+
+#include <mm/mmu_decl.h>
+#include <syslib/cpm2_pic.h>
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+unsigned char __res[sizeof (bd_t)];
+
+static int cds_pci_slot = 2;
+static volatile u8 * cadmus;
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char mpc85xx_cds_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 0: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 1: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI1 slot */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI1 slot */
+#else
+ 0x0, /* External 0: */
+ 0x0, /* External 1: */
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+#endif
+ 0x0, /* External 4: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PHY */
+ 0x0, /* External 6: */
+ 0x0, /* External 7: */
+ 0x0, /* External 8: */
+ 0x0, /* External 9: */
+ 0x0, /* External 10: */
+#if defined(CONFIG_85xx_PCI2) && defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 11: PCI2 slot 0 */
+#else
+ 0x0, /* External 11: */
+#endif
+};
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
+ GFAR_HAS_PHY_INTR),
+ .phyid = 0,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT5,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR |
+ GFAR_HAS_PHY_INTR),
+ .phyid = 1,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+/* ************************************************************************ */
+int
+mpc85xx_cds_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Freescale Semiconductor\n");
+ seq_printf(m, "Machine\t\t: CDS (%x)\n", cadmus[CM_VER]);
+ seq_printf(m, "clock\t\t: %dMHz\n", freq / 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+#ifdef CONFIG_CPM2
+static irqreturn_t cpm2_cascade(int irq, void *dev_id, struct pt_regs *regs)
+{
+ while ((irq = cpm2_get_irq(regs)) >= 0)
+ __do_IRQ(irq, regs);
+ return IRQ_HANDLED;
+}
+
+static struct irqaction cpm2_irqaction = {
+ .handler = cpm2_cascade,
+ .flags = SA_INTERRUPT,
+ .mask = CPU_MASK_NONE,
+ .name = "cpm2_cascade",
+};
+#endif /* CONFIG_CPM2 */
+
+void __init
+mpc85xx_cds_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+#ifdef CONFIG_CPM2
+ volatile cpm2_map_t *immap = cpm2_immr;
+ int i;
+#endif
+
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = mpc85xx_cds_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (mpc85xx_cds_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /* Start the OpenPIC interrupts at an offset to leave space
+ * for cascaded interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+#ifdef CONFIG_CPM2
+ /* disable all CPM interrupts */
+ immap->im_intctl.ic_simrh = 0x0;
+ immap->im_intctl.ic_simrl = 0x0;
+
+ for (i = CPM_IRQ_OFFSET; i < (NR_CPM_INTS + CPM_IRQ_OFFSET); i++)
+ irq_desc[i].handler = &cpm2_pic;
+
+ /* Initialize the default interrupt mapping priorities,
+ * in case the boot rom changed something on us.
+ */
+ immap->im_intctl.ic_sicr = 0;
+ immap->im_intctl.ic_scprrh = 0x05309770;
+ immap->im_intctl.ic_scprrl = 0x05309770;
+
+ setup_irq(MPC85xx_IRQ_CPM, &cpm2_irqaction);
+#endif
+
+ return;
+}
+
+#ifdef CONFIG_PCI
+/*
+ * interrupt routing
+ */
+int
+mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
+{
+ struct pci_controller *hose = pci_bus_to_hose(dev->bus->number);
+
+ if (!hose->index)
+ {
+ /* Handle PCI1 interrupts */
+ char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+
+ /* Note: IRQ assignment for the slots depends on which slot the
+ * Elysium is in -- in this setup the Elysium is in slot #2
+ * (thus PIRQA is the first interrupt on that slot). */
+ {
+ { 0, 1, 2, 3 }, /* 16 - PMC */
+ { 3, 0, 0, 0 }, /* 17 P2P (Tsi320) */
+ { 0, 1, 2, 3 }, /* 18 - Slot 1 */
+ { 1, 2, 3, 0 }, /* 19 - Slot 2 */
+ { 2, 3, 0, 1 }, /* 20 - Slot 3 */
+ { 3, 0, 1, 2 }, /* 21 - Slot 4 */
+ };
+
+ const long min_idsel = 16, max_idsel = 21, irqs_per_slot = 4;
+ int i, j;
+
+ for (i = 0; i < 6; i++)
+ for (j = 0; j < 4; j++)
+ pci_irq_table[i][j] =
+ ((pci_irq_table[i][j] + 5 -
+ cds_pci_slot) & 0x3) + PIRQ0A;
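+
+ /* Illustrative: with the Elysium in slot 2 (cds_pci_slot == 2),
+ * each entry v becomes ((v + 3) & 0x3) + PIRQ0A, so the IDSEL 18
+ * (Slot 1) row {0, 1, 2, 3} rotates to
+ * {PIRQ0D, PIRQ0A, PIRQ0B, PIRQ0C}. */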
+
+ return PCI_IRQ_TABLE_LOOKUP;
+ } else {
+ /* Handle PCI2 interrupts (if we have one) */
+ char pci_irq_table[][4] =
+ {
+ /*
+ * We only have one slot; all four interrupt
+ * pins (INTA-INTD) are wired to PIRQ1A. */
+ { PIRQ1A, PIRQ1A, PIRQ1A, PIRQ1A }, /* 21 - slot 0 */
+ };
+
+ const long min_idsel = 21, max_idsel = 21, irqs_per_slot = 4;
+
+ return PCI_IRQ_TABLE_LOOKUP;
+ }
+}
+
+#define ARCADIA_HOST_BRIDGE_IDSEL 17
+#define ARCADIA_2ND_BRIDGE_IDSEL 3
+
+int
+mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+#ifdef CONFIG_85xx_PCI2
+ /* With the current code we know PCI2 will be bus 2; however, this
+ * is not guaranteed. */
+ if (bus == 2 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+#endif
+ /* We explicitly do not go past the Tundra 320 Bridge */
+ if (bus == 1)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL))
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+#endif /* CONFIG_PCI */
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+mpc85xx_cds_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ printk("mpc85xx_cds_setup_arch\n");
+
+#ifdef CONFIG_CPM2
+ cpm2_reset();
+#endif
+
+ cadmus = ioremap(CADMUS_BASE, CADMUS_SIZE);
+ cds_pci_slot = ((cadmus[CM_CSR] >> 6) & 0x3) + 1;
+ printk("CDS Version = %x in PCI slot %d\n", cadmus[CM_VER], cds_pci_slot);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+
+#ifdef CONFIG_SERIAL_8250
+ mpc85xx_early_serial_map();
+#endif
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the entry we stole earlier; the serial ports
+ * should now be properly mapped. */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ {
+ bd_t *binfo = (bd_t *) __res;
+
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, binfo->bi_immr_base,
+ binfo->bi_immr_base, MPC85xx_CCSRBAR_SIZE, _PAGE_IO, 0);
+ }
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = mpc85xx_cds_setup_arch;
+ ppc_md.show_cpuinfo = mpc85xx_cds_show_cpuinfo;
+
+ ppc_md.init_IRQ = mpc85xx_cds_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("mpc85xx_cds_init(): exit", 0);
+
+ return;
+}
--- /dev/null
+/*
+ * arch/ppc/platforms/85xx/sbc8560.c
+ *
+ * Wind River SBC8560 board specific routines
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/root_dev.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+#include <linux/initrd.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/kgdb.h>
+#include <asm/ocp.h>
+#include <mm/mmu_decl.h>
+
+#include <syslib/ppc85xx_common.h>
+#include <syslib/ppc85xx_setup.h>
+
+struct ocp_gfar_data mpc85xx_tsec1_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC1_TX,
+ .interruptError = MPC85xx_IRQ_TSEC1_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC1_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT6,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
+ .phyid = 25,
+ .phyregidx = 0,
+};
+
+struct ocp_gfar_data mpc85xx_tsec2_def = {
+ .interruptTransmit = MPC85xx_IRQ_TSEC2_TX,
+ .interruptError = MPC85xx_IRQ_TSEC2_ERROR,
+ .interruptReceive = MPC85xx_IRQ_TSEC2_RX,
+ .interruptPHY = MPC85xx_IRQ_EXT7,
+ .flags = (GFAR_HAS_GIGABIT | GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR),
+ .phyid = 26,
+ .phyregidx = 0,
+};
+
+struct ocp_fs_i2c_data mpc85xx_i2c1_def = {
+ .flags = FS_I2C_SEPARATE_DFSRR,
+};
+
+
+#ifdef CONFIG_SERIAL_8250
+static void __init
+sbc8560_early_serial_map(void)
+{
+ struct uart_port uart_req;
+
+ /* Setup serial port access */
+ memset(&uart_req, 0, sizeof (uart_req));
+ uart_req.irq = MPC85xx_IRQ_EXT9;
+ uart_req.flags = STD_COM_FLAGS;
+ uart_req.uartclk = BASE_BAUD * 16;
+ uart_req.iotype = SERIAL_IO_MEM;
+ uart_req.mapbase = UARTA_ADDR;
+ uart_req.membase = ioremap(uart_req.mapbase, MPC85xx_UART0_SIZE);
+ uart_req.type = PORT_16650;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(0, &uart_req);
+#endif
+
+ if (early_serial_setup(&uart_req) != 0)
+ printk("Early serial init of port 0 failed\n");
+
+ /* Assume early_serial_setup() doesn't modify uart_req */
+ uart_req.line = 1;
+ uart_req.mapbase = UARTB_ADDR;
+ uart_req.membase = ioremap(uart_req.mapbase, MPC85xx_UART1_SIZE);
+ uart_req.irq = MPC85xx_IRQ_EXT10;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(1, &uart_req);
+#endif
+
+ if (early_serial_setup(&uart_req) != 0)
+ printk("Early serial init of port 1 failed\n");
+}
+#endif
+
+/* ************************************************************************
+ *
+ * Setup the architecture
+ *
+ */
+static void __init
+sbc8560_setup_arch(void)
+{
+ struct ocp_def *def;
+ struct ocp_gfar_data *einfo;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ if (ppc_md.progress)
+ ppc_md.progress("sbc8560_setup_arch()", 0);
+
+ /* Set loops_per_jiffy to a half-way reasonable value,
+ for use until calibrate_delay gets called. */
+ loops_per_jiffy = freq / HZ;
+
+#ifdef CONFIG_PCI
+ /* setup PCI host bridges */
+ mpc85xx_setup_hose();
+#endif
+#ifdef CONFIG_SERIAL_8250
+ sbc8560_early_serial_map();
+#endif
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Invalidate the entry we stole earlier; the serial ports
+ * should now be properly mapped. */
+ invalidate_tlbcam_entry(NUM_TLBCAMS - 1);
+#endif
+
+ /* Set up MAC addresses for the Ethernet devices */
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 0);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enetaddr, 6);
+ }
+
+ def = ocp_get_one_device(OCP_VENDOR_FREESCALE, OCP_FUNC_GFAR, 1);
+ if (def) {
+ einfo = (struct ocp_gfar_data *) def->additions;
+ memcpy(einfo->mac_addr, binfo->bi_enet1addr, 6);
+ }
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ if (initrd_start)
+ ROOT_DEV = Root_RAM0;
+ else
+#endif
+#ifdef CONFIG_ROOT_NFS
+ ROOT_DEV = Root_NFS;
+#else
+ ROOT_DEV = Root_HDA1;
+#endif
+
+ ocp_for_each_device(mpc85xx_update_paddr_ocp, &(binfo->bi_immr_base));
+}
+
+/* ************************************************************************ */
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* parse_bootinfo must always be called first */
+ parse_bootinfo(find_bootinfo());
+
+ /*
+ * If we were passed board information, copy it into the
+ * residual data area.
+ */
+ if (r3) {
+ memcpy((void *) __res, (void *) (r3 + KERNELBASE),
+ sizeof (bd_t));
+ }
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ /* Use the last TLB entry to map CCSRBAR to allow access to DUART regs */
+ settlbcam(NUM_TLBCAMS - 1, UARTA_ADDR,
+ UARTA_ADDR, 0x1000, _PAGE_IO, 0);
+#endif
+
+#if defined(CONFIG_BLK_DEV_INITRD)
+ /*
+ * If the init RAM disk has been configured in, and there's a valid
+ * starting address for it, set it up.
+ */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif /* CONFIG_BLK_DEV_INITRD */
+
+ /* Copy the kernel command line arguments to a safe place. */
+
+ if (r6) {
+ *(char *) (r7 + KERNELBASE) = 0;
+ strcpy(cmd_line, (char *) (r6 + KERNELBASE));
+ }
+
+ /* setup the PowerPC module struct */
+ ppc_md.setup_arch = sbc8560_setup_arch;
+ ppc_md.show_cpuinfo = sbc8560_show_cpuinfo;
+
+ ppc_md.init_IRQ = sbc8560_init_IRQ;
+ ppc_md.get_irq = openpic_get_irq;
+
+ ppc_md.restart = mpc85xx_restart;
+ ppc_md.power_off = mpc85xx_power_off;
+ ppc_md.halt = mpc85xx_halt;
+
+ ppc_md.find_end_of_memory = mpc85xx_find_end_of_memory;
+
+ ppc_md.time_init = NULL;
+ ppc_md.set_rtc_time = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.calibrate_decr = mpc85xx_calibrate_decr;
+
+#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
+ ppc_md.progress = gen550_progress;
+#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
+
+ if (ppc_md.progress)
+ ppc_md.progress("sbc8560_init(): exit", 0);
+}
--- /dev/null
+/*
+ * arch/ppc/platform/85xx/sbc85xx.c
+ *
+ * WindRiver PowerQUICC III SBC85xx board common routines
+ *
+ * Copyright 2002, 2003 Motorola Inc.
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/stddef.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/reboot.h>
+#include <linux/pci.h>
+#include <linux/kdev_t.h>
+#include <linux/major.h>
+#include <linux/console.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+#include <linux/seq_file.h>
+#include <linux/serial.h>
+#include <linux/module.h>
+
+#include <asm/system.h>
+#include <asm/pgtable.h>
+#include <asm/page.h>
+#include <asm/atomic.h>
+#include <asm/time.h>
+#include <asm/io.h>
+#include <asm/machdep.h>
+#include <asm/prom.h>
+#include <asm/open_pic.h>
+#include <asm/bootinfo.h>
+#include <asm/pci-bridge.h>
+#include <asm/mpc85xx.h>
+#include <asm/irq.h>
+#include <asm/immap_85xx.h>
+#include <asm/ocp.h>
+
+#include <mm/mmu_decl.h>
+
+#include <platforms/85xx/sbc85xx.h>
+
+unsigned char __res[sizeof (bd_t)];
+
+#ifndef CONFIG_PCI
+unsigned long isa_io_base = 0;
+unsigned long isa_mem_base = 0;
+unsigned long pci_dram_offset = 0;
+#endif
+
+extern unsigned long total_memory; /* in mm/init */
+
+/* Internal interrupts are all Level Sensitive, and Positive Polarity */
+
+static u_char sbc8560_openpic_initsenses[] __initdata = {
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 0: L2 Cache */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 1: ECM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 2: DDR DRAM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 3: LBIU */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 4: DMA 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 5: DMA 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 6: DMA 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 7: DMA 3 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 8: PCI/PCI-X */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 9: RIO Inbound Port Write Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 10: RIO Doorbell Inbound */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 11: RIO Outbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 12: RIO Inbound Message */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 13: TSEC 0 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 14: TSEC 0 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 15: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 16: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 17: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 18: TSEC 0 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 19: TSEC 1 Transmit */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 20: TSEC 1 Receive */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 21: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 22: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 23: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 24: TSEC 1 Receive/Transmit Error */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 25: Fast Ethernet */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 26: DUART */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 27: I2C */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 28: Performance Monitor */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 29: Unused */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 30: CPM */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* Internal 31: Unused */
+ 0x0, /* External 0: */
+ 0x0, /* External 1: */
+#if defined(CONFIG_PCI)
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 2: PCI slot 0 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 3: PCI slot 1 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 4: PCI slot 2 */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 5: PCI slot 3 */
+#else
+ 0x0, /* External 2: */
+ 0x0, /* External 3: */
+ 0x0, /* External 4: */
+ 0x0, /* External 5: */
+#endif
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 6: PHY */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_NEGATIVE), /* External 7: PHY */
+ 0x0, /* External 8: */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* External 9: PHY */
+ (IRQ_SENSE_LEVEL | IRQ_POLARITY_POSITIVE), /* External 10: PHY */
+ 0x0, /* External 11: */
+};
+
+/* ************************************************************************ */
+int
+sbc8560_show_cpuinfo(struct seq_file *m)
+{
+ uint pvid, svid, phid1;
+ uint memsize = total_memory;
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq;
+
+ /* get the core frequency */
+ freq = binfo->bi_intfreq;
+
+ pvid = mfspr(PVR);
+ svid = mfspr(SVR);
+
+ seq_printf(m, "Vendor\t\t: Wind River\n");
+
+ switch (svid & 0xffff0000) {
+ case SVR_8540:
+ seq_printf(m, "Machine\t\t: hhmmm, this board isn't made yet!\n");
+ break;
+ case SVR_8560:
+ seq_printf(m, "Machine\t\t: SBC8560\n");
+ break;
+ default:
+ seq_printf(m, "Machine\t\t: unknown\n");
+ break;
+ }
+ seq_printf(m, "clock\t\t: %dMHz\n", freq / 1000000);
+ seq_printf(m, "PVR\t\t: 0x%x\n", pvid);
+ seq_printf(m, "SVR\t\t: 0x%x\n", svid);
+
+ /* Display cpu Pll setting */
+ phid1 = mfspr(HID1);
+ seq_printf(m, "PLL setting\t: 0x%x\n", ((phid1 >> 24) & 0x3f));
+
+ /* Display the amount of memory */
+ seq_printf(m, "Memory\t\t: %d MB\n", memsize / (1024 * 1024));
+
+ return 0;
+}
+
+void __init
+sbc8560_init_IRQ(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ /* Determine the Physical Address of the OpenPIC regs */
+ phys_addr_t OpenPIC_PAddr =
+ binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
+ OpenPIC_Addr = ioremap(OpenPIC_PAddr, MPC85xx_OPENPIC_SIZE);
+ OpenPIC_InitSenses = sbc8560_openpic_initsenses;
+ OpenPIC_NumInitSenses = sizeof (sbc8560_openpic_initsenses);
+
+ /* Skip reserved space and internal sources */
+ openpic_set_sources(0, 32, OpenPIC_Addr + 0x10200);
+ /* Map PIC IRQs 0-11 */
+ openpic_set_sources(32, 12, OpenPIC_Addr + 0x10000);
+
+ /* Start the OpenPIC interrupts at an offset to leave space
+ * for cascaded interrupts underneath.
+ */
+ openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
+
+ return;
+}
+
+/*
+ * interrupt routing
+ */
+
+#ifdef CONFIG_PCI
+int mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel,
+ unsigned char pin)
+{
+ static char pci_irq_table[][4] =
+ /*
+ * PCI IDSEL/INTPIN->INTLINE
+ * A B C D
+ */
+ {
+ {PIRQA, PIRQB, PIRQC, PIRQD},
+ {PIRQD, PIRQA, PIRQB, PIRQC},
+ {PIRQC, PIRQD, PIRQA, PIRQB},
+ {PIRQB, PIRQC, PIRQD, PIRQA},
+ };
+
+ const long min_idsel = 12, max_idsel = 15, irqs_per_slot = 4;
+ return PCI_IRQ_TABLE_LOOKUP;
+}
+
+int mpc85xx_exclude_device(u_char bus, u_char devfn)
+{
+ if (bus == 0 && PCI_SLOT(devfn) == 0)
+ return PCIBIOS_DEVICE_NOT_FOUND;
+ else
+ return PCIBIOS_SUCCESSFUL;
+}
+#endif /* CONFIG_PCI */
--- /dev/null
+/*
+ * arch/ppc/platforms/lite5200.c
+ *
+ * Platform support file for the Freescale LITE5200 based on MPC52xx.
+ * As much of this file as possible should be moved to syslib/mpc52xx_?????
+ * so that new MPC52xx-based platforms need only a minimal platform file
+ * (avoiding code duplication).
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based on the 2.4 code written by Kent Borg,
+ * Dale Farnsworth <dale.farnsworth@mvista.com> and
+ * Wolfgang Denk <wd@denx.de>
+ *
+ * Copyright 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright 2003 Motorola Inc.
+ * Copyright 2003 MontaVista Software Inc.
+ * Copyright 2003 DENX Software Engineering (wd@denx.de)
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/initrd.h>
+#include <linux/seq_file.h>
+#include <linux/kdev_t.h>
+#include <linux/root_dev.h>
+#include <linux/console.h>
+
+#include <asm/bootinfo.h>
+#include <asm/io.h>
+#include <asm/ocp.h>
+#include <asm/mpc52xx.h>
+
+
+extern int powersave_nap;
+
+/* Board data given by U-Boot */
+bd_t __res;
+EXPORT_SYMBOL(__res); /* For modules */
+
+
+/* ======================================================================== */
+/* OCP device definition */
+/* For board/shared resources like PSCs */
+/* ======================================================================== */
+/* Be sure not to load conflicting devices: e.g. loading the UART driver for
+ * PSC1 and then also loading an AC97 driver for the same PSC.
+ * For details on how to create an entry, see the documentation of the
+ * relevant driver (e.g. drivers/serial/mpc52xx_uart.c for the PSC in UART mode).
+ */
+
+struct ocp_def board_ocp[] = {
+ {
+ .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_PSC_UART,
+ .index = 0,
+ .paddr = MPC52xx_PSC1,
+ .irq = MPC52xx_PSC1_IRQ,
+ .pm = OCP_CPM_NA,
+ },
+ { /* Terminating entry */
+ .vendor = OCP_VENDOR_INVALID
+ }
+};
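+
+/* Hypothetical example: a board that also wants PSC2 as a second UART would
+ * add an entry like the following before the terminating one (assuming the
+ * MPC52xx_PSC2 and MPC52xx_PSC2_IRQ definitions from asm/mpc52xx.h):
+ *
+ * { .vendor = OCP_VENDOR_FREESCALE, .function = OCP_FUNC_PSC_UART,
+ * .index = 1, .paddr = MPC52xx_PSC2, .irq = MPC52xx_PSC2_IRQ,
+ * .pm = OCP_CPM_NA, },
+ */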
+
+
+/* ======================================================================== */
+/* Platform specific code */
+/* ======================================================================== */
+
+static int
+lite5200_show_cpuinfo(struct seq_file *m)
+{
+ seq_printf(m, "machine\t\t: Freescale LITE5200\n");
+ return 0;
+}
+
+static void __init
+lite5200_setup_cpu(void)
+{
+ struct mpc52xx_intr *intr;
+
+ u32 intr_ctrl;
+
+ /* Map zones */
+ intr = (struct mpc52xx_intr *)
+ ioremap(MPC52xx_INTR,sizeof(struct mpc52xx_intr));
+
+ if (!intr) {
+ printk("lite5200.c: Error while mapping INTR during lite5200_setup_cpu\n");
+ goto unmap_regs;
+ }
+
+ /* IRQ[0-3] setup: IRQ0 - level active low,
+ * IRQ[1-3] - level active high */
+ intr_ctrl = in_be32(&intr->ctrl);
+ intr_ctrl &= ~0x00ff0000;
+ intr_ctrl |= 0x00c00000;
+ out_be32(&intr->ctrl, intr_ctrl);
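+ /* Each of IRQ[0-3] appears to get a 2-bit sense field in ctrl
+ * bits 16-23: the mask clears all four to level active high, and
+ * 0x00c00000 then sets IRQ0's field to level active low. */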
+
+ /* Unmap reg zone */
+unmap_regs:
+ if (intr) iounmap(intr);
+}
+
+static void __init
+lite5200_setup_arch(void)
+{
+ /* Add board OCP definitions */
+ mpc52xx_add_board_devices(board_ocp);
+
+ /* CPU & Port mux setup */
+ lite5200_setup_cpu();
+}
+
+void __init
+platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7)
+{
+ /* Generic MPC52xx platform initialization */
+ /* TODO: create a generic init routine, move as much as possible
+ into it, and put that init in syslib. */
+
+ struct bi_record *bootinfo = find_bootinfo();
+
+ if (bootinfo)
+ parse_bootinfo(bootinfo);
+ else {
+ /* Load the bd_t board info structure */
+ if (r3)
+ memcpy((void*)&__res,(void*)(r3+KERNELBASE),
+ sizeof(bd_t));
+
+#ifdef CONFIG_BLK_DEV_INITRD
+ /* Load the initrd */
+ if (r4) {
+ initrd_start = r4 + KERNELBASE;
+ initrd_end = r5 + KERNELBASE;
+ }
+#endif
+
+ /* Load the command line */
+ if (r6) {
+ *(char *)(r7+KERNELBASE) = 0;
+ strcpy(cmd_line, (char *)(r6+KERNELBASE));
+ }
+ }
+
+ /* BAT setup */
+ mpc52xx_set_bat();
+
+ /* No ISA bus AFAIK */
+ isa_io_base = 0;
+ isa_mem_base = 0;
+
+ /* Powersave */
+ powersave_nap = 1; /* We allow this platform to NAP */
+
+ /* Setup the ppc_md struct */
+ ppc_md.setup_arch = lite5200_setup_arch;
+ ppc_md.show_cpuinfo = lite5200_show_cpuinfo;
+ ppc_md.show_percpuinfo = NULL;
+ ppc_md.init_IRQ = mpc52xx_init_irq;
+ ppc_md.get_irq = mpc52xx_get_irq;
+
+ ppc_md.find_end_of_memory = mpc52xx_find_end_of_memory;
+ ppc_md.setup_io_mappings = mpc52xx_map_io;
+
+ ppc_md.restart = mpc52xx_restart;
+ ppc_md.power_off = mpc52xx_power_off;
+ ppc_md.halt = mpc52xx_halt;
+
+ /* No time keeper on the LITE5200 */
+ ppc_md.time_init = NULL;
+ ppc_md.get_rtc_time = NULL;
+ ppc_md.set_rtc_time = NULL;
+
+ ppc_md.calibrate_decr = mpc52xx_calibrate_decr;
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+ ppc_md.progress = mpc52xx_progress;
+#endif
+}
+
--- /dev/null
+/*
+ * arch/ppc/platforms/mpc5200.c
+ *
+ * OCP definitions for boards based on the MPC5200 processor. Contains
+ * definitions for all common peripherals (mostly everything but the PSCs).
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Copyright 2004 Sylvain Munaut <tnt@246tNt.com>
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <asm/ocp.h>
+#include <asm/mpc52xx.h>
+
+
+struct ocp_fs_i2c_data mpc5200_i2c_def = {
+ .flags = FS_I2C_CLOCK_5200,
+};
+
+
+/* Here is the core_ocp struct, with all the devices common to all boards.
+ * They are listed even if port multiplexing is not set up for them (if you
+ * don't want a device, simply don't select its config option). Potentially
+ * conflicting devices (like the PSCs) go in the board-specific file.
+ */
+struct ocp_def core_ocp[] = {
+ {
+ .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 0,
+ .paddr = MPC52xx_I2C1,
+ .irq = OCP_IRQ_NA, /* MPC52xx_IRQ_I2C1 - Buggy */
+ .pm = OCP_CPM_NA,
+ .additions = &mpc5200_i2c_def,
+ },
+ {
+ .vendor = OCP_VENDOR_FREESCALE,
+ .function = OCP_FUNC_IIC,
+ .index = 1,
+ .paddr = MPC52xx_I2C2,
+ .irq = OCP_IRQ_NA, /* MPC52xx_IRQ_I2C2 - Buggy */
+ .pm = OCP_CPM_NA,
+ .additions = &mpc5200_i2c_def,
+ },
+ { /* Terminating entry */
+ .vendor = OCP_VENDOR_INVALID
+ }
+};
--- /dev/null
+/*
+ * A collection of structures, addresses, and values associated with
+ * the Motorola MPC8260ADS/MPC8266ADS-PCI boards.
+ * Copied from the RPX-Classic and SBS8260 stuff.
+ *
+ * Copyright (c) 2001 Dan Malek (dan@mvista.com)
+ */
+#ifdef __KERNEL__
+#ifndef __MACH_ADS8260_DEFS
+#define __MACH_ADS8260_DEFS
+
+#include <linux/config.h>
+
+#include <asm/ppcboot.h>
+
+/* Memory map is configured by the PROM startup.
+ * We just map a few things we need. The CSR is actually a set of
+ * 4-byte-wide registers that can be accessed as 8-, 16-, or 32-bit values.
+ */
+#define CPM_MAP_ADDR ((uint)0xf0000000)
+#define BCSR_ADDR ((uint)0xf4500000)
+#define BCSR_SIZE ((uint)(32 * 1024))
+
+#define BOOTROM_RESTART_ADDR ((uint)0xff000104)
+
+/* For our show_cpuinfo hooks. */
+#define CPUINFO_VENDOR "Motorola"
+#define CPUINFO_MACHINE "PQ2 ADS PowerPC"
+
+/* The ADS8260 has sixteen 32-bit-wide control/status registers, accessed
+ * only on word boundaries.
+ * Not all are used (yet), or are interesting to us (yet).
+ */
+
+/* Things of interest in the CSR. */
+#define BCSR0_LED0 ((uint)0x02000000) /* 0 == on */
+#define BCSR0_LED1 ((uint)0x01000000) /* 0 == on */
+#define BCSR1_FETHIEN ((uint)0x08000000) /* 0 == enable */
+#define BCSR1_FETH_RST ((uint)0x04000000) /* 0 == reset */
+#define BCSR1_RS232_EN1 ((uint)0x02000000) /* 0 == enable */
+#define BCSR1_RS232_EN2 ((uint)0x01000000) /* 0 == enable */
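+
+/* Illustrative (assuming BCSR_ADDR has been ioremap()ed): the BCSRs are
+ * four bytes wide, so BCSR1 sits at offset 4; clearing BCSR1_RS232_EN1
+ * there would enable the first RS232 transceiver (0 == enable). */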
+
+#define PHY_INTERRUPT SIU_INT_IRQ7
+
+#ifdef CONFIG_PCI
+/* PCI interrupt controller */
+#define PCI_INT_STAT_REG 0xF8200000
+#define PCI_INT_MASK_REG 0xF8200004
+#define PIRQA (NR_SIU_INTS + 0)
+#define PIRQB (NR_SIU_INTS + 1)
+#define PIRQC (NR_SIU_INTS + 2)
+#define PIRQD (NR_SIU_INTS + 3)
+
+/*
+ * PCI memory map definitions for MPC8266ADS-PCI.
+ *
+ * processor view
+ * local address PCI address target
+ * 0x80000000-0x9FFFFFFF 0x80000000-0x9FFFFFFF PCI mem with prefetch
+ * 0xA0000000-0xBFFFFFFF 0xA0000000-0xBFFFFFFF PCI mem w/o prefetch
+ * 0xF4000000-0xF7FFFFFF 0x00000000-0x03FFFFFF PCI IO
+ *
+ * PCI master view
+ * local address PCI address target
+ * 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory
+ */
+
+/* window for a PCI master to access MPC8266 memory */
+#define PCI_SLV_MEM_LOCAL 0x00000000 /* Local base */
+#define PCI_SLV_MEM_BUS 0x00000000 /* PCI base */
+
+/* window for the processor to access PCI memory with prefetching */
+#define PCI_MSTR_MEM_LOCAL 0x80000000 /* Local base */
+#define PCI_MSTR_MEM_BUS 0x80000000 /* PCI base */
+#define PCI_MSTR_MEM_SIZE 0x20000000 /* 512MB */
+
+/* window for the processor to access PCI memory without prefetching */
+#define PCI_MSTR_MEMIO_LOCAL 0xA0000000 /* Local base */
+#define PCI_MSTR_MEMIO_BUS 0xA0000000 /* PCI base */
+#define PCI_MSTR_MEMIO_SIZE 0x20000000 /* 512MB */
+
+/* window for the processor to access PCI I/O */
+#define PCI_MSTR_IO_LOCAL 0xF4000000 /* Local base */
+#define PCI_MSTR_IO_BUS 0x00000000 /* PCI base */
+#define PCI_MSTR_IO_SIZE 0x04000000 /* 64MB */
+
+#define _IO_BASE PCI_MSTR_IO_LOCAL
+#define _ISA_MEM_BASE PCI_MSTR_MEMIO_LOCAL
+#define PCI_DRAM_OFFSET PCI_SLV_MEM_BUS
+#endif /* CONFIG_PCI */
+
+#endif /* __MACH_ADS8260_DEFS */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * A collection of structures, addresses, and values associated with
+ * the Embedded Planet RPX6 (or RPX Super) MPC8260 board.
+ * Copied from the RPX-Classic and SBS8260 stuff.
+ *
+ * Copyright (c) 2001 Dan Malek <dan@embeddededge.com>
+ */
+#ifdef __KERNEL__
+#ifndef __ASM_PLATFORMS_RPX8260_H__
+#define __ASM_PLATFORMS_RPX8260_H__
+
+/* A Board Information structure that is given to a program when
+ * prom starts it up.
+ */
+typedef struct bd_info {
+ unsigned int bi_memstart; /* Memory start address */
+ unsigned int bi_memsize; /* Memory (end) size in bytes */
+ unsigned int bi_nvsize; /* NVRAM size in bytes (can be 0) */
+ unsigned int bi_intfreq; /* Internal Freq, in Hz */
+ unsigned int bi_busfreq; /* Bus Freq, in MHz */
+ unsigned int bi_cpmfreq; /* CPM Freq, in MHz */
+ unsigned int bi_brgfreq; /* BRG Freq, in MHz */
+ unsigned int bi_vco; /* VCO Out from PLL */
+ unsigned int bi_baudrate; /* Default console baud rate */
+ unsigned int bi_immr; /* IMMR when called from boot rom */
+ unsigned char bi_enetaddr[6];
+} bd_t;
+
+extern bd_t m8xx_board_info;
+
+/* Memory map is configured by the PROM startup.
+ * We just map a few things we need. The CSR is actually a set of
+ * 4-byte-wide registers that can be accessed as 8-, 16-, or 32-bit values.
+ */
+#define CPM_MAP_ADDR ((uint)0xf0000000)
+#define RPX_CSR_ADDR ((uint)0xfa000000)
+#define RPX_CSR_SIZE ((uint)(512 * 1024))
+#define RPX_NVRTC_ADDR ((uint)0xfa080000)
+#define RPX_NVRTC_SIZE ((uint)(512 * 1024))
+
+/* The RPX6 has sixteen byte-wide control/status registers.
+ * Not all are used (yet).
+ */
+extern volatile u_char *rpx6_csr_addr;
+
+/* Things of interest in the CSR. */
+#define BCSR0_ID_MASK ((u_char)0xf0) /* Read only */
+#define BCSR0_SWITCH_MASK ((u_char)0x0f) /* Read only */
+#define BCSR1_XCVR_SMC1 ((u_char)0x80)
+#define BCSR1_XCVR_SMC2 ((u_char)0x40)
+#define BCSR2_FLASH_WENABLE ((u_char)0x20)
+#define BCSR2_NVRAM_ENABLE ((u_char)0x10)
+#define BCSR2_ALT_IRQ2 ((u_char)0x08)
+#define BCSR2_ALT_IRQ3 ((u_char)0x04)
+#define BCSR2_PRST ((u_char)0x02) /* Force reset */
+#define BCSR2_ENPRST ((u_char)0x01) /* Enable POR */
+#define BCSR3_MODCLK_MASK ((u_char)0xe0)
+#define BCSR3_ENCLKHDR ((u_char)0x10)
+#define BCSR3_LED5 ((u_char)0x04) /* 0 == on */
+#define BCSR3_LED6 ((u_char)0x02) /* 0 == on */
+#define BCSR3_LED7 ((u_char)0x01) /* 0 == on */
+#define BCSR4_EN_PHY ((u_char)0x80) /* Enable PHY */
+#define BCSR4_EN_MII ((u_char)0x40) /* Enable MII */
+#define BCSR4_MII_READ ((u_char)0x04)
+#define BCSR4_MII_MDC ((u_char)0x02)
+#define BCSR4_MII_MDIO ((u_char)0x01)
+#define BCSR13_FETH_IRQMASK ((u_char)0xf0)
+#define BCSR15_FETH_IRQ ((u_char)0x20)
+
+#define PHY_INTERRUPT SIU_INT_IRQ7
+
+/* For our show_cpuinfo hooks. */
+#define CPUINFO_VENDOR "Embedded Planet"
+#define CPUINFO_MACHINE "EP8260 PowerPC"
+
+/* Warm reset vector. */
+#define BOOTROM_RESTART_ADDR ((uint)0xfff00104)
+
+#endif /* __ASM_PLATFORMS_RPX8260_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/* The CPM2 internal interrupt controller. It is usually
+ * the only interrupt controller.
+ * There are two 32-bit registers (high/low) for up to 64
+ * possible interrupts.
+ *
+ * Now, the fun starts... Interrupt numbers DO NOT MAP
+ * in a simple arithmetic fashion to mask or pending registers.
+ * That is, interrupt 4 does not map to bit position 4.
+ * We create two tables, indexed by vector number, to indicate
+ * which register to use and which bit in the register to use.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/irq.h>
+
+#include <asm/immap_cpm2.h>
+#include <asm/mpc8260.h>
+
+#include "cpm2_pic.h"
+
+static u_char irq_to_siureg[] = {
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 1, 1, 1, 1, 1, 1, 1, 1,
+ 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 0, 0, 0, 0, 0
+};
+
+static u_char irq_to_siubit[] = {
+ 31, 16, 17, 18, 19, 20, 21, 22,
+ 23, 24, 25, 26, 27, 28, 29, 30,
+ 29, 30, 16, 17, 18, 19, 20, 21,
+ 22, 23, 24, 25, 26, 27, 28, 31,
+ 0, 1, 2, 3, 4, 5, 6, 7,
+ 8, 9, 10, 11, 12, 13, 14, 15,
+ 15, 14, 13, 12, 11, 10, 9, 8,
+ 7, 6, 5, 4, 3, 2, 1, 0
+};
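+
+/* Worked example: vector 4 has irq_to_siureg[4] == 1 and
+ * irq_to_siubit[4] == 19, so it is masked in the low register
+ * (simr[1], i.e. ic_simrl) using the mask bit 1 << (31 - 19). */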
+
+static void cpm2_mask_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+}
+
+static void cpm2_unmask_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] |= (1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+}
+
+static void cpm2_mask_and_ack(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr, *sipnr;
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ sipnr = &(cpm2_immr->im_intctl.ic_sipnrh);
+ ppc_cached_irq_mask[word] &= ~(1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+ sipnr[word] = 1 << (31 - bit);
+}
+
+static void cpm2_end_irq(unsigned int irq_nr)
+{
+ int bit, word;
+ volatile uint *simr;
+
+ if (!(irq_desc[irq_nr].status & (IRQ_DISABLED|IRQ_INPROGRESS))
+ && irq_desc[irq_nr].action) {
+
+ bit = irq_to_siubit[irq_nr];
+ word = irq_to_siureg[irq_nr];
+
+ simr = &(cpm2_immr->im_intctl.ic_simrh);
+ ppc_cached_irq_mask[word] |= (1 << (31 - bit));
+ simr[word] = ppc_cached_irq_mask[word];
+ }
+}
+
+struct hw_interrupt_type cpm2_pic = {
+ " CPM2 SIU ",
+ NULL,
+ NULL,
+ cpm2_unmask_irq,
+ cpm2_mask_irq,
+ cpm2_mask_and_ack,
+ cpm2_end_irq,
+ 0
+};
+
+
+int
+cpm2_get_irq(struct pt_regs *regs)
+{
+ int irq;
+ unsigned long bits;
+
+ /* For CPM2, read the SIVEC register and shift the bits down
+ * to get the irq number. The vector sits in the top six bits,
+ * matching the 64 possible interrupt sources. */
+ bits = cpm2_immr->im_intctl.ic_sivec;
+ irq = bits >> 26;
+
+ if (irq == 0)
+ return(-1);
+ return irq;
+}
--- /dev/null
+#ifndef _PPC_KERNEL_CPM2_H
+#define _PPC_KERNEL_CPM2_H
+
+extern struct hw_interrupt_type cpm2_pic;
+extern int cpm2_get_irq(struct pt_regs *regs);
+
+#endif /* _PPC_KERNEL_CPM2_H */
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm440gx_common.c
+ *
+ * PPC440GX system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003, 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <asm/ibm44x.h>
+#include <asm/mmu.h>
+#include <asm/processor.h>
+#include <syslib/ibm440gx_common.h>
+
+/*
+ * Calculate 440GX clocks
+ */
+static inline u32 __fix_zero(u32 v, u32 def){
+ return v ? v : def;
+}
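+
+/* A zero divider field means the hardware maximum: e.g. the 5-bit FBDV
+ * field decodes 0 as divide-by-32, hence the default of 32 in the fbdv
+ * computation below. */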
+
+void __init ibm440gx_get_clocks(struct ibm44x_clocks* p, unsigned int sys_clk,
+ unsigned int ser_clk)
+{
+ u32 pllc = CPR_READ(DCRN_CPR_PLLC);
+ u32 plld = CPR_READ(DCRN_CPR_PLLD);
+ u32 uart0 = SDR_READ(DCRN_SDR_UART0);
+ u32 uart1 = SDR_READ(DCRN_SDR_UART1);
+
+ /* Dividers */
+ u32 fbdv = __fix_zero((plld >> 24) & 0x1f, 32);
+ u32 fwdva = __fix_zero((plld >> 16) & 0xf, 16);
+ u32 fwdvb = __fix_zero((plld >> 8) & 7, 8);
+ u32 lfbdv = __fix_zero(plld & 0x3f, 64);
+ u32 pradv0 = __fix_zero((CPR_READ(DCRN_CPR_PRIMAD) >> 24) & 7, 8);
+ u32 prbdv0 = __fix_zero((CPR_READ(DCRN_CPR_PRIMBD) >> 24) & 7, 8);
+ u32 opbdv0 = __fix_zero((CPR_READ(DCRN_CPR_OPBD) >> 24) & 3, 4);
+ u32 perdv0 = __fix_zero((CPR_READ(DCRN_CPR_PERD) >> 24) & 3, 4);
+
+ /* Input clocks for primary dividers */
+ u32 clk_a, clk_b;
+
+ if (pllc & 0x40000000){
+ u32 m;
+
+ /* Feedback path */
+ switch ((pllc >> 24) & 7){
+ case 0:
+ /* PLLOUTx */
+ m = ((pllc & 0x20000000) ? fwdvb : fwdva) * lfbdv;
+ break;
+ case 1:
+ /* CPU */
+ m = fwdva * pradv0;
+ break;
+ case 5:
+ /* PERClk */
+ m = fwdvb * prbdv0 * opbdv0 * perdv0;
+ break;
+ default:
+ printk(KERN_EMERG "invalid PLL feedback source\n");
+ goto bypass;
+ }
+ m *= fbdv;
+ p->vco = sys_clk * m;
+ clk_a = p->vco / fwdva;
+ clk_b = p->vco / fwdvb;
+ }
+ else {
+bypass:
+ /* Bypass system PLL */
+ p->vco = 0;
+ clk_a = clk_b = sys_clk;
+ }
+
+ p->cpu = clk_a / pradv0;
+ p->plb = clk_b / prbdv0;
+ p->opb = p->plb / opbdv0;
+ p->ebc = p->opb / perdv0;
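+
+ /* Sanity check (illustrative): with the CPU feedback path (case 1
+ * above), m == fwdva * pradv0 * fbdv, so
+ * p->cpu == p->vco / fwdva / pradv0 == sys_clk * fbdv. */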
+
+ /* UARTs clock */
+ if (uart0 & 0x00800000)
+ p->uart0 = ser_clk;
+ else
+ p->uart0 = p->plb / __fix_zero(uart0 & 0xff, 256);
+
+ if (uart1 & 0x00800000)
+ p->uart1 = ser_clk;
+ else
+ p->uart1 = p->plb / __fix_zero(uart1 & 0xff, 256);
+}
+
+/* Issue L2C diagnostic command */
+static inline u32 l2c_diag(u32 addr)
+{
+ mtdcr(DCRN_L2C0_ADDR, addr);
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_DIAG);
+ while (!(mfdcr(DCRN_L2C0_SR) & L2C_SR_CC)) ;
+ return mfdcr(DCRN_L2C0_DATA);
+}
+
+static irqreturn_t l2c_error_handler(int irq, void* dev, struct pt_regs* regs)
+{
+ u32 sr = mfdcr(DCRN_L2C0_SR);
+ if (sr & L2C_SR_CPE){
+ /* Read cache trapped address */
+ u32 addr = l2c_diag(0x42000000);
+ printk(KERN_EMERG "L2C: Cache Parity Error, addr[16:26] = 0x%08x\n", addr);
+ }
+ if (sr & L2C_SR_TPE){
+ /* Read tag trapped address */
+ u32 addr = l2c_diag(0x82000000) >> 16;
+ printk(KERN_EMERG "L2C: Tag Parity Error, addr[16:26] = 0x%08x\n", addr);
+ }
+
+ /* Clear parity errors */
+ if (sr & (L2C_SR_CPE | L2C_SR_TPE)){
+ mtdcr(DCRN_L2C0_ADDR, 0);
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_CCP | L2C_CMD_CTE);
+ } else
+ printk(KERN_EMERG "L2C: LRU error\n");
+
+ return IRQ_HANDLED;
+}
+
+/* Enable L2 cache */
+void __init ibm440gx_l2c_enable(void){
+ u32 r;
+ unsigned long flags;
+
+ /* Install error handler */
+ if (request_irq(87, l2c_error_handler, SA_INTERRUPT, "L2C", 0) < 0){
+ printk(KERN_ERR "Cannot install L2C error handler, cache is not enabled\n");
+ return;
+ }
+
+ local_irq_save(flags);
+ asm volatile ("sync" ::: "memory");
+
+ /* Disable SRAM */
+ mtdcr(DCRN_SRAM0_DPC, mfdcr(DCRN_SRAM0_DPC) & ~SRAM_DPC_ENABLE);
+ mtdcr(DCRN_SRAM0_SB0CR, mfdcr(DCRN_SRAM0_SB0CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB1CR, mfdcr(DCRN_SRAM0_SB1CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB2CR, mfdcr(DCRN_SRAM0_SB2CR) & ~SRAM_SBCR_BU_MASK);
+ mtdcr(DCRN_SRAM0_SB3CR, mfdcr(DCRN_SRAM0_SB3CR) & ~SRAM_SBCR_BU_MASK);
+
+ /* Enable L2_MODE without ICU/DCU */
+ r = mfdcr(DCRN_L2C0_CFG) & ~(L2C_CFG_ICU | L2C_CFG_DCU | L2C_CFG_SS_MASK);
+ r |= L2C_CFG_L2M | L2C_CFG_SS_256;
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ mtdcr(DCRN_L2C0_ADDR, 0);
+
+ /* Hardware Clear Command */
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_HCC);
+ while (!(mfdcr(DCRN_L2C0_SR) & L2C_SR_CC)) ;
+
+ /* Clear Cache Parity and Tag Errors */
+ mtdcr(DCRN_L2C0_CMD, L2C_CMD_CCP | L2C_CMD_CTE);
+
+	/* Enable 64G snoop region (two 32G windows) starting at 0 */
+ r = mfdcr(DCRN_L2C0_SNP0) & ~(L2C_SNP_BA_MASK | L2C_SNP_SSR_MASK);
+ r |= L2C_SNP_SSR_32G | L2C_SNP_ESR;
+ mtdcr(DCRN_L2C0_SNP0, r);
+
+ r = mfdcr(DCRN_L2C0_SNP1) & ~(L2C_SNP_BA_MASK | L2C_SNP_SSR_MASK);
+ r |= 0x80000000 | L2C_SNP_SSR_32G | L2C_SNP_ESR;
+ mtdcr(DCRN_L2C0_SNP1, r);
+
+ asm volatile ("sync" ::: "memory");
+
+ /* Enable ICU/DCU ports */
+ r = mfdcr(DCRN_L2C0_CFG);
+ r &= ~(L2C_CFG_DCW_MASK | L2C_CFG_PMUX_MASK | L2C_CFG_PMIM | L2C_CFG_TPEI
+ | L2C_CFG_CPEI | L2C_CFG_NAM | L2C_CFG_NBRM);
+ r |= L2C_CFG_ICU | L2C_CFG_DCU | L2C_CFG_TPC | L2C_CFG_CPC | L2C_CFG_FRAN
+ | L2C_CFG_CPIM | L2C_CFG_TPIM | L2C_CFG_LIM | L2C_CFG_SMCM;
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ asm volatile ("sync; isync" ::: "memory");
+ local_irq_restore(flags);
+}
+
+/* Disable L2 cache */
+void __init ibm440gx_l2c_disable(void)
+{
+ u32 r;
+ unsigned long flags;
+
+ local_irq_save(flags);
+ asm volatile ("sync" ::: "memory");
+
+ /* Disable L2C mode */
+ r = mfdcr(DCRN_L2C0_CFG) & ~(L2C_CFG_L2M | L2C_CFG_ICU | L2C_CFG_DCU);
+ mtdcr(DCRN_L2C0_CFG, r);
+
+ /* Enable SRAM */
+ mtdcr(DCRN_SRAM0_DPC, mfdcr(DCRN_SRAM0_DPC) | SRAM_DPC_ENABLE);
+ mtdcr(DCRN_SRAM0_SB0CR,
+ SRAM_SBCR_BAS0 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB1CR,
+ SRAM_SBCR_BAS1 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB2CR,
+ SRAM_SBCR_BAS2 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+ mtdcr(DCRN_SRAM0_SB3CR,
+ SRAM_SBCR_BAS3 | SRAM_SBCR_BS_64KB | SRAM_SBCR_BU_RW);
+
+ asm volatile ("sync; isync" ::: "memory");
+ local_irq_restore(flags);
+}
+
+void __init ibm440gx_l2c_setup(struct ibm44x_clocks* p)
+{
+	/* Disable L2C on rev. A, rev. B and the 800MHz (> 667MHz) version
+	   of rev. C; enable it on all other revisions
+ */
+ u32 pvr = mfspr(PVR);
+ if (pvr == PVR_440GX_RA || pvr == PVR_440GX_RB ||
+ (pvr == PVR_440GX_RC && p->cpu > 667000000))
+ ibm440gx_l2c_disable();
+ else
+ ibm440gx_l2c_enable();
+}
+
+int __init ibm440gx_get_eth_grp(void)
+{
+ return (SDR_READ(DCRN_SDR_PFC1) & DCRN_SDR_PFC1_EPS) >> DCRN_SDR_PFC1_EPS_SHIFT;
+}
+
+void __init ibm440gx_set_eth_grp(int group)
+{
+ SDR_WRITE(DCRN_SDR_PFC1, (SDR_READ(DCRN_SDR_PFC1) & ~DCRN_SDR_PFC1_EPS) | (group << DCRN_SDR_PFC1_EPS_SHIFT));
+}
+
+void __init ibm440gx_tah_enable(void)
+{
+ /* Enable TAH0 and TAH1 */
+ SDR_WRITE(DCRN_SDR_MFR,SDR_READ(DCRN_SDR_MFR) &
+ ~DCRN_SDR_MFR_TAH0);
+ SDR_WRITE(DCRN_SDR_MFR,SDR_READ(DCRN_SDR_MFR) &
+ ~DCRN_SDR_MFR_TAH1);
+}
+
+int ibm440gx_show_cpuinfo(struct seq_file *m)
+{
+	u32 l2c_cfg = mfdcr(DCRN_L2C0_CFG);
+ const char* s;
+ if (l2c_cfg & L2C_CFG_L2M){
+ switch (l2c_cfg & (L2C_CFG_ICU | L2C_CFG_DCU)){
+ case L2C_CFG_ICU: s = "I-Cache only"; break;
+ case L2C_CFG_DCU: s = "D-Cache only"; break;
+ default: s = "I-Cache/D-Cache"; break;
+ }
+ }
+ else
+ s = "disabled";
+
+ seq_printf(m, "L2-Cache\t: %s (0x%08x 0x%08x)\n", s,
+ l2c_cfg, mfdcr(DCRN_L2C0_SR));
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm440gx_common.h
+ *
+ * PPC440GX system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003, 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#ifdef __KERNEL__
+#ifndef __PPC_SYSLIB_IBM440GX_COMMON_H
+#define __PPC_SYSLIB_IBM440GX_COMMON_H
+
+#ifndef __ASSEMBLY__
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <syslib/ibm44x_common.h>
+
+/*
+ * Please refer to Figure 14.1 in the 440GX user manual
+ *
+ * if internal UART clock is used, ser_clk is ignored
+ */
+void ibm440gx_get_clocks(struct ibm44x_clocks*, unsigned int sys_clk,
+ unsigned int ser_clk) __init;
+
+/* Enable L2 cache */
+void ibm440gx_l2c_enable(void) __init;
+
+/* Disable L2 cache */
+void ibm440gx_l2c_disable(void) __init;
+
+/* Enable/disable L2 cache for a particular chip revision */
+void ibm440gx_l2c_setup(struct ibm44x_clocks*) __init;
+
+/* Get Ethernet Group */
+int ibm440gx_get_eth_grp(void) __init;
+
+/* Set Ethernet Group */
+void ibm440gx_set_eth_grp(int group) __init;
+
+/* Enable TAH devices */
+void ibm440gx_tah_enable(void) __init;
+
+/* Add L2C info to /proc/cpuinfo */
+int ibm440gx_show_cpuinfo(struct seq_file*);
+
+#endif /* __ASSEMBLY__ */
+#endif /* __PPC_SYSLIB_IBM440GX_COMMON_H */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * arch/ppc/kernel/ibm44x_common.h
+ *
+ * PPC44x system library
+ *
+ * Eugene Surovegin <eugene.surovegin@zultys.com> or <ebs@ebshome.net>
+ * Copyright (c) 2003, 2004 Zultys Technologies
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ */
+#ifdef __KERNEL__
+#ifndef __PPC_SYSLIB_IBM44x_COMMON_H
+#define __PPC_SYSLIB_IBM44x_COMMON_H
+
+#ifndef __ASSEMBLY__
+
+/*
+ * All clocks are in Hz
+ */
+struct ibm44x_clocks {
+ unsigned int vco; /* VCO, 0 if system PLL is bypassed */
+ unsigned int cpu; /* CPUCoreClk */
+ unsigned int plb; /* PLBClk */
+ unsigned int opb; /* OPBClk */
+ unsigned int ebc; /* PerClk */
+ unsigned int uart0;
+ unsigned int uart1;
+};
+
+/* common 44x platform init */
+void ibm44x_platform_init(void) __init;
+
+/* initialize decrementer and tick-related variables */
+void ibm44x_calibrate_decr(unsigned int freq) __init;
+
+#endif /* __ASSEMBLY__ */
+#endif /* __PPC_SYSLIB_IBM44x_COMMON_H */
+#endif /* __KERNEL__ */
--- /dev/null
+
+#ifndef _PPC_KERNEL_M8260_PCI_H
+#define _PPC_KERNEL_M8260_PCI_H
+
+#include <asm/m8260_pci.h>
+
+/*
+ * Local->PCI map (from CPU) controlled by
+ * MPC826x master window
+ *
+ * 0x80000000 - 0xBFFFFFFF Total CPU2PCI space PCIBR0
+ *
+ * 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1)
+ * 0xA0000000 - 0xAFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2)
+ * 0xB0000000 - 0xB0FFFFFF 32-bit PCI IO (Outbound ATU #3)
+ *
+ * PCI->Local map (from PCI)
+ * MPC826x slave window controlled by
+ *
+ * 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1)
+ */
+
+/*
+ * Slave window that allows PCI masters to access MPC826x local memory.
+ * This window is set up using the first set of Inbound ATU registers
+ */
+
+#ifndef MPC826x_PCI_SLAVE_MEM_LOCAL
+#define MPC826x_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart)
+#define MPC826x_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart)
+#define MPC826x_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize)
+#endif
+
+/*
+ * This is the window that allows the CPU to access PCI address space.
+ * It will be set up via the SIU PCIBR0 register. All three PCI master
+ * windows, which allow the CPU to access PCI prefetch, non prefetch,
+ * and IO space (see below), must all fit within this window.
+ */
+#ifndef MPC826x_PCI_BASE
+#define MPC826x_PCI_BASE 0x80000000
+#define MPC826x_PCI_MASK 0xc0000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_MEM
+#define MPC826x_PCI_LOWER_MEM 0x80000000
+#define MPC826x_PCI_UPPER_MEM 0x9fffffff
+#define MPC826x_PCI_MEM_OFFSET 0x00000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_MMIO
+#define MPC826x_PCI_LOWER_MMIO 0xa0000000
+#define MPC826x_PCI_UPPER_MMIO 0xafffffff
+#define MPC826x_PCI_MMIO_OFFSET 0x00000000
+#endif
+
+#ifndef MPC826x_PCI_LOWER_IO
+#define MPC826x_PCI_LOWER_IO 0x00000000
+#define MPC826x_PCI_UPPER_IO 0x00ffffff
+#define MPC826x_PCI_IO_BASE 0xb0000000
+#define MPC826x_PCI_IO_SIZE 0x01000000
+#endif
+
+#ifndef _IO_BASE
+#define _IO_BASE isa_io_base
+#endif
+
+#ifdef CONFIG_8260_PCI9
+struct pci_controller;
+extern void setup_m8260_indirect_pci(struct pci_controller* hose,
+ u32 cfg_addr, u32 cfg_data);
+#else
+#define setup_m8260_indirect_pci setup_indirect_pci
+#endif
+
+#endif /* _PPC_KERNEL_M8260_PCI_H */
--- /dev/null
+/*
+ * arch/ppc/syslib/mpc52xx_pic.c
+ *
+ * Programmable Interrupt Controller functions for the Freescale MPC52xx
+ * embedded CPU.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based on (well, mostly copied from) the code from the 2.4 kernel by
+ * Dale Farnsworth <dfarnsworth@mvista.com> and Kent Borg.
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 Montavista Software, Inc
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/stddef.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/signal.h>
+#include <linux/stddef.h>
+#include <linux/delay.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/processor.h>
+#include <asm/system.h>
+#include <asm/irq.h>
+#include <asm/mpc52xx.h>
+
+
+static struct mpc52xx_intr *intr;
+static struct mpc52xx_sdma *sdma;
+
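+/*
+ * Mapping used by the enable/disable routines below (a reading aid,
+ * derived from the code): IRQ0 is bit 11 of intr->ctrl, IRQ1..IRQ3 are
+ * bits 10..8 of intr->ctrl, main interrupts use bit (16 - n) of
+ * intr->main_mask, bestcomm/SDMA interrupts use bit n of sdma->IntMask,
+ * and peripheral interrupts use bit (31 - n) of intr->per_mask, where n
+ * is the interrupt's offset from its *_IRQ_BASE.
+ */
+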
+static void
+mpc52xx_ic_disable(unsigned int irq)
+{
+ u32 val;
+
+ if (irq == MPC52xx_IRQ0) {
+ val = in_be32(&intr->ctrl);
+ val &= ~(1 << 11);
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_IRQ1) {
+ BUG();
+ }
+ else if (irq <= MPC52xx_IRQ3) {
+ val = in_be32(&intr->ctrl);
+ val &= ~(1 << (10 - (irq - MPC52xx_IRQ1)));
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_SDMA_IRQ_BASE) {
+ val = in_be32(&intr->main_mask);
+ val |= 1 << (16 - (irq - MPC52xx_MAIN_IRQ_BASE));
+ out_be32(&intr->main_mask, val);
+ }
+ else if (irq < MPC52xx_PERP_IRQ_BASE) {
+ val = in_be32(&sdma->IntMask);
+ val |= 1 << (irq - MPC52xx_SDMA_IRQ_BASE);
+ out_be32(&sdma->IntMask, val);
+ }
+ else {
+ val = in_be32(&intr->per_mask);
+ val |= 1 << (31 - (irq - MPC52xx_PERP_IRQ_BASE));
+ out_be32(&intr->per_mask, val);
+ }
+}
+
+static void
+mpc52xx_ic_enable(unsigned int irq)
+{
+ u32 val;
+
+ if (irq == MPC52xx_IRQ0) {
+ val = in_be32(&intr->ctrl);
+ val |= 1 << 11;
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_IRQ1) {
+ BUG();
+ }
+ else if (irq <= MPC52xx_IRQ3) {
+ val = in_be32(&intr->ctrl);
+ val |= 1 << (10 - (irq - MPC52xx_IRQ1));
+ out_be32(&intr->ctrl, val);
+ }
+ else if (irq < MPC52xx_SDMA_IRQ_BASE) {
+ val = in_be32(&intr->main_mask);
+ val &= ~(1 << (16 - (irq - MPC52xx_MAIN_IRQ_BASE)));
+ out_be32(&intr->main_mask, val);
+ }
+ else if (irq < MPC52xx_PERP_IRQ_BASE) {
+ val = in_be32(&sdma->IntMask);
+ val &= ~(1 << (irq - MPC52xx_SDMA_IRQ_BASE));
+ out_be32(&sdma->IntMask, val);
+ }
+ else {
+ val = in_be32(&intr->per_mask);
+ val &= ~(1 << (31 - (irq - MPC52xx_PERP_IRQ_BASE)));
+ out_be32(&intr->per_mask, val);
+ }
+}
+
+static void
+mpc52xx_ic_ack(unsigned int irq)
+{
+ u32 val;
+
+ /*
+	 * Only some IRQs are acknowledged here; the others are cleared
+	 * in the interrupting hardware itself.
+ */
+
+ switch (irq) {
+ case MPC52xx_IRQ0:
+ val = in_be32(&intr->ctrl);
+ val |= 0x08000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_CCS_IRQ:
+ val = in_be32(&intr->enc_status);
+ val |= 0x00000400;
+ out_be32(&intr->enc_status, val);
+ break;
+ case MPC52xx_IRQ1:
+ val = in_be32(&intr->ctrl);
+ val |= 0x04000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_IRQ2:
+ val = in_be32(&intr->ctrl);
+ val |= 0x02000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ case MPC52xx_IRQ3:
+ val = in_be32(&intr->ctrl);
+ val |= 0x01000000;
+ out_be32(&intr->ctrl, val);
+ break;
+ default:
+ if (irq >= MPC52xx_SDMA_IRQ_BASE
+ && irq < (MPC52xx_SDMA_IRQ_BASE + MPC52xx_SDMA_IRQ_NUM)) {
+ out_be32(&sdma->IntPend,
+ 1 << (irq - MPC52xx_SDMA_IRQ_BASE));
+ }
+ break;
+ }
+}
+
+static void
+mpc52xx_ic_disable_and_ack(unsigned int irq)
+{
+ mpc52xx_ic_disable(irq);
+ mpc52xx_ic_ack(irq);
+}
+
+static void
+mpc52xx_ic_end(unsigned int irq)
+{
+ if (!(irq_desc[irq].status & (IRQ_DISABLED | IRQ_INPROGRESS)))
+ mpc52xx_ic_enable(irq);
+}
+
+static struct hw_interrupt_type mpc52xx_ic = {
+ "MPC52xx",
+ NULL, /* startup(irq) */
+ NULL, /* shutdown(irq) */
+ mpc52xx_ic_enable, /* enable(irq) */
+ mpc52xx_ic_disable, /* disable(irq) */
+ mpc52xx_ic_disable_and_ack, /* disable_and_ack(irq) */
+ mpc52xx_ic_end, /* end(irq) */
+ 0 /* set_affinity(irq, cpumask) SMP. */
+};
+
+void __init
+mpc52xx_init_irq(void)
+{
+ int i;
+ u32 intr_ctrl;
+
+ /* Remap the necessary zones */
+ intr = (struct mpc52xx_intr *)
+ ioremap(MPC52xx_INTR, sizeof(struct mpc52xx_intr));
+ sdma = (struct mpc52xx_sdma *)
+ ioremap(MPC52xx_SDMA, sizeof(struct mpc52xx_sdma));
+
+ if ((intr==NULL) || (sdma==NULL))
+		panic("Can't ioremap PIC/SDMA registers for init_irq!");
+
+ /* Disable all interrupt sources. */
+ out_be32(&sdma->IntPend, 0xffffffff); /* 1 means clear pending */
+ out_be32(&sdma->IntMask, 0xffffffff); /* 1 means disabled */
+ out_be32(&intr->per_mask, 0x7ffffc00); /* 1 means disabled */
+ out_be32(&intr->main_mask, 0x00010fff); /* 1 means disabled */
+ intr_ctrl = in_be32(&intr->ctrl);
+ intr_ctrl &= 0x00ff0000; /* Keeps IRQ[0-3] config */
+ intr_ctrl |= 0x0f000000 | /* clear IRQ 0-3 */
+ 0x00001000 | /* MEE master external enable */
+ 0x00000000 | /* 0 means disable IRQ 0-3 */
+ 0x00000001; /* CEb route critical normally */
+ out_be32(&intr->ctrl, intr_ctrl);
+
+ /* Zero a bunch of the priority settings. */
+ out_be32(&intr->per_pri1, 0);
+ out_be32(&intr->per_pri2, 0);
+ out_be32(&intr->per_pri3, 0);
+ out_be32(&intr->main_pri1, 0);
+ out_be32(&intr->main_pri2, 0);
+
+ /* Initialize irq_desc[i].handler's with mpc52xx_ic. */
+ for (i = 0; i < NR_IRQS; i++) {
+ irq_desc[i].handler = &mpc52xx_ic;
+ irq_desc[i].status = IRQ_LEVEL;
+ }
+
+	#define IRQn_MODE(intr_ctrl,irq) (((intr_ctrl) >> (22-((irq)<<1))) & 0x03)
+ for (i=0 ; i<4 ; i++) {
+ int mode;
+ mode = IRQn_MODE(intr_ctrl,i);
+ if ((mode == 0x1) || (mode == 0x2))
+			irq_desc[i ? MPC52xx_IRQ1 + i - 1 : MPC52xx_IRQ0].status = 0;
+ }
+}
+
+int
+mpc52xx_get_irq(struct pt_regs *regs)
+{
+ u32 status;
+ int irq = -1;
+
+ status = in_be32(&intr->enc_status);
+
+ if (status & 0x00000400) { /* critical */
+ irq = (status >> 8) & 0x3;
+ if (irq == 2) /* high priority peripheral */
+ goto peripheral;
+ irq += MPC52xx_CRIT_IRQ_BASE;
+ }
+ else if (status & 0x00200000) { /* main */
+ irq = (status >> 16) & 0x1f;
+ if (irq == 4) /* low priority peripheral */
+ goto peripheral;
+ irq += MPC52xx_MAIN_IRQ_BASE;
+ }
+ else if (status & 0x20000000) { /* peripheral */
+peripheral:
+ irq = (status >> 24) & 0x1f;
+ if (irq == 0) { /* bestcomm */
+ status = in_be32(&sdma->IntPend);
+ irq = ffs(status) + MPC52xx_SDMA_IRQ_BASE-1;
+ }
+ else
+ irq += MPC52xx_PERP_IRQ_BASE;
+ }
+
+ return irq;
+}
+
--- /dev/null
+/*
+ * arch/ppc/syslib/mpc52xx_setup.c
+ *
+ * Common code for the boards based on Freescale MPC52xx embedded CPU.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Support for other bootloaders than UBoot by Dale Farnsworth
+ * <dfarnsworth@mvista.com>
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 Montavista Software, Inc
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+
+#include <asm/io.h>
+#include <asm/time.h>
+#include <asm/mpc52xx.h>
+#include <asm/mpc52xx_psc.h>
+#include <asm/ocp.h>
+#include <asm/pgtable.h>
+#include <asm/ppcboot.h>
+
+extern bd_t __res;
+
+static int core_mult[] = { /* CPU Frequency multiplier, taken */
+ 0, 0, 0, 10, 20, 20, 25, 45, /* from the datasheet used to compute */
+ 30, 55, 40, 50, 0, 60, 35, 0, /* CPU frequency from XLB freq and */
+ 30, 25, 65, 10, 70, 20, 75, 45, /* external jumper config */
+ 0, 55, 40, 50, 80, 60, 35, 0
+};
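+
+/* Example (illustrative): if the low 5 bits of rstcfg select index 6, the
+ * multiplier is 25, i.e. CPU clock = 2.5 x XLB clock; the /10 scaling is
+ * applied where the table is used in mpc52xx_calibrate_decr(). */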
+
+void
+mpc52xx_restart(char *cmd)
+{
+ struct mpc52xx_gpt* gpt0 = (struct mpc52xx_gpt*) MPC52xx_GPTx(0);
+
+ local_irq_disable();
+
+ /* Turn on the watchdog and wait for it to expire. It effectively
+ does a reset */
+ if (gpt0 != NULL) {
+ out_be32(&gpt0->count, 0x000000ff);
+ out_be32(&gpt0->mode, 0x00009004);
+ } else
+		printk(KERN_ERR "mpc52xx_restart: unable to access GPT0 registers, looping ...\n");
+
+ while (1);
+}
+
+void
+mpc52xx_halt(void)
+{
+ local_irq_disable();
+
+ while (1);
+}
+
+void
+mpc52xx_power_off(void)
+{
+	/* By default we don't have any way to shut down.
+	   If a specific board wants one, it can replace this with its
+	   own hardware-implementation-dependent power-off code */
+ mpc52xx_halt();
+}
+
+
+void __init
+mpc52xx_set_bat(void)
+{
+ /* Set BAT 2 to map the 0xf0000000 area */
+ /* This mapping is used during mpc52xx_progress,
+ * mpc52xx_find_end_of_memory, and UARTs/GPIO access for debug
+ */
+ mb();
+ mtspr(DBAT2U, 0xf0001ffe);
+ mtspr(DBAT2L, 0xf000002a);
+ mb();
+}
+
+void __init
+mpc52xx_map_io(void)
+{
+ /* Here we only map the MBAR */
+ io_block_mapping(
+ MPC52xx_MBAR_VIRT, MPC52xx_MBAR, MPC52xx_MBAR_SIZE, _PAGE_IO);
+}
+
+
+#ifdef CONFIG_SERIAL_TEXT_DEBUG
+#ifdef MPC52xx_PF_CONSOLE_PORT
+#define MPC52xx_CONSOLE MPC52xx_PSCx(MPC52xx_PF_CONSOLE_PORT)
+#else
+#error "mpc52xx PSC for console not selected"
+#endif
+
+static void
+mpc52xx_psc_putc(struct mpc52xx_psc * psc, unsigned char c)
+{
+ while (!(in_be16(&psc->mpc52xx_psc_status) &
+ MPC52xx_PSC_SR_TXRDY));
+ out_8(&psc->mpc52xx_psc_buffer_8, c);
+}
+
+void
+mpc52xx_progress(char *s, unsigned short hex)
+{
+ struct mpc52xx_psc *psc = (struct mpc52xx_psc *)MPC52xx_CONSOLE;
+ char c;
+
+ while ((c = *s++) != 0) {
+ if (c == '\n')
+ mpc52xx_psc_putc(psc, '\r');
+ mpc52xx_psc_putc(psc, c);
+ }
+
+ mpc52xx_psc_putc(psc, '\r');
+ mpc52xx_psc_putc(psc, '\n');
+}
+
+#endif /* CONFIG_SERIAL_TEXT_DEBUG */
+
+
+unsigned long __init
+mpc52xx_find_end_of_memory(void)
+{
+ u32 ramsize = __res.bi_memsize;
+
+ /*
+ * if bootloader passed a memsize, just use it
+ * else get size from sdram config registers
+ */
+ if (ramsize == 0) {
+ struct mpc52xx_mmap_ctl *mmap_ctl;
+ u32 sdram_config_0, sdram_config_1;
+
+ /* Temp BAT2 mapping active when this is called ! */
+ mmap_ctl = (struct mpc52xx_mmap_ctl*) MPC52xx_MMAP_CTL;
+
+ sdram_config_0 = in_be32(&mmap_ctl->sdram0);
+ sdram_config_1 = in_be32(&mmap_ctl->sdram1);
+
+ if ((sdram_config_0 & 0x1f) >= 0x13)
+ ramsize = 1 << ((sdram_config_0 & 0xf) + 17);
+
+ if (((sdram_config_1 & 0x1f) >= 0x13) &&
+ ((sdram_config_1 & 0xfff00000) == ramsize))
+ ramsize += 1 << ((sdram_config_1 & 0xf) + 17);
+ }
+
+ return ramsize;
+}
+
+void __init
+mpc52xx_calibrate_decr(void)
+{
+ int current_time, previous_time;
+ int tbl_start, tbl_end;
+ unsigned int xlbfreq, cpufreq, ipbfreq, pcifreq, divisor;
+
+ xlbfreq = __res.bi_busfreq;
+ /* if bootloader didn't pass bus frequencies, calculate them */
+ if (xlbfreq == 0) {
+ /* Get RTC & Clock manager modules */
+ struct mpc52xx_rtc *rtc;
+ struct mpc52xx_cdm *cdm;
+
+ rtc = (struct mpc52xx_rtc*)
+ ioremap(MPC52xx_RTC, sizeof(struct mpc52xx_rtc));
+ cdm = (struct mpc52xx_cdm*)
+ ioremap(MPC52xx_CDM, sizeof(struct mpc52xx_cdm));
+
+ if ((rtc==NULL) || (cdm==NULL))
+ panic("Can't ioremap RTC/CDM while computing bus freq");
+
+ /* Count bus clock during 1/64 sec */
+ out_be32(&rtc->dividers, 0x8f1f0000); /* Set RTC 64x faster */
+ previous_time = in_be32(&rtc->time);
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_start = get_tbl();
+ previous_time = current_time;
+ while ((current_time = in_be32(&rtc->time)) == previous_time) ;
+ tbl_end = get_tbl();
+ out_be32(&rtc->dividers, 0xffff0000); /* Restore RTC */
+
+		/* Compute all the frequencies from that and the CDM settings */
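+		/* The timebase advances at xlbfreq / 4 (cf. divisor below)
+		 * and ticks were counted over 1/64 s (RTC run 64x faster
+		 * above), so xlbfreq = delta * 4 * 64 = delta << 8 */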
+ xlbfreq = (tbl_end - tbl_start) << 8;
+ cpufreq = (xlbfreq * core_mult[in_be32(&cdm->rstcfg)&0x1f])/10;
+ ipbfreq = (in_8(&cdm->ipb_clk_sel) & 1) ?
+ xlbfreq / 2 : xlbfreq;
+ switch (in_8(&cdm->pci_clk_sel) & 3) {
+ case 0:
+ pcifreq = ipbfreq;
+ break;
+ case 1:
+ pcifreq = ipbfreq / 2;
+ break;
+ default:
+ pcifreq = xlbfreq / 4;
+ break;
+ }
+ __res.bi_busfreq = xlbfreq;
+ __res.bi_intfreq = cpufreq;
+ __res.bi_ipbfreq = ipbfreq;
+ __res.bi_pcifreq = pcifreq;
+
+ /* Release mapping */
+ iounmap((void*)rtc);
+ iounmap((void*)cdm);
+ }
+
+ divisor = 4;
+
+ tb_ticks_per_jiffy = xlbfreq / HZ / divisor;
+ tb_to_us = mulhwu_scale_factor(xlbfreq / divisor, 1000000);
+}
+
+
+void __init
+mpc52xx_add_board_devices(struct ocp_def board_ocp[])
+{
+	while (board_ocp->vendor != OCP_VENDOR_INVALID)
+		if (ocp_add_one_device(board_ocp++))
+			printk(KERN_WARNING "mpc5200-ocp: Failed to add board device!\n");
+}
+
--- /dev/null
+/*
+ * arch/ppc/syslib/ppc85xx_setup.c
+ *
+ * MPC85XX common board code
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/serial.h>
+#include <linux/tty.h> /* for linux/serial_core.h */
+#include <linux/serial_core.h>
+
+#include <asm/prom.h>
+#include <asm/time.h>
+#include <asm/mpc85xx.h>
+#include <asm/immap_85xx.h>
+#include <asm/mmu.h>
+#include <asm/ocp.h>
+#include <asm/kgdb.h>
+
+#include <syslib/ppc85xx_setup.h>
+
+/* Return the amount of memory */
+unsigned long __init
+mpc85xx_find_end_of_memory(void)
+{
+ bd_t *binfo;
+
+ binfo = (bd_t *) __res;
+
+ return binfo->bi_memsize;
+}
+
+/* The decrementer counts at the system (internal) clock freq divided by 8 */
+void __init
+mpc85xx_calibrate_decr(void)
+{
+ bd_t *binfo = (bd_t *) __res;
+ unsigned int freq, divisor;
+
+	/* get the bus frequency */
+ freq = binfo->bi_busfreq;
+
+ /* The timebase is updated every 8 bus clocks, HID0[SEL_TBCLK] = 0 */
+ divisor = 8;
+ tb_ticks_per_jiffy = freq / divisor / HZ;
+ tb_to_us = mulhwu_scale_factor(freq / divisor, 1000000);
+
+ /* Set the time base to zero */
+ mtspr(SPRN_TBWL, 0);
+ mtspr(SPRN_TBWU, 0);
+
+ /* Clear any pending timer interrupts */
+ mtspr(SPRN_TSR, TSR_ENW | TSR_WIS | TSR_DIS | TSR_FIS);
+
+ /* Enable decrementer interrupt */
+ mtspr(SPRN_TCR, TCR_DIE);
+}
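+
+/* Worked example (illustrative numbers): with bi_busfreq = 333333333 Hz
+ * and HZ = 1000, tb_ticks_per_jiffy = 333333333 / 8 / 1000 = 41666. */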
+
+#ifdef CONFIG_SERIAL_8250
+void __init
+mpc85xx_early_serial_map(void)
+{
+ struct uart_port serial_req;
+ bd_t *binfo = (bd_t *) __res;
+ phys_addr_t duart_paddr = binfo->bi_immr_base + MPC85xx_UART0_OFFSET;
+
+ /* Setup serial port access */
+ memset(&serial_req, 0, sizeof (serial_req));
+ serial_req.uartclk = binfo->bi_busfreq;
+ serial_req.line = 0;
+ serial_req.irq = MPC85xx_IRQ_DUART;
+ serial_req.flags = ASYNC_BOOT_AUTOCONF | ASYNC_SKIP_TEST;
+ serial_req.iotype = SERIAL_IO_MEM;
+ serial_req.membase = ioremap(duart_paddr, MPC85xx_UART0_SIZE);
+ serial_req.mapbase = duart_paddr;
+ serial_req.regshift = 0;
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(0, &serial_req);
+#endif
+
+ if (early_serial_setup(&serial_req) != 0)
+ printk("Early serial init of port 0 failed\n");
+
+ /* Assume early_serial_setup() doesn't modify serial_req */
+ duart_paddr = binfo->bi_immr_base + MPC85xx_UART1_OFFSET;
+ serial_req.line = 1;
+ serial_req.mapbase = duart_paddr;
+ serial_req.membase = ioremap(duart_paddr, MPC85xx_UART1_SIZE);
+
+#if defined(CONFIG_SERIAL_TEXT_DEBUG) || defined(CONFIG_KGDB)
+ gen550_init(1, &serial_req);
+#endif
+
+ if (early_serial_setup(&serial_req) != 0)
+ printk("Early serial init of port 1 failed\n");
+}
+#endif
+
+void
+mpc85xx_restart(char *cmd)
+{
+ local_irq_disable();
+ abort();
+}
+
+void
+mpc85xx_power_off(void)
+{
+ local_irq_disable();
+ for(;;);
+}
+
+void
+mpc85xx_halt(void)
+{
+ local_irq_disable();
+ for(;;);
+}
+
+#ifdef CONFIG_PCI
+static void __init
+mpc85xx_setup_pci1(struct pci_controller *hose)
+{
+ volatile struct ccsr_pci *pci;
+ volatile struct ccsr_guts *guts;
+ unsigned short temps;
+ bd_t *binfo = (bd_t *) __res;
+
+ pci = ioremap(binfo->bi_immr_base + MPC85xx_PCI1_OFFSET,
+ MPC85xx_PCI1_SIZE);
+
+ guts = ioremap(binfo->bi_immr_base + MPC85xx_GUTS_OFFSET,
+ MPC85xx_GUTS_SIZE);
+
+ early_read_config_word(hose, 0, 0, PCI_COMMAND, &temps);
+ temps |= PCI_COMMAND_SERR | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
+ early_write_config_word(hose, 0, 0, PCI_COMMAND, temps);
+
+#define PORDEVSR_PCI (0x00800000) /* PCI Mode */
+ if (guts->pordevsr & PORDEVSR_PCI) {
+ early_write_config_byte(hose, 0, 0, PCI_LATENCY_TIMER, 0x80);
+ } else {
+ /* PCI-X init */
+ temps = PCI_X_CMD_MAX_SPLIT | PCI_X_CMD_MAX_READ
+ | PCI_X_CMD_ERO | PCI_X_CMD_DPERR_E;
+ early_write_config_word(hose, 0, 0, PCIX_COMMAND, temps);
+ }
+
+	/* Disable all windows (except powar0 since it's ignored) */
+ pci->powar1 = 0;
+ pci->powar2 = 0;
+ pci->powar3 = 0;
+ pci->powar4 = 0;
+ pci->piwar1 = 0;
+ pci->piwar2 = 0;
+ pci->piwar3 = 0;
+
+ /* Setup Phys:PCI 1:1 outbound mem window @ MPC85XX_PCI1_LOWER_MEM */
+ pci->potar1 = (MPC85XX_PCI1_LOWER_MEM >> 12) & 0x000fffff;
+ pci->potear1 = 0x00000000;
+ pci->powbar1 = (MPC85XX_PCI1_LOWER_MEM >> 12) & 0x000fffff;
+ /* Enable, Mem R/W */
+ pci->powar1 = 0x80044000 |
+ (__ilog2(MPC85XX_PCI1_UPPER_MEM - MPC85XX_PCI1_LOWER_MEM + 1) - 1);
+
+	/* Set up outbound IO windows @ MPC85XX_PCI1_IO_BASE */
+ pci->potar2 = 0x00000000;
+ pci->potear2 = 0x00000000;
+ pci->powbar2 = (MPC85XX_PCI1_IO_BASE >> 12) & 0x000fffff;
+ /* Enable, IO R/W */
+ pci->powar2 = 0x80088000 | (__ilog2(MPC85XX_PCI1_IO_SIZE) - 1);
+
+ /* Setup 2G inbound Memory Window @ 0 */
+ pci->pitar1 = 0x00000000;
+ pci->piwbar1 = 0x00000000;
+ pci->piwar1 = 0xa0f5501e; /* Enable, Prefetch, Local
+ Mem, Snoop R/W, 2G */
+}
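+
+/* Example of the outbound window encoding above (assuming a 512MB memory
+ * window): __ilog2(0x20000000) = 29, so powar1 = 0x80044000 | 28 =
+ * 0x8004401c, i.e. enabled, memory read/write, size 2^(28+1) bytes. */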
+
+
+extern int mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin);
+extern int mpc85xx_exclude_device(u_char bus, u_char devfn);
+
+#ifdef CONFIG_85xx_PCI2
+static void __init
+mpc85xx_setup_pci2(struct pci_controller *hose)
+{
+ volatile struct ccsr_pci *pci;
+ unsigned short temps;
+ bd_t *binfo = (bd_t *) __res;
+
+ pci = ioremap(binfo->bi_immr_base + MPC85xx_PCI2_OFFSET,
+ MPC85xx_PCI2_SIZE);
+
+ early_read_config_word(hose, hose->bus_offset, 0, PCI_COMMAND, &temps);
+ temps |= PCI_COMMAND_SERR | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY;
+ early_write_config_word(hose, hose->bus_offset, 0, PCI_COMMAND, temps);
+ early_write_config_byte(hose, hose->bus_offset, 0, PCI_LATENCY_TIMER, 0x80);
+
+	/* Disable all windows (except powar0 since it's ignored) */
+ pci->powar1 = 0;
+ pci->powar2 = 0;
+ pci->powar3 = 0;
+ pci->powar4 = 0;
+ pci->piwar1 = 0;
+ pci->piwar2 = 0;
+ pci->piwar3 = 0;
+
+ /* Setup Phys:PCI 1:1 outbound mem window @ MPC85XX_PCI2_LOWER_MEM */
+ pci->potar1 = (MPC85XX_PCI2_LOWER_MEM >> 12) & 0x000fffff;
+ pci->potear1 = 0x00000000;
+ pci->powbar1 = (MPC85XX_PCI2_LOWER_MEM >> 12) & 0x000fffff;
+ /* Enable, Mem R/W */
+	pci->powar1 = 0x80044000 |
+	   (__ilog2(MPC85XX_PCI2_UPPER_MEM - MPC85XX_PCI2_LOWER_MEM + 1) - 1);
+
+	/* Set up outbound IO windows @ MPC85XX_PCI2_IO_BASE */
+ pci->potar2 = 0x00000000;
+ pci->potear2 = 0x00000000;
+ pci->powbar2 = (MPC85XX_PCI2_IO_BASE >> 12) & 0x000fffff;
+ /* Enable, IO R/W */
+	pci->powar2 = 0x80088000 | (__ilog2(MPC85XX_PCI2_IO_SIZE) - 1);
+
+ /* Setup 2G inbound Memory Window @ 0 */
+ pci->pitar1 = 0x00000000;
+ pci->piwbar1 = 0x00000000;
+ pci->piwar1 = 0xa0f5501e; /* Enable, Prefetch, Local
+ Mem, Snoop R/W, 2G */
+}
+#endif /* CONFIG_85xx_PCI2 */
+
+void __init
+mpc85xx_setup_hose(void)
+{
+ struct pci_controller *hose_a;
+#ifdef CONFIG_85xx_PCI2
+ struct pci_controller *hose_b;
+#endif
+ bd_t *binfo = (bd_t *) __res;
+
+ hose_a = pcibios_alloc_controller();
+
+ if (!hose_a)
+ return;
+
+ ppc_md.pci_swizzle = common_swizzle;
+ ppc_md.pci_map_irq = mpc85xx_map_irq;
+
+ hose_a->first_busno = 0;
+ hose_a->bus_offset = 0;
+ hose_a->last_busno = 0xff;
+
+ setup_indirect_pci(hose_a, binfo->bi_immr_base + PCI1_CFG_ADDR_OFFSET,
+ binfo->bi_immr_base + PCI1_CFG_DATA_OFFSET);
+ hose_a->set_cfg_type = 1;
+
+ mpc85xx_setup_pci1(hose_a);
+
+ hose_a->pci_mem_offset = MPC85XX_PCI1_MEM_OFFSET;
+ hose_a->mem_space.start = MPC85XX_PCI1_LOWER_MEM;
+ hose_a->mem_space.end = MPC85XX_PCI1_UPPER_MEM;
+
+ hose_a->io_space.start = MPC85XX_PCI1_LOWER_IO;
+ hose_a->io_space.end = MPC85XX_PCI1_UPPER_IO;
+ hose_a->io_base_phys = MPC85XX_PCI1_IO_BASE;
+#ifdef CONFIG_85xx_PCI2
+ isa_io_base =
+ (unsigned long) ioremap(MPC85XX_PCI1_IO_BASE,
+ MPC85XX_PCI1_IO_SIZE +
+ MPC85XX_PCI2_IO_SIZE);
+#else
+ isa_io_base =
+ (unsigned long) ioremap(MPC85XX_PCI1_IO_BASE,
+ MPC85XX_PCI1_IO_SIZE);
+#endif
+ hose_a->io_base_virt = (void *) isa_io_base;
+
+ /* setup resources */
+ pci_init_resource(&hose_a->mem_resources[0],
+ MPC85XX_PCI1_LOWER_MEM,
+ MPC85XX_PCI1_UPPER_MEM,
+ IORESOURCE_MEM, "PCI1 host bridge");
+
+ pci_init_resource(&hose_a->io_resource,
+ MPC85XX_PCI1_LOWER_IO,
+ MPC85XX_PCI1_UPPER_IO,
+ IORESOURCE_IO, "PCI1 host bridge");
+
+ ppc_md.pci_exclude_device = mpc85xx_exclude_device;
+
+ hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno);
+
+#ifdef CONFIG_85xx_PCI2
+ hose_b = pcibios_alloc_controller();
+
+ if (!hose_b)
+ return;
+
+ hose_b->bus_offset = hose_a->last_busno + 1;
+ hose_b->first_busno = hose_a->last_busno + 1;
+ hose_b->last_busno = 0xff;
+
+ setup_indirect_pci(hose_b, binfo->bi_immr_base + PCI2_CFG_ADDR_OFFSET,
+ binfo->bi_immr_base + PCI2_CFG_DATA_OFFSET);
+ hose_b->set_cfg_type = 1;
+
+ mpc85xx_setup_pci2(hose_b);
+
+ hose_b->pci_mem_offset = MPC85XX_PCI2_MEM_OFFSET;
+ hose_b->mem_space.start = MPC85XX_PCI2_LOWER_MEM;
+ hose_b->mem_space.end = MPC85XX_PCI2_UPPER_MEM;
+
+ hose_b->io_space.start = MPC85XX_PCI2_LOWER_IO;
+ hose_b->io_space.end = MPC85XX_PCI2_UPPER_IO;
+ hose_b->io_base_phys = MPC85XX_PCI2_IO_BASE;
+ hose_b->io_base_virt = (void *) isa_io_base + MPC85XX_PCI1_IO_SIZE;
+
+ /* setup resources */
+ pci_init_resource(&hose_b->mem_resources[0],
+ MPC85XX_PCI2_LOWER_MEM,
+ MPC85XX_PCI2_UPPER_MEM,
+ IORESOURCE_MEM, "PCI2 host bridge");
+
+ pci_init_resource(&hose_b->io_resource,
+ MPC85XX_PCI2_LOWER_IO,
+ MPC85XX_PCI2_UPPER_IO,
+ IORESOURCE_IO, "PCI2 host bridge");
+
+ hose_b->last_busno = pciauto_bus_scan(hose_b, hose_b->first_busno);
+#endif
+ return;
+}
+#endif /* CONFIG_PCI */
+
+
--- /dev/null
+/*
+ * hvcserver.c
+ * Copyright (C) 2004 Ryan S Arnold, IBM Corporation
+ *
+ * PPC64 virtual I/O console server support.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <asm/hvcall.h>
+#include <asm/hvcserver.h>
+#include <asm/io.h>
+
+#define HVCS_ARCH_VERSION "1.0.0"
+
+MODULE_AUTHOR("Ryan S. Arnold <rsa@us.ibm.com>");
+MODULE_DESCRIPTION("IBM hvcs ppc64 API");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HVCS_ARCH_VERSION);
+
+/*
+ * Convert arch specific return codes into relevant errnos. The hvcs
+ * functions aren't performance sensitive, so this conversion isn't an
+ * issue.
+ */
+int hvcs_convert(long to_convert)
+{
+ switch (to_convert) {
+ case H_Success:
+ return 0;
+ case H_Parameter:
+ return -EINVAL;
+ case H_Hardware:
+ return -EIO;
+ case H_Busy:
+ case H_LongBusyOrder1msec:
+ case H_LongBusyOrder10msec:
+ case H_LongBusyOrder100msec:
+ case H_LongBusyOrder1sec:
+ case H_LongBusyOrder10sec:
+ case H_LongBusyOrder100sec:
+ return -EBUSY;
+ case H_Function: /* fall through */
+ default:
+ return -EPERM;
+ }
+}
+
+/**
+ * hvcs_free_partner_info - free pi allocated by hvcs_get_partner_info
+ * @head: list_head pointer for an allocated list of partner info structs to
+ * free.
+ *
+ * This function is used to free the partner info list that was returned by
+ * calling hvcs_get_partner_info().
+ */
+int hvcs_free_partner_info(struct list_head *head)
+{
+ struct hvcs_partner_info *pi;
+ struct list_head *element;
+
+ if (!head)
+ return -EINVAL;
+
+ while (!list_empty(head)) {
+ element = head->next;
+ pi = list_entry(element, struct hvcs_partner_info, node);
+ list_del(element);
+ kfree(pi);
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(hvcs_free_partner_info);
+
+/* Helper function for hvcs_get_partner_info */
+int hvcs_next_partner(uint32_t unit_address,
+ unsigned long last_p_partition_ID,
+ unsigned long last_p_unit_address, unsigned long *pi_buff)
+
+{
+ long retval;
+ retval = plpar_hcall_norets(H_VTERM_PARTNER_INFO, unit_address,
+ last_p_partition_ID,
+ last_p_unit_address, virt_to_phys(pi_buff));
+ return hvcs_convert(retval);
+}
+
+/**
+ * hvcs_get_partner_info - Get all of the partner info for a vty-server adapter
+ * @unit_address: The unit_address of the vty-server adapter for which this
+ * function is fetching partner info.
+ * @head: An initialized list_head pointer to an empty list to use to return the
+ * list of partner info fetched from the hypervisor to the caller.
+ * @pi_buff: A page-sized buffer pre-allocated prior to calling this function
+ * that is to be used by firmware as an iterator to keep track
+ * of the partner info retrieval.
+ *
+ * This function returns zero on success (including when there is no partner
+ * info); on failure it returns a negative errno, unless some partner info
+ * was already gathered, in which case it still returns zero.
+ *
+ * The pi_buff is pre-allocated prior to calling this function because this
+ * function may be called with a spin_lock held, and a GFP_ATOMIC kmalloc
+ * of a whole page is not recommended.
+ *
+ * The first long of this buffer is used to store a partner unit address. The
+ * second long is used to store a partner partition ID, and starting at
+ * pi_buff[2] is the 79-character Converged Location Code (a different size
+ * than the unsigned longs, hence the casting you see later).
+ *
+ * Invocation of this function should always be followed by an invocation of
+ * hvcs_free_partner_info() using a pointer to the SAME list head instance
+ * that was passed as a parameter to this function.
+ */
+int hvcs_get_partner_info(uint32_t unit_address, struct list_head *head,
+ unsigned long *pi_buff)
+{
+ /*
+ * Dealt with as longs because of the hcall interface even though the
+ * values are uint32_t.
+ */
+ unsigned long last_p_partition_ID;
+ unsigned long last_p_unit_address;
+ struct hvcs_partner_info *next_partner_info = NULL;
+ int more = 1;
+ int retval;
+
+	/* invalid parameters */
+	if (!head || !pi_buff)
+		return -EINVAL;
+
+	memset(pi_buff, 0x00, PAGE_SIZE);
+
+ last_p_partition_ID = last_p_unit_address = ~0UL;
+ INIT_LIST_HEAD(head);
+
+ do {
+ retval = hvcs_next_partner(unit_address, last_p_partition_ID,
+ last_p_unit_address, pi_buff);
+ if (retval) {
+ /*
+ * Don't indicate that we've failed if we have
+ * any list elements.
+ */
+ if (!list_empty(head))
+ return 0;
+ return retval;
+ }
+
+ last_p_partition_ID = pi_buff[0];
+ last_p_unit_address = pi_buff[1];
+
+ /* This indicates that there are no further partners */
+ if (last_p_partition_ID == ~0UL
+ && last_p_unit_address == ~0UL)
+ break;
+
+ /* This is a very small struct and will be freed soon in
+ * hvcs_free_partner_info(). */
+ next_partner_info = kmalloc(sizeof(struct hvcs_partner_info),
+ GFP_ATOMIC);
+
+ if (!next_partner_info) {
+ printk(KERN_WARNING "HVCONSOLE: kmalloc() failed to"
+ " allocate partner info struct.\n");
+ hvcs_free_partner_info(head);
+ return -ENOMEM;
+ }
+
+ next_partner_info->unit_address
+ = (unsigned int)last_p_unit_address;
+ next_partner_info->partition_ID
+ = (unsigned int)last_p_partition_ID;
+
+ /* copy the Null-term char too */
+ strncpy(&next_partner_info->location_code[0],
+ (char *)&pi_buff[2],
+ strlen((char *)&pi_buff[2]) + 1);
+
+ list_add_tail(&(next_partner_info->node), head);
+ next_partner_info = NULL;
+
+ } while (more);
+
+ return 0;
+}
+EXPORT_SYMBOL(hvcs_get_partner_info);
+
+/**
+ * hvcs_register_connection - establish a connection between this vty-server and
+ * a vty.
+ * @unit_address: The unit address of the vty-server adapter that is to
+ * establish a connection.
+ * @p_partition_ID: The partition ID of the vty adapter that is to be connected.
+ * @p_unit_address: The unit address of the vty adapter to which the vty-server
+ * is to be connected.
+ *
+ * If this function is called once and -EINVAL is returned it may
+ * indicate that the partner info needs to be refreshed for the
+ * target unit address at which point the caller must invoke
+ * hvcs_get_partner_info() and then call this function again. If,
+ * for a second time, -EINVAL is returned then it indicates that
+ * there is probably already a partner connection registered to a
+ * different vty-server adapter. It is also possible that a second
+ * -EINVAL may indicate that one of the parms is not valid, for
+ * instance if the link was removed between the vty-server adapter
+ * and the vty adapter that you are trying to open. Don't shoot the
+ * messenger. Firmware implemented it this way.
+ */
+int hvcs_register_connection( uint32_t unit_address,
+ uint32_t p_partition_ID, uint32_t p_unit_address)
+{
+ long retval;
+ retval = plpar_hcall_norets(H_REGISTER_VTERM, unit_address,
+ p_partition_ID, p_unit_address);
+ return hvcs_convert(retval);
+}
+EXPORT_SYMBOL(hvcs_register_connection);
+
+/**
+ * hvcs_free_connection - free the connection between a vty-server and vty
+ * @unit_address: The unit address of the vty-server that is to have its
+ * connection severed.
+ *
+ * This function is used to free the partner connection between a vty-server
+ * adapter and a vty adapter.
+ *
+ * If -EBUSY is returned continue to call this function until 0 is returned.
+ */
+int hvcs_free_connection(uint32_t unit_address)
+{
+ long retval;
+ retval = plpar_hcall_norets(H_FREE_VTERM, unit_address);
+ return hvcs_convert(retval);
+}
+EXPORT_SYMBOL(hvcs_free_connection);
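+
+/*
+ * Usage sketch (illustrative only, not part of this file): callers
+ * pre-allocate a page for the firmware iterator, e.g.:
+ *
+ *	unsigned long *pi_buff =
+ *		(unsigned long *) __get_free_page(GFP_KERNEL);
+ *	struct hvcs_partner_info *pi;
+ *	struct list_head head;
+ *
+ *	if (pi_buff && !hvcs_get_partner_info(unit_addr, &head, pi_buff)) {
+ *		list_for_each_entry(pi, &head, node)
+ *			printk("partner 0x%x\n", pi->unit_address);
+ *		hvcs_free_partner_info(&head);
+ *	}
+ *	free_page((unsigned long) pi_buff);
+ */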
--- /dev/null
+/*
+ * Routines to emulate some Altivec/VMX instructions, specifically
+ * those that can trap when given denormalized operands in Java mode.
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <asm/ptrace.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+
+/* Functions in vector.S */
+extern void vaddfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vsubfp(vector128 *dst, vector128 *a, vector128 *b);
+extern void vmaddfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vnmsubfp(vector128 *dst, vector128 *a, vector128 *b, vector128 *c);
+extern void vrefp(vector128 *dst, vector128 *src);
+extern void vrsqrtefp(vector128 *dst, vector128 *src);
+extern void vexptefp(vector128 *dst, vector128 *src);
+
+static unsigned int exp2s[8] = {
+ 0x800000,
+ 0x8b95c2,
+ 0x9837f0,
+ 0xa5fed7,
+ 0xb504f3,
+ 0xc5672a,
+ 0xd744fd,
+ 0xeac0c7
+};
+
+/*
+ * Computes an estimate of 2^x. The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int eexp2(unsigned int s)
+{
+ int exp, pwr;
+ unsigned int mant, frac;
+
+ /* extract exponent field from input */
+ exp = ((s >> 23) & 0xff) - 127;
+ if (exp > 7) {
+ /* check for NaN input */
+ if (exp == 128 && (s & 0x7fffff) != 0)
+ return s | 0x400000; /* return QNaN */
+ /* 2^-big = 0, 2^+big = +Inf */
+ return (s & 0x80000000)? 0: 0x7f800000; /* 0 or +Inf */
+ }
+ if (exp < -23)
+ return 0x3f800000; /* 1.0 */
+
+ /* convert to fixed point integer in 9.23 representation */
+ pwr = (s & 0x7fffff) | 0x800000;
+ if (exp > 0)
+ pwr <<= exp;
+ else
+ pwr >>= -exp;
+ if (s & 0x80000000)
+ pwr = -pwr;
+
+ /* extract integer part, which becomes exponent part of result */
+ exp = (pwr >> 23) + 126;
+ if (exp >= 254)
+ return 0x7f800000;
+ if (exp < -23)
+ return 0;
+
+ /* table lookup on top 3 bits of fraction to get mantissa */
+ mant = exp2s[(pwr >> 20) & 7];
+
+ /* linear interpolation using remaining 20 bits of fraction */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" (pwr << 12), "r" (0x172b83ff));
+ asm("mulhwu %0,%1,%2" : "=r" (frac) : "r" (frac), "r" (mant));
+ mant += frac;
+
+ if (exp >= 0)
+ return mant + (exp << 23);
+
+ /* denormalized result */
+ exp = -exp;
+ mant += 1 << (exp - 1);
+ return mant >> exp;
+}
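+
+/* Sanity-check example for eexp2(), worked by hand: s = 0x40000000 (2.0)
+ * gives pwr = 0x1000000 in 9.23 fixed point, result exponent
+ * (pwr >> 23) + 126 = 128, mantissa exp2s[0] = 0x800000 with a zero
+ * fraction, so the result is 0x40800000 = 4.0 = 2^2, as expected. */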
+
+/*
+ * Computes an estimate of log_2(x). The `s' argument is the 32-bit
+ * single-precision floating-point representation of x.
+ */
+static unsigned int elog2(unsigned int s)
+{
+ int exp, mant, lz, frac;
+
+ exp = s & 0x7f800000;
+ mant = s & 0x7fffff;
+ if (exp == 0x7f800000) { /* Inf or NaN */
+ if (mant != 0)
+ s |= 0x400000; /* turn NaN into QNaN */
+ return s;
+ }
+ if ((exp | mant) == 0) /* +0 or -0 */
+ return 0xff800000; /* return -Inf */
+
+ if (exp == 0) {
+ /* denormalized */
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (mant));
+ mant <<= lz - 8;
+ exp = (-118 - lz) << 23;
+ } else {
+ mant |= 0x800000;
+ exp -= 127 << 23;
+ }
+
+ if (mant >= 0xb504f3) { /* 2^0.5 * 2^23 */
+ exp |= 0x400000; /* 0.5 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xb504f334)); /* 2^-0.5 * 2^32 */
+ }
+ if (mant >= 0x9837f0) { /* 2^0.25 * 2^23 */
+ exp |= 0x200000; /* 0.25 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xd744fccb)); /* 2^-0.25 * 2^32 */
+ }
+ if (mant >= 0x8b95c2) { /* 2^0.125 * 2^23 */
+ exp |= 0x100000; /* 0.125 * 2^23 */
+ asm("mulhwu %0,%1,%2" : "=r" (mant)
+ : "r" (mant), "r" (0xeac0c6e8)); /* 2^-0.125 * 2^32 */
+ }
+ if (mant > 0x800000) { /* 1.0 * 2^23 */
+ /* calculate (mant - 1) * 1.381097463 */
+ /* 1.381097463 == 0.125 / (2^0.125 - 1) */
+ asm("mulhwu %0,%1,%2" : "=r" (frac)
+ : "r" ((mant - 0x800000) << 1), "r" (0xb0c7cd3a));
+ exp += frac;
+ }
+ s = exp & 0x80000000;
+ if (exp != 0) {
+ if (s)
+ exp = -exp;
+ asm("cntlzw %0,%1" : "=r" (lz) : "r" (exp));
+ lz = 8 - lz;
+ if (lz > 0)
+ exp >>= lz;
+ else if (lz < 0)
+ exp <<= -lz;
+ s += ((lz + 126) << 23) + exp;
+ }
+ return s;
+}
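+
+/* Sanity-check example for elog2(), worked by hand: s = 0x40800000 (4.0)
+ * has mant = 0x800000 and exp = 2 << 23 after removing the bias; none of
+ * the square-root factor steps fire, and normalization yields
+ * 0x40000000 = 2.0 = log2(4.0). */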
+
+#define VSCR_SAT 1
+
+static int ctsxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp, mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (exp >= 31) {
+ /* saturate, unless the result would be -2^31 */
+ if (x + (scale << 23) != 0xcf000000)
+ *vscrp |= VSCR_SAT;
+ return (x & 0x80000000)? 0x80000000: 0x7fffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 7) >> (30 - exp);
+ return (x & 0x80000000)? -mant: mant;
+}
+
+static unsigned int ctuxs(unsigned int x, int scale, unsigned int *vscrp)
+{
+ int exp;
+ unsigned int mant;
+
+ exp = (x >> 23) & 0xff;
+ mant = x & 0x7fffff;
+ if (exp == 255 && mant != 0)
+ return 0; /* NaN -> 0 */
+ exp = exp - 127 + scale;
+ if (exp < 0)
+ return 0; /* round towards zero */
+ if (x & 0x80000000) {
+ /* negative => saturate to 0 */
+ *vscrp |= VSCR_SAT;
+ return 0;
+ }
+ if (exp >= 32) {
+ /* saturate */
+ *vscrp |= VSCR_SAT;
+ return 0xffffffff;
+ }
+ mant |= 0x800000;
+ mant = (mant << 8) >> (31 - exp);
+ return mant;
+}
+
+/* Round to floating integer, towards 0 */
+static unsigned int rfiz(unsigned int x)
+{
+ int exp;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < 0)
+ return x & 0x80000000; /* |x| < 1.0 rounds to 0 */
+ return x & ~(0x7fffff >> exp);
+}
+
+/* Round to floating integer, towards +/- Inf */
+static unsigned int rfii(unsigned int x)
+{
+ int exp, mask;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if ((x & 0x7fffffff) == 0)
+ return x; /* +/-0 -> +/-0 */
+ if (exp < 0)
+ /* 0 < |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ mask = 0x7fffff >> exp;
+ /* mantissa overflows into exponent - that's OK,
+ it can't overflow into the sign bit */
+ return (x + mask) & ~mask;
+}
+
+/* Round to floating integer, to nearest */
+static unsigned int rfin(unsigned int x)
+{
+ int exp, half;
+
+ exp = ((x >> 23) & 0xff) - 127;
+ if (exp == 128 && (x & 0x7fffff) != 0)
+ return x | 0x400000; /* NaN -> make it a QNaN */
+ if (exp >= 23)
+ return x; /* it's an integer already (or Inf) */
+ if (exp < -1)
+ return x & 0x80000000; /* |x| < 0.5 -> +/-0 */
+ if (exp == -1)
+ /* 0.5 <= |x| < 1.0 rounds to +/- 1.0 */
+ return (x & 0x80000000) | 0x3f800000;
+ half = 0x400000 >> exp;
+ /* add 0.5 to the magnitude and chop off the fraction bits */
+ return (x + half) & ~(0x7fffff >> exp);
+}
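+
+/* Example, worked by hand: rfin(0x3fc00000), i.e. 1.5, adds half =
+ * 0x400000 and masks off the fraction bits, giving 0x40000000 = 2.0;
+ * note that halfway cases round away from zero here, not to even. */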
+
+int
+emulate_altivec(struct pt_regs *regs)
+{
+ unsigned int instr, i;
+ unsigned int va, vb, vc, vd;
+ vector128 *vrs;
+
+ if (get_user(instr, (unsigned int __user *) regs->nip))
+ return -EFAULT;
+ if ((instr >> 26) != 4)
+ return -EINVAL; /* not an altivec instruction */
+ vd = (instr >> 21) & 0x1f;
+ va = (instr >> 16) & 0x1f;
+ vb = (instr >> 11) & 0x1f;
+ vc = (instr >> 6) & 0x1f;
+
+ vrs = current->thread.vr;
+ switch (instr & 0x3f) {
+ case 10:
+ switch (vc) {
+ case 0: /* vaddfp */
+ vaddfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 1: /* vsubfp */
+ vsubfp(&vrs[vd], &vrs[va], &vrs[vb]);
+ break;
+ case 4: /* vrefp */
+ vrefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 5: /* vrsqrtefp */
+ vrsqrtefp(&vrs[vd], &vrs[vb]);
+ break;
+ case 6: /* vexptefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = eexp2(vrs[vb].u[i]);
+ break;
+ case 7: /* vlogefp */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = elog2(vrs[vb].u[i]);
+ break;
+ case 8: /* vrfin */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfin(vrs[vb].u[i]);
+ break;
+ case 9: /* vrfiz */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = rfiz(vrs[vb].u[i]);
+ break;
+ case 10: /* vrfip */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfiz(x): rfii(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 11: /* vrfim */
+ for (i = 0; i < 4; ++i) {
+ u32 x = vrs[vb].u[i];
+ x = (x & 0x80000000)? rfii(x): rfiz(x);
+ vrs[vd].u[i] = x;
+ }
+ break;
+ case 14: /* vctuxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctuxs(vrs[vb].u[i], va,
+						&current->thread.vscr.u[3]);
+ break;
+ case 15: /* vctsxs */
+ for (i = 0; i < 4; ++i)
+ vrs[vd].u[i] = ctsxs(vrs[vb].u[i], va,
+						&current->thread.vscr.u[3]);
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case 46: /* vmaddfp */
+ vmaddfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ case 47: /* vnmsubfp */
+ vnmsubfp(&vrs[vd], &vrs[va], &vrs[vb], &vrs[vc]);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
--- /dev/null
+#include <asm/ppc_asm.h>
+#include <asm/processor.h>
+
+/*
+ * The routines below are in assembler so we can closely control the
+ * usage of floating-point registers. These routines must be called
+ * with preempt disabled.
+ */
+ .section ".toc","aw"
+fpzero:
+ .tc FD_0_0[TC],0
+fpone:
+ .tc FD_3ff00000_0[TC],0x3ff0000000000000 /* 1.0 */
+fphalf:
+ .tc FD_3fe00000_0[TC],0x3fe0000000000000 /* 0.5 */
+
+ .text
+/*
+ * Internal routine to enable floating point and set FPSCR to 0.
+ * Don't call it from C; it doesn't use the normal calling convention.
+ */
+fpenable:
+ mfmsr r10
+ ori r11,r10,MSR_FP
+ mtmsr r11
+ isync
+ stfd fr31,-8(r1)
+ stfd fr0,-16(r1)
+ stfd fr1,-24(r1)
+ mffs fr31
+ lfd fr1,fpzero@toc(r2)
+ mtfsf 0xff,fr1
+ blr
+
+fpdisable:
+ mtlr r12
+ mtfsf 0xff,fr31
+ lfd fr1,-24(r1)
+ lfd fr0,-16(r1)
+ lfd fr31,-8(r1)
+ mtmsr r10
+ isync
+ blr
+
+/*
+ * Vector add, floating point.
+ */
+_GLOBAL(vaddfp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fadds fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector subtract, floating point.
+ */
+_GLOBAL(vsubfp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ lfsx fr1,r5,r6
+ fsubs fr0,fr0,fr1
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector multiply and add, floating point.
+ */
+_GLOBAL(vmaddfp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fmadds fr0,fr0,fr2,fr1
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,-32(r1)
+ b fpdisable
+
+/*
+ * Vector negative multiply and subtract, floating point.
+ */
+_GLOBAL(vnmsubfp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ li r0,4
+ mtctr r0
+ li r7,0
+1: lfsx fr0,r4,r7
+ lfsx fr1,r5,r7
+ lfsx fr2,r6,r7
+ fnmsubs fr0,fr0,fr2,fr1
+ stfsx fr0,r3,r7
+ addi r7,r7,4
+ bdnz 1b
+ lfd fr2,-32(r1)
+ b fpdisable
+
+/*
+ * Vector reciprocal estimate. We just compute 1.0/x.
+ * r3 -> destination, r4 -> source.
+ */
+_GLOBAL(vrefp)
+ mflr r12
+ bl fpenable
+ li r0,4
+ lfd fr1,fpone@toc(r2)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ fdivs fr0,fr1,fr0
+ stfsx fr0,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ b fpdisable
+
+/*
+ * Vector reciprocal square-root estimate, floating point.
+ * We use the frsqrte instruction for the initial estimate followed
+ * by 2 iterations of Newton-Raphson to get sufficient accuracy.
+ * r3 -> destination, r4 -> source.
+ */
+_GLOBAL(vrsqrtefp)
+ mflr r12
+ bl fpenable
+ stfd fr2,-32(r1)
+ stfd fr3,-40(r1)
+ stfd fr4,-48(r1)
+ stfd fr5,-56(r1)
+ li r0,4
+ lfd fr4,fpone@toc(r2)
+ lfd fr5,fphalf@toc(r2)
+ mtctr r0
+ li r6,0
+1: lfsx fr0,r4,r6
+ frsqrte fr1,fr0 /* r = frsqrte(s) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ fmuls fr3,fr1,fr0 /* r * s */
+ fmuls fr2,fr1,fr5 /* r * 0.5 */
+ fnmsubs fr3,fr1,fr3,fr4 /* 1 - s * r * r */
+ fmadds fr1,fr2,fr3,fr1 /* r = r + 0.5 * r * (1 - s * r * r) */
+ stfsx fr1,r3,r6
+ addi r6,r6,4
+ bdnz 1b
+ lfd fr5,-56(r1)
+ lfd fr4,-48(r1)
+ lfd fr3,-40(r1)
+ lfd fr2,-32(r1)
+ b fpdisable
--- /dev/null
+/*
+ * Spin and read/write lock operations.
+ *
+ * Copyright (C) 2001-2004 Paul Mackerras <paulus@au.ibm.com>, IBM
+ * Copyright (C) 2001 Anton Blanchard <anton@au.ibm.com>, IBM
+ * Copyright (C) 2002 Dave Engebretsen <engebret@us.ibm.com>, IBM
+ * Rework to support virtual processors
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+#include <linux/module.h>
+#include <linux/stringify.h>
+#include <asm/hvcall.h>
+#include <asm/iSeries/HvCall.h>
+
+/* waiting for a spinlock... */
+#if defined(CONFIG_PPC_SPLPAR) || defined(CONFIG_PPC_ISERIES)
+
+void __spin_yield(spinlock_t *lock)
+{
+ unsigned int lock_value, holder_cpu, yield_count;
+ struct paca_struct *holder_paca;
+
+ lock_value = lock->lock;
+ if (lock_value == 0)
+ return;
+ holder_cpu = lock_value & 0xffff;
+ BUG_ON(holder_cpu >= NR_CPUS);
+ holder_paca = &paca[holder_cpu];
+ yield_count = holder_paca->lppaca.xYieldCount;
+ if ((yield_count & 1) == 0)
+ return; /* virtual cpu is currently running */
+ rmb();
+ if (lock->lock != lock_value)
+ return; /* something has changed */
+#ifdef CONFIG_PPC_ISERIES
+ HvCall2(HvCallBaseYieldProcessor, HvCall_YieldToProc,
+ ((u64)holder_cpu << 32) | yield_count);
+#else
+ plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(holder_cpu),
+ yield_count);
+#endif
+}
+
+/*
+ * Waiting for a read lock or a write lock on a rwlock...
+ * This turns out to be the same for read and write locks, since
+ * we only know the holder if it is write-locked.
+ */
+void __rw_yield(rwlock_t *rw)
+{
+ int lock_value;
+ unsigned int holder_cpu, yield_count;
+ struct paca_struct *holder_paca;
+
+ lock_value = rw->lock;
+ if (lock_value >= 0)
+ return; /* no write lock at present */
+ holder_cpu = lock_value & 0xffff;
+ BUG_ON(holder_cpu >= NR_CPUS);
+ holder_paca = &paca[holder_cpu];
+ yield_count = holder_paca->lppaca.xYieldCount;
+ if ((yield_count & 1) == 0)
+ return; /* virtual cpu is currently running */
+ rmb();
+ if (rw->lock != lock_value)
+ return; /* something has changed */
+#ifdef CONFIG_PPC_ISERIES
+ HvCall2(HvCallBaseYieldProcessor, HvCall_YieldToProc,
+ ((u64)holder_cpu << 32) | yield_count);
+#else
+ plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(holder_cpu),
+ yield_count);
+#endif
+}
+#endif
+
+void spin_unlock_wait(spinlock_t *lock)
+{
+ while (lock->lock) {
+ HMT_low();
+ if (SHARED_PROCESSOR)
+ __spin_yield(lock);
+ }
+ HMT_medium();
+}
+
+EXPORT_SYMBOL(spin_unlock_wait);
--- /dev/null
+/*
+ * linux/arch/ppc64/mm/mmap.c
+ *
+ * flexible mmap layout support
+ *
+ * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+ * All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * Started by Ingo Molnar <mingo@elte.hu>
+ */
+
+#include <linux/personality.h>
+#include <linux/mm.h>
+
+/*
+ * Top of mmap area (just below the process stack).
+ *
+ * Leave at least a ~128 MB hole.
+ */
+#define MIN_GAP (128*1024*1024)
+#define MAX_GAP (TASK_SIZE/6*5)
+
+static inline unsigned long mmap_base(void)
+{
+ unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+ return TASK_SIZE - (gap & PAGE_MASK);
+}
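+
+/* Example (illustrative): with a typical 8MB RLIMIT_STACK the gap is
+ * clamped up to MIN_GAP, so mmap_base() returns TASK_SIZE - 128MB. */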
+
+static inline int mmap_is_legacy(void)
+{
+ /*
+ * Force standard allocation for 64 bit programs.
+ */
+ if (!test_thread_flag(TIF_32BIT))
+ return 1;
+
+ if (current->personality & ADDR_COMPAT_LAYOUT)
+ return 1;
+
+ if (current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY)
+ return 1;
+
+ return sysctl_legacy_va_layout;
+}
+
+/*
+ * This function, called very early during the creation of a new
+ * process VM image, sets up which VM layout function to use:
+ */
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+ /*
+ * Fall back to the standard layout if the personality
+ * bit is set, or if the expected stack growth is unlimited:
+ */
+ if (mmap_is_legacy()) {
+ mm->mmap_base = TASK_UNMAPPED_BASE;
+ mm->get_unmapped_area = arch_get_unmapped_area;
+ mm->unmap_area = arch_unmap_area;
+ } else {
+ mm->mmap_base = mmap_base();
+ mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ mm->unmap_area = arch_unmap_area_topdown;
+ }
+}
--- /dev/null
+/*
+ * PowerPC64 SLB support.
+ *
+ * Copyright (C) 2004 David Gibson <dwg@au.ibm.com>, IBM
+ * Based on earlier code written by:
+ * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
+ * Copyright (c) 2001 Dave Engebretsen
+ * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/pgtable.h>
+#include <asm/mmu.h>
+#include <asm/mmu_context.h>
+#include <asm/paca.h>
+#include <asm/naca.h>
+#include <asm/cputable.h>
+
+extern void slb_allocate(unsigned long ea);
+
+static inline unsigned long mk_esid_data(unsigned long ea, unsigned long slot)
+{
+ return (ea & ESID_MASK) | SLB_ESID_V | slot;
+}
+
+static inline unsigned long mk_vsid_data(unsigned long ea, unsigned long flags)
+{
+ return (get_kernel_vsid(ea) << SLB_VSID_SHIFT) | flags;
+}
+
+static inline void create_slbe(unsigned long ea, unsigned long vsid,
+ unsigned long flags, unsigned long entry)
+{
+ asm volatile("slbmte %0,%1" :
+ : "r" (mk_vsid_data(ea, flags)),
+ "r" (mk_esid_data(ea, entry))
+ : "memory" );
+}
+
+static void slb_flush_and_rebolt(void)
+{
+ /* If you change this make sure you change SLB_NUM_BOLTED
+ * appropriately too. */
+ unsigned long ksp_flags = SLB_VSID_KERNEL;
+ unsigned long ksp_esid_data;
+
+ WARN_ON(!irqs_disabled());
+
+ if (cur_cpu_spec->cpu_features & CPU_FTR_16M_PAGE)
+ ksp_flags |= SLB_VSID_L;
+
+ ksp_esid_data = mk_esid_data(get_paca()->kstack, 2);
+ if ((ksp_esid_data & ESID_MASK) == KERNELBASE)
+ ksp_esid_data &= ~SLB_ESID_V;
+
+ /* We need to do this all in asm, so we're sure we don't touch
+ * the stack between the slbia and rebolting it. */
+ asm volatile("isync\n"
+ "slbia\n"
+ /* Slot 1 - first VMALLOC segment */
+ "slbmte %0,%1\n"
+ /* Slot 2 - kernel stack */
+ "slbmte %2,%3\n"
+ "isync"
+ :: "r"(mk_vsid_data(VMALLOCBASE, SLB_VSID_KERNEL)),
+ "r"(mk_esid_data(VMALLOCBASE, 1)),
+ "r"(mk_vsid_data(ksp_esid_data, ksp_flags)),
+ "r"(ksp_esid_data)
+ : "memory");
+}
+
+/* Flush all user entries from the segment table of the current processor. */
+void switch_slb(struct task_struct *tsk, struct mm_struct *mm)
+{
+ unsigned long offset = get_paca()->slb_cache_ptr;
+ unsigned long esid_data;
+ unsigned long pc = KSTK_EIP(tsk);
+ unsigned long stack = KSTK_ESP(tsk);
+ unsigned long unmapped_base;
+
+ if (offset <= SLB_CACHE_ENTRIES) {
+ int i;
+ asm volatile("isync" : : : "memory");
+ for (i = 0; i < offset; i++) {
+ esid_data = (unsigned long)get_paca()->slb_cache[i]
+ << SID_SHIFT;
+ asm volatile("slbie %0" : : "r" (esid_data));
+ }
+ asm volatile("isync" : : : "memory");
+ } else {
+ slb_flush_and_rebolt();
+ }
+
+	/* Workaround for POWER5 revisions earlier than DD2.1 */
+ if (offset == 1 || offset > SLB_CACHE_ENTRIES) {
+ /* flush segment in EEH region, we shouldn't ever
+ * access addresses in this region. */
+ asm volatile("slbie %0" : : "r"(EEHREGIONBASE));
+ }
+
+ get_paca()->slb_cache_ptr = 0;
+ get_paca()->context = mm->context;
+
+	/*
+	 * Preload some userspace segments into the SLB: the user PC,
+	 * stack and mmap base are the addresses most likely to be
+	 * touched right away, so faulting them in now avoids immediate
+	 * SLB misses after the context switch.
+	 */
+ if (test_tsk_thread_flag(tsk, TIF_32BIT))
+ unmapped_base = TASK_UNMAPPED_BASE_USER32;
+ else
+ unmapped_base = TASK_UNMAPPED_BASE_USER64;
+
+ if (pc >= KERNELBASE)
+ return;
+ slb_allocate(pc);
+
+ if (GET_ESID(pc) == GET_ESID(stack))
+ return;
+
+ if (stack >= KERNELBASE)
+ return;
+ slb_allocate(stack);
+
+ if ((GET_ESID(pc) == GET_ESID(unmapped_base))
+ || (GET_ESID(stack) == GET_ESID(unmapped_base)))
+ return;
+
+ if (unmapped_base >= KERNELBASE)
+ return;
+ slb_allocate(unmapped_base);
+}
+
+void slb_initialize(void)
+{
+ /* On iSeries the bolted entries have already been set up by
+ * the hypervisor from the lparMap data in head.S */
+#ifndef CONFIG_PPC_ISERIES
+ unsigned long flags = SLB_VSID_KERNEL;
+
+ /* Invalidate the entire SLB (even slot 0) & all the ERATS */
+ if (cur_cpu_spec->cpu_features & CPU_FTR_16M_PAGE)
+ flags |= SLB_VSID_L;
+
+ asm volatile("isync":::"memory");
+ asm volatile("slbmte %0,%0"::"r" (0) : "memory");
+ asm volatile("isync; slbia; isync":::"memory");
+ create_slbe(KERNELBASE, get_kernel_vsid(KERNELBASE), flags, 0);
+ create_slbe(VMALLOCBASE, get_kernel_vsid(KERNELBASE),
+ SLB_VSID_KERNEL, 1);
+ /* We don't bolt the stack for the time being - we're in boot,
+ * so the stack is in the bolted segment. By the time it goes
+ * elsewhere, we'll call _switch() which will bolt in the new
+ * one. */
+ asm volatile("isync":::"memory");
+#endif
+
+ get_paca()->stab_rr = SLB_NUM_BOLTED;
+}
--- /dev/null
+/*
+ * arch/ppc64/mm/slb_low.S
+ *
+ * Low-level SLB routines
+ *
+ * Copyright (C) 2004 David Gibson <dwg@au.ibm.com>, IBM
+ *
+ * Based on earlier C version:
+ * Dave Engebretsen and Mike Corrigan {engebret|mikejc}@us.ibm.com
+ * Copyright (c) 2001 Dave Engebretsen
+ * Copyright (C) 2002 Anton Blanchard <anton@au.ibm.com>, IBM
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <asm/mmu.h>
+#include <asm/ppc_asm.h>
+#include <asm/offsets.h>
+#include <asm/cputable.h>
+
+/* void slb_allocate(unsigned long ea);
+ *
+ * Create an SLB entry for the given EA (user or kernel).
+ * r3 = faulting address, r13 = PACA
+ * r9, r10, r11 are clobbered by this function
+ * No other registers are examined or changed.
+ */
+_GLOBAL(slb_allocate)
+ /*
+ * First find a slot, round robin. Previously we tried to find
+ * a free slot first but that took too long. Unfortunately we
+	 * don't have any LRU information to help us choose a slot.
+ */
+#ifdef CONFIG_PPC_ISERIES
+ /*
+ * On iSeries, the "bolted" stack segment can be cast out on
+ * shared processor switch so we need to check for a miss on
+ * it and restore it to the right slot.
+ */
+ ld r9,PACAKSAVE(r13)
+ clrrdi r9,r9,28
+ clrrdi r11,r3,28
+ li r10,SLB_NUM_BOLTED-1 /* Stack goes in last bolted slot */
+ cmpld r9,r11
+ beq 3f
+#endif /* CONFIG_PPC_ISERIES */
+
+ ld r10,PACASTABRR(r13)
+ addi r10,r10,1
+ /* use a cpu feature mask if we ever change our slb size */
+ cmpldi r10,SLB_NUM_ENTRIES
+
+ blt+ 4f
+ li r10,SLB_NUM_BOLTED
+
+4:
+ std r10,PACASTABRR(r13)
+3:
+ /* r3 = faulting address, r10 = entry */
+
+ srdi r9,r3,60 /* get region */
+ srdi r3,r3,28 /* get esid */
+ cmpldi cr7,r9,0xc /* cmp KERNELBASE for later use */
+
+ rldimi r10,r3,28,0 /* r10= ESID<<28 | entry */
+ oris r10,r10,SLB_ESID_V@h /* r10 |= SLB_ESID_V */
+
+ /* r3 = esid, r10 = esid_data, cr7 = <>KERNELBASE */
+
+ blt cr7,0f /* user or kernel? */
+
+ /* kernel address: proto-VSID = ESID */
+ /* WARNING - MAGIC: we don't use the VSID 0xfffffffff, but
+ * this code will generate the protoVSID 0xfffffffff for the
+ * top segment. That's ok, the scramble below will translate
+ * it to VSID 0, which is reserved as a bad VSID - one which
+ * will never have any pages in it. */
+ li r11,SLB_VSID_KERNEL
+BEGIN_FTR_SECTION
+ bne cr7,9f
+ li r11,(SLB_VSID_KERNEL|SLB_VSID_L)
+END_FTR_SECTION_IFSET(CPU_FTR_16M_PAGE)
+ b 9f
+
+0: /* user address: proto-VSID = context<<15 | ESID */
+ li r11,SLB_VSID_USER
+
+ srdi. r9,r3,13
+ bne- 8f /* invalid ea bits set */
+
+#ifdef CONFIG_HUGETLB_PAGE
+BEGIN_FTR_SECTION
+ /* check against the hugepage ranges */
+ cmpldi r3,(TASK_HPAGE_END>>SID_SHIFT)
+ bge 6f /* >= TASK_HPAGE_END */
+ cmpldi r3,(TASK_HPAGE_BASE>>SID_SHIFT)
+ bge 5f /* TASK_HPAGE_BASE..TASK_HPAGE_END */
+ cmpldi r3,16
+ bge 6f /* 4GB..TASK_HPAGE_BASE */
+
+ lhz r9,PACAHTLBSEGS(r13)
+ srd r9,r9,r3
+ andi. r9,r9,1
+ beq 6f
+
+5: /* this is a hugepage user address */
+ li r11,(SLB_VSID_USER|SLB_VSID_L)
+END_FTR_SECTION_IFSET(CPU_FTR_16M_PAGE)
+#endif /* CONFIG_HUGETLB_PAGE */
+
+6: ld r9,PACACONTEXTID(r13)
+ rldimi r3,r9,USER_ESID_BITS,0
+
+9: /* r3 = protovsid, r11 = flags, r10 = esid_data, cr7 = <>KERNELBASE */
+ ASM_VSID_SCRAMBLE(r3,r9)
+
+ rldimi r11,r3,SLB_VSID_SHIFT,16 /* combine VSID and flags */
+
+ /*
+ * No need for an isync before or after this slbmte. The exception
+ * we enter with and the rfid we exit with are context synchronizing.
+ */
+ slbmte r11,r10
+
+ bgelr cr7 /* we're done for kernel addresses */
+
+	/* Update the SLB cache; switch_slb() replays it to flush user entries */
+ lhz r3,PACASLBCACHEPTR(r13) /* offset = paca->slb_cache_ptr */
+ cmpldi r3,SLB_CACHE_ENTRIES
+ bge 1f
+
+ /* still room in the slb cache */
+ sldi r11,r3,1 /* r11 = offset * sizeof(u16) */
+ rldicl r10,r10,36,28 /* get low 16 bits of the ESID */
+ add r11,r11,r13 /* r11 = (u16 *)paca + offset */
+ sth r10,PACASLBCACHE(r11) /* paca->slb_cache[offset] = esid */
+ addi r3,r3,1 /* offset++ */
+ b 2f
+1: /* offset >= SLB_CACHE_ENTRIES */
+ li r3,SLB_CACHE_ENTRIES+1
+2:
+ sth r3,PACASLBCACHEPTR(r13) /* paca->slb_cache_ptr = offset */
+ blr
+
+8: /* invalid EA */
+ li r3,0 /* BAD_VSID */
+ li r11,SLB_VSID_USER /* flags don't much matter */
+ b 9b
--- /dev/null
+/*
+ * arch/s390/kernel/vtime.c
+ * Virtual cpu timer based timer functions.
+ *
+ * S390 version
+ * Copyright (C) 2004 IBM Deutschland Entwicklung GmbH, IBM Corporation
+ * Author(s): Jan Glauber <jan.glauber@de.ibm.com>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/smp.h>
+#include <linux/types.h>
+#include <linux/timex.h>
+#include <linux/notifier.h>
+
+#include <asm/s390_ext.h>
+#include <asm/timer.h>
+
+#define VTIMER_MAGIC (TIMER_MAGIC + 1)
+static ext_int_info_t ext_int_info_timer;
+DEFINE_PER_CPU(struct vtimer_queue, virt_cpu_timer);
+
+void start_cpu_timer(void)
+{
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+ set_vtimer(vt_list->idle);
+}
+
+void stop_cpu_timer(void)
+{
+ __u64 done;
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+
+ /* nothing to do */
+ if (list_empty(&vt_list->list)) {
+ vt_list->idle = VTIMER_MAX_SLICE;
+ goto fire;
+ }
+
+ /* store progress */
+ asm volatile ("STPT %0" : "=m" (done));
+
+	/*
+	 * If the remaining time is negative the timer has already
+	 * expired; do not stop the CPU timer, since the pending
+	 * interrupt will restart it right away.
+	 */
+ if (done & 1LL<<63)
+ return;
+ else
+ vt_list->offset += vt_list->to_expire - done;
+
+	/* save the remaining time; start_cpu_timer() restores it */
+ vt_list->idle = done;
+
+	/*
+	 * We cannot halt the CPU timer, so instead we load a value that
+	 * nearly never expires (only after about 71 years) and rewrite
+	 * the stored expire value when the timer is resumed.
+	 */
+ fire:
+ set_vtimer(VTIMER_MAX_SLICE);
+}
+
+void set_vtimer(__u64 expires)
+{
+ asm volatile ("SPT %0" : : "m" (expires));
+
+ /* store expire time for this CPU timer */
+ per_cpu(virt_cpu_timer, smp_processor_id()).to_expire = expires;
+}
+
+/*
+ * Sorted add to a list. The list is searched linearly until the first
+ * element with a larger expiry value is found; the new timer is
+ * inserted before it.
+ */
+void list_add_sorted(struct vtimer_list *timer, struct list_head *head)
+{
+ struct vtimer_list *event;
+
+ list_for_each_entry(event, head, entry) {
+ if (event->expires > timer->expires) {
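+			/* list_add_tail() on a list member inserts before it */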
+ list_add_tail(&timer->entry, &event->entry);
+ return;
+ }
+ }
+ list_add_tail(&timer->entry, head);
+}
+
+/*
+ * Run the callback functions of expired vtimer events.
+ * Called from within the interrupt handler.
+ */
+static void do_callbacks(struct list_head *cb_list, struct pt_regs *regs)
+{
+ struct vtimer_queue *vt_list;
+ struct vtimer_list *event, *tmp;
+ void (*fn)(unsigned long, struct pt_regs*);
+ unsigned long data;
+
+ if (list_empty(cb_list))
+ return;
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+
+ list_for_each_entry_safe(event, tmp, cb_list, entry) {
+ fn = event->function;
+ data = event->data;
+ fn(data, regs);
+
+ if (!event->interval)
+			/* delete one-shot timer */
+ list_del_init(&event->entry);
+ else {
+ /* move interval timer back to list */
+ spin_lock(&vt_list->lock);
+ list_del_init(&event->entry);
+ list_add_sorted(event, &vt_list->list);
+ spin_unlock(&vt_list->lock);
+ }
+ }
+}
+
+/*
+ * Handler for the virtual CPU timer.
+ */
+static void do_cpu_timer_interrupt(struct pt_regs *regs, __u16 error_code)
+{
+ int cpu;
+ __u64 next, delta;
+ struct vtimer_queue *vt_list;
+ struct vtimer_list *event, *tmp;
+ struct list_head *ptr;
+ /* the callback queue */
+ struct list_head cb_list;
+
+ INIT_LIST_HEAD(&cb_list);
+ cpu = smp_processor_id();
+ vt_list = &per_cpu(virt_cpu_timer, cpu);
+
+ /* walk timer list, fire all expired events */
+ spin_lock(&vt_list->lock);
+
+ if (vt_list->to_expire < VTIMER_MAX_SLICE)
+ vt_list->offset += vt_list->to_expire;
+
+ list_for_each_entry_safe(event, tmp, &vt_list->list, entry) {
+ if (event->expires > vt_list->offset)
+ /* found first unexpired event, leave */
+ break;
+
+ /* re-charge interval timer, we have to add the offset */
+ if (event->interval)
+ event->expires = event->interval + vt_list->offset;
+
+ /* move expired timer to the callback queue */
+ list_move_tail(&event->entry, &cb_list);
+ }
+ spin_unlock(&vt_list->lock);
+ do_callbacks(&cb_list, regs);
+
+ /* next event is first in list */
+ spin_lock(&vt_list->lock);
+ if (!list_empty(&vt_list->list)) {
+ ptr = vt_list->list.next;
+ event = list_entry(ptr, struct vtimer_list, entry);
+ next = event->expires - vt_list->offset;
+
+		/* account for the time that expired while running this
+		 * interrupt handler and the callback functions
+		 */
+ asm volatile ("STPT %0" : "=m" (delta));
+ delta = 0xffffffffffffffffLL - delta + 1;
+ vt_list->offset += delta;
+ next -= delta;
+ } else {
+ vt_list->offset = 0;
+ next = VTIMER_MAX_SLICE;
+ }
+ spin_unlock(&vt_list->lock);
+ set_vtimer(next);
+}
+
+void init_virt_timer(struct vtimer_list *timer)
+{
+ timer->magic = VTIMER_MAGIC;
+ timer->function = NULL;
+ INIT_LIST_HEAD(&timer->entry);
+ spin_lock_init(&timer->lock);
+}
+EXPORT_SYMBOL(init_virt_timer);
+
+static inline int check_vtimer(struct vtimer_list *timer)
+{
+ if (timer->magic != VTIMER_MAGIC)
+ return -EINVAL;
+ return 0;
+}
+
+static inline int vtimer_pending(struct vtimer_list *timer)
+{
+ return (!list_empty(&timer->entry));
+}
+
+/*
+ * This function must only run on the CPU the timer is queued on
+ * (timer->cpu).
+ */
+static void internal_add_vtimer(struct vtimer_list *timer)
+{
+ unsigned long flags;
+ __u64 done;
+ struct vtimer_list *event;
+ struct vtimer_queue *vt_list;
+
+ vt_list = &per_cpu(virt_cpu_timer, timer->cpu);
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ if (timer->cpu != smp_processor_id())
+		printk("internal_add_vtimer: BUG, running on wrong CPU\n");
+
+ /* if list is empty we only have to set the timer */
+ if (list_empty(&vt_list->list)) {
+		/* reset the offset; it may be stale if the last timer was
+		 * just deleted by mod_virt_timer() and the interrupt has
+		 * not fired since
+		 */
+ vt_list->offset = 0;
+ goto fire;
+ }
+
+ /* save progress */
+ asm volatile ("STPT %0" : "=m" (done));
+
+ /* calculate completed work */
+ done = vt_list->to_expire - done + vt_list->offset;
+ vt_list->offset = 0;
+
+ list_for_each_entry(event, &vt_list->list, entry)
+ event->expires -= done;
+
+ fire:
+ list_add_sorted(timer, &vt_list->list);
+
+ /* get first element, which is the next vtimer slice */
+ event = list_entry(vt_list->list.next, struct vtimer_list, entry);
+
+ set_vtimer(event->expires);
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+	/* release the CPU acquired in prepare_vtimer() or mod_virt_timer() */
+ put_cpu();
+}
+
+static inline int prepare_vtimer(struct vtimer_list *timer)
+{
+ if (check_vtimer(timer) || !timer->function) {
+ printk("add_virt_timer: uninitialized timer\n");
+ return -EINVAL;
+ }
+
+ if (!timer->expires || timer->expires > VTIMER_MAX_SLICE) {
+ printk("add_virt_timer: invalid timer expire value!\n");
+ return -EINVAL;
+ }
+
+ if (vtimer_pending(timer)) {
+ printk("add_virt_timer: timer pending\n");
+ return -EBUSY;
+ }
+
+ timer->cpu = get_cpu();
+ return 0;
+}
+
+/*
+ * add_virt_timer - add a one-shot virtual CPU timer
+ */
+void add_virt_timer(void *new)
+{
+ struct vtimer_list *timer;
+
+ timer = (struct vtimer_list *)new;
+
+ if (prepare_vtimer(timer) < 0)
+ return;
+
+ timer->interval = 0;
+ internal_add_vtimer(timer);
+}
+EXPORT_SYMBOL(add_virt_timer);
+
+/*
+ * add_virt_timer_periodic - add an interval (periodic) virtual CPU timer
+ */
+void add_virt_timer_periodic(void *new)
+{
+ struct vtimer_list *timer;
+
+ timer = (struct vtimer_list *)new;
+
+ if (prepare_vtimer(timer) < 0)
+ return;
+
+ timer->interval = timer->expires;
+ internal_add_vtimer(timer);
+}
+EXPORT_SYMBOL(add_virt_timer_periodic);
+
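+/*
+ * Illustrative usage sketch (not part of this patch; the handler name
+ * and expiry value below are made up): a caller initializes a
+ * vtimer_list, fills in the handler and the expiry (which must be > 0
+ * and <= VTIMER_MAX_SLICE), then arms it:
+ *
+ *	static void my_handler(unsigned long data, struct pt_regs *regs)
+ *	{
+ *		... runs from the CPU timer interrupt ...
+ *	}
+ *
+ *	static struct vtimer_list my_timer;
+ *
+ *	init_virt_timer(&my_timer);
+ *	my_timer.function = my_handler;
+ *	my_timer.data = 0;
+ *	my_timer.expires = expiry_in_cpu_timer_units;
+ *	add_virt_timer_periodic(&my_timer);
+ */
+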
+/*
+ * To change a pending timer, this function must be called on the CPU
+ * the timer is queued on, e.g. via smp_call_function_on().
+ *
+ * Like the original mod_timer, this adds the timer if it is not yet
+ * pending; in that case it is added on the current CPU as a one-shot
+ * timer.
+ *
+ * Returns whether it modified a pending timer (1) or not (0).
+ */
+int mod_virt_timer(struct vtimer_list *timer, __u64 expires)
+{
+ struct vtimer_queue *vt_list;
+ unsigned long flags;
+ int cpu;
+
+ if (check_vtimer(timer) || !timer->function) {
+ printk("mod_virt_timer: uninitialized timer\n");
+ return -EINVAL;
+ }
+
+ if (!expires || expires > VTIMER_MAX_SLICE) {
+ printk("mod_virt_timer: invalid expire range\n");
+ return -EINVAL;
+ }
+
+ /*
+ * This is a common optimization triggered by the
+ * networking code - if the timer is re-modified
+ * to be the same thing then just return:
+ */
+ if (timer->expires == expires && vtimer_pending(timer))
+ return 1;
+
+ cpu = get_cpu();
+ vt_list = &per_cpu(virt_cpu_timer, cpu);
+
+	/* disable interrupts before testing whether the timer is pending */
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ /* if timer isn't pending add it on the current CPU */
+ if (!vtimer_pending(timer)) {
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ /* we do not activate an interval timer with mod_virt_timer */
+ timer->interval = 0;
+ timer->expires = expires;
+ timer->cpu = cpu;
+ internal_add_vtimer(timer);
+ return 0;
+ }
+
+ /* check if we run on the right CPU */
+ if (timer->cpu != cpu) {
+ printk("mod_virt_timer: running on wrong CPU, check your code\n");
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ put_cpu();
+ return -EINVAL;
+ }
+
+ list_del_init(&timer->entry);
+ timer->expires = expires;
+
+ /* also change the interval if we have an interval timer */
+ if (timer->interval)
+ timer->interval = expires;
+
+ /* the timer can't expire anymore so we can release the lock */
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ internal_add_vtimer(timer);
+ return 1;
+}
+EXPORT_SYMBOL(mod_virt_timer);
+
+/*
+ * delete a virtual timer
+ *
+ * returns whether the deleted timer was pending (1) or not (0)
+ */
+int del_virt_timer(struct vtimer_list *timer)
+{
+ unsigned long flags;
+ struct vtimer_queue *vt_list;
+
+ if (check_vtimer(timer)) {
+ printk("del_virt_timer: timer not initialized\n");
+ return -EINVAL;
+ }
+
+ /* check if timer is pending */
+ if (!vtimer_pending(timer))
+ return 0;
+
+ vt_list = &per_cpu(virt_cpu_timer, timer->cpu);
+ spin_lock_irqsave(&vt_list->lock, flags);
+
+ /* we don't interrupt a running timer, just let it expire! */
+ list_del_init(&timer->entry);
+
+ /* last timer removed */
+ if (list_empty(&vt_list->list)) {
+ vt_list->to_expire = 0;
+ vt_list->offset = 0;
+ }
+
+ spin_unlock_irqrestore(&vt_list->lock, flags);
+ return 1;
+}
+EXPORT_SYMBOL(del_virt_timer);
+
+/*
+ * Start the virtual CPU timer on the current CPU.
+ */
+void init_cpu_vtimer(void)
+{
+ struct vtimer_queue *vt_list;
+ unsigned long cr0;
+ __u64 timer;
+
+ /* kick the virtual timer */
+ timer = VTIMER_MAX_SLICE;
+ asm volatile ("SPT %0" : : "m" (timer));
+ __ctl_store(cr0, 0, 0);
+	cr0 |= 0x400;	/* enable the CPU-timer external-interrupt subclass */
+ __ctl_load(cr0, 0, 0);
+
+ vt_list = &per_cpu(virt_cpu_timer, smp_processor_id());
+ INIT_LIST_HEAD(&vt_list->list);
+ spin_lock_init(&vt_list->lock);
+ vt_list->to_expire = 0;
+ vt_list->offset = 0;
+ vt_list->idle = 0;
+}
+
+static int vtimer_idle_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ switch (action) {
+ case CPU_IDLE:
+ stop_cpu_timer();
+ break;
+ case CPU_NOT_IDLE:
+ start_cpu_timer();
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block vtimer_idle_nb = {
+ .notifier_call = vtimer_idle_notify,
+};
+
+void __init vtime_init(void)
+{
+ /* request the cpu timer external interrupt */
+ if (register_early_external_interrupt(0x1005, do_cpu_timer_interrupt,
+ &ext_int_info_timer) != 0)
+ panic("Couldn't request external interrupt 0x1005");
+
+ if (register_idle_notifier(&vtimer_idle_nb))
+ panic("Couldn't register idle notifier");
+
+ init_cpu_vtimer();
+}
+
--- /dev/null
+/*
+ * linux/arch/s390/mm/mmap.c
+ *
+ * flexible mmap layout support
+ *
+ * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+ * All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * Started by Ingo Molnar <mingo@elte.hu>
+ */
+
+#include <linux/personality.h>
+#include <linux/mm.h>
+
+/*
+ * Top of mmap area (just below the process stack).
+ *
+ * Leave at least a ~128 MB hole.
+ */
+#define MIN_GAP (128*1024*1024)
+#define MAX_GAP (TASK_SIZE/6*5)
+
+static inline unsigned long mmap_base(void)
+{
+ unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+ return TASK_SIZE - (gap & PAGE_MASK);
+}
+
+static inline int mmap_is_legacy(void)
+{
+#ifdef CONFIG_ARCH_S390X
+ /*
+ * Force standard allocation for 64 bit programs.
+ */
+ if (!test_thread_flag(TIF_31BIT))
+ return 1;
+#endif
+ return sysctl_legacy_va_layout ||
+ (current->personality & ADDR_COMPAT_LAYOUT) ||
+ current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY;
+}
+
+/*
+ * This function, called very early during the creation of a new
+ * process VM image, sets up which VM layout function to use:
+ */
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+ /*
+ * Fall back to the standard layout if the personality
+ * bit is set, or if the expected stack growth is unlimited:
+ */
+ if (mmap_is_legacy()) {
+ mm->mmap_base = TASK_UNMAPPED_BASE;
+ mm->get_unmapped_area = arch_get_unmapped_area;
+ mm->unmap_area = arch_unmap_area;
+ } else {
+ mm->mmap_base = mmap_base();
+ mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ mm->unmap_area = arch_unmap_area_topdown;
+ }
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/mach_rts7751r2d.c
+ *
+ * Minor tweak of mach_se.c file to reference rts7751r2d-specific items.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Machine vector for the Renesas Technology sales RTS7751R2D
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/types.h>
+
+#include <asm/machvec.h>
+#include <asm/rtc.h>
+#include <asm/irq.h>
+#include <asm/rts7751r2d/io.h>
+
+extern void heartbeat_rts7751r2d(void);
+extern void init_rts7751r2d_IRQ(void);
+extern void *rts7751r2d_ioremap(unsigned long, unsigned long);
+extern int rts7751r2d_irq_demux(int irq);
+
+extern void *voyagergx_consistent_alloc(struct device *, size_t, dma_addr_t *, int);
+extern int voyagergx_consistent_free(struct device *, size_t, void *, dma_addr_t);
+
+/*
+ * The Machine Vector
+ */
+
+struct sh_machine_vector mv_rts7751r2d __initmv = {
+ .mv_nr_irqs = 72,
+
+ .mv_inb = rts7751r2d_inb,
+ .mv_inw = rts7751r2d_inw,
+ .mv_inl = rts7751r2d_inl,
+ .mv_outb = rts7751r2d_outb,
+ .mv_outw = rts7751r2d_outw,
+ .mv_outl = rts7751r2d_outl,
+
+ .mv_inb_p = rts7751r2d_inb_p,
+ .mv_inw_p = rts7751r2d_inw,
+ .mv_inl_p = rts7751r2d_inl,
+ .mv_outb_p = rts7751r2d_outb_p,
+ .mv_outw_p = rts7751r2d_outw,
+ .mv_outl_p = rts7751r2d_outl,
+
+ .mv_insb = rts7751r2d_insb,
+ .mv_insw = rts7751r2d_insw,
+ .mv_insl = rts7751r2d_insl,
+ .mv_outsb = rts7751r2d_outsb,
+ .mv_outsw = rts7751r2d_outsw,
+ .mv_outsl = rts7751r2d_outsl,
+
+ .mv_ioremap = rts7751r2d_ioremap,
+ .mv_isa_port2addr = rts7751r2d_isa_port2addr,
+ .mv_init_irq = init_rts7751r2d_IRQ,
+#ifdef CONFIG_HEARTBEAT
+ .mv_heartbeat = heartbeat_rts7751r2d,
+#endif
+ .mv_irq_demux = rts7751r2d_irq_demux,
+
+#ifdef CONFIG_USB_OHCI_HCD
+ .mv_consistent_alloc = voyagergx_consistent_alloc,
+ .mv_consistent_free = voyagergx_consistent_free,
+#endif
+};
+ALIAS_MV(rts7751r2d)
--- /dev/null
+/*
+ * arch/sh/cchips/voyagergx/consistent.c
+ *
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/mm.h>
+#include <linux/dma-mapping.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <asm/io.h>
+#include <asm/bus-sh.h>
+
+struct voya_alloc_entry {
+ struct list_head list;
+ unsigned long ofs;
+ unsigned long len;
+};
+
+static spinlock_t voya_list_lock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(voya_alloc_list);
+
+#define OHCI_SRAM_START 0xb0000000
+#define OHCI_HCCA_SIZE 0x100
+#define OHCI_SRAM_SIZE 0x10000
+
+void *voyagergx_consistent_alloc(struct device *dev, size_t size,
+ dma_addr_t *handle, int flag)
+{
+ struct list_head *list = &voya_alloc_list;
+ struct voya_alloc_entry *entry;
+ struct sh_dev *shdev = to_sh_dev(dev);
+ unsigned long start, end;
+ unsigned long flags;
+
+ /*
+ * The SM501 contains an integrated 8051 with its own SRAM.
+ * Devices within the cchip can all hook into the 8051 SRAM.
+ * We presently use this for the OHCI.
+ *
+ * Everything else goes through consistent_alloc().
+ */
+ if (!dev || dev->bus != &sh_bus_types[SH_BUS_VIRT] ||
+ (dev->bus == &sh_bus_types[SH_BUS_VIRT] &&
+ shdev->dev_id != SH_DEV_ID_USB_OHCI))
+ return NULL;
+
+ start = OHCI_SRAM_START + OHCI_HCCA_SIZE;
+
+ entry = kmalloc(sizeof(struct voya_alloc_entry), GFP_ATOMIC);
+ if (!entry)
+ return ERR_PTR(-ENOMEM);
+
+	entry->len = (size + 15) & ~15;	/* round up to a 16-byte boundary */
+
+ /*
+ * The basis for this allocator is dwmw2's malloc.. the
+ * Matrox allocator :-)
+ */
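+	/*
+	 * First fit: walk the address-sorted allocation list and take
+	 * the first gap between entries that is large enough; otherwise
+	 * fall through to the free space after the last entry.
+	 */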
+ spin_lock_irqsave(&voya_list_lock, flags);
+ list_for_each(list, &voya_alloc_list) {
+ struct voya_alloc_entry *p;
+
+ p = list_entry(list, struct voya_alloc_entry, list);
+
+ if (p->ofs - start >= size)
+ goto out;
+
+ start = p->ofs + p->len;
+ }
+
+ end = start + (OHCI_SRAM_SIZE - OHCI_HCCA_SIZE);
+ list = &voya_alloc_list;
+
+ if (end - start >= size) {
+out:
+ entry->ofs = start;
+ list_add_tail(&entry->list, list);
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+
+ *handle = start;
+ return (void *)start;
+ }
+
+ kfree(entry);
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+
+ return ERR_PTR(-EINVAL);
+}
+
+int voyagergx_consistent_free(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t handle)
+{
+ struct voya_alloc_entry *entry;
+ struct sh_dev *shdev = to_sh_dev(dev);
+ unsigned long flags;
+
+ if (!dev || dev->bus != &sh_bus_types[SH_BUS_VIRT] ||
+ (dev->bus == &sh_bus_types[SH_BUS_VIRT] &&
+ shdev->dev_id != SH_DEV_ID_USB_OHCI))
+ return -EINVAL;
+
+ spin_lock_irqsave(&voya_list_lock, flags);
+ list_for_each_entry(entry, &voya_alloc_list, list) {
+ if (entry->ofs != handle)
+ continue;
+
+ list_del(&entry->list);
+ kfree(entry);
+
+ break;
+ }
+ spin_unlock_irqrestore(&voya_list_lock, flags);
+
+ return 0;
+}
+
+EXPORT_SYMBOL(voyagergx_consistent_alloc);
+EXPORT_SYMBOL(voyagergx_consistent_free);
+
--- /dev/null
+#
+# Automatically generated make config: don't edit
+#
+CONFIG_SUPERH=y
+CONFIG_UID16=y
+CONFIG_RWSEM_GENERIC_SPINLOCK=y
+
+#
+# Code maturity level options
+#
+CONFIG_EXPERIMENTAL=y
+CONFIG_CLEAN_COMPILE=y
+CONFIG_STANDALONE=y
+CONFIG_BROKEN_ON_SMP=y
+
+#
+# General setup
+#
+CONFIG_SWAP=y
+CONFIG_SYSVIPC=y
+# CONFIG_POSIX_MQUEUE is not set
+# CONFIG_BSD_PROCESS_ACCT is not set
+CONFIG_SYSCTL=y
+# CONFIG_AUDIT is not set
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_HOTPLUG=y
+# CONFIG_IKCONFIG is not set
+CONFIG_EMBEDDED=y
+CONFIG_KALLSYMS=y
+CONFIG_FUTEX=y
+CONFIG_EPOLL=y
+CONFIG_IOSCHED_NOOP=y
+CONFIG_IOSCHED_AS=y
+CONFIG_IOSCHED_DEADLINE=y
+CONFIG_IOSCHED_CFQ=y
+# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
+
+#
+# Loadable module support
+#
+CONFIG_MODULES=y
+# CONFIG_MODULE_UNLOAD is not set
+CONFIG_OBSOLETE_MODPARM=y
+# CONFIG_MODVERSIONS is not set
+# CONFIG_KMOD is not set
+
+#
+# System type
+#
+# CONFIG_SH_SOLUTION_ENGINE is not set
+# CONFIG_SH_7751_SOLUTION_ENGINE is not set
+# CONFIG_SH_7300_SOLUTION_ENGINE is not set
+# CONFIG_SH_7751_SYSTEMH is not set
+# CONFIG_SH_STB1_HARP is not set
+# CONFIG_SH_STB1_OVERDRIVE is not set
+# CONFIG_SH_HP620 is not set
+# CONFIG_SH_HP680 is not set
+# CONFIG_SH_HP690 is not set
+# CONFIG_SH_CQREEK is not set
+# CONFIG_SH_DMIDA is not set
+# CONFIG_SH_EC3104 is not set
+# CONFIG_SH_SATURN is not set
+# CONFIG_SH_DREAMCAST is not set
+# CONFIG_SH_CAT68701 is not set
+# CONFIG_SH_BIGSUR is not set
+# CONFIG_SH_SH2000 is not set
+# CONFIG_SH_ADX is not set
+# CONFIG_SH_MPC1211 is not set
+# CONFIG_SH_SECUREEDGE5410 is not set
+# CONFIG_SH_HS7751RVOIP is not set
+CONFIG_SH_RTS7751R2D=y
+# CONFIG_SH_UNKNOWN is not set
+# CONFIG_CPU_SH2 is not set
+# CONFIG_CPU_SH3 is not set
+CONFIG_CPU_SH4=y
+# CONFIG_CPU_SUBTYPE_SH7604 is not set
+# CONFIG_CPU_SUBTYPE_SH7300 is not set
+# CONFIG_CPU_SUBTYPE_SH7705 is not set
+# CONFIG_CPU_SUBTYPE_SH7707 is not set
+# CONFIG_CPU_SUBTYPE_SH7708 is not set
+# CONFIG_CPU_SUBTYPE_SH7709 is not set
+# CONFIG_CPU_SUBTYPE_SH7750 is not set
+CONFIG_CPU_SUBTYPE_SH7751=y
+# CONFIG_CPU_SUBTYPE_SH7760 is not set
+# CONFIG_CPU_SUBTYPE_ST40STB1 is not set
+# CONFIG_CPU_SUBTYPE_ST40GX1 is not set
+CONFIG_MMU=y
+CONFIG_CMDLINE_BOOL=y
+CONFIG_CMDLINE="mem=64M console=ttySC0,115200 root=/dev/hda1"
+CONFIG_MEMORY_START=0x0c000000
+CONFIG_MEMORY_SIZE=0x04000000
+CONFIG_MEMORY_SET=y
+# CONFIG_MEMORY_OVERRIDE is not set
+CONFIG_SH_RTC=y
+CONFIG_ZERO_PAGE_OFFSET=0x00010000
+CONFIG_BOOT_LINK_OFFSET=0x00800000
+CONFIG_CPU_LITTLE_ENDIAN=y
+# CONFIG_PREEMPT is not set
+# CONFIG_UBC_WAKEUP is not set
+# CONFIG_SH_WRITETHROUGH is not set
+# CONFIG_SH_OCRAM is not set
+# CONFIG_SH_STORE_QUEUES is not set
+# CONFIG_SMP is not set
+CONFIG_RTS7751R2D_REV11=y
+CONFIG_SH_PCLK_CALC=y
+CONFIG_SH_PCLK_FREQ=60000000
+
+#
+# CPU Frequency scaling
+#
+# CONFIG_CPU_FREQ is not set
+
+#
+# DMA support
+#
+CONFIG_SH_DMA=y
+CONFIG_NR_ONCHIP_DMA_CHANNELS=8
+# CONFIG_NR_DMA_CHANNELS_BOOL is not set
+
+#
+# Companion Chips
+#
+CONFIG_VOYAGERGX=y
+# CONFIG_HD6446X_SERIES is not set
+CONFIG_HEARTBEAT=y
+CONFIG_RTC_9701JE=y
+
+#
+# Bus options (PCI, PCMCIA, EISA, MCA, ISA)
+#
+CONFIG_ISA=y
+CONFIG_PCI=y
+CONFIG_SH_PCIDMA_NONCOHERENT=y
+CONFIG_PCI_AUTO=y
+CONFIG_PCI_AUTO_UPDATE_RESOURCES=y
+# CONFIG_PCI_LEGACY_PROC is not set
+CONFIG_PCI_NAMES=y
+
+#
+# PCMCIA/CardBus support
+#
+CONFIG_PCMCIA=m
+# CONFIG_PCMCIA_DEBUG is not set
+CONFIG_YENTA=m
+CONFIG_CARDBUS=y
+# CONFIG_I82092 is not set
+# CONFIG_I82365 is not set
+# CONFIG_TCIC is not set
+CONFIG_PCMCIA_PROBE=y
+
+#
+# PCI Hotplug Support
+#
+CONFIG_HOTPLUG_PCI=y
+# CONFIG_HOTPLUG_PCI_FAKE is not set
+# CONFIG_HOTPLUG_PCI_CPCI is not set
+# CONFIG_HOTPLUG_PCI_PCIE is not set
+# CONFIG_HOTPLUG_PCI_SHPC is not set
+
+#
+# Executable file formats
+#
+CONFIG_BINFMT_ELF=y
+# CONFIG_BINFMT_FLAT is not set
+# CONFIG_BINFMT_MISC is not set
+
+#
+# Device Drivers
+#
+
+#
+# Generic Driver Options
+#
+# CONFIG_FW_LOADER is not set
+
+#
+# Memory Technology Devices (MTD)
+#
+# CONFIG_MTD is not set
+
+#
+# Parallel port support
+#
+# CONFIG_PARPORT is not set
+
+#
+# Plug and Play support
+#
+# CONFIG_PNP is not set
+
+#
+# Block devices
+#
+# CONFIG_BLK_DEV_FD is not set
+# CONFIG_BLK_DEV_XD is not set
+# CONFIG_BLK_CPQ_DA is not set
+# CONFIG_BLK_CPQ_CISS_DA is not set
+# CONFIG_BLK_DEV_DAC960 is not set
+# CONFIG_BLK_DEV_UMEM is not set
+# CONFIG_BLK_DEV_LOOP is not set
+# CONFIG_BLK_DEV_NBD is not set
+# CONFIG_BLK_DEV_CARMEL is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_SIZE=4096
+# CONFIG_BLK_DEV_INITRD is not set
+# CONFIG_LBD is not set
+
+#
+# ATA/ATAPI/MFM/RLL support
+#
+CONFIG_IDE=y
+CONFIG_IDE_MAX_HWIFS=4
+CONFIG_BLK_DEV_IDE=y
+
+#
+# Please see Documentation/ide.txt for help/info on IDE drives
+#
+CONFIG_BLK_DEV_IDEDISK=y
+# CONFIG_IDEDISK_MULTI_MODE is not set
+CONFIG_BLK_DEV_IDECS=m
+# CONFIG_BLK_DEV_IDECD is not set
+# CONFIG_BLK_DEV_IDETAPE is not set
+# CONFIG_BLK_DEV_IDEFLOPPY is not set
+# CONFIG_IDE_TASK_IOCTL is not set
+# CONFIG_IDE_TASKFILE_IO is not set
+
+#
+# IDE chipset support/bugfixes
+#
+CONFIG_IDE_GENERIC=y
+# CONFIG_BLK_DEV_IDEPCI is not set
+CONFIG_IDE_SH=y
+# CONFIG_IDE_ARM is not set
+# CONFIG_IDE_CHIPSETS is not set
+# CONFIG_BLK_DEV_IDEDMA is not set
+# CONFIG_IDEDMA_AUTO is not set
+# CONFIG_BLK_DEV_HD is not set
+
+#
+# SCSI device support
+#
+# CONFIG_SCSI is not set
+
+#
+# Old CD-ROM drivers (not SCSI, not IDE)
+#
+# CONFIG_CD_NO_IDESCSI is not set
+
+#
+# Multi-device support (RAID and LVM)
+#
+# CONFIG_MD is not set
+
+#
+# Fusion MPT device support
+#
+
+#
+# IEEE 1394 (FireWire) support
+#
+# CONFIG_IEEE1394 is not set
+
+#
+# I2O device support
+#
+# CONFIG_I2O is not set
+
+#
+# Networking support
+#
+CONFIG_NET=y
+
+#
+# Networking options
+#
+CONFIG_PACKET=y
+# CONFIG_PACKET_MMAP is not set
+# CONFIG_NETLINK_DEV is not set
+CONFIG_UNIX=y
+# CONFIG_NET_KEY is not set
+CONFIG_INET=y
+# CONFIG_IP_MULTICAST is not set
+# CONFIG_IP_ADVANCED_ROUTER is not set
+# CONFIG_IP_PNP is not set
+# CONFIG_NET_IPIP is not set
+# CONFIG_NET_IPGRE is not set
+# CONFIG_ARPD is not set
+# CONFIG_SYN_COOKIES is not set
+# CONFIG_INET_AH is not set
+# CONFIG_INET_ESP is not set
+# CONFIG_INET_IPCOMP is not set
+# CONFIG_IPV6 is not set
+# CONFIG_NETFILTER is not set
+
+#
+# SCTP Configuration (EXPERIMENTAL)
+#
+# CONFIG_IP_SCTP is not set
+# CONFIG_ATM is not set
+# CONFIG_BRIDGE is not set
+# CONFIG_VLAN_8021Q is not set
+# CONFIG_DECNET is not set
+# CONFIG_LLC2 is not set
+# CONFIG_IPX is not set
+# CONFIG_ATALK is not set
+# CONFIG_X25 is not set
+# CONFIG_LAPB is not set
+# CONFIG_NET_DIVERT is not set
+# CONFIG_ECONET is not set
+# CONFIG_WAN_ROUTER is not set
+# CONFIG_NET_FASTROUTE is not set
+# CONFIG_NET_HW_FLOWCONTROL is not set
+
+#
+# QoS and/or fair queueing
+#
+# CONFIG_NET_SCHED is not set
+
+#
+# Network testing
+#
+# CONFIG_NET_PKTGEN is not set
+# CONFIG_NETPOLL is not set
+# CONFIG_NET_POLL_CONTROLLER is not set
+# CONFIG_HAMRADIO is not set
+# CONFIG_IRDA is not set
+# CONFIG_BT is not set
+CONFIG_NETDEVICES=y
+# CONFIG_DUMMY is not set
+# CONFIG_BONDING is not set
+# CONFIG_EQUALIZER is not set
+# CONFIG_TUN is not set
+
+#
+# ARCnet devices
+#
+# CONFIG_ARCNET is not set
+
+#
+# Ethernet (10 or 100Mbit)
+#
+CONFIG_NET_ETHERNET=y
+CONFIG_MII=y
+# CONFIG_STNIC is not set
+# CONFIG_HAPPYMEAL is not set
+# CONFIG_SUNGEM is not set
+# CONFIG_NET_VENDOR_3COM is not set
+# CONFIG_LANCE is not set
+# CONFIG_NET_VENDOR_SMC is not set
+# CONFIG_NET_VENDOR_RACAL is not set
+
+#
+# Tulip family network device support
+#
+# CONFIG_NET_TULIP is not set
+# CONFIG_AT1700 is not set
+# CONFIG_DEPCA is not set
+# CONFIG_HP100 is not set
+CONFIG_NET_ISA=y
+# CONFIG_E2100 is not set
+# CONFIG_EWRK3 is not set
+# CONFIG_EEXPRESS is not set
+# CONFIG_EEXPRESS_PRO is not set
+# CONFIG_HPLAN_PLUS is not set
+# CONFIG_HPLAN is not set
+# CONFIG_LP486E is not set
+# CONFIG_ETH16I is not set
+CONFIG_NE2000=m
+# CONFIG_ZNET is not set
+# CONFIG_SEEQ8005 is not set
+CONFIG_NET_PCI=y
+# CONFIG_PCNET32 is not set
+# CONFIG_AMD8111_ETH is not set
+# CONFIG_ADAPTEC_STARFIRE is not set
+# CONFIG_AC3200 is not set
+# CONFIG_APRICOT is not set
+# CONFIG_B44 is not set
+# CONFIG_FORCEDETH is not set
+# CONFIG_CS89x0 is not set
+# CONFIG_DGRS is not set
+# CONFIG_EEPRO100 is not set
+# CONFIG_E100 is not set
+# CONFIG_FEALNX is not set
+# CONFIG_NATSEMI is not set
+# CONFIG_NE2K_PCI is not set
+# CONFIG_8139CP is not set
+CONFIG_8139TOO=y
+# CONFIG_8139TOO_PIO is not set
+# CONFIG_8139TOO_TUNE_TWISTER is not set
+# CONFIG_8139TOO_8129 is not set
+# CONFIG_8139_OLD_RX_RESET is not set
+# CONFIG_SIS900 is not set
+# CONFIG_EPIC100 is not set
+# CONFIG_SUNDANCE is not set
+# CONFIG_TLAN is not set
+# CONFIG_VIA_RHINE is not set
+# CONFIG_NET_POCKET is not set
+
+#
+# Ethernet (1000 Mbit)
+#
+# CONFIG_ACENIC is not set
+# CONFIG_DL2K is not set
+# CONFIG_E1000 is not set
+# CONFIG_NS83820 is not set
+# CONFIG_HAMACHI is not set
+# CONFIG_YELLOWFIN is not set
+# CONFIG_R8169 is not set
+# CONFIG_SK98LIN is not set
+# CONFIG_TIGON3 is not set
+
+#
+# Ethernet (10000 Mbit)
+#
+# CONFIG_IXGB is not set
+# CONFIG_S2IO is not set
+
+#
+# Token Ring devices
+#
+# CONFIG_TR is not set
+
+#
+# Wireless LAN (non-hamradio)
+#
+CONFIG_NET_RADIO=y
+
+#
+# Obsolete Wireless cards support (pre-802.11)
+#
+# CONFIG_STRIP is not set
+# CONFIG_ARLAN is not set
+# CONFIG_WAVELAN is not set
+# CONFIG_PCMCIA_WAVELAN is not set
+# CONFIG_PCMCIA_NETWAVE is not set
+
+#
+# Wireless 802.11 Frequency Hopping cards support
+#
+# CONFIG_PCMCIA_RAYCS is not set
+
+#
+# Wireless 802.11b ISA/PCI cards support
+#
+# CONFIG_AIRO is not set
+CONFIG_HERMES=m
+# CONFIG_PLX_HERMES is not set
+# CONFIG_TMD_HERMES is not set
+# CONFIG_PCI_HERMES is not set
+# CONFIG_ATMEL is not set
+
+#
+# Wireless 802.11b Pcmcia/Cardbus cards support
+#
+CONFIG_PCMCIA_HERMES=m
+# CONFIG_AIRO_CS is not set
+# CONFIG_PCMCIA_WL3501 is not set
+
+#
+# Prism GT/Duette 802.11(a/b/g) PCI/Cardbus support
+#
+# CONFIG_PRISM54 is not set
+CONFIG_NET_WIRELESS=y
+
+#
+# PCMCIA network device support
+#
+# CONFIG_NET_PCMCIA is not set
+
+#
+# Wan interfaces
+#
+# CONFIG_WAN is not set
+# CONFIG_FDDI is not set
+# CONFIG_HIPPI is not set
+# CONFIG_PPP is not set
+# CONFIG_SLIP is not set
+# CONFIG_SHAPER is not set
+# CONFIG_NETCONSOLE is not set
+
+#
+# ISDN subsystem
+#
+# CONFIG_ISDN is not set
+
+#
+# Telephony Support
+#
+# CONFIG_PHONE is not set
+
+#
+# Input device support
+#
+# CONFIG_INPUT is not set
+
+#
+# Userland interfaces
+#
+
+#
+# Input I/O drivers
+#
+# CONFIG_GAMEPORT is not set
+CONFIG_SOUND_GAMEPORT=y
+# CONFIG_SERIO is not set
+# CONFIG_SERIO_I8042 is not set
+
+#
+# Input Device Drivers
+#
+
+#
+# Character devices
+#
+# CONFIG_VT is not set
+# CONFIG_SERIAL_NONSTANDARD is not set
+
+#
+# Serial drivers
+#
+# CONFIG_SERIAL_8250 is not set
+
+#
+# Non-8250 serial port support
+#
+# CONFIG_SERIAL_SH_SCI is not set
+# CONFIG_UNIX98_PTYS is not set
+CONFIG_LEGACY_PTYS=y
+CONFIG_LEGACY_PTY_COUNT=256
+# CONFIG_QIC02_TAPE is not set
+
+#
+# IPMI
+#
+# CONFIG_IPMI_HANDLER is not set
+
+#
+# Watchdog Cards
+#
+# CONFIG_WATCHDOG is not set
+# CONFIG_RTC is not set
+# CONFIG_GEN_RTC is not set
+# CONFIG_DTLK is not set
+# CONFIG_R3964 is not set
+# CONFIG_APPLICOM is not set
+
+#
+# Ftape, the floppy tape device driver
+#
+# CONFIG_FTAPE is not set
+# CONFIG_AGP is not set
+# CONFIG_DRM is not set
+
+#
+# PCMCIA character devices
+#
+# CONFIG_SYNCLINK_CS is not set
+# CONFIG_RAW_DRIVER is not set
+
+#
+# I2C support
+#
+# CONFIG_I2C is not set
+
+#
+# Misc devices
+#
+
+#
+# Multimedia devices
+#
+# CONFIG_VIDEO_DEV is not set
+
+#
+# Digital Video Broadcasting Devices
+#
+# CONFIG_DVB is not set
+
+#
+# Graphics support
+#
+# CONFIG_FB is not set
+
+#
+# Sound
+#
+CONFIG_SOUND=y
+
+#
+# Advanced Linux Sound Architecture
+#
+CONFIG_SND=m
+CONFIG_SND_TIMER=m
+CONFIG_SND_PCM=m
+CONFIG_SND_HWDEP=m
+CONFIG_SND_RAWMIDI=m
+# CONFIG_SND_SEQUENCER is not set
+# CONFIG_SND_MIXER_OSS is not set
+# CONFIG_SND_PCM_OSS is not set
+# CONFIG_SND_VERBOSE_PRINTK is not set
+# CONFIG_SND_DEBUG is not set
+
+#
+# Generic devices
+#
+CONFIG_SND_MPU401_UART=m
+CONFIG_SND_OPL3_LIB=m
+# CONFIG_SND_DUMMY is not set
+# CONFIG_SND_MTPAV is not set
+# CONFIG_SND_SERIAL_U16550 is not set
+# CONFIG_SND_MPU401 is not set
+
+#
+# ISA devices
+#
+# CONFIG_SND_AD1848 is not set
+# CONFIG_SND_CS4231 is not set
+# CONFIG_SND_CS4232 is not set
+# CONFIG_SND_CS4236 is not set
+# CONFIG_SND_ES1688 is not set
+# CONFIG_SND_ES18XX is not set
+# CONFIG_SND_GUSCLASSIC is not set
+# CONFIG_SND_GUSEXTREME is not set
+# CONFIG_SND_GUSMAX is not set
+# CONFIG_SND_INTERWAVE is not set
+# CONFIG_SND_INTERWAVE_STB is not set
+# CONFIG_SND_OPTI92X_AD1848 is not set
+# CONFIG_SND_OPTI92X_CS4231 is not set
+# CONFIG_SND_OPTI93X is not set
+# CONFIG_SND_SB8 is not set
+# CONFIG_SND_SB16 is not set
+# CONFIG_SND_SBAWE is not set
+# CONFIG_SND_WAVEFRONT is not set
+# CONFIG_SND_CMI8330 is not set
+# CONFIG_SND_OPL3SA2 is not set
+# CONFIG_SND_SGALAXY is not set
+# CONFIG_SND_SSCAPE is not set
+
+#
+# PCI devices
+#
+CONFIG_SND_AC97_CODEC=m
+# CONFIG_SND_ALI5451 is not set
+# CONFIG_SND_ATIIXP is not set
+# CONFIG_SND_AU8810 is not set
+# CONFIG_SND_AU8820 is not set
+# CONFIG_SND_AU8830 is not set
+# CONFIG_SND_AZT3328 is not set
+# CONFIG_SND_BT87X is not set
+# CONFIG_SND_CS46XX is not set
+# CONFIG_SND_CS4281 is not set
+# CONFIG_SND_EMU10K1 is not set
+# CONFIG_SND_KORG1212 is not set
+# CONFIG_SND_MIXART is not set
+# CONFIG_SND_NM256 is not set
+# CONFIG_SND_RME32 is not set
+# CONFIG_SND_RME96 is not set
+# CONFIG_SND_RME9652 is not set
+# CONFIG_SND_HDSP is not set
+# CONFIG_SND_TRIDENT is not set
+CONFIG_SND_YMFPCI=m
+# CONFIG_SND_ALS4000 is not set
+# CONFIG_SND_CMIPCI is not set
+# CONFIG_SND_ENS1370 is not set
+# CONFIG_SND_ENS1371 is not set
+# CONFIG_SND_ES1938 is not set
+# CONFIG_SND_ES1968 is not set
+# CONFIG_SND_MAESTRO3 is not set
+# CONFIG_SND_FM801 is not set
+# CONFIG_SND_ICE1712 is not set
+# CONFIG_SND_ICE1724 is not set
+# CONFIG_SND_INTEL8X0 is not set
+# CONFIG_SND_INTEL8X0M is not set
+# CONFIG_SND_SONICVIBES is not set
+# CONFIG_SND_VIA82XX is not set
+# CONFIG_SND_VX222 is not set
+
+#
+# PCMCIA devices
+#
+# CONFIG_SND_VXPOCKET is not set
+# CONFIG_SND_VXP440 is not set
+# CONFIG_SND_PDAUDIOCF is not set
+
+#
+# Open Sound System
+#
+CONFIG_SOUND_PRIME=m
+# CONFIG_SOUND_BT878 is not set
+CONFIG_SOUND_CMPCI=m
+# CONFIG_SOUND_EMU10K1 is not set
+# CONFIG_SOUND_FUSION is not set
+# CONFIG_SOUND_CS4281 is not set
+# CONFIG_SOUND_ES1370 is not set
+# CONFIG_SOUND_ES1371 is not set
+# CONFIG_SOUND_ESSSOLO1 is not set
+# CONFIG_SOUND_MAESTRO is not set
+# CONFIG_SOUND_MAESTRO3 is not set
+# CONFIG_SOUND_ICH is not set
+# CONFIG_SOUND_SONICVIBES is not set
+# CONFIG_SOUND_TRIDENT is not set
+# CONFIG_SOUND_MSNDCLAS is not set
+# CONFIG_SOUND_MSNDPIN is not set
+# CONFIG_SOUND_VIA82CXXX is not set
+# CONFIG_SOUND_OSS is not set
+# CONFIG_SOUND_ALI5455 is not set
+# CONFIG_SOUND_FORTE is not set
+# CONFIG_SOUND_RME96XX is not set
+# CONFIG_SOUND_AD1980 is not set
+CONFIG_SOUND_VOYAGERGX=m
+
+#
+# USB support
+#
+# CONFIG_USB is not set
+
+#
+# USB Gadget Support
+#
+# CONFIG_USB_GADGET is not set
+
+#
+# File systems
+#
+CONFIG_EXT2_FS=y
+# CONFIG_EXT2_FS_XATTR is not set
+# CONFIG_EXT3_FS is not set
+# CONFIG_JBD is not set
+# CONFIG_REISERFS_FS is not set
+# CONFIG_JFS_FS is not set
+# CONFIG_XFS_FS is not set
+CONFIG_MINIX_FS=y
+# CONFIG_ROMFS_FS is not set
+# CONFIG_QUOTA is not set
+# CONFIG_AUTOFS_FS is not set
+# CONFIG_AUTOFS4_FS is not set
+
+#
+# CD-ROM/DVD Filesystems
+#
+# CONFIG_ISO9660_FS is not set
+# CONFIG_UDF_FS is not set
+
+#
+# DOS/FAT/NT Filesystems
+#
+CONFIG_FAT_FS=y
+CONFIG_MSDOS_FS=y
+CONFIG_VFAT_FS=y
+# CONFIG_NTFS_FS is not set
+
+#
+# Pseudo filesystems
+#
+CONFIG_PROC_FS=y
+CONFIG_PROC_KCORE=y
+CONFIG_SYSFS=y
+# CONFIG_DEVFS_FS is not set
+# CONFIG_TMPFS is not set
+# CONFIG_HUGETLBFS is not set
+# CONFIG_HUGETLB_PAGE is not set
+CONFIG_RAMFS=y
+
+#
+# Miscellaneous filesystems
+#
+# CONFIG_ADFS_FS is not set
+# CONFIG_AFFS_FS is not set
+# CONFIG_HFS_FS is not set
+# CONFIG_HFSPLUS_FS is not set
+# CONFIG_BEFS_FS is not set
+# CONFIG_BFS_FS is not set
+# CONFIG_EFS_FS is not set
+# CONFIG_CRAMFS is not set
+# CONFIG_VXFS_FS is not set
+# CONFIG_HPFS_FS is not set
+# CONFIG_QNX4FS_FS is not set
+# CONFIG_SYSV_FS is not set
+# CONFIG_UFS_FS is not set
+
+#
+# Network File Systems
+#
+# CONFIG_NFS_FS is not set
+# CONFIG_NFSD is not set
+# CONFIG_EXPORTFS is not set
+# CONFIG_SMB_FS is not set
+# CONFIG_CIFS is not set
+# CONFIG_NCP_FS is not set
+# CONFIG_CODA_FS is not set
+# CONFIG_AFS_FS is not set
+
+#
+# Partition Types
+#
+# CONFIG_PARTITION_ADVANCED is not set
+CONFIG_MSDOS_PARTITION=y
+
+#
+# Native Language Support
+#
+CONFIG_NLS=y
+CONFIG_NLS_DEFAULT="iso8859-1"
+# CONFIG_NLS_CODEPAGE_437 is not set
+# CONFIG_NLS_CODEPAGE_737 is not set
+# CONFIG_NLS_CODEPAGE_775 is not set
+# CONFIG_NLS_CODEPAGE_850 is not set
+# CONFIG_NLS_CODEPAGE_852 is not set
+# CONFIG_NLS_CODEPAGE_855 is not set
+# CONFIG_NLS_CODEPAGE_857 is not set
+# CONFIG_NLS_CODEPAGE_860 is not set
+# CONFIG_NLS_CODEPAGE_861 is not set
+# CONFIG_NLS_CODEPAGE_862 is not set
+# CONFIG_NLS_CODEPAGE_863 is not set
+# CONFIG_NLS_CODEPAGE_864 is not set
+# CONFIG_NLS_CODEPAGE_865 is not set
+# CONFIG_NLS_CODEPAGE_866 is not set
+# CONFIG_NLS_CODEPAGE_869 is not set
+# CONFIG_NLS_CODEPAGE_936 is not set
+# CONFIG_NLS_CODEPAGE_950 is not set
+CONFIG_NLS_CODEPAGE_932=y
+# CONFIG_NLS_CODEPAGE_949 is not set
+# CONFIG_NLS_CODEPAGE_874 is not set
+# CONFIG_NLS_ISO8859_8 is not set
+# CONFIG_NLS_CODEPAGE_1250 is not set
+# CONFIG_NLS_CODEPAGE_1251 is not set
+# CONFIG_NLS_ISO8859_1 is not set
+# CONFIG_NLS_ISO8859_2 is not set
+# CONFIG_NLS_ISO8859_3 is not set
+# CONFIG_NLS_ISO8859_4 is not set
+# CONFIG_NLS_ISO8859_5 is not set
+# CONFIG_NLS_ISO8859_6 is not set
+# CONFIG_NLS_ISO8859_7 is not set
+# CONFIG_NLS_ISO8859_9 is not set
+# CONFIG_NLS_ISO8859_13 is not set
+# CONFIG_NLS_ISO8859_14 is not set
+# CONFIG_NLS_ISO8859_15 is not set
+# CONFIG_NLS_KOI8_R is not set
+# CONFIG_NLS_KOI8_U is not set
+# CONFIG_NLS_UTF8 is not set
+
+#
+# Profiling support
+#
+CONFIG_PROFILING=y
+CONFIG_OPROFILE=y
+
+#
+# Kernel hacking
+#
+# CONFIG_MAGIC_SYSRQ is not set
+# CONFIG_DEBUG_SPINLOCK is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_SH_STANDARD_BIOS is not set
+# CONFIG_EARLY_SCIF_CONSOLE is not set
+# CONFIG_KGDB is not set
+# CONFIG_FRAME_POINTER is not set
+
+#
+# Security options
+#
+# CONFIG_SECURITY is not set
+
+#
+# Cryptographic options
+#
+# CONFIG_CRYPTO is not set
+
+#
+# Library routines
+#
+CONFIG_CRC32=y
+# CONFIG_LIBCRC32C is not set
--- /dev/null
+/*
+ * arch/sh/drivers/pci/fixups-rts7751r2d.c
+ *
+ * RTS7751R2D PCI fixups
+ *
+ * Copyright (C) 2003 Lineo uSolutions, Inc.
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include "pci-sh7751.h"
+#include <asm/io.h>
+
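+/* AND masks: clear the MRSET (bit 30) and RFSH (bit 2) bits in MCR */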
+#define PCIMCR_MRSET_OFF 0xBFFFFFFF
+#define PCIMCR_RFSH_OFF 0xFFFFFFFB
+
+int pci_fixup_pcic(void)
+{
+ unsigned long bcr1, mcr;
+
+ bcr1 = inl(SH7751_BCR1);
+ bcr1 |= 0x40080000; /* Enable Bit 19 BREQEN, set PCIC to slave */
+ outl(bcr1, PCI_REG(SH7751_PCIBCR1));
+
+	/* Enable all interrupts, so we know what to fix */
+ outl(0x0000c3ff, PCI_REG(SH7751_PCIINTM));
+ outl(0x0000380f, PCI_REG(SH7751_PCIAINTM));
+
+ outl(0xfb900047, PCI_REG(SH7751_PCICONF1));
+ outl(0xab000001, PCI_REG(SH7751_PCICONF4));
+
+ mcr = inl(SH7751_MCR);
+ mcr = (mcr & PCIMCR_MRSET_OFF) & PCIMCR_RFSH_OFF;
+ outl(mcr, PCI_REG(SH7751_PCIMCR));
+
+ outl(0x0c000000, PCI_REG(SH7751_PCICONF5));
+ outl(0xd0000000, PCI_REG(SH7751_PCICONF6));
+ outl(0x0c000000, PCI_REG(SH7751_PCILAR0));
+ outl(0x00000000, PCI_REG(SH7751_PCILAR1));
+ return 0;
+}
--- /dev/null
+/*
+ * linux/arch/sh/kernel/pci-rts7751r2d.c
+ *
+ * Author: Ian DaSilva (idasilva@mvista.com)
+ *
+ * Highly leveraged from pci-bigsur.c, written by Dustin McIntire.
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * PCI initialization for the Renesas SH7751R RTS7751R2D board
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+
+#include <asm/io.h>
+#include "pci-sh7751.h"
+#include <asm/rts7751r2d/rts7751r2d.h>
+
+int __init pcibios_map_platform_irq(u8 slot, u8 pin)
+{
+ switch (slot) {
+	case 0: return IRQ_PCISLOT1;	/* PCI extension slot #1 */
+	case 1: return IRQ_PCISLOT2;	/* PCI extension slot #2 */
+	case 2: return IRQ_PCMCIA;	/* PCI CardBus bridge */
+ case 3: return IRQ_PCIETH; /* Realtek Ethernet controller */
+ default:
+ printk("PCI: Bad IRQ mapping request for slot %d\n", slot);
+ return -1;
+ }
+}
+
+static struct resource sh7751_io_resource = {
+ .name = "SH7751_IO",
+ .start = 0x4000,
+ .end = 0x4000 + SH7751_PCI_IO_SIZE - 1,
+ .flags = IORESOURCE_IO
+};
+
+static struct resource sh7751_mem_resource = {
+ .name = "SH7751_mem",
+ .start = SH7751_PCI_MEMORY_BASE,
+ .end = SH7751_PCI_MEMORY_BASE + SH7751_PCI_MEM_SIZE - 1,
+ .flags = IORESOURCE_MEM
+};
+
+extern struct pci_ops sh7751_pci_ops;
+
+struct pci_channel board_pci_channels[] = {
+ { &sh7751_pci_ops, &sh7751_io_resource, &sh7751_mem_resource, 0, 0xff },
+ { NULL, NULL, NULL, 0, 0 },
+};
+EXPORT_SYMBOL(board_pci_channels);
+
+static struct sh7751_pci_address_map sh7751_pci_map = {
+ .window0 = {
+ .base = SH7751_CS3_BASE_ADDR,
+ .size = 0x04000000,
+ },
+
+ .window1 = {
+ .base = 0x00000000, /* Unused */
+ .size = 0x00000000, /* Unused */
+ },
+
+ .flags = SH7751_PCIC_NO_RESET,
+};
+
+int __init pcibios_init_platform(void)
+{
+ return sh7751_pcic_init(&sh7751_pci_map);
+}
+
--- /dev/null
+/*
+ * arch/sh/kernel/early_printk.c
+ *
+ * Copyright (C) 1999, 2000 Niibe Yutaka
+ * Copyright (C) 2002 M. R. Brown
+ * Copyright (C) 2004 Paul Mundt
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+#include <linux/console.h>
+#include <linux/tty.h>
+#include <linux/init.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_SH_STANDARD_BIOS
+#include <asm/sh_bios.h>
+
+/*
+ * Print a string through the BIOS
+ */
+static void sh_console_write(struct console *co, const char *s,
+ unsigned count)
+{
+ sh_bios_console_write(s, count);
+}
+
+/*
+ * Set up the initial baud/bits/parity. We do two things here:
+ * - construct a cflag setting for the first rs_open()
+ * - initialize the serial port
+ * Return non-zero if we didn't find a serial port.
+ */
+static int __init sh_console_setup(struct console *co, char *options)
+{
+ int cflag = CREAD | HUPCL | CLOCAL;
+
+ /*
+ * Now construct a cflag setting.
+ * TODO: this is a totally bogus cflag, as we have
+ * no idea what serial settings the BIOS is using, or
+ * even if it's using the serial port at all.
+ */
+ cflag |= B115200 | CS8 | /*no parity*/0;
+
+ co->cflag = cflag;
+
+ return 0;
+}
+
+static struct console early_console = {
+ .name = "bios",
+ .write = sh_console_write,
+ .setup = sh_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+#endif
+
+#ifdef CONFIG_EARLY_SCIF_CONSOLE
+#define SCIF_REG 0xffe80000
+
+static void scif_sercon_putc(int c)
+{
+ while (!(ctrl_inw(SCIF_REG + 0x10) & 0x20)) ;
+
+ ctrl_outb(c, SCIF_REG + 12);
+ ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0x9f), SCIF_REG + 0x10);
+
+ if (c == '\n')
+ scif_sercon_putc('\r');
+}
+
+static void scif_sercon_flush(void)
+{
+ ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0xbf), SCIF_REG + 0x10);
+
+ while (!(ctrl_inw(SCIF_REG + 0x10) & 0x40)) ;
+
+ ctrl_outw((ctrl_inw(SCIF_REG + 0x10) & 0xbf), SCIF_REG + 0x10);
+}
+
+static void scif_sercon_write(struct console *con, const char *s, unsigned count)
+{
+ while (count-- > 0)
+ scif_sercon_putc(*s++);
+
+ scif_sercon_flush();
+}
+
+static int __init scif_sercon_setup(struct console *con, char *options)
+{
+ con->cflag = CREAD | HUPCL | CLOCAL | B115200 | CS8;
+
+ return 0;
+}
+
+static struct console early_console = {
+ .name = "sercon",
+ .write = scif_sercon_write,
+ .setup = scif_sercon_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+void scif_sercon_init(int baud)
+{
+ ctrl_outw(0, SCIF_REG + 8);
+ ctrl_outw(0, SCIF_REG);
+
+ /* Set baud rate */
+ ctrl_outb((CONFIG_SH_PCLK_FREQ + 16 * baud) /
+ (32 * baud) - 1, SCIF_REG + 4);
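+	/*
+	 * The divisor written above is pclk/(32*baud) - 1, rounded to
+	 * the nearest integer by the 16*baud term; e.g. the RTS7751R2D
+	 * defconfig earlier in this patch sets CONFIG_SH_PCLK_FREQ to
+	 * 60000000, which gives (60000000 + 1843200)/3686400 - 1 = 15
+	 * at 115200 baud.
+	 */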
+
+ ctrl_outw(12, SCIF_REG + 24);
+ ctrl_outw(8, SCIF_REG + 24);
+ ctrl_outw(0, SCIF_REG + 32);
+ ctrl_outw(0x60, SCIF_REG + 16);
+ ctrl_outw(0, SCIF_REG + 36);
+ ctrl_outw(0x30, SCIF_REG + 8);
+}
+#endif
+
+void __init enable_early_printk(void)
+{
+#ifdef CONFIG_EARLY_SCIF_CONSOLE
+ scif_sercon_init(115200);
+#endif
+ register_console(&early_console);
+}
+
+void disable_early_printk(void)
+{
+ unregister_console(&early_console);
+}
+
--- /dev/null
+#
+# Makefile for a ramdisk image
+#
+
+obj-y += ramdisk.o
+
+
+O_FORMAT = $(shell $(OBJDUMP) -i | head -n 2 | grep elf32)
+img := $(subst ",,$(CONFIG_EMBEDDED_RAMDISK_IMAGE))
+# add $(src) when $(img) is relative
+img := $(subst $(src)//,/,$(src)/$(img))
+
+quiet_cmd_ramdisk = LD $@
+define cmd_ramdisk
+ $(LD) -T $(srctree)/$(src)/ld.script -b binary --oformat $(O_FORMAT) \
+ -o $@ $(img)
+endef
+
+$(obj)/ramdisk.o: $(img) $(srctree)/$(src)/ld.script
+ $(call cmd,ramdisk)
--- /dev/null
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/config-language.txt.
+#
+
+mainmenu "Linux/SH64 Kernel Configuration"
+
+config SUPERH
+ bool
+ default y
+
+config SUPERH64
+ bool
+ default y
+
+config MMU
+ bool
+ default y
+
+config UID16
+ bool
+ default y
+
+config RWSEM_GENERIC_SPINLOCK
+ bool
+ default y
+
+config LOG_BUF_SHIFT
+ int
+ default 14
+
+config RWSEM_XCHGADD_ALGORITHM
+ bool
+
+config GENERIC_ISA_DMA
+ bool
+
+source init/Kconfig
+
+menu "System type"
+
+choice
+ prompt "SuperH system type"
+ default SH_SIMULATOR
+
+config SH_GENERIC
+ bool "Generic"
+
+config SH_SIMULATOR
+ bool "Simulator"
+
+config SH_CAYMAN
+ bool "Cayman"
+
+config SH_ROMRAM
+ bool "ROM/RAM"
+
+config SH_HARP
+ bool "ST50-Harp"
+
+endchoice
+
+choice
+ prompt "Processor family"
+ default CPU_SH5
+
+config CPU_SH5
+ bool "SH-5"
+
+endchoice
+
+choice
+ prompt "Processor type"
+
+config CPU_SUBTYPE_SH5_101
+ bool "SH5-101"
+ depends on CPU_SH5
+
+config CPU_SUBTYPE_SH5_103
+ bool "SH5-103"
+ depends on CPU_SH5
+
+endchoice
+
+choice
+ prompt "Endianness"
+ default LITTLE_ENDIAN
+
+config LITTLE_ENDIAN
+ bool "Little-Endian"
+
+config BIG_ENDIAN
+ bool "Big-Endian"
+
+endchoice
+
+config SH64_FPU_DENORM_FLUSH
+ bool "Flush floating point denorms to zero"
+
+choice
+ prompt "Page table levels"
+ default SH64_PGTABLE_2_LEVEL
+
+config SH64_PGTABLE_2_LEVEL
+ bool "2"
+
+config SH64_PGTABLE_3_LEVEL
+ bool "3"
+
+endchoice
+
+choice
+ prompt "HugeTLB page size"
+ depends on HUGETLB_PAGE && MMU
+ default HUGETLB_PAGE_SIZE_64K
+
+config HUGETLB_PAGE_SIZE_64K
+ bool "64K"
+
+config HUGETLB_PAGE_SIZE_1MB
+ bool "1MB"
+
+config HUGETLB_PAGE_SIZE_512MB
+ bool "512MB"
+
+endchoice
+
+config SH64_USER_MISALIGNED_FIXUP
+ bool "Fixup misaligned loads/stores occurring in user mode"
+
+comment "Memory options"
+
+config CACHED_MEMORY_OFFSET
+ hex "Cached Area Offset"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "20000000"
+
+config MEMORY_START
+ hex "Physical memory start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "80000000"
+
+config MEMORY_SIZE_IN_MB
+ int "Memory size (in MB)" if SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "64" if SH_HARP || SH_CAYMAN
+ default "8" if SH_SIMULATOR
+
+comment "Cache options"
+
+config DCACHE_DISABLED
+ bool "DCache Disabling"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+
+choice
+ prompt "DCache mode"
+ depends on !DCACHE_DISABLED && !SH_SIMULATOR
+ default DCACHE_WRITE_BACK
+
+config DCACHE_WRITE_BACK
+ bool "Write-back"
+
+config DCACHE_WRITE_THROUGH
+ bool "Write-through"
+
+endchoice
+
+config ICACHE_DISABLED
+ bool "ICache Disabling"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+
+config PCIDEVICE_MEMORY_START
+ hex
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "C0000000"
+
+config DEVICE_MEMORY_START
+ hex
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "E0000000"
+
+config FLASH_MEMORY_START
+ hex "Flash memory/on-chip devices start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "00000000"
+
+config PCI_BLOCK_START
+ hex "PCI block start address"
+ depends on SH_HARP || SH_CAYMAN || SH_SIMULATOR
+ default "40000000"
+
+comment "CPU Subtype specific options"
+
+config SH64_ID2815_WORKAROUND
+ bool "Include workaround for SH5-101 cut2 silicon defect ID2815"
+
+comment "Misc options"
+config HEARTBEAT
+ bool "Heartbeat LED"
+
+config HDSP253_LED
+ bool "Support for HDSP-253 LED"
+ depends on SH_CAYMAN
+
+config SH_DMA
+ tristate "DMA controller (DMAC) support"
+
+config PREEMPT
+ bool "Preemptible Kernel (EXPERIMENTAL)"
+ depends on EXPERIMENTAL
+
+endmenu
+
+menu "Bus options (PCI, PCMCIA, EISA, MCA, ISA)"
+
+config ISA
+ bool
+
+config SBUS
+ bool
+
+config PCI
+ bool "PCI support"
+ help
+ Find out whether you have a PCI motherboard. PCI is the name of a
+ bus system, i.e. the way the CPU talks to the other stuff inside
+ your box. Other bus systems are ISA, EISA, MicroChannel (MCA) or
+ VESA. If you have PCI, say Y, otherwise N.
+
+ The PCI-HOWTO, available from
+ <http://www.tldp.org/docs.html#howto>, contains valuable
+ information about which PCI hardware does work under Linux and which
+ doesn't.
+
+config SH_PCIDMA_NONCOHERENT
+ bool "Cache and PCI noncoherent"
+ depends on PCI
+ default y
+ help
+ Enable this option if your platform does not have a CPU cache which
+ remains coherent with PCI DMA. It is safest to say 'Y', although you
+ will see better performance if you can say 'N', because the PCI DMA
+ code will not have to flush the CPU's caches. If you have a PCI host
+ bridge integrated with your SH CPU, refer carefully to the chip specs
+ to see if you can say 'N' here. Otherwise, leave it as 'Y'.
+
+source "drivers/pci/Kconfig"
+
+source "drivers/pcmcia/Kconfig"
+
+source "drivers/pci/hotplug/Kconfig"
+
+endmenu
+
+menu "Executable file formats"
+
+source "fs/Kconfig.binfmt"
+
+endmenu
+
+source "drivers/Kconfig"
+
+source "fs/Kconfig"
+
+source "arch/sh64/oprofile/Kconfig"
+
+source "arch/sh64/Kconfig.debug"
+
+source "security/Kconfig"
+
+source "crypto/Kconfig"
+
+source "lib/Kconfig"
--- /dev/null
+#
+# linux/arch/sh64/boot/compressed/Makefile
+#
+# This file is subject to the terms and conditions of the GNU General Public
+# License. See the file "COPYING" in the main directory of this archive
+# for more details.
+#
+# Copyright (C) 2002 Stuart Menefy
+# Copyright (C) 2004 Paul Mundt
+#
+# create a compressed vmlinux image from the original vmlinux
+#
+
+targets := vmlinux vmlinux.bin vmlinux.bin.gz \
+ head.o misc.o cache.o piggy.o vmlinux.lds
+
+EXTRA_AFLAGS := -traditional
+
+OBJECTS := $(obj)/head.o $(obj)/misc.o $(obj)/cache.o
+
+#
+# ZIMAGE_OFFSET is the load offset of the compression loader
+# (4M for the kernel plus 64K for this loader)
+#
+ZIMAGE_OFFSET = $(shell printf "0x%8x" $$[$(CONFIG_MEMORY_START)+0x400000+0x10000])
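+# With the Kconfig default CONFIG_MEMORY_START=0x80000000, for example,
+# this evaluates to 0x80000000 + 0x400000 + 0x10000 = 0x80410000.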
+
+LDFLAGS_vmlinux := -Ttext $(ZIMAGE_OFFSET) -e startup \
+ -T $(obj)/../../kernel/vmlinux.lds \
+ --no-warn-mismatch
+
+$(obj)/vmlinux: $(OBJECTS) $(obj)/piggy.o FORCE
+ $(call if_changed,ld)
+ @:
+
+$(obj)/vmlinux.bin: vmlinux FORCE
+ $(call if_changed,objcopy)
+
+$(obj)/vmlinux.bin.gz: $(obj)/vmlinux.bin FORCE
+ $(call if_changed,gzip)
+
+LDFLAGS_piggy.o := -r --format binary --oformat elf32-sh64-linux -T
+OBJCOPYFLAGS += -R .empty_zero_page
+
+$(obj)/piggy.o: $(obj)/vmlinux.lds $(obj)/vmlinux.bin.gz FORCE
+ $(call if_changed,ld)
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/irq.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * IRQs are in fact implemented a bit like signal handlers for the kernel.
+ * Naturally it's not a 1:1 relation, but there are similarities.
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/kernel_stat.h>
+#include <linux/signal.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/ioport.h>
+#include <linux/interrupt.h>
+#include <linux/timex.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/init.h>
+#include <linux/seq_file.h>
+#include <linux/bitops.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/smp.h>
+#include <asm/pgalloc.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+#include <linux/irq.h>
+
+/*
+ * Controller mappings for all interrupt sources:
+ */
+irq_desc_t irq_desc[NR_IRQS] __cacheline_aligned = {
+ [0 ... NR_IRQS-1] = {
+ .handler = &no_irq_type,
+ .lock = SPIN_LOCK_UNLOCKED
+ }
+};
+
+
+/*
+ * Special irq handlers.
+ */
+
+irqreturn_t no_action(int cpl, void *dev_id, struct pt_regs *regs)
+{
+ return IRQ_NONE;
+}
+
+/*
+ * Generic no controller code
+ */
+
+static void enable_none(unsigned int irq) { }
+static unsigned int startup_none(unsigned int irq) { return 0; }
+static void disable_none(unsigned int irq) { }
+static void ack_none(unsigned int irq)
+{
+/*
+ * 'What should we do if we get a hw irq event on an illegal vector?'
+ * Each architecture has to answer this itself; it doesn't deserve
+ * a generic callback, I think.
+ */
+ printk("unexpected IRQ trap at irq %02x\n", irq);
+}
+
+/* startup is the same as "enable", shutdown is same as "disable" */
+#define shutdown_none disable_none
+#define end_none enable_none
+
+struct hw_interrupt_type no_irq_type = {
+ "none",
+ startup_none,
+ shutdown_none,
+ enable_none,
+ disable_none,
+ ack_none,
+ end_none
+};
+
+#if defined(CONFIG_PROC_FS)
+int show_interrupts(struct seq_file *p, void *v)
+{
+ int i = *(loff_t *) v, j;
+ struct irqaction * action;
+ unsigned long flags;
+
+ if (i == 0) {
+ seq_puts(p, " ");
+ for (j=0; j<NR_CPUS; j++)
+ if (cpu_online(j))
+ seq_printf(p, "CPU%d ",j);
+ seq_putc(p, '\n');
+ }
+
+ if (i < NR_IRQS) {
+ spin_lock_irqsave(&irq_desc[i].lock, flags);
+ action = irq_desc[i].action;
+ if (!action)
+ goto unlock;
+ seq_printf(p, "%3d: ",i);
+ seq_printf(p, "%10u ", kstat_irqs(i));
+ seq_printf(p, " %14s", irq_desc[i].handler->typename);
+ seq_printf(p, " %s", action->name);
+
+ for (action=action->next; action; action = action->next)
+ seq_printf(p, ", %s", action->name);
+ seq_putc(p, '\n');
+unlock:
+ spin_unlock_irqrestore(&irq_desc[i].lock, flags);
+ }
+ return 0;
+}
+#endif
+
+/*
+ * do_NMI handles all Non-Maskable Interrupts.
+ */
+asmlinkage void do_NMI(unsigned long vector_num, struct pt_regs * regs)
+{
+ if (regs->sr & 0x40000000)
+ printk("unexpected NMI trap in system mode\n");
+ else
+ printk("unexpected NMI trap in user mode\n");
+
+ /* No statistics */
+}
+
+/*
+ * This should really return information about whether
+ * we should do bottom half handling etc. Right now we
+ * end up _always_ checking the bottom half, which is a
+ * waste of time and is not what some drivers would
+ * prefer.
+ */
+int handle_IRQ_event(unsigned int irq, struct pt_regs * regs, struct irqaction * action)
+{
+ int status;
+ int ret;
+
+ status = 1; /* Force the "do bottom halves" bit */
+
+ if (!(action->flags & SA_INTERRUPT))
+ local_irq_enable();
+
+ do {
+ ret = action->handler(irq, action->dev_id, regs);
+ if (ret == IRQ_HANDLED)
+ status |= action->flags;
+ action = action->next;
+ } while (action);
+ if (status & SA_SAMPLE_RANDOM)
+ add_interrupt_randomness(irq);
+
+ local_irq_disable();
+
+ return status;
+}
+
+/*
+ * Generic enable/disable code: this just calls
+ * down into the PIC-specific version for the actual
+ * hardware disable after having gotten the irq
+ * controller lock.
+ */
+
+/**
+ * disable_irq_nosync - disable an irq without waiting
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and enables are
+ * nested. Unlike disable_irq(), this function does not ensure existing
+ * instances of the IRQ handler have completed before returning.
+ *
+ * This function may be called from IRQ context.
+ */
+void disable_irq_nosync(unsigned int irq)
+{
+ irq_desc_t *desc = irq_desc + irq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&desc->lock, flags);
+ if (!desc->depth++) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->disable(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+/**
+ * disable_irq - disable an irq and wait for completion
+ * @irq: Interrupt to disable
+ *
+ * Disable the selected interrupt line. Disables and enables are
+ * nested: for two disables you need two enables. This
+ * function waits for any pending IRQ handlers for this interrupt
+ * to complete before returning. If you use this function while
+ * holding a resource the IRQ handler may need you will deadlock.
+ *
+ * This function may be called - with care - from IRQ context.
+ */
+void disable_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ synchronize_irq(irq);
+}
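+
+#if 0
+/*
+ * Illustrative sketch only (not part of this file's logic): because
+ * disables nest, every disable must be balanced by an enable before
+ * the line is unmasked again. The irq number here is hypothetical.
+ */
+static void example_pause_line(unsigned int example_irq)
+{
+ disable_irq(example_irq); /* depth 0 -> 1, line masked */
+ disable_irq(example_irq); /* depth 1 -> 2, still masked */
+ enable_irq(example_irq); /* depth 2 -> 1, still masked */
+ enable_irq(example_irq); /* depth 1 -> 0, line unmasked */
+}
+#endif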
+
+/**
+ * enable_irq - enable interrupt handling on an irq
+ * @irq: Interrupt to enable
+ *
+ * Re-enables the processing of interrupts on this IRQ line
+ * providing no disable_irq calls are now in effect.
+ *
+ * This function may be called from IRQ context.
+ */
+void enable_irq(unsigned int irq)
+{
+ irq_desc_t *desc = irq_desc + irq;
+ unsigned long flags;
+
+ spin_lock_irqsave(&desc->lock, flags);
+ switch (desc->depth) {
+ case 1: {
+ unsigned int status = desc->status & ~IRQ_DISABLED;
+ desc->status = status;
+ if ((status & (IRQ_PENDING | IRQ_REPLAY)) == IRQ_PENDING) {
+ desc->status = status | IRQ_REPLAY;
+ hw_resend_irq(desc->handler,irq);
+ }
+ desc->handler->enable(irq);
+ /* fall-through */
+ }
+ default:
+ desc->depth--;
+ break;
+ case 0:
+ printk("enable_irq() unbalanced from %p\n",
+ __builtin_return_address(0));
+ }
+ spin_unlock_irqrestore(&desc->lock, flags);
+}
+
+/*
+ * do_IRQ handles all normal device IRQ's.
+ */
+asmlinkage int do_IRQ(unsigned long vector_num, struct pt_regs * regs)
+{
+ /*
+ * We ack quickly, we don't want the irq controller
+ * thinking we're snobs just because some other CPU has
+ * disabled global interrupts (we have already done the
+ * INT_ACK cycles, it's too late to try to pretend to the
+ * controller that we aren't taking the interrupt).
+ *
+ * 0 return value means that this irq is already being
+ * handled by some other CPU. (or is disabled)
+ */
+ int irq;
+ int cpu = smp_processor_id();
+ irq_desc_t *desc = NULL;
+ struct irqaction * action;
+ unsigned int status;
+
+ irq_enter();
+
+#ifdef CONFIG_PREEMPT
+ /*
+ * At this point we're now about to actually call handlers,
+ * and interrupts might get reenabled during them... bump
+ * preempt_count to prevent any preemption while the handler
+ * called here is pending...
+ */
+ preempt_disable();
+#endif
+
+ irq = irq_demux(vector_num);
+
+ /*
+ * Should never happen, if it does check
+ * vectorN_to_IRQ[] against trap_jtable[].
+ */
+ if (irq == -1) {
+ printk("unexpected IRQ trap at vector %03lx\n", vector_num);
+ goto out;
+ }
+
+ desc = irq_desc + irq;
+
+ kstat_cpu(cpu).irqs[irq]++;
+ spin_lock(&desc->lock);
+ desc->handler->ack(irq);
+ /*
+ REPLAY is when Linux resends an IRQ that was dropped earlier
+ WAITING is used by probe to mark irqs that are being tested
+ */
+ status = desc->status & ~(IRQ_REPLAY | IRQ_WAITING | IRQ_INPROGRESS);
+ status |= IRQ_PENDING; /* we _want_ to handle it */
+
+ /*
+ * If the IRQ is disabled for whatever reason, we cannot
+ * use the action we have.
+ */
+ action = NULL;
+ if (!(status & (IRQ_DISABLED | IRQ_INPROGRESS))) {
+ action = desc->action;
+ status &= ~IRQ_PENDING; /* we commit to handling */
+ status |= IRQ_INPROGRESS; /* we are handling it */
+ }
+ desc->status = status;
+
+ /*
+ * If there is no IRQ handler or it was disabled, exit early.
+	 * Since we set PENDING, if another processor is handling
+	 * a different instance of this same irq, the other processor
+	 * will take care of it.
+ */
+ if (!action)
+ goto out;
+
+ /*
+ * Edge triggered interrupts need to remember
+ * pending events.
+ * This applies to any hw interrupts that allow a second
+ * instance of the same irq to arrive while we are in do_IRQ
+ * or in the handler. But the code here only handles the _second_
+ * instance of the irq, not the third or fourth. So it is mostly
+ * useful for irq hardware that does not mask cleanly in an
+ * SMP environment.
+ */
+ for (;;) {
+ spin_unlock(&desc->lock);
+ handle_IRQ_event(irq, regs, action);
+ spin_lock(&desc->lock);
+
+ if (!(desc->status & IRQ_PENDING))
+ break;
+ desc->status &= ~IRQ_PENDING;
+ }
+ desc->status &= ~IRQ_INPROGRESS;
+out:
+ /*
+ * The ->end() handler has to deal with interrupts which got
+ * disabled while the handler was running.
+ */
+ if (desc) {
+ desc->handler->end(irq);
+ spin_unlock(&desc->lock);
+ }
+
+ irq_exit();
+
+#ifdef CONFIG_PREEMPT
+ /*
+ * We're done with the handlers, interrupts should be
+ * currently disabled; decrement preempt_count now so
+ * as we return preemption may be allowed...
+ */
+ preempt_enable_no_resched();
+#endif
+
+ return 1;
+}
+
+/**
+ * request_irq - allocate an interrupt line
+ * @irq: Interrupt line to allocate
+ * @handler: Function to be called when the IRQ occurs
+ * @irqflags: Interrupt type flags
+ * @devname: An ascii name for the claiming device
+ * @dev_id: A cookie passed back to the handler function
+ *
+ * This call allocates interrupt resources and enables the
+ * interrupt line and IRQ handling. From the point this
+ * call is made your handler function may be invoked. Since
+ * your handler function must clear any interrupt the board
+ * raises, you must take care both to initialise your hardware
+ * and to set up the interrupt handler in the right order.
+ *
+ * Dev_id must be globally unique. Normally the address of the
+ * device data structure is used as the cookie. Since the handler
+ * receives this value it makes sense to use it.
+ *
+ * If your interrupt is shared you must pass a non NULL dev_id
+ * as this is required when freeing the interrupt.
+ *
+ * Flags:
+ *
+ * SA_SHIRQ Interrupt is shared
+ *
+ * SA_INTERRUPT Disable local interrupts while processing
+ *
+ * SA_SAMPLE_RANDOM The interrupt can be used for entropy
+ *
+ */
+int request_irq(unsigned int irq,
+ irqreturn_t (*handler)(int, void *, struct pt_regs *),
+ unsigned long irqflags,
+ const char * devname,
+ void *dev_id)
+{
+ int retval;
+ struct irqaction * action;
+
+#if 1
+ /*
+ * Sanity-check: shared interrupts should REALLY pass in
+ * a real dev-ID, otherwise we'll have trouble later trying
+ * to figure out which interrupt is which (messes up the
+ * interrupt freeing logic etc).
+ */
+ if (irqflags & SA_SHIRQ) {
+ if (!dev_id)
+ printk("Bad boy: %s (at 0x%x) called us without a dev_id!\n", devname, (&irq)[-1]);
+ }
+#endif
+
+ if (irq >= NR_IRQS)
+ return -EINVAL;
+ if (!handler)
+ return -EINVAL;
+
+ action = (struct irqaction *)
+ kmalloc(sizeof(struct irqaction), GFP_KERNEL);
+ if (!action)
+ return -ENOMEM;
+
+ action->handler = handler;
+ action->flags = irqflags;
+ cpus_clear(action->mask);
+ action->name = devname;
+ action->next = NULL;
+ action->dev_id = dev_id;
+
+ retval = setup_irq(irq, action);
+ if (retval)
+ kfree(action);
+ return retval;
+}
+
+/**
+ * free_irq - free an interrupt
+ * @irq: Interrupt line to free
+ * @dev_id: Device identity to free
+ *
+ * Remove an interrupt handler. The handler is removed and if the
+ * interrupt line is no longer in use by any driver it is disabled.
+ * On a shared IRQ the caller must ensure the interrupt is disabled
+ * on the card it drives before calling this function. The function
+ * does not return until any executing interrupts for this IRQ
+ * have completed.
+ *
+ * This function may be called from interrupt context.
+ *
+ * Bugs: Attempting to free an irq in a handler for the same irq hangs
+ * the machine.
+ */
+void free_irq(unsigned int irq, void *dev_id)
+{
+ irq_desc_t *desc;
+ struct irqaction **p;
+ unsigned long flags;
+
+ if (irq >= NR_IRQS)
+ return;
+
+ desc = irq_desc + irq;
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ for (;;) {
+ struct irqaction * action = *p;
+ if (action) {
+ struct irqaction **pp = p;
+ p = &action->next;
+ if (action->dev_id != dev_id)
+ continue;
+
+ /* Found it - now remove it from the list of entries */
+ *pp = action->next;
+ if (!desc->action) {
+ desc->status |= IRQ_DISABLED;
+ desc->handler->shutdown(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+ kfree(action);
+ return;
+ }
+ printk("Trying to free free IRQ%d\n",irq);
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return;
+ }
+}
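+
+#if 0
+/*
+ * Illustrative sketch only: the usual request/free pairing. The
+ * handler, irq number and device cookie are hypothetical; a shared
+ * line (SA_SHIRQ) requires a non-NULL dev_id, and the same cookie
+ * must later be passed to free_irq().
+ */
+static irqreturn_t example_handler(int irq, void *dev_id, struct pt_regs *regs)
+{
+ /* check whether our (hypothetical) device raised this interrupt */
+ return IRQ_HANDLED;
+}
+
+static int example_attach(unsigned int example_irq, void *example_dev)
+{
+ int ret = request_irq(example_irq, example_handler,
+        SA_SHIRQ | SA_SAMPLE_RANDOM, "example", example_dev);
+ if (ret)
+  return ret;
+ /* ... device operates ... */
+ free_irq(example_irq, example_dev);
+ return 0;
+}
+#endif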
+
+/*
+ * IRQ autodetection code..
+ *
+ * This depends on the fact that any interrupt that
+ * comes in on to an unassigned handler will get stuck
+ * with "IRQ_WAITING" cleared and the interrupt
+ * disabled.
+ */
+
+/**
+ * probe_irq_on - begin an interrupt autodetect
+ *
+ * Commence probing for an interrupt. The interrupts are scanned
+ * and a mask of potential interrupt lines is returned.
+ *
+ */
+unsigned long probe_irq_on(void)
+{
+ unsigned int i;
+ irq_desc_t *desc;
+ unsigned long val;
+ unsigned long delay;
+
+ /*
+ * something may have generated an irq long ago and we want to
+ * flush such a longstanding irq before considering it as spurious.
+ */
+ for (i = NR_IRQS-1; i > 0; i--) {
+ desc = irq_desc + i;
+
+ spin_lock_irq(&desc->lock);
+ if (!irq_desc[i].action) {
+ irq_desc[i].handler->startup(i);
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ /* Wait for longstanding interrupts to trigger. */
+ for (delay = jiffies + HZ/50; time_after(delay, jiffies); )
+ /* about 20ms delay */ synchronize_irq();
+
+ /*
+ * enable any unassigned irqs
+ * (we must startup again here because if a longstanding irq
+ * happened in the previous stage, it may have masked itself)
+ */
+ for (i = NR_IRQS-1; i > 0; i--) {
+  desc = irq_desc + i;
+
+ spin_lock_irq(&desc->lock);
+ if (!desc->action) {
+ desc->status |= IRQ_AUTODETECT | IRQ_WAITING;
+ if (desc->handler->startup(i))
+ desc->status |= IRQ_PENDING;
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ /*
+ * Wait for spurious interrupts to trigger
+ */
+ for (delay = jiffies + HZ/10; time_after(delay, jiffies); )
+ /* about 100ms delay */ synchronize_irq();
+
+ /*
+ * Now filter out any obviously spurious interrupts
+ */
+ val = 0;
+ for (i = 0; i < NR_IRQS; i++) {
+ irq_desc_t *desc = irq_desc + i;
+ unsigned int status;
+
+ spin_lock_irq(&desc->lock);
+ status = desc->status;
+
+ if (status & IRQ_AUTODETECT) {
+ /* It triggered already - consider it spurious. */
+ if (!(status & IRQ_WAITING)) {
+ desc->status = status & ~IRQ_AUTODETECT;
+ desc->handler->shutdown(i);
+ } else
+ if (i < 32)
+ val |= 1 << i;
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ return val;
+}
+
+/*
+ * Return the one interrupt that triggered (this can
+ * handle any interrupt source).
+ */
+
+/**
+ * probe_irq_off - end an interrupt autodetect
+ * @val: mask of potential interrupts (unused)
+ *
+ * Scans the unused interrupt lines and returns the line which
+ * appears to have triggered the interrupt. If no interrupt was
+ * found then zero is returned. If more than one interrupt is
+ * found then minus the first candidate is returned to indicate
+ * there is doubt.
+ *
+ * The interrupt probe logic state is returned to its previous
+ * value.
+ *
+ * BUGS: When used in a module (which arguably shouldn't happen)
+ * nothing prevents two IRQ probe callers from overlapping. The
+ * results of this are non-optimal.
+ */
+int probe_irq_off(unsigned long val)
+{
+ int i, irq_found, nr_irqs;
+
+ nr_irqs = 0;
+ irq_found = 0;
+ for (i=0; i<NR_IRQS; i++) {
+ irq_desc_t *desc = irq_desc + i;
+ unsigned int status;
+
+ spin_lock_irq(&desc->lock);
+ status = desc->status;
+
+ if (status & IRQ_AUTODETECT) {
+ if (!(status & IRQ_WAITING)) {
+ if (!nr_irqs)
+ irq_found = i;
+ nr_irqs++;
+ }
+
+ desc->status = status & ~IRQ_AUTODETECT;
+ desc->handler->shutdown(i);
+ }
+ spin_unlock_irq(&desc->lock);
+ }
+
+ if (nr_irqs > 1)
+ irq_found = -irq_found;
+ return irq_found;
+}
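+
+#if 0
+/*
+ * Illustrative sketch only: the classic autodetect sequence. The
+ * device poke in the middle is hypothetical; the driver must make the
+ * hardware raise its interrupt between probe_irq_on() and
+ * probe_irq_off().
+ */
+static int example_probe(void)
+{
+ unsigned long mask = probe_irq_on();
+
+ /* ... make the (hypothetical) device raise its interrupt ... */
+
+ return probe_irq_off(mask); /* >0: irq found, 0: none, <0: ambiguous */
+}
+#endif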
+
+int setup_irq(unsigned int irq, struct irqaction * new)
+{
+ int shared = 0;
+ unsigned long flags;
+ struct irqaction *old, **p;
+ irq_desc_t *desc = irq_desc + irq;
+
+ /*
+ * Some drivers like serial.c use request_irq() heavily,
+ * so we have to be careful not to interfere with a
+ * running system.
+ */
+ if (new->flags & SA_SAMPLE_RANDOM) {
+ /*
+ * This function might sleep, we want to call it first,
+ * outside of the atomic block.
+ * Yes, this might clear the entropy pool if the wrong
+	 * driver is loaded by mistake, without actually
+	 * installing a new handler, but is this really a problem?
+	 * Only the sysadmin is able to do this.
+ */
+ rand_initialize_irq(irq);
+ }
+
+ /*
+ * The following block of code has to be executed atomically
+ */
+ spin_lock_irqsave(&desc->lock,flags);
+ p = &desc->action;
+ if ((old = *p) != NULL) {
+ /* Can't share interrupts unless both agree to */
+ if (!(old->flags & new->flags & SA_SHIRQ)) {
+ spin_unlock_irqrestore(&desc->lock,flags);
+ return -EBUSY;
+ }
+
+ /* add new interrupt at end of irq queue */
+ do {
+ p = &old->next;
+ old = *p;
+ } while (old);
+ shared = 1;
+ }
+
+ *p = new;
+
+ if (!shared) {
+ desc->depth = 0;
+ desc->status &= ~IRQ_DISABLED;
+ desc->handler->startup(irq);
+ }
+ spin_unlock_irqrestore(&desc->lock,flags);
+
+ /*
+ * No PROC FS support for interrupts.
+ * For improvements in this area please check
+ * the i386 branch.
+ */
+ return 0;
+}
+
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_SYSCTL)
+
+void init_irq_proc(void)
+{
+ /*
+ * No PROC FS support for interrupts.
+ * For improvements in this area please check
+ * the i386 branch.
+ */
+}
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/irq_intc.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Interrupt Controller support for SH5 INTC.
+ * Per-interrupt selective. IRLM=0 (fixed priority) is not
+ * supported, as it is useless without a cascaded interrupt
+ * controller.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/stddef.h>
+#include <linux/bitops.h> /* this also includes <asm/registers.h>, */
+                          /* which is required to remap register   */
+                          /* names used in __asm__ blocks...       */
+
+#include <asm/hardware.h>
+#include <asm/platform.h>
+#include <asm/page.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+
+/*
+ * Maybe the generic Peripheral block could move to a more
+ * generic include file. INTC Block will be defined here
+ * and only here to make INTC self-contained in a single
+ * file.
+ */
+#define INTC_BLOCK_OFFSET 0x01000000
+
+/* Base */
+#define INTC_BASE (PHYS_PERIPHERAL_BLOCK + \
+                   INTC_BLOCK_OFFSET)
+
+/* Address */
+#define INTC_ICR_SET (intc_virt + 0x0)
+#define INTC_ICR_CLEAR (intc_virt + 0x8)
+#define INTC_INTPRI_0 (intc_virt + 0x10)
+#define INTC_INTSRC_0 (intc_virt + 0x50)
+#define INTC_INTSRC_1 (intc_virt + 0x58)
+#define INTC_INTREQ_0 (intc_virt + 0x60)
+#define INTC_INTREQ_1 (intc_virt + 0x68)
+#define INTC_INTENB_0 (intc_virt + 0x70)
+#define INTC_INTENB_1 (intc_virt + 0x78)
+#define INTC_INTDSB_0 (intc_virt + 0x80)
+#define INTC_INTDSB_1 (intc_virt + 0x88)
+
+#define INTC_ICR_IRLM 0x1
+#define INTC_INTPRI_PREGS 8 /* 8 Priority Registers */
+#define INTC_INTPRI_PPREG 8 /* 8 Priorities per Register */
+
+
+/*
+ * Mapper between the vector ordinal and the IRQ number
+ * passed to kernel/device drivers.
+ */
+int intc_evt_to_irq[(0xE20/0x20)+1] = {
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x000 - 0x0E0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x100 - 0x1E0 */
+ 0, 0, 0, 0, 0, 1, 0, 0, /* 0x200 - 0x2E0 */
+ 2, 0, 0, 3, 0, 0, 0, -1, /* 0x300 - 0x3E0 */
+ 32, 33, 34, 35, 36, 37, 38, -1, /* 0x400 - 0x4E0 */
+ -1, -1, -1, 63, -1, -1, -1, -1, /* 0x500 - 0x5E0 */
+ -1, -1, 18, 19, 20, 21, 22, -1, /* 0x600 - 0x6E0 */
+ 39, 40, 41, 42, -1, -1, -1, -1, /* 0x700 - 0x7E0 */
+ 4, 5, 6, 7, -1, -1, -1, -1, /* 0x800 - 0x8E0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0x900 - 0x9E0 */
+ 12, 13, 14, 15, 16, 17, -1, -1, /* 0xA00 - 0xAE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xB00 - 0xBE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xC00 - 0xCE0 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 0xD00 - 0xDE0 */
+ -1, -1 /* 0xE00 - 0xE20 */
+};
+
+/*
+ * Opposite mapper.
+ */
+static int IRQ_to_vectorN[NR_INTC_IRQS] = {
+ 0x12, 0x15, 0x18, 0x1B, 0x40, 0x41, 0x42, 0x43, /* 0- 7 */
+ -1, -1, -1, -1, 0x50, 0x51, 0x52, 0x53, /* 8-15 */
+ 0x54, 0x55, 0x32, 0x33, 0x34, 0x35, 0x36, -1, /* 16-23 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 24-31 */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x38, /* 32-39 */
+ 0x39, 0x3A, 0x3B, -1, -1, -1, -1, -1, /* 40-47 */
+ -1, -1, -1, -1, -1, -1, -1, -1, /* 48-55 */
+ -1, -1, -1, -1, -1, -1, -1, 0x2B, /* 56-63 */
+
+};
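+
+#if 0
+/*
+ * Illustrative sketch only: vectors are spaced 0x20 apart, so a demux
+ * routine can index intc_evt_to_irq[] by evt/0x20 (-1 means unmapped),
+ * while intc_irq_describe() below inverts the mapping through
+ * IRQ_to_vectorN[]. The function name is hypothetical.
+ */
+static int example_demux(unsigned int evt)
+{
+ return intc_evt_to_irq[evt / 0x20];
+}
+#endif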
+
+static unsigned long intc_virt;
+
+static unsigned int startup_intc_irq(unsigned int irq);
+static void shutdown_intc_irq(unsigned int irq);
+static void enable_intc_irq(unsigned int irq);
+static void disable_intc_irq(unsigned int irq);
+static void mask_and_ack_intc(unsigned int);
+static void end_intc_irq(unsigned int irq);
+
+static struct hw_interrupt_type intc_irq_type = {
+ "INTC",
+ startup_intc_irq,
+ shutdown_intc_irq,
+ enable_intc_irq,
+ disable_intc_irq,
+ mask_and_ack_intc,
+ end_intc_irq
+};
+
+static int irlm; /* IRL mode */
+
+static unsigned int startup_intc_irq(unsigned int irq)
+{
+ enable_intc_irq(irq);
+ return 0; /* never anything pending */
+}
+
+static void shutdown_intc_irq(unsigned int irq)
+{
+ disable_intc_irq(irq);
+}
+
+static void enable_intc_irq(unsigned int irq)
+{
+ unsigned long reg;
+ unsigned long bitmask;
+
+ if ((irq <= IRQ_IRL3) && (irlm == NO_PRIORITY))
+ printk("Trying to use straight IRL0-3 with an encoding platform.\n");
+
+ if (irq < 32) {
+ reg = INTC_INTENB_0;
+ bitmask = 1 << irq;
+ } else {
+ reg = INTC_INTENB_1;
+ bitmask = 1 << (irq - 32);
+ }
+
+ ctrl_outl(bitmask, reg);
+}
+
+static void disable_intc_irq(unsigned int irq)
+{
+ unsigned long reg;
+ unsigned long bitmask;
+
+ if (irq < 32) {
+ reg = INTC_INTDSB_0;
+ bitmask = 1 << irq;
+ } else {
+ reg = INTC_INTDSB_1;
+ bitmask = 1 << (irq - 32);
+ }
+
+ ctrl_outl(bitmask, reg);
+}
+
+static void mask_and_ack_intc(unsigned int irq)
+{
+ disable_intc_irq(irq);
+}
+
+static void end_intc_irq(unsigned int irq)
+{
+ enable_intc_irq(irq);
+}
+
+/* For future use, if we ever support IRLM=0 */
+void make_intc_irq(unsigned int irq)
+{
+ disable_irq_nosync(irq);
+ irq_desc[irq].handler = &intc_irq_type;
+ disable_intc_irq(irq);
+}
+
+#if defined(CONFIG_PROC_FS) && defined(CONFIG_SYSCTL)
+int intc_irq_describe(char* p, int irq)
+{
+ if (irq < NR_INTC_IRQS)
+ return sprintf(p, "(0x%3x)", IRQ_to_vectorN[irq]*0x20);
+ else
+ return 0;
+}
+#endif
+
+void __init init_IRQ(void)
+{
+ unsigned long long __dummy0, __dummy1=~0x00000000100000f0;
+ unsigned long reg;
+ unsigned long data;
+ int i;
+
+ intc_virt = onchip_remap(INTC_BASE, 1024, "INTC");
+ if (!intc_virt) {
+ panic("Unable to remap INTC\n");
+ }
+
+
+ /* Set default: per-line enable/disable, priority driven ack/eoi */
+ for (i = 0; i < NR_INTC_IRQS; i++) {
+ if (platform_int_priority[i] != NO_PRIORITY) {
+ irq_desc[i].handler = &intc_irq_type;
+ }
+ }
+
+
+ /* Disable all interrupts and set all priorities to 0 to avoid trouble */
+ ctrl_outl(-1, INTC_INTDSB_0);
+ ctrl_outl(-1, INTC_INTDSB_1);
+
+ for (reg = INTC_INTPRI_0, i = 0; i < INTC_INTPRI_PREGS; i++, reg += 8)
+ ctrl_outl( NO_PRIORITY, reg);
+
+
+ /* Set IRLM */
+ /* If all the priorities are set to 'no priority', then
+ * assume we are using encoded mode.
+ */
+ irlm = platform_int_priority[IRQ_IRL0] + platform_int_priority[IRQ_IRL1] + \
+ platform_int_priority[IRQ_IRL2] + platform_int_priority[IRQ_IRL3];
+
+ if (irlm == NO_PRIORITY) {
+ /* IRLM = 0 */
+ reg = INTC_ICR_CLEAR;
+ i = IRQ_INTA;
+ printk("Trying to use encoded IRL0-3. IRLs unsupported.\n");
+ } else {
+ /* IRLM = 1 */
+ reg = INTC_ICR_SET;
+ i = IRQ_IRL0;
+ }
+ ctrl_outl(INTC_ICR_IRLM, reg);
+
+ /* Set interrupt priorities according to platform description */
+ for (data = 0, reg = INTC_INTPRI_0; i < NR_INTC_IRQS; i++) {
+ data |= platform_int_priority[i] << ((i % INTC_INTPRI_PPREG) * 4);
+ if ((i % INTC_INTPRI_PPREG) == (INTC_INTPRI_PPREG - 1)) {
+   /* Every eighth field, write out the accumulated Priority Register */
+ ctrl_outl(data, reg);
+ data = 0;
+ reg += 8;
+ }
+ }
+
+#ifdef CONFIG_SH_CAYMAN
+ {
+ extern void init_cayman_irq(void);
+
+ init_cayman_irq();
+ }
+#endif
+
+ /*
+ * And now let interrupts come in.
+ * sti() is not enough, we need to
+ * lower priority, too.
+ */
+ __asm__ __volatile__("getcon " __SR ", %0\n\t"
+ "and %0, %1, %0\n\t"
+ "putcon %0, " __SR "\n\t"
+ : "=&r" (__dummy0)
+ : "r" (__dummy1));
+}
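+
+#if 0
+/*
+ * Illustrative sketch only: each INTPRI register packs eight 4-bit
+ * priority fields, which is why the loop in init_IRQ() shifts each
+ * priority into place and flushes the accumulated word on every
+ * eighth interrupt. With hypothetical priorities 1..8 this yields:
+ */
+static unsigned long example_pack_intpri(void)
+{
+ unsigned long data = 0;
+ int i;
+
+ for (i = 0; i < INTC_INTPRI_PPREG; i++)
+  data |= (unsigned long) (i + 1) << (i * 4);
+
+ return data; /* 0x87654321 */
+}
+#endif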
--- /dev/null
+/*
+ * Copyright (C) 2001 David J. Mckay (david.mckay@st.com)
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * May be copied or modified under the terms of the GNU General Public
+ * License. See linux/COPYING for more information.
+ *
+ * Support functions for the SH5 PCI hardware.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/errno.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/types.h>
+#include <asm/pci.h>
+#include <linux/irq.h>
+
+#include <asm/io.h>
+#include <asm/hardware.h>
+#include "pci_sh5.h"
+
+static unsigned long pcicr_virt;
+unsigned long pciio_virt;
+
+static void __init pci_fixup_ide_bases(struct pci_dev *d)
+{
+ int i;
+
+ /*
+ * PCI IDE controllers use non-standard I/O port decoding, respect it.
+ */
+ if ((d->class >> 8) != PCI_CLASS_STORAGE_IDE)
+ return;
+ printk("PCI: IDE base address fixup for %s\n", d->slot_name);
+ for(i=0; i<4; i++) {
+ struct resource *r = &d->resource[i];
+ if ((r->start & ~0x80) == 0x374) {
+ r->start |= 2;
+ r->end = r->start;
+ }
+ }
+}
+DECLARE_PCI_FIXUP_HEADER(PCI_ANY_ID, PCI_ANY_ID, pci_fixup_ide_bases);
+
+char * __init pcibios_setup(char *str)
+{
+ return str;
+}
+
+/* Rounds a number UP to the nearest power of two. Used for
+ * sizing the PCI window.
+ */
+static u32 __init r2p2(u32 num)
+{
+ int i = 31;
+ u32 tmp = num;
+
+ if (num == 0)
+ return 0;
+
+ do {
+ if (tmp & (1 << 31))
+ break;
+ i--;
+ tmp <<= 1;
+ } while (i >= 0);
+
+ tmp = 1 << i;
+ /* If the original number isn't a power of 2, round it up */
+ if (tmp != num)
+ tmp <<= 1;
+
+ return tmp;
+}
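+
+/*
+ * For example, r2p2(0x300000) returns 0x400000, while an exact power
+ * of two such as 0x400000 comes back unchanged; sh5pci_init() below
+ * relies on this when deriving LSR0 from the memory size.
+ */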
+
+extern unsigned long long memory_start, memory_end;
+
+int __init sh5pci_init(unsigned memStart, unsigned memSize)
+{
+ u32 lsr0;
+ u32 uval;
+
+ pcicr_virt = onchip_remap(SH5PCI_ICR_BASE, 1024, "PCICR");
+ if (!pcicr_virt) {
+ panic("Unable to remap PCICR\n");
+ }
+
+ pciio_virt = onchip_remap(SH5PCI_IO_BASE, 0x10000, "PCIIO");
+ if (!pciio_virt) {
+ panic("Unable to remap PCIIO\n");
+ }
+
+ pr_debug("Register base addres is 0x%08lx\n", pcicr_virt);
+
+ /* Clear snoop registers */
+ SH5PCI_WRITE(CSCR0, 0);
+ SH5PCI_WRITE(CSCR1, 0);
+
+ pr_debug("Wrote to reg\n");
+
+ /* Switch off interrupts */
+ SH5PCI_WRITE(INTM, 0);
+ SH5PCI_WRITE(AINTM, 0);
+ SH5PCI_WRITE(PINTM, 0);
+
+ /* Set bus active, take it out of reset */
+ uval = SH5PCI_READ(CR);
+
+ /* Set command Register */
+ SH5PCI_WRITE(CR, uval | CR_LOCK_MASK | CR_CFINT| CR_FTO | CR_PFE | CR_PFCS | CR_BMAM);
+
+ uval=SH5PCI_READ(CR);
+ pr_debug("CR is actually 0x%08x\n",uval);
+
+ /* Allow it to be a master */
+ /* NB - WE DISABLE I/O ACCESS to stop overlap */
+ /* set WAIT bit to enable stepping, an attempt to improve stability */
+ SH5PCI_WRITE_SHORT(CSR_CMD,
+ PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER | PCI_COMMAND_WAIT);
+
+ /*
+ ** Set translation mapping memory in order to convert the address
+ ** used for the main bus, to the PCI internal address.
+ */
+ SH5PCI_WRITE(MBR,0x40000000);
+
+ /* Always set the max size 512M */
+ SH5PCI_WRITE(MBMR, PCISH5_MEM_SIZCONV(512*1024*1024));
+
+ /*
+ ** I/O addresses are mapped at internal PCI specific addresses,
+ ** as described in the configuration bridge table. These are
+ ** changed to 0 to allow cards that have legacy I/O such as VGA
+ ** to function correctly. We set the SH5 IOBAR to 256K, which is
+ ** a bit big, as we can only have 64K of address space.
+ */
+
+ SH5PCI_WRITE(IOBR,0x0);
+
+ pr_debug("PCI:Writing 0x%08x to IOBR\n",0);
+
+ /* Set up a 256K window. Totally pointless waste of address space */
+ SH5PCI_WRITE(IOBMR,0);
+ pr_debug("PCI:Writing 0x%08x to IOBMR\n",0);
+
+ /* The SH5 has a HUGE 256K I/O region, which breaks the PCI spec. Ideally,
+ * we would want to map the I/O region somewhere, but it is so big this is not
+ * that easy!
+ */
+ SH5PCI_WRITE(CSR_IBAR0,~0);
+ /* Set memory size value */
+ memSize = memory_end - memory_start;
+
+ /* Now we set up the mbars so the PCI bus can see the memory of the machine */
+ if (memSize < (1024 * 1024)) {
+ printk(KERN_ERR "PCISH5: Ridiculous memory size of 0x%x?\n", memSize);
+ return -EINVAL;
+ }
+
+ /* Set LSR 0 */
+ lsr0 = (memSize > (512 * 1024 * 1024)) ? 0x1ff00001 : ((r2p2(memSize) - 0x100000) | 0x1);
+ SH5PCI_WRITE(LSR0, lsr0);
+
+ pr_debug("PCI:Writing 0x%08x to LSR0\n",lsr0);
+
+ /* Set MBAR 0 */
+ SH5PCI_WRITE(CSR_MBAR0, memory_start);
+ SH5PCI_WRITE(LAR0, memory_start);
+
+ SH5PCI_WRITE(CSR_MBAR1,0);
+ SH5PCI_WRITE(LAR1,0);
+ SH5PCI_WRITE(LSR1,0);
+
+ pr_debug("PCI:Writing 0x%08llx to CSR_MBAR0\n",memory_start);
+ pr_debug("PCI:Writing 0x%08llx to LAR0\n",memory_start);
+
+ /* Enable the PCI interrupts on the device */
+ SH5PCI_WRITE(INTM, ~0);
+ SH5PCI_WRITE(AINTM, ~0);
+ SH5PCI_WRITE(PINTM, ~0);
+
+ pr_debug("Switching on all error interrupts\n");
+
+ return(0);
+}
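+
+/*
+ * For example, with 128MB of memory the LSR0 value computed above is
+ * (r2p2(0x08000000) - 0x100000) | 0x1 = 0x07f00001, matching the
+ * pattern of the hard-coded >512MB value 0x1ff00001.
+ */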
+
+static int sh5pci_read(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 *val)
+{
+ SH5PCI_WRITE(PAR, CONFIG_CMD(bus, devfn, where));
+
+ switch (size) {
+ case 1:
+ *val = (u8)SH5PCI_READ_BYTE(PDR + (where & 3));
+ break;
+ case 2:
+ *val = (u16)SH5PCI_READ_SHORT(PDR + (where & 2));
+ break;
+ case 4:
+ *val = SH5PCI_READ(PDR);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
+
+static int sh5pci_write(struct pci_bus *bus, unsigned int devfn, int where,
+ int size, u32 val)
+{
+ SH5PCI_WRITE(PAR, CONFIG_CMD(bus, devfn, where));
+
+ switch (size) {
+ case 1:
+ SH5PCI_WRITE_BYTE(PDR + (where & 3), (u8)val);
+ break;
+ case 2:
+ SH5PCI_WRITE_SHORT(PDR + (where & 2), (u16)val);
+ break;
+ case 4:
+ SH5PCI_WRITE(PDR, val);
+ break;
+ }
+
+ return PCIBIOS_SUCCESSFUL;
+}
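+
+/*
+ * Both accessors use the same indirect scheme: the configuration
+ * address encoded by CONFIG_CMD (bus, devfn, register) is written to
+ * PAR, after which PDR acts as the data window; sub-word accesses
+ * select the byte/halfword lane via the low bits of 'where'.
+ */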
+
+static struct pci_ops pci_config_ops = {
+ .read = sh5pci_read,
+ .write = sh5pci_write,
+};
+
+/* Everything hangs off this */
+static struct pci_bus *pci_root_bus;
+
+
+static u8 __init no_swizzle(struct pci_dev *dev, u8 * pin)
+{
+ pr_debug("swizzle for dev %d on bus %d slot %d pin is %d\n",
+ dev->devfn,dev->bus->number, PCI_SLOT(dev->devfn),*pin);
+ return PCI_SLOT(dev->devfn);
+}
+
+static inline u8 bridge_swizzle(u8 pin, u8 slot)
+{
+ return (((pin-1) + slot) % 4) + 1;
+}
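+
+/*
+ * For example, a device reporting INTA (pin 1) in slot 2 behind a
+ * bridge swizzles to (((1 - 1) + 2) % 4) + 1 = 3, i.e. INTC on the
+ * parent bus.
+ */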
+
+u8 __init common_swizzle(struct pci_dev *dev, u8 *pinp)
+{
+ if (dev->bus->number != 0) {
+ u8 pin = *pinp;
+ do {
+ pin = bridge_swizzle(pin, PCI_SLOT(dev->devfn));
+ /* Move up the chain of bridges. */
+ dev = dev->bus->self;
+ } while (dev->bus->self);
+ *pinp = pin;
+
+ /* The slot is the slot of the last bridge. */
+ }
+
+ return PCI_SLOT(dev->devfn);
+}
+
+/* This needs to be shunted out of here into the board specific bit */
+
+static int __init map_cayman_irq(struct pci_dev *dev, u8 slot, u8 pin)
+{
+ int result = -1;
+
+ /* The complication here is that the PCI IRQ lines from the Cayman's 2
+ 5V slots get into the CPU via a different path from the IRQ lines
+ from the 3 3.3V slots. Thus, we have to detect whether the card's
+ interrupts go via the 5V or 3.3V path, i.e. the 'bridge swizzling'
+ at the point where we cross from 5V to 3.3V is not the normal case.
+
+ The added complication is that we don't know that the 5V slots are
+ always bus 2, because a card containing a PCI-PCI bridge may be
+ plugged into a 3.3V slot, and this changes the bus numbering.
+
+ Also, the Cayman has an intermediate PCI bus that goes to a custom
+ expansion board header (and to the secondary bridge). This bus has
+ never been used in practice.
+
+ The 1ary onboard PCI-PCI bridge is device 3 on bus 0
+ The 2ary onboard PCI-PCI bridge is device 0 on the 2ary bus of the 1ary bridge.
+ */
+
+ struct slot_pin {
+ int slot;
+ int pin;
+ } path[4];
+ int i=0;
+
+ while (dev->bus->number > 0) {
+
+ slot = path[i].slot = PCI_SLOT(dev->devfn);
+ pin = path[i].pin = bridge_swizzle(pin, slot);
+ dev = dev->bus->self;
+ i++;
+ if (i > 3) panic("PCI path to root bus too long!\n");
+ }
+
+ slot = PCI_SLOT(dev->devfn);
+ /* This is the slot on bus 0 through which the device is eventually
+ reachable. */
+
+ /* Now work back up. */
+ if ((slot < 3) || (i == 0)) {
+ /* Bus 0 (incl. PCI-PCI bridge itself) : perform the final
+ swizzle now. */
+ result = IRQ_INTA + bridge_swizzle(pin, slot) - 1;
+ } else {
+ i--;
+ slot = path[i].slot;
+ pin = path[i].pin;
+ if (slot > 0) {
+ panic("PCI expansion bus device found - not handled!\n");
+ } else {
+ if (i > 0) {
+ /* 5V slots */
+ i--;
+ slot = path[i].slot;
+ pin = path[i].pin;
+ /* 'pin' was swizzled earlier wrt slot, don't do it again. */
+ result = IRQ_P2INTA + (pin - 1);
+ } else {
+ /* IRQ for 2ary PCI-PCI bridge : unused */
+ result = -1;
+ }
+ }
+ }
+
+ return result;
+}
+
+irqreturn_t pcish5_err_irq(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned pci_int, pci_air, pci_cir, pci_aint;
+
+ pci_int = SH5PCI_READ(INT);
+ pci_cir = SH5PCI_READ(CIR);
+ pci_air = SH5PCI_READ(AIR);
+
+ if (pci_int) {
+ printk("PCI INTERRUPT (at %08llx)!\n", regs->pc);
+ printk("PCI INT -> 0x%x\n", pci_int & 0xffff);
+ printk("PCI AIR -> 0x%x\n", pci_air);
+ printk("PCI CIR -> 0x%x\n", pci_cir);
+ SH5PCI_WRITE(INT, ~0);
+ }
+
+ pci_aint = SH5PCI_READ(AINT);
+ if (pci_aint) {
+ printk("PCI ARB INTERRUPT!\n");
+ printk("PCI AINT -> 0x%x\n", pci_aint);
+ printk("PCI AIR -> 0x%x\n", pci_air);
+ printk("PCI CIR -> 0x%x\n", pci_cir);
+ SH5PCI_WRITE(AINT, ~0);
+ }
+
+ return IRQ_HANDLED;
+}
+
+irqreturn_t pcish5_serr_irq(int irq, void *dev_id, struct pt_regs *regs)
+{
+ printk("SERR IRQ\n");
+
+ return IRQ_NONE;
+}
+
+#define ROUND_UP(x, a) (((x) + (a) - 1) & ~((a) - 1))
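+/* e.g. ROUND_UP(0x1234, 0x1000) == 0x2000; 'a' must be a power of two */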
+
+static void __init
+pcibios_size_bridge(struct pci_bus *bus, struct resource *ior,
+ struct resource *memr)
+{
+ struct resource io_res, mem_res;
+ struct pci_dev *dev;
+ struct pci_dev *bridge = bus->self;
+ struct list_head *ln;
+
+ if (!bridge)
+ return; /* host bridge, nothing to do */
+
+ /* set reasonable default locations for pcibios_align_resource */
+ io_res.start = PCIBIOS_MIN_IO;
+ mem_res.start = PCIBIOS_MIN_MEM;
+
+ io_res.end = io_res.start;
+ mem_res.end = mem_res.start;
+
+ /* Collect information about how our direct children are laid out. */
+ for (ln=bus->devices.next; ln != &bus->devices; ln=ln->next) {
+ int i;
+ dev = pci_dev_b(ln);
+
+ /* Skip bridges for now */
+ if (dev->class >> 8 == PCI_CLASS_BRIDGE_PCI)
+ continue;
+
+ for (i = 0; i < PCI_NUM_RESOURCES; i++) {
+ struct resource res;
+ unsigned long size;
+
+ memcpy(&res, &dev->resource[i], sizeof(res));
+ size = res.end - res.start + 1;
+
+ if (res.flags & IORESOURCE_IO) {
+ res.start = io_res.end;
+ pcibios_align_resource(dev, &res, size, 0);
+ io_res.end = res.start + size;
+ } else if (res.flags & IORESOURCE_MEM) {
+ res.start = mem_res.end;
+ pcibios_align_resource(dev, &res, size, 0);
+ mem_res.end = res.start + size;
+ }
+ }
+ }
+
+ /* And for all of the subordinate busses. */
+ for (ln=bus->children.next; ln != &bus->children; ln=ln->next)
+ pcibios_size_bridge(pci_bus_b(ln), &io_res, &mem_res);
+
+ /* turn the ending locations into sizes (subtract start) */
+ io_res.end -= io_res.start;
+ mem_res.end -= mem_res.start;
+
+ /* Align the sizes up by bridge rules */
+ io_res.end = ROUND_UP(io_res.end, 4*1024) - 1;
+ mem_res.end = ROUND_UP(mem_res.end, 1*1024*1024) - 1;
+
+ /* Adjust the bridge's allocation requirements */
+ bridge->resource[0].end = bridge->resource[0].start + io_res.end;
+ bridge->resource[1].end = bridge->resource[1].start + mem_res.end;
+
+ bridge->resource[PCI_BRIDGE_RESOURCES].end =
+ bridge->resource[PCI_BRIDGE_RESOURCES].start + io_res.end;
+ bridge->resource[PCI_BRIDGE_RESOURCES+1].end =
+ bridge->resource[PCI_BRIDGE_RESOURCES+1].start + mem_res.end;
+
+ /* adjust parent's resource requirements */
+ if (ior) {
+ ior->end = ROUND_UP(ior->end, 4*1024);
+ ior->end += io_res.end;
+ }
+
+ if (memr) {
+ memr->end = ROUND_UP(memr->end, 1*1024*1024);
+ memr->end += mem_res.end;
+ }
+}
+
+#undef ROUND_UP
+
+static void __init pcibios_size_bridges(void)
+{
+ struct resource io_res, mem_res;
+
+ memset(&io_res, 0, sizeof(io_res));
+ memset(&mem_res, 0, sizeof(mem_res));
+
+ pcibios_size_bridge(pci_root_bus, &io_res, &mem_res);
+}
+
+static int __init pcibios_init(void)
+{
+ if (request_irq(IRQ_ERR, pcish5_err_irq,
+ SA_INTERRUPT, "PCI Error",NULL) < 0) {
+ printk(KERN_ERR "PCISH5: Cannot hook PCI_PERR interrupt\n");
+ return -EINVAL;
+ }
+
+ if (request_irq(IRQ_SERR, pcish5_serr_irq,
+ SA_INTERRUPT, "PCI SERR interrupt", NULL) < 0) {
+ printk(KERN_ERR "PCISH5: Cannot hook PCI_SERR interrupt\n");
+ return -EINVAL;
+ }
+
+ /* The PCI subsystem needs to know where memory is and how much
+ * of it there is. I've simply made these globals. A better mechanism
+ * is probably needed.
+ */
+ sh5pci_init(__pa(memory_start),
+ __pa(memory_end) - __pa(memory_start));
+
+ pci_root_bus = pci_scan_bus(0, &pci_config_ops, NULL);
+ pcibios_size_bridges();
+ pci_assign_unassigned_resources();
+ pci_fixup_irqs(no_swizzle, map_cayman_irq);
+
+ return 0;
+}
+
+subsys_initcall(pcibios_init);
+
+void __init pcibios_fixup_bus(struct pci_bus *bus)
+{
+ struct pci_dev *dev = bus->self;
+ int i;
+
+#if 1
+ if(dev) {
+ for(i=0; i<3; i++) {
+ bus->resource[i] =
+ &dev->resource[PCI_BRIDGE_RESOURCES+i];
+ bus->resource[i]->name = bus->name;
+ }
+ bus->resource[0]->flags |= IORESOURCE_IO;
+ bus->resource[1]->flags |= IORESOURCE_MEM;
+
+ /* For now, propagate host limits to the bus;
+ * we'll adjust them later. */
+
+#if 1
+ bus->resource[0]->end = 64*1024 - 1 ;
+ bus->resource[1]->end = PCIBIOS_MIN_MEM+(256*1024*1024)-1;
+ bus->resource[0]->start = PCIBIOS_MIN_IO;
+ bus->resource[1]->start = PCIBIOS_MIN_MEM;
+#else
+  bus->resource[0]->end = 0;
+  bus->resource[1]->end = 0;
+  bus->resource[0]->start = 0;
+  bus->resource[1]->start = 0;
+#endif
+ /* Turn off downstream PF memory address range by default */
+ bus->resource[2]->start = 1024*1024;
+ bus->resource[2]->end = bus->resource[2]->start - 1;
+ }
+#endif
+
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/process.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ * Started from SH3/4 version:
+ * Copyright (C) 1999, 2000 Niibe Yutaka & Kaz Kojima
+ *
+ * In turn started from i386 version:
+ * Copyright (C) 1995 Linus Torvalds
+ *
+ */
+
+/*
+ * This file handles the architecture-dependent parts of process handling..
+ */
+
+/* Temporary flags/tests. All to be removed/undefined. BEGIN */
+#define IDLE_TRACE
+#define VM_SHOW_TABLES
+#define VM_TEST_FAULT
+#define VM_TEST_RTLBMISS
+#define VM_TEST_WTLBMISS
+
+#undef VM_SHOW_TABLES
+#undef IDLE_TRACE
+/* Temporary flags/tests. All to be removed/undefined. END */
+
+#define __KERNEL_SYSCALLS__
+#include <stdarg.h>
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/ptrace.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/user.h>
+#include <linux/a.out.h>
+#include <linux/interrupt.h>
+#include <linux/unistd.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/init.h>
+
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/processor.h> /* includes also <asm/registers.h> */
+#include <asm/mmu_context.h>
+#include <asm/elf.h>
+#include <asm/page.h>
+
+#include <linux/irq.h>
+
+struct task_struct *last_task_used_math = NULL;
+
+#ifdef IDLE_TRACE
+#ifdef VM_SHOW_TABLES
+/* For testing */
+static void print_PTE(long base)
+{
+ int i, skip=0;
+ long long x, y, *p = (long long *) base;
+
+ for (i=0; i< 512; i++, p++){
+ if (*p == 0) {
+ if (!skip) {
+ skip++;
+ printk("(0s) ");
+ }
+ } else {
+ skip=0;
+ x = (*p) >> 32;
+ y = (*p) & 0xffffffff;
+ printk("%08Lx%08Lx ", x, y);
+ if (!((i+1)&0x3)) printk("\n");
+ }
+ }
+}
+
+/* For testing */
+static void print_DIR(long base)
+{
+ int i, skip=0;
+ long *p = (long *) base;
+
+ for (i=0; i< 512; i++, p++){
+ if (*p == 0) {
+ if (!skip) {
+ skip++;
+ printk("(0s) ");
+ }
+ } else {
+ skip=0;
+ printk("%08lx ", *p);
+ if (!((i+1)&0x7)) printk("\n");
+ }
+ }
+}
+
+/* For testing */
+static void print_vmalloc_first_tables(void)
+{
+
+#define PRESENT 0x800 /* Bit 11 */
+
+ /*
+ * Do it really dirty by looking at raw addresses,
+ * raw offsets, no types. If we used pgtable/pgalloc
+ * macros/definitions we could hide potential bugs.
+ *
+ * Note that pointers are 32-bit for CDC.
+ */
+ long pgdt, pmdt, ptet;
+
+ pgdt = (long) &swapper_pg_dir;
+ printk("-->PGD (0x%08lx):\n", pgdt);
+ print_DIR(pgdt);
+ printk("\n");
+
+ /* VMALLOC pool is mapped at 0xc0000000, second (pointer) entry in PGD */
+ pgdt += 4;
+ pmdt = (long) (* (long *) pgdt);
+ if (!(pmdt & PRESENT)) {
+ printk("No PMD\n");
+ return;
+ } else pmdt &= 0xfffff000;
+
+ printk("-->PMD (0x%08lx):\n", pmdt);
+ print_DIR(pmdt);
+ printk("\n");
+
+ /* Get the pmdt displacement for 0xc0000000 */
+ pmdt += 2048;
+
+ /* just look at first two address ranges ... */
+ /* ... 0xc0000000 ... */
+ ptet = (long) (* (long *) pmdt);
+ if (!(ptet & PRESENT)) {
+ printk("No PTE0\n");
+ return;
+ } else ptet &= 0xfffff000;
+
+ printk("-->PTE0 (0x%08lx):\n", ptet);
+ print_PTE(ptet);
+ printk("\n");
+
+ /* ... 0xc0001000 ... */
+ ptet += 4;
+ if (!((* (long *) ptet) & PRESENT)) {
+  printk("No PTE1\n");
+  return;
+ } else ptet &= 0xfffff000;
+ printk("-->PTE1 (0x%08lx):\n", ptet);
+ print_PTE(ptet);
+ printk("\n");
+}
+#else
+#define print_vmalloc_first_tables()
+#endif /* VM_SHOW_TABLES */
+
+static void test_VM(void)
+{
+ void *a, *b, *c;
+
+#ifdef VM_SHOW_TABLES
+ printk("Initial PGD/PMD/PTE\n");
+#endif
+ print_vmalloc_first_tables();
+
+ printk("Allocating 2 bytes\n");
+ a = vmalloc(2);
+ print_vmalloc_first_tables();
+
+ printk("Allocating 4100 bytes\n");
+ b = vmalloc(4100);
+ print_vmalloc_first_tables();
+
+ printk("Allocating 20234 bytes\n");
+ c = vmalloc(20234);
+ print_vmalloc_first_tables();
+
+#ifdef VM_TEST_FAULT
+ /* Here you may want to fault ! */
+
+#ifdef VM_TEST_RTLBMISS
+ printk("Ready to fault upon read.\n");
+ if (* (char *) a) {
+ printk("RTLBMISSed on area a !\n");
+ }
+ printk("RTLBMISSed on area a !\n");
+#endif
+
+#ifdef VM_TEST_WTLBMISS
+ printk("Ready to fault upon write.\n");
+ *((char *) b) = 'L';
+ printk("WTLBMISSed on area b !\n");
+#endif
+
+#endif /* VM_TEST_FAULT */
+
+ printk("Deallocating the 4100 byte chunk\n");
+ vfree(b);
+ print_vmalloc_first_tables();
+
+ printk("Deallocating the 2 byte chunk\n");
+ vfree(a);
+ print_vmalloc_first_tables();
+
+ printk("Deallocating the last chunk\n");
+ vfree(c);
+ print_vmalloc_first_tables();
+}
+
+extern unsigned long volatile jiffies;
+int once = 0;
+unsigned long old_jiffies;
+int pid = -1, pgid = -1;
+
+void idle_trace(void)
+{
+
+ _syscall0(int, getpid)
+ _syscall1(int, getpgid, int, pid)
+
+ if (!once) {
+ /* VM allocation/deallocation simple test */
+ test_VM();
+ pid = getpid();
+
+ printk("Got all through to Idle !!\n");
+ printk("I'm now going to loop forever ...\n");
+ printk("Any ! below is a timer tick.\n");
+ printk("Any . below is a getpgid system call from pid = %d.\n", pid);
+
+
+ old_jiffies = jiffies;
+ once++;
+ }
+
+ if (old_jiffies != jiffies) {
+ old_jiffies = jiffies - old_jiffies;
+ switch (old_jiffies) {
+ case 1:
+ printk("!");
+ break;
+ case 2:
+ printk("!!");
+ break;
+ case 3:
+ printk("!!!");
+ break;
+ case 4:
+ printk("!!!!");
+ break;
+ default:
+ printk("(%d!)", (int) old_jiffies);
+ }
+ old_jiffies = jiffies;
+ }
+ pgid = getpgid(pid);
+ printk(".");
+}
+#else
+#define idle_trace() do { } while (0)
+#endif /* IDLE_TRACE */
+
+static int hlt_counter = 1;
+
+#define HARD_IDLE_TIMEOUT (HZ / 3)
+
+void disable_hlt(void)
+{
+ hlt_counter++;
+}
+
+void enable_hlt(void)
+{
+ hlt_counter--;
+}
+
+static int __init nohlt_setup(char *__unused)
+{
+ hlt_counter = 1;
+ return 1;
+}
+
+static int __init hlt_setup(char *__unused)
+{
+ hlt_counter = 0;
+ return 1;
+}
+
+__setup("nohlt", nohlt_setup);
+__setup("hlt", hlt_setup);
+
+static inline void hlt(void)
+{
+ if (hlt_counter)
+ return;
+
+ __asm__ __volatile__ ("sleep" : : : "memory");
+}
+
+/*
+ * The idle loop on a uniprocessor SH..
+ */
+void default_idle(void)
+{
+ /* endless idle loop with no priority at all */
+ while (1) {
+ if (hlt_counter) {
+ while (1)
+ if (need_resched())
+ break;
+ } else {
+ local_irq_disable();
+ while (!need_resched()) {
+ local_irq_enable();
+ idle_trace();
+ hlt();
+ local_irq_disable();
+ }
+ local_irq_enable();
+ }
+ schedule();
+ }
+}
+
+void cpu_idle(void *unused)
+{
+ default_idle();
+}
+
+void machine_restart(char * __unused)
+{
+ extern void phys_stext(void);
+
+ phys_stext();
+}
+
+void machine_halt(void)
+{
+ for (;;);
+}
+
+void machine_power_off(void)
+{
+ extern void enter_deep_standby(void);
+
+ enter_deep_standby();
+}
+
+void show_regs(struct pt_regs * regs)
+{
+ unsigned long long ah, al, bh, bl, ch, cl;
+
+ printk("\n");
+
+ ah = (regs->pc) >> 32;
+ al = (regs->pc) & 0xffffffff;
+ bh = (regs->regs[18]) >> 32;
+ bl = (regs->regs[18]) & 0xffffffff;
+ ch = (regs->regs[15]) >> 32;
+ cl = (regs->regs[15]) & 0xffffffff;
+ printk("PC : %08Lx%08Lx LINK: %08Lx%08Lx SP : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->sr) >> 32;
+ al = (regs->sr) & 0xffffffff;
+ asm volatile ("getcon " __TEA ", %0" : "=r" (bh));
+ asm volatile ("getcon " __TEA ", %0" : "=r" (bl));
+ bh = (bh) >> 32;
+ bl = (bl) & 0xffffffff;
+ asm volatile ("getcon " __KCR0 ", %0" : "=r" (ch));
+ asm volatile ("getcon " __KCR0 ", %0" : "=r" (cl));
+ ch = (ch) >> 32;
+ cl = (cl) & 0xffffffff;
+ printk("SR : %08Lx%08Lx TEA : %08Lx%08Lx KCR0: %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[0]) >> 32;
+ al = (regs->regs[0]) & 0xffffffff;
+ bh = (regs->regs[1]) >> 32;
+ bl = (regs->regs[1]) & 0xffffffff;
+ ch = (regs->regs[2]) >> 32;
+ cl = (regs->regs[2]) & 0xffffffff;
+ printk("R0 : %08Lx%08Lx R1 : %08Lx%08Lx R2 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[3]) >> 32;
+ al = (regs->regs[3]) & 0xffffffff;
+ bh = (regs->regs[4]) >> 32;
+ bl = (regs->regs[4]) & 0xffffffff;
+ ch = (regs->regs[5]) >> 32;
+ cl = (regs->regs[5]) & 0xffffffff;
+ printk("R3 : %08Lx%08Lx R4 : %08Lx%08Lx R5 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[6]) >> 32;
+ al = (regs->regs[6]) & 0xffffffff;
+ bh = (regs->regs[7]) >> 32;
+ bl = (regs->regs[7]) & 0xffffffff;
+ ch = (regs->regs[8]) >> 32;
+ cl = (regs->regs[8]) & 0xffffffff;
+ printk("R6 : %08Lx%08Lx R7 : %08Lx%08Lx R8 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[9]) >> 32;
+ al = (regs->regs[9]) & 0xffffffff;
+ bh = (regs->regs[10]) >> 32;
+ bl = (regs->regs[10]) & 0xffffffff;
+ ch = (regs->regs[11]) >> 32;
+ cl = (regs->regs[11]) & 0xffffffff;
+ printk("R9 : %08Lx%08Lx R10 : %08Lx%08Lx R11 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[12]) >> 32;
+ al = (regs->regs[12]) & 0xffffffff;
+ bh = (regs->regs[13]) >> 32;
+ bl = (regs->regs[13]) & 0xffffffff;
+ ch = (regs->regs[14]) >> 32;
+ cl = (regs->regs[14]) & 0xffffffff;
+ printk("R12 : %08Lx%08Lx R13 : %08Lx%08Lx R14 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[16]) >> 32;
+ al = (regs->regs[16]) & 0xffffffff;
+ bh = (regs->regs[17]) >> 32;
+ bl = (regs->regs[17]) & 0xffffffff;
+ ch = (regs->regs[19]) >> 32;
+ cl = (regs->regs[19]) & 0xffffffff;
+ printk("R16 : %08Lx%08Lx R17 : %08Lx%08Lx R19 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[20]) >> 32;
+ al = (regs->regs[20]) & 0xffffffff;
+ bh = (regs->regs[21]) >> 32;
+ bl = (regs->regs[21]) & 0xffffffff;
+ ch = (regs->regs[22]) >> 32;
+ cl = (regs->regs[22]) & 0xffffffff;
+ printk("R20 : %08Lx%08Lx R21 : %08Lx%08Lx R22 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[23]) >> 32;
+ al = (regs->regs[23]) & 0xffffffff;
+ bh = (regs->regs[24]) >> 32;
+ bl = (regs->regs[24]) & 0xffffffff;
+ ch = (regs->regs[25]) >> 32;
+ cl = (regs->regs[25]) & 0xffffffff;
+ printk("R23 : %08Lx%08Lx R24 : %08Lx%08Lx R25 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[26]) >> 32;
+ al = (regs->regs[26]) & 0xffffffff;
+ bh = (regs->regs[27]) >> 32;
+ bl = (regs->regs[27]) & 0xffffffff;
+ ch = (regs->regs[28]) >> 32;
+ cl = (regs->regs[28]) & 0xffffffff;
+ printk("R26 : %08Lx%08Lx R27 : %08Lx%08Lx R28 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[29]) >> 32;
+ al = (regs->regs[29]) & 0xffffffff;
+ bh = (regs->regs[30]) >> 32;
+ bl = (regs->regs[30]) & 0xffffffff;
+ ch = (regs->regs[31]) >> 32;
+ cl = (regs->regs[31]) & 0xffffffff;
+ printk("R29 : %08Lx%08Lx R30 : %08Lx%08Lx R31 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[32]) >> 32;
+ al = (regs->regs[32]) & 0xffffffff;
+ bh = (regs->regs[33]) >> 32;
+ bl = (regs->regs[33]) & 0xffffffff;
+ ch = (regs->regs[34]) >> 32;
+ cl = (regs->regs[34]) & 0xffffffff;
+ printk("R32 : %08Lx%08Lx R33 : %08Lx%08Lx R34 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[35]) >> 32;
+ al = (regs->regs[35]) & 0xffffffff;
+ bh = (regs->regs[36]) >> 32;
+ bl = (regs->regs[36]) & 0xffffffff;
+ ch = (regs->regs[37]) >> 32;
+ cl = (regs->regs[37]) & 0xffffffff;
+ printk("R35 : %08Lx%08Lx R36 : %08Lx%08Lx R37 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[38]) >> 32;
+ al = (regs->regs[38]) & 0xffffffff;
+ bh = (regs->regs[39]) >> 32;
+ bl = (regs->regs[39]) & 0xffffffff;
+ ch = (regs->regs[40]) >> 32;
+ cl = (regs->regs[40]) & 0xffffffff;
+ printk("R38 : %08Lx%08Lx R39 : %08Lx%08Lx R40 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[41]) >> 32;
+ al = (regs->regs[41]) & 0xffffffff;
+ bh = (regs->regs[42]) >> 32;
+ bl = (regs->regs[42]) & 0xffffffff;
+ ch = (regs->regs[43]) >> 32;
+ cl = (regs->regs[43]) & 0xffffffff;
+ printk("R41 : %08Lx%08Lx R42 : %08Lx%08Lx R43 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[44]) >> 32;
+ al = (regs->regs[44]) & 0xffffffff;
+ bh = (regs->regs[45]) >> 32;
+ bl = (regs->regs[45]) & 0xffffffff;
+ ch = (regs->regs[46]) >> 32;
+ cl = (regs->regs[46]) & 0xffffffff;
+ printk("R44 : %08Lx%08Lx R45 : %08Lx%08Lx R46 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[47]) >> 32;
+ al = (regs->regs[47]) & 0xffffffff;
+ bh = (regs->regs[48]) >> 32;
+ bl = (regs->regs[48]) & 0xffffffff;
+ ch = (regs->regs[49]) >> 32;
+ cl = (regs->regs[49]) & 0xffffffff;
+ printk("R47 : %08Lx%08Lx R48 : %08Lx%08Lx R49 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[50]) >> 32;
+ al = (regs->regs[50]) & 0xffffffff;
+ bh = (regs->regs[51]) >> 32;
+ bl = (regs->regs[51]) & 0xffffffff;
+ ch = (regs->regs[52]) >> 32;
+ cl = (regs->regs[52]) & 0xffffffff;
+ printk("R50 : %08Lx%08Lx R51 : %08Lx%08Lx R52 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[53]) >> 32;
+ al = (regs->regs[53]) & 0xffffffff;
+ bh = (regs->regs[54]) >> 32;
+ bl = (regs->regs[54]) & 0xffffffff;
+ ch = (regs->regs[55]) >> 32;
+ cl = (regs->regs[55]) & 0xffffffff;
+ printk("R53 : %08Lx%08Lx R54 : %08Lx%08Lx R55 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[56]) >> 32;
+ al = (regs->regs[56]) & 0xffffffff;
+ bh = (regs->regs[57]) >> 32;
+ bl = (regs->regs[57]) & 0xffffffff;
+ ch = (regs->regs[58]) >> 32;
+ cl = (regs->regs[58]) & 0xffffffff;
+ printk("R56 : %08Lx%08Lx R57 : %08Lx%08Lx R58 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[59]) >> 32;
+ al = (regs->regs[59]) & 0xffffffff;
+ bh = (regs->regs[60]) >> 32;
+ bl = (regs->regs[60]) & 0xffffffff;
+ ch = (regs->regs[61]) >> 32;
+ cl = (regs->regs[61]) & 0xffffffff;
+ printk("R59 : %08Lx%08Lx R60 : %08Lx%08Lx R61 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->regs[62]) >> 32;
+ al = (regs->regs[62]) & 0xffffffff;
+ bh = (regs->tregs[0]) >> 32;
+ bl = (regs->tregs[0]) & 0xffffffff;
+ ch = (regs->tregs[1]) >> 32;
+ cl = (regs->tregs[1]) & 0xffffffff;
+ printk("R62 : %08Lx%08Lx T0 : %08Lx%08Lx T1 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->tregs[2]) >> 32;
+ al = (regs->tregs[2]) & 0xffffffff;
+ bh = (regs->tregs[3]) >> 32;
+ bl = (regs->tregs[3]) & 0xffffffff;
+ ch = (regs->tregs[4]) >> 32;
+ cl = (regs->tregs[4]) & 0xffffffff;
+ printk("T2 : %08Lx%08Lx T3 : %08Lx%08Lx T4 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ ah = (regs->tregs[5]) >> 32;
+ al = (regs->tregs[5]) & 0xffffffff;
+ bh = (regs->tregs[6]) >> 32;
+ bl = (regs->tregs[6]) & 0xffffffff;
+ ch = (regs->tregs[7]) >> 32;
+ cl = (regs->tregs[7]) & 0xffffffff;
+ printk("T5 : %08Lx%08Lx T6 : %08Lx%08Lx T7 : %08Lx%08Lx\n",
+ ah, al, bh, bl, ch, cl);
+
+ /*
+ * If we're in kernel mode, dump the stack too..
+ */
+ if (!user_mode(regs)) {
+ void show_stack(struct task_struct *tsk, unsigned long *sp);
+ unsigned long sp = regs->regs[15] & 0xffffffff;
+ struct task_struct *tsk = get_current();
+
+ tsk->thread.kregs = regs;
+
+ show_stack(tsk, (unsigned long *)sp);
+ }
+}
+
+struct task_struct * alloc_task_struct(void)
+{
+ /* Get task descriptor pages */
+ return (struct task_struct *)
+ __get_free_pages(GFP_KERNEL, get_order(THREAD_SIZE));
+}
+
+void free_task_struct(struct task_struct *p)
+{
+ free_pages((unsigned long) p, get_order(THREAD_SIZE));
+}
+
+/*
+ * Create a kernel thread
+ */
+
+/*
+ * This is the mechanism for creating a new kernel thread.
+ *
+ * NOTE! Only a kernel-only process (i.e. the swapper or direct descendants
+ * who haven't done an "execve()") should use this: it will work within
+ * a system call from a "real" process, but the process memory space will
+ * not be free'd until both the parent and the child have exited.
+ */
+int kernel_thread(int (*fn)(void *), void * arg, unsigned long flags)
+{
+ /* A bit less processor dependent than older sh ... */
+
+ unsigned int reply;
+
+static __inline__ _syscall2(int,clone,unsigned long,flags,unsigned long,newsp)
+static __inline__ _syscall1(int,exit,int,ret)
+
+ reply = clone(flags | CLONE_VM, 0);
+ if (!reply) {
+ /* Child */
+ reply = exit(fn(arg));
+ }
+
+ return reply;
+}
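+
+/*
+ * Illustrative usage only (not added by this patch; all names here are
+ * hypothetical): a driver of this era would typically start a kernel
+ * thread with the helper above like so:
+ *
+ *     static int my_worker(void *arg)
+ *     {
+ *         daemonize("my_worker");
+ *         while (!my_done)
+ *             my_do_work();
+ *         return 0;
+ *     }
+ *
+ *     pid = kernel_thread(my_worker, NULL, CLONE_FS | CLONE_FILES);
+ */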
+
+/*
+ * Free current thread data structures etc..
+ */
+void exit_thread(void)
+{
+ /* See arch/sparc/kernel/process.c for the precedent for doing this -- RPC.
+
+ The SH-5 FPU save/restore approach relies on last_task_used_math
+ pointing to a live task_struct. When another task tries to use the
+ FPU for the 1st time, the FPUDIS trap handling (see
+ arch/sh64/kernel/fpu.c) will save the existing FPU state to the
+ FP regs field within last_task_used_math before re-loading the new
+ task's FPU state (or initialising it if the FPU has been used
+ before). So if last_task_used_math is stale, and its page has already been
+ re-allocated for another use, the consequences are rather grim. Unless we
+ null it here, there is no other path through which it would get safely
+ nulled. */
+
+#ifndef CONFIG_NOFPU_SUPPORT
+ if (last_task_used_math == current) {
+ last_task_used_math = NULL;
+ }
+#endif
+}
+
+void flush_thread(void)
+{
+
+ /* Called by fs/exec.c (flush_old_exec) to remove traces of a
+ * previously running executable. */
+#ifndef CONFIG_NOFPU_SUPPORT
+ if (last_task_used_math == current) {
+ last_task_used_math = NULL;
+ }
+ /* Force FPU state to be reinitialised after exec */
+ current->used_math = 0;
+#endif
+
+ /* if we are a kernel thread, about to change to user thread,
+ * update kreg
+ */
+ if (current->thread.kregs == &fake_swapper_regs) {
+ current->thread.kregs =
+ ((struct pt_regs *)(THREAD_SIZE + (unsigned long) current) - 1);
+ current->thread.uregs = current->thread.kregs;
+ }
+}
+
+void release_thread(struct task_struct *dead_task)
+{
+ /* do nothing */
+}
+
+/* Fill in the fpu structure for a core dump.. */
+int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
+{
+#ifndef CONFIG_NOFPU_SUPPORT
+ int fpvalid;
+ struct task_struct *tsk = current;
+
+ fpvalid = tsk->used_math;
+ if (fpvalid) {
+ if (current == last_task_used_math) {
+ grab_fpu();
+ fpsave(&tsk->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = 0;
+ regs->sr |= SR_FD;
+ }
+
+ memcpy(fpu, &tsk->thread.fpu.hard, sizeof(*fpu));
+ }
+
+ return fpvalid;
+#else
+ return 0; /* Task didn't use the fpu at all. */
+#endif
+}
+
+asmlinkage void ret_from_fork(void);
+
+int copy_thread(int nr, unsigned long clone_flags, unsigned long usp,
+ unsigned long unused,
+ struct task_struct *p, struct pt_regs *regs)
+{
+ struct pt_regs *childregs;
+ unsigned long long se; /* Sign extension */
+
+#ifndef CONFIG_NOFPU_SUPPORT
+ if (last_task_used_math == current) {
+ grab_fpu();
+ fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+#endif
+ /* Copy from sh version */
+ childregs = ((struct pt_regs *)(THREAD_SIZE + (unsigned long) p->thread_info )) - 1;
+
+ *childregs = *regs;
+
+ if (user_mode(regs)) {
+ childregs->regs[15] = usp;
+ p->thread.uregs = childregs;
+ } else {
+ childregs->regs[15] = (unsigned long)p->thread_info + THREAD_SIZE;
+ }
+
+ childregs->regs[9] = 0; /* Set return value for child */
+ childregs->sr |= SR_FD; /* Invalidate FPU flag */
+
+ p->thread.sp = (unsigned long) childregs;
+ p->thread.pc = (unsigned long) ret_from_fork;
+
+ /*
+ * Sign extend the edited stack.
+ * Note that thread.pc and thread.sp will stay
+ * 32-bit wide and context switch must take care
+ * of NEFF sign extension.
+ */
+
+ se = childregs->regs[15];
+ se = (se & NEFF_SIGN) ? (se | NEFF_MASK) : se;
+ childregs->regs[15] = se;
+
+ return 0;
+}
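+
+/*
+ * A minimal sketch of the NEFF sign-extension rule used above, assuming
+ * NEFF = 32 as on SH-5 (the real NEFF_SIGN/NEFF_MASK definitions live
+ * in the asm headers; the constants below are illustrative):
+ *
+ *     static inline unsigned long long neff_sign_extend(unsigned long long p)
+ *     {
+ *         return (p & (1ULL << 31)) ? (p | (~0ULL << 32)) : p;
+ *     }
+ *
+ * e.g. 0x8ff00000 becomes 0xffffffff8ff00000, while addresses with
+ * bit 31 clear pass through unchanged.
+ */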
+
+/*
+ * fill in the user structure for a core dump..
+ */
+void dump_thread(struct pt_regs * regs, struct user * dump)
+{
+ dump->magic = CMAGIC;
+ dump->start_code = current->mm->start_code;
+ dump->start_data = current->mm->start_data;
+ dump->start_stack = regs->regs[15] & ~(PAGE_SIZE - 1);
+ dump->u_tsize = (current->mm->end_code - dump->start_code) >> PAGE_SHIFT;
+ dump->u_dsize = (current->mm->brk + (PAGE_SIZE-1) - dump->start_data) >> PAGE_SHIFT;
+ dump->u_ssize = (current->mm->start_stack - dump->start_stack +
+ PAGE_SIZE - 1) >> PAGE_SHIFT;
+ /* Debug registers will come here. */
+
+ dump->regs = *regs;
+
+ dump->u_fpvalid = dump_fpu(regs, &dump->fpu);
+}
+
+asmlinkage int sys_fork(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ return do_fork(SIGCHLD, pregs->regs[15], pregs, 0, 0, 0);
+}
+
+asmlinkage int sys_clone(unsigned long clone_flags, unsigned long newsp,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ if (!newsp)
+ newsp = pregs->regs[15];
+ return do_fork(clone_flags, newsp, pregs, 0, 0, 0);
+}
+
+/*
+ * This is trivial, and on the face of it looks like it
+ * could equally well be done in user mode.
+ *
+ * Not so, for quite unobvious reasons - register pressure.
+ * In user mode vfork() cannot have a stack frame, and if
+ * done by calling the "clone()" system call directly, you
+ * do not have enough call-clobbered registers to hold all
+ * the information you need.
+ */
+asmlinkage int sys_vfork(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ return do_fork(CLONE_VFORK | CLONE_VM | SIGCHLD, pregs->regs[15], pregs, 0, 0, 0);
+}
+
+/*
+ * sys_execve() executes a new program.
+ */
+asmlinkage int sys_execve(char *ufilename, char **uargv,
+ char **uenvp, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs *pregs)
+{
+ int error;
+ char *filename;
+
+ lock_kernel();
+ filename = getname((char __user *)ufilename);
+ error = PTR_ERR(filename);
+ if (IS_ERR(filename))
+ goto out;
+
+ error = do_execve(filename,
+ (char __user * __user *)uargv,
+ (char __user * __user *)uenvp,
+ pregs);
+ if (error == 0) {
+ task_lock(current);
+ current->ptrace &= ~PT_DTRACE;
+ task_unlock(current);
+ }
+ putname(filename);
+out:
+ unlock_kernel();
+ return error;
+}
+
+/*
+ * These bracket the sleeping functions..
+ */
+extern void interruptible_sleep_on(wait_queue_head_t *q);
+
+#define mid_sched ((unsigned long) interruptible_sleep_on)
+
+static int in_sh64_switch_to(unsigned long pc)
+{
+ extern char __sh64_switch_to_end;
+ /* For a sleeping task, the PC is somewhere in the middle of the function,
+ so we don't have to worry about masking the LSB off */
+ return (pc >= (unsigned long) sh64_switch_to) &&
+ (pc < (unsigned long) &__sh64_switch_to_end);
+}
+
+unsigned long get_wchan(struct task_struct *p)
+{
+ unsigned long schedule_fp;
+ unsigned long sh64_switch_to_fp;
+ unsigned long schedule_caller_pc;
+ unsigned long pc;
+
+ if (!p || p == current || p->state == TASK_RUNNING)
+ return 0;
+
+ /*
+ * The same comment as on the Alpha applies here, too ...
+ */
+ pc = thread_saved_pc(p);
+
+#ifdef CONFIG_FRAME_POINTER
+ if (in_sh64_switch_to(pc)) {
+ sh64_switch_to_fp = (long) p->thread.sp;
+ /* r14 is saved at offset 4 in the sh64_switch_to frame */
+ schedule_fp = *(unsigned long *) (long)(sh64_switch_to_fp + 4);
+
+ /* and the caller of 'schedule' is (currently!) saved at offset 24
+ in the frame of schedule (from disasm) */
+ schedule_caller_pc = *(unsigned long *) (long)(schedule_fp + 24);
+ return schedule_caller_pc;
+ }
+#endif
+ return pc;
+}
+
+/* Provide a /proc/asids file that lists out the
+ ASIDs currently associated with the processes. (If the DM.PC register is
+ examined through the debug link, this shows ASID + PC. To make use of this,
+ the PID->ASID relationship needs to be known. This is primarily for
+ debugging.)
+ */
+
+#if defined(CONFIG_SH64_PROC_ASIDS)
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+
+static int
+asids_proc_info(char *buf, char **start, off_t fpos, int length, int *eof, void *data)
+{
+ int len=0;
+ struct task_struct *p;
+ read_lock(&tasklist_lock);
+ for_each_task(p) {
+ int pid = p->pid;
+ struct mm_struct *mm;
+ if (!pid) continue;
+ mm = p->mm;
+ if (mm) {
+ unsigned long asid, context;
+ context = mm->context;
+ asid = (context & 0xff);
+ len += sprintf(buf+len, "%5d : %02x\n", pid, asid);
+ } else {
+ len += sprintf(buf+len, "%5d : (none)\n", pid);
+ }
+ }
+ read_unlock(&tasklist_lock);
+ *eof = 1;
+ return len;
+}
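+
+/*
+ * Illustrative output (PIDs and ASIDs invented), matching the sprintf
+ * formats above:
+ *
+ *     $ cat /proc/asids
+ *         1 : 01
+ *       743 : 1a
+ *       744 : (none)
+ */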
+
+static int __init register_proc_asids(void)
+{
+ create_proc_read_entry("asids", 0, NULL, asids_proc_info, NULL);
+ return 0;
+}
+
+__initcall(register_proc_asids);
+#endif
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/ptrace.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ * Started from SH3/4 version:
+ * SuperH version: Copyright (C) 1999, 2000 Kaz Kojima & Niibe Yutaka
+ *
+ * Original x86 implementation:
+ * By Ross Biro 1/23/92
+ * edited by Linus Torvalds
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/errno.h>
+#include <linux/ptrace.h>
+#include <linux/user.h>
+
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+#include <asm/system.h>
+#include <asm/processor.h>
+#include <asm/mmu_context.h>
+
+/* This mask defines the bits of the SR which the user is not allowed to
+ change, which are everything except S, Q, M, PR, SZ, FR. */
+#define SR_MASK (0xffff8cfd)
+
+/*
+ * does not yet catch signals sent when the child dies;
+ * that should be done in exit.c or in signal.c.
+ */
+
+/*
+ * This routine will get a word from the user area in the process kernel stack.
+ */
+static inline int get_stack_long(struct task_struct *task, int offset)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)(task->thread.uregs);
+ stack += offset;
+ return (*((int *)stack));
+}
+
+static inline unsigned long
+get_fpu_long(struct task_struct *task, unsigned long addr)
+{
+ unsigned long tmp;
+ struct pt_regs *regs;
+ regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
+
+ if (!task->used_math) {
+ if (addr == offsetof(struct user_fpu_struct, fpscr)) {
+ tmp = FPSCR_INIT;
+ } else {
+ tmp = 0xffffffffUL; /* matches initial value in fpu.c */
+ }
+ return tmp;
+ }
+
+ if (last_task_used_math == task) {
+ grab_fpu();
+ fpsave(&task->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = 0;
+ regs->sr |= SR_FD;
+ }
+
+ tmp = ((long *)&task->thread.fpu)[addr / sizeof(unsigned long)];
+ return tmp;
+}
+
+/*
+ * This routine will put a word into the user area in the process kernel stack.
+ */
+static inline int put_stack_long(struct task_struct *task, int offset,
+ unsigned long data)
+{
+ unsigned char *stack;
+
+ stack = (unsigned char *)(task->thread.uregs);
+ stack += offset;
+ *(unsigned long *) stack = data;
+ return 0;
+}
+
+static inline int
+put_fpu_long(struct task_struct *task, unsigned long addr, unsigned long data)
+{
+ struct pt_regs *regs;
+
+ regs = (struct pt_regs*)((unsigned char *)task + THREAD_SIZE) - 1;
+
+ if (!task->used_math) {
+ fpinit(&task->thread.fpu.hard);
+ task->used_math = 1;
+ } else if (last_task_used_math == task) {
+ grab_fpu();
+ fpsave(&task->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = 0;
+ regs->sr |= SR_FD;
+ }
+
+ ((long *)&task->thread.fpu)[addr / sizeof(unsigned long)] = data;
+ return 0;
+}
+
+asmlinkage int sys_ptrace(long request, long pid, long addr, long data)
+{
+ struct task_struct *child;
+ int ret;
+
+ lock_kernel();
+ ret = -EPERM;
+ if (request == PTRACE_TRACEME) {
+ /* are we already being traced? */
+ if (current->ptrace & PT_PTRACED)
+ goto out;
+ /* set the ptrace bit in the process flags. */
+ current->ptrace |= PT_PTRACED;
+ ret = 0;
+ goto out;
+ }
+ ret = -ESRCH;
+ read_lock(&tasklist_lock);
+ child = find_task_by_pid(pid);
+ if (child)
+ get_task_struct(child);
+ read_unlock(&tasklist_lock);
+ if (!child)
+ goto out;
+
+ ret = -EPERM;
+ if (pid == 1) /* you may not mess with init */
+ goto out_tsk;
+
+ if (request == PTRACE_ATTACH) {
+ ret = ptrace_attach(child);
+ goto out_tsk;
+ }
+
+ ret = ptrace_check_attach(child, request == PTRACE_KILL);
+ if (ret < 0)
+ goto out_tsk;
+
+ switch (request) {
+ /* when I and D space are separate, these will need to be fixed. */
+ case PTRACE_PEEKTEXT: /* read word at location addr. */
+ case PTRACE_PEEKDATA: {
+ unsigned long tmp;
+ int copied;
+
+ copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
+ ret = -EIO;
+ if (copied != sizeof(tmp))
+ break;
+ ret = put_user(tmp,(unsigned long *) data);
+ break;
+ }
+
+ /* read the word at location addr in the USER area. */
+ case PTRACE_PEEKUSR: {
+ unsigned long tmp;
+
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ if (addr < sizeof(struct pt_regs))
+ tmp = get_stack_long(child, addr);
+ else if ((addr >= offsetof(struct user, fpu)) &&
+ (addr < offsetof(struct user, u_fpvalid))) {
+ tmp = get_fpu_long(child, addr - offsetof(struct user, fpu));
+ } else if (addr == offsetof(struct user, u_fpvalid)) {
+ tmp = child->used_math;
+ } else {
+ break;
+ }
+ ret = put_user(tmp, (unsigned long *)data);
+ break;
+ }
+
+ /* when I and D space are separate, this will have to be fixed. */
+ case PTRACE_POKETEXT: /* write the word at location addr. */
+ case PTRACE_POKEDATA:
+ ret = 0;
+ if (access_process_vm(child, addr, &data, sizeof(data), 1) == sizeof(data))
+ break;
+ ret = -EIO;
+ break;
+
+ case PTRACE_POKEUSR:
+ /* write the word at location addr in the USER area. We must
+ disallow any changes to certain SR bits or u_fpvalid, since
+ this could crash the kernel or result in a security
+ loophole. */
+ ret = -EIO;
+ if ((addr & 3) || addr < 0)
+ break;
+
+ if (addr < sizeof(struct pt_regs)) {
+ /* Ignore change of top 32 bits of SR */
+ if (addr == offsetof (struct pt_regs, sr)+4)
+ {
+ ret = 0;
+ break;
+ }
+ /* If lower 32 bits of SR, ignore non-user bits */
+ if (addr == offsetof (struct pt_regs, sr))
+ {
+ long cursr = get_stack_long(child, addr);
+ data &= ~(SR_MASK);
+ data |= (cursr & SR_MASK);
+ }
+ ret = put_stack_long(child, addr, data);
+ }
+ else if ((addr >= offsetof(struct user, fpu)) &&
+ (addr < offsetof(struct user, u_fpvalid))) {
+ ret = put_fpu_long(child, addr - offsetof(struct user, fpu), data);
+ }
+ break;
+
+ case PTRACE_SYSCALL: /* continue and stop at next (return from) syscall */
+ case PTRACE_CONT: { /* restart after signal. */
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ if (request == PTRACE_SYSCALL)
+ set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ else
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ child->exit_code = data;
+ wake_up_process(child);
+ ret = 0;
+ break;
+ }
+
+/*
+ * make the child exit. Best I can do is send it a sigkill.
+ * perhaps it should be put in the status that it wants to
+ * exit.
+ */
+ case PTRACE_KILL: {
+ ret = 0;
+ if (child->exit_state == EXIT_ZOMBIE) /* already dead */
+ break;
+ child->exit_code = SIGKILL;
+ wake_up_process(child);
+ break;
+ }
+
+ case PTRACE_SINGLESTEP: { /* set the trap flag. */
+ struct pt_regs *regs;
+
+ ret = -EIO;
+ if ((unsigned long) data > _NSIG)
+ break;
+ clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
+ if ((child->ptrace & PT_DTRACE) == 0) {
+ /* Spurious delayed TF traps may occur */
+ child->ptrace |= PT_DTRACE;
+ }
+
+ regs = child->thread.uregs;
+
+ regs->sr |= SR_SSTEP; /* auto-resetting upon exception */
+
+ child->exit_code = data;
+ /* give it a chance to run. */
+ wake_up_process(child);
+ ret = 0;
+ break;
+ }
+
+ case PTRACE_DETACH: /* detach a process that was attached. */
+ ret = ptrace_detach(child, data);
+ break;
+
+ default:
+ ret = ptrace_request(child, request, addr, data);
+ break;
+ }
+out_tsk:
+ put_task_struct(child);
+out:
+ unlock_kernel();
+ return ret;
+}
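+
+/*
+ * Hypothetical user-space sketch of driving the PTRACE_PEEKUSR path
+ * above (not part of this file; PTRACE_PEEKUSER is the C library's
+ * name for the same request, and sr_offset stands for the byte offset
+ * of the SR field within the kernel's struct pt_regs):
+ *
+ *     #include <sys/ptrace.h>
+ *     #include <errno.h>
+ *
+ *     errno = 0;
+ *     long word = ptrace(PTRACE_PEEKUSER, pid, sr_offset, 0);
+ *     if (word == -1 && errno != 0)
+ *         perror("ptrace");
+ *
+ * As checked in the switch above, the offset must be word-aligned and
+ * either below sizeof(struct pt_regs) or within the FPU window.
+ */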
+
+asmlinkage void syscall_trace(void)
+{
+ struct task_struct *tsk = current;
+
+ if (!test_thread_flag(TIF_SYSCALL_TRACE))
+ return;
+ if (!(tsk->ptrace & PT_PTRACED))
+ return;
+
+ ptrace_notify(SIGTRAP | ((current->ptrace & PT_TRACESYSGOOD)
+ ? 0x80 : 0));
+ /*
+ * this isn't the same as continuing with a signal, but it will do
+ * for normal use. strace only continues with a signal if the
+ * stopping signal is not SIGTRAP. -brl
+ */
+ if (tsk->exit_code) {
+ send_sig(tsk->exit_code, tsk, 1);
+ tsk->exit_code = 0;
+ }
+}
+
+/* Called with interrupts disabled */
+asmlinkage void do_single_step(unsigned long long vec, struct pt_regs *regs)
+{
+ /* This is called after a single step exception (DEBUGSS).
+ There is no need to change the PC, as it is a post-execution
+ exception, as entry.S does not do anything to the PC for DEBUGSS.
+ We need to clear the Single Step setting in SR to avoid
+ continually stepping. */
+ local_irq_enable();
+ regs->sr &= ~SR_SSTEP;
+ force_sig(SIGTRAP, current);
+}
+
+/* Called with interrupts disabled */
+asmlinkage void do_software_break_point(unsigned long long vec,
+ struct pt_regs *regs)
+{
+ /* We need to forward step the PC, to counteract the backstep done
+ in signal.c. */
+ local_irq_enable();
+ force_sig(SIGTRAP, current);
+ regs->pc += 4;
+}
+
+/*
+ * Called by kernel/ptrace.c when detaching..
+ *
+ * Make sure single step bits etc are not set.
+ */
+void ptrace_disable(struct task_struct *child)
+{
+ /* nothing to do.. */
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/sh_ksyms.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/rwsem.h>
+#include <linux/module.h>
+#include <linux/smp.h>
+#include <linux/user.h>
+#include <linux/elfcore.h>
+#include <linux/sched.h>
+#include <linux/in6.h>
+#include <linux/interrupt.h>
+#include <linux/smp_lock.h>
+
+#include <asm/semaphore.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/checksum.h>
+#include <asm/io.h>
+#include <asm/delay.h>
+#include <asm/irq.h>
+
+extern void dump_thread(struct pt_regs *, struct user *);
+extern int dump_fpu(struct pt_regs *, elf_fpregset_t *);
+
+#if 0
+/* Not yet - there's no declaration of drive_info anywhere. */
+#if defined(CONFIG_BLK_DEV_IDE) || defined(CONFIG_BLK_DEV_HD) || defined(CONFIG_BLK_DEV_IDE_MODULE) || defined(CONFIG_BLK_DEV_HD_MODULE)
+extern struct drive_info_struct drive_info;
+EXPORT_SYMBOL(drive_info);
+#endif
+#endif
+
+/* platform dependent support */
+EXPORT_SYMBOL(dump_thread);
+EXPORT_SYMBOL(dump_fpu);
+EXPORT_SYMBOL(iounmap);
+EXPORT_SYMBOL(enable_irq);
+EXPORT_SYMBOL(disable_irq);
+EXPORT_SYMBOL(kernel_thread);
+
+/* Networking helper routines. */
+EXPORT_SYMBOL(csum_partial_copy);
+
+EXPORT_SYMBOL(strtok);
+EXPORT_SYMBOL(strpbrk);
+EXPORT_SYMBOL(strstr);
+
+#ifdef CONFIG_VT
+EXPORT_SYMBOL(screen_info);
+#endif
+
+EXPORT_SYMBOL(__down);
+EXPORT_SYMBOL(__down_trylock);
+EXPORT_SYMBOL(__up);
+EXPORT_SYMBOL(__put_user_asm_l);
+EXPORT_SYMBOL(__get_user_asm_l);
+EXPORT_SYMBOL(memcmp);
+EXPORT_SYMBOL(memcpy);
+EXPORT_SYMBOL(memset);
+EXPORT_SYMBOL(memscan);
+EXPORT_SYMBOL(strchr);
+EXPORT_SYMBOL(strlen);
+
+EXPORT_SYMBOL(flush_dcache_page);
+
+/* Ugh. These come in from libgcc.a at link time. */
+
+extern void __sdivsi3(void);
+extern void __muldi3(void);
+extern void __udivsi3(void);
+EXPORT_SYMBOL(__sdivsi3);
+EXPORT_SYMBOL(__muldi3);
+EXPORT_SYMBOL(__udivsi3);
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/signal.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ * Copyright (C) 2004 Richard Curnow
+ *
+ * Started from sh version.
+ *
+ */
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/kernel.h>
+#include <linux/signal.h>
+#include <linux/errno.h>
+#include <linux/wait.h>
+#include <linux/personality.h>
+#include <linux/suspend.h>
+#include <linux/ptrace.h>
+#include <linux/unistd.h>
+#include <linux/stddef.h>
+#include <linux/personality.h>
+#include <asm/ucontext.h>
+#include <asm/uaccess.h>
+#include <asm/pgtable.h>
+
+
+#define REG_RET 9
+#define REG_ARG1 2
+#define REG_ARG2 3
+#define REG_ARG3 4
+#define REG_SP 15
+#define REG_PR 18
+#define REF_REG_RET regs->regs[REG_RET]
+#define REF_REG_SP regs->regs[REG_SP]
+#define DEREF_REG_PR regs->regs[REG_PR]
+
+#define DEBUG_SIG 0
+
+#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+
+asmlinkage int do_signal(struct pt_regs *regs, sigset_t *oldset);
+
+/*
+ * Atomically swap in the new signal mask, and wait for a signal.
+ */
+
+asmlinkage int
+sys_sigsuspend(old_sigset_t mask,
+ unsigned long r3, unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ sigset_t saveset;
+
+ mask &= _BLOCKABLE;
+ spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+ siginitset(&current->blocked, mask);
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ REF_REG_RET = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ regs->pc += 4; /* because sys_sigreturn decrements the pc */
+ if (do_signal(regs, &saveset)) {
+ /* pc now points at signal handler. Need to decrement
+ it because entry.S will increment it. */
+ regs->pc -= 4;
+ return -EINTR;
+ }
+ }
+}
+
+asmlinkage int
+sys_rt_sigsuspend(sigset_t *unewset, size_t sigsetsize,
+ unsigned long r4, unsigned long r5, unsigned long r6,
+ unsigned long r7,
+ struct pt_regs * regs)
+{
+ sigset_t saveset, newset;
+
+ /* XXX: Don't preclude handling different sized sigset_t's. */
+ if (sigsetsize != sizeof(sigset_t))
+ return -EINVAL;
+
+ if (copy_from_user(&newset, unewset, sizeof(newset)))
+ return -EFAULT;
+ sigdelsetmask(&newset, ~_BLOCKABLE);
+ spin_lock_irq(&current->sighand->siglock);
+ saveset = current->blocked;
+ current->blocked = newset;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ REF_REG_RET = -EINTR;
+ while (1) {
+ current->state = TASK_INTERRUPTIBLE;
+ schedule();
+ regs->pc += 4; /* because sys_sigreturn decrements the pc */
+ if (do_signal(regs, &saveset)) {
+ /* pc now points at signal handler. Need to decrement
+ it because entry.S will increment it. */
+ regs->pc -= 4;
+ return -EINTR;
+ }
+ }
+}
+
+asmlinkage int
+sys_sigaction(int sig, const struct old_sigaction __user *act,
+ struct old_sigaction __user *oact)
+{
+ struct k_sigaction new_ka, old_ka;
+ int ret;
+
+ if (act) {
+ old_sigset_t mask;
+ if (verify_area(VERIFY_READ, act, sizeof(*act)) ||
+ __get_user(new_ka.sa.sa_handler, &act->sa_handler) ||
+ __get_user(new_ka.sa.sa_restorer, &act->sa_restorer))
+ return -EFAULT;
+ __get_user(new_ka.sa.sa_flags, &act->sa_flags);
+ __get_user(mask, &act->sa_mask);
+ siginitset(&new_ka.sa.sa_mask, mask);
+ }
+
+ ret = do_sigaction(sig, act ? &new_ka : NULL, oact ? &old_ka : NULL);
+
+ if (!ret && oact) {
+ if (verify_area(VERIFY_WRITE, oact, sizeof(*oact)) ||
+ __put_user(old_ka.sa.sa_handler, &oact->sa_handler) ||
+ __put_user(old_ka.sa.sa_restorer, &oact->sa_restorer))
+ return -EFAULT;
+ __put_user(old_ka.sa.sa_flags, &oact->sa_flags);
+ __put_user(old_ka.sa.sa_mask.sig[0], &oact->sa_mask);
+ }
+
+ return ret;
+}
+
+asmlinkage int
+sys_sigaltstack(const stack_t __user *uss, stack_t __user *uoss,
+ unsigned long r4, unsigned long r5, unsigned long r6,
+ unsigned long r7,
+ struct pt_regs * regs)
+{
+ return do_sigaltstack(uss, uoss, REF_REG_SP);
+}
+
+
+/*
+ * Do a signal return; undo the signal stack.
+ */
+
+struct sigframe
+{
+ struct sigcontext sc;
+ unsigned long extramask[_NSIG_WORDS-1];
+ long long retcode[2];
+};
+
+struct rt_sigframe
+{
+ struct siginfo __user *pinfo;
+ void *puc;
+ struct siginfo info;
+ struct ucontext uc;
+ long long retcode[2];
+};
+
+#ifndef CONFIG_NOFPU_SUPPORT
+static inline int
+restore_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ int err = 0;
+ int fpvalid;
+
+ err |= __get_user (fpvalid, &sc->sc_fpvalid);
+ current->used_math = fpvalid;
+ if (! fpvalid)
+ return err;
+
+ if (current == last_task_used_math) {
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ err |= __copy_from_user(&current->thread.fpu.hard, &sc->sc_fpregs[0],
+ (sizeof(long long) * 32) + (sizeof(int) * 1));
+
+ return err;
+}
+
+static inline int
+setup_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ int err = 0;
+ int fpvalid;
+
+ fpvalid = current->used_math;
+ err |= __put_user(fpvalid, &sc->sc_fpvalid);
+ if (! fpvalid)
+ return err;
+
+ if (current == last_task_used_math) {
+ grab_fpu();
+ fpsave(&current->thread.fpu.hard);
+ release_fpu();
+ last_task_used_math = NULL;
+ regs->sr |= SR_FD;
+ }
+
+ err |= __copy_to_user(&sc->sc_fpregs[0], &current->thread.fpu.hard,
+ (sizeof(long long) * 32) + (sizeof(int) * 1));
+ current->used_math = 0;
+
+ return err;
+}
+#else
+static inline int
+restore_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ return 0;
+}
+static inline int
+setup_sigcontext_fpu(struct pt_regs *regs, struct sigcontext __user *sc)
+{
+ return 0;
+}
+#endif
+
+static int
+restore_sigcontext(struct pt_regs *regs, struct sigcontext __user *sc, long long *r2_p)
+{
+ unsigned int err = 0;
+ unsigned long long current_sr, new_sr;
+#define SR_MASK 0xffff8cfd
+
+#define COPY(x) err |= __get_user(regs->x, &sc->sc_##x)
+
+ COPY(regs[0]); COPY(regs[1]); COPY(regs[2]); COPY(regs[3]);
+ COPY(regs[4]); COPY(regs[5]); COPY(regs[6]); COPY(regs[7]);
+ COPY(regs[8]); COPY(regs[9]); COPY(regs[10]); COPY(regs[11]);
+ COPY(regs[12]); COPY(regs[13]); COPY(regs[14]); COPY(regs[15]);
+ COPY(regs[16]); COPY(regs[17]); COPY(regs[18]); COPY(regs[19]);
+ COPY(regs[20]); COPY(regs[21]); COPY(regs[22]); COPY(regs[23]);
+ COPY(regs[24]); COPY(regs[25]); COPY(regs[26]); COPY(regs[27]);
+ COPY(regs[28]); COPY(regs[29]); COPY(regs[30]); COPY(regs[31]);
+ COPY(regs[32]); COPY(regs[33]); COPY(regs[34]); COPY(regs[35]);
+ COPY(regs[36]); COPY(regs[37]); COPY(regs[38]); COPY(regs[39]);
+ COPY(regs[40]); COPY(regs[41]); COPY(regs[42]); COPY(regs[43]);
+ COPY(regs[44]); COPY(regs[45]); COPY(regs[46]); COPY(regs[47]);
+ COPY(regs[48]); COPY(regs[49]); COPY(regs[50]); COPY(regs[51]);
+ COPY(regs[52]); COPY(regs[53]); COPY(regs[54]); COPY(regs[55]);
+ COPY(regs[56]); COPY(regs[57]); COPY(regs[58]); COPY(regs[59]);
+ COPY(regs[60]); COPY(regs[61]); COPY(regs[62]);
+ COPY(tregs[0]); COPY(tregs[1]); COPY(tregs[2]); COPY(tregs[3]);
+ COPY(tregs[4]); COPY(tregs[5]); COPY(tregs[6]); COPY(tregs[7]);
+
+ /* Prevent the signal handler from manipulating SR in a way that can
+ crash the kernel. i.e. only allow S, Q, M, PR, SZ, FR to be
+ modified */
+ current_sr = regs->sr;
+ err |= __get_user(new_sr, &sc->sc_sr);
+ regs->sr &= SR_MASK;
+ regs->sr |= (new_sr & ~SR_MASK);
+
+ COPY(pc);
+
+#undef COPY
+
+ /* Must do this last in case it sets regs->sr.fd (i.e. after rest of sr
+ * has been restored above.) */
+ err |= restore_sigcontext_fpu(regs, sc);
+
+ regs->syscall_nr = -1; /* disable syscall checks */
+ err |= __get_user(*r2_p, &sc->sc_regs[REG_RET]);
+ return err;
+}
+
+asmlinkage int sys_sigreturn(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ struct sigframe __user *frame = (struct sigframe __user *) (long) REF_REG_SP;
+ sigset_t set;
+ long long ret;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__get_user(set.sig[0], &frame->sc.oldmask)
+ || (_NSIG_WORDS > 1
+ && __copy_from_user(&set.sig[1], &frame->extramask,
+ sizeof(frame->extramask))))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+
+ spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(regs, &frame->sc, &ret))
+ goto badframe;
+ regs->pc -= 4;
+
+ return (int) ret;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+asmlinkage int sys_rt_sigreturn(unsigned long r2, unsigned long r3,
+ unsigned long r4, unsigned long r5,
+ unsigned long r6, unsigned long r7,
+ struct pt_regs * regs)
+{
+ struct rt_sigframe __user *frame = (struct rt_sigframe __user *) (long) REF_REG_SP;
+ sigset_t set;
+ stack_t st;
+ long long ret;
+
+ if (verify_area(VERIFY_READ, frame, sizeof(*frame)))
+ goto badframe;
+
+ if (__copy_from_user(&set, &frame->uc.uc_sigmask, sizeof(set)))
+ goto badframe;
+
+ sigdelsetmask(&set, ~_BLOCKABLE);
+ spin_lock_irq(&current->sighand->siglock);
+ current->blocked = set;
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+
+ if (restore_sigcontext(regs, &frame->uc.uc_mcontext, &ret))
+ goto badframe;
+ regs->pc -= 4;
+
+ if (__copy_from_user(&st, &frame->uc.uc_stack, sizeof(st)))
+ goto badframe;
+ /* It is more difficult to avoid calling this function than to
+ call it and ignore errors. */
+ do_sigaltstack(&st, NULL, REF_REG_SP);
+
+ return (int) ret;
+
+badframe:
+ force_sig(SIGSEGV, current);
+ return 0;
+}
+
+/*
+ * Set up a signal frame.
+ */
+
+static int
+setup_sigcontext(struct sigcontext __user *sc, struct pt_regs *regs,
+ unsigned long mask)
+{
+ int err = 0;
+
+ /* Do this first; otherwise, if this sets sr->fd, that value isn't preserved. */
+ err |= setup_sigcontext_fpu(regs, sc);
+
+#define COPY(x) err |= __put_user(regs->x, &sc->sc_##x)
+
+ COPY(regs[0]); COPY(regs[1]); COPY(regs[2]); COPY(regs[3]);
+ COPY(regs[4]); COPY(regs[5]); COPY(regs[6]); COPY(regs[7]);
+ COPY(regs[8]); COPY(regs[9]); COPY(regs[10]); COPY(regs[11]);
+ COPY(regs[12]); COPY(regs[13]); COPY(regs[14]); COPY(regs[15]);
+ COPY(regs[16]); COPY(regs[17]); COPY(regs[18]); COPY(regs[19]);
+ COPY(regs[20]); COPY(regs[21]); COPY(regs[22]); COPY(regs[23]);
+ COPY(regs[24]); COPY(regs[25]); COPY(regs[26]); COPY(regs[27]);
+ COPY(regs[28]); COPY(regs[29]); COPY(regs[30]); COPY(regs[31]);
+ COPY(regs[32]); COPY(regs[33]); COPY(regs[34]); COPY(regs[35]);
+ COPY(regs[36]); COPY(regs[37]); COPY(regs[38]); COPY(regs[39]);
+ COPY(regs[40]); COPY(regs[41]); COPY(regs[42]); COPY(regs[43]);
+ COPY(regs[44]); COPY(regs[45]); COPY(regs[46]); COPY(regs[47]);
+ COPY(regs[48]); COPY(regs[49]); COPY(regs[50]); COPY(regs[51]);
+ COPY(regs[52]); COPY(regs[53]); COPY(regs[54]); COPY(regs[55]);
+ COPY(regs[56]); COPY(regs[57]); COPY(regs[58]); COPY(regs[59]);
+ COPY(regs[60]); COPY(regs[61]); COPY(regs[62]);
+ COPY(tregs[0]); COPY(tregs[1]); COPY(tregs[2]); COPY(tregs[3]);
+ COPY(tregs[4]); COPY(tregs[5]); COPY(tregs[6]); COPY(tregs[7]);
+ COPY(sr); COPY(pc);
+
+#undef COPY
+
+ err |= __put_user(mask, &sc->oldmask);
+
+ return err;
+}
+
+/*
+ * Determine which stack to use..
+ */
+static inline void __user *
+get_sigframe(struct k_sigaction *ka, unsigned long sp, size_t frame_size)
+{
+ if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! on_sig_stack(sp))
+ sp = current->sas_ss_sp + current->sas_ss_size;
+
+ return (void __user *)((sp - frame_size) & -8ul);
+}
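+
+/*
+ * Worked example of the rounding above (numbers invented): with
+ * sp = 0x7ffffe94 and frame_size = 0x2e0, sp - frame_size = 0x7ffffbb4,
+ * and masking with -8ul clears the low three bits to give 0x7ffffbb0,
+ * an 8-byte aligned frame base below the original stack pointer.
+ */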
+
+void sa_default_restorer(void); /* See comments below */
+void sa_default_rt_restorer(void); /* See comments below */
+
+static void setup_frame(int sig, struct k_sigaction *ka,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct sigframe __user *frame;
+ int err = 0;
+ int signal;
+
+ frame = get_sigframe(ka, regs->regs[REG_SP], sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ signal = current_thread_info()->exec_domain
+ && current_thread_info()->exec_domain->signal_invmap
+ && sig < 32
+ ? current_thread_info()->exec_domain->signal_invmap[sig]
+ : sig;
+
+ err |= setup_sigcontext(&frame->sc, regs, set->sig[0]);
+
+ /* Give up early, as i386 does, in case of error */
+ if (err)
+ goto give_sigsegv;
+
+ if (_NSIG_WORDS > 1) {
+ err |= __copy_to_user(frame->extramask, &set->sig[1],
+ sizeof(frame->extramask));
+ }
+
+ /* Give up early, as i386 does, in case of error */
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ DEREF_REG_PR = (unsigned long) ka->sa.sa_restorer | 0x1;
+
+ /*
+ * On SH5 all edited pointers are subject to NEFF
+ */
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+ } else {
+ /*
+ * Different approach on SH5.
+ * . Endianness independent asm code gets placed in entry.S .
+ * This is limited to four ASM instructions corresponding
+ * to two long longs in size.
+ * . err checking is done on the else branch only
+ * . flush_icache_range() is called upon __put_user() only
+ * . all edited pointers are subject to NEFF
+ * . being code, linker turns ShMedia bit on, always
+ * dereference index -1.
+ */
+ DEREF_REG_PR = (unsigned long) frame->retcode | 0x01;
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+
+ if (__copy_to_user(frame->retcode,
+ (unsigned long long)sa_default_restorer & (~1), 16) != 0)
+ goto give_sigsegv;
+
+ /* Cohere the trampoline with the I-cache. */
+ flush_cache_sigtramp(DEREF_REG_PR-1, DEREF_REG_PR-1+16);
+ }
+
+ /*
+ * Set up registers for signal handler.
+ * All edited pointers are subject to NEFF.
+ */
+ regs->regs[REG_SP] = (unsigned long) frame;
+ regs->regs[REG_SP] = (regs->regs[REG_SP] & NEFF_SIGN) ?
+ (regs->regs[REG_SP] | NEFF_MASK) : regs->regs[REG_SP];
+ regs->regs[REG_ARG1] = signal; /* Arg for signal handler */
+
+ /* FIXME:
+ The glibc profiling support for SH-5 needs to be passed a sigcontext
+ so it can retrieve the PC. At some point during 2003 the glibc
+ support was changed to receive the sigcontext through the 2nd
+ argument, but there are still versions of libc.so in use that use
+ the 3rd argument. Until libc.so is stabilised, pass the sigcontext
+ through both 2nd and 3rd arguments.
+ */
+
+ regs->regs[REG_ARG2] = (unsigned long long)(unsigned long)(signed long)&frame->sc;
+ regs->regs[REG_ARG3] = (unsigned long long)(unsigned long)(signed long)&frame->sc;
+
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->pc = (regs->pc & NEFF_SIGN) ? (regs->pc | NEFF_MASK) : regs->pc;
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ /* Broken %016Lx */
+ printk("SIG deliver (#%d,%s:%d): sp=%p pc=%08Lx%08Lx link=%08Lx%08Lx\n",
+ signal,
+ current->comm, current->pid, frame,
+ regs->pc >> 32, regs->pc & 0xffffffff,
+ DEREF_REG_PR >> 32, DEREF_REG_PR & 0xffffffff);
+#endif
+
+ return;
+
+give_sigsegv:
+ force_sigsegv(sig, current);
+}
+
+static void setup_rt_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
+ sigset_t *set, struct pt_regs *regs)
+{
+ struct rt_sigframe __user *frame;
+ int err = 0;
+ int signal;
+
+ frame = get_sigframe(ka, regs->regs[REG_SP], sizeof(*frame));
+
+ if (!access_ok(VERIFY_WRITE, frame, sizeof(*frame)))
+ goto give_sigsegv;
+
+ signal = current_thread_info()->exec_domain
+ && current_thread_info()->exec_domain->signal_invmap
+ && sig < 32
+ ? current_thread_info()->exec_domain->signal_invmap[sig]
+ : sig;
+
+ err |= __put_user(&frame->info, &frame->pinfo);
+ err |= __put_user(&frame->uc, &frame->puc);
+ err |= copy_siginfo_to_user(&frame->info, info);
+
+ /* Give up early, as i386 does, in case of error */
+ if (err)
+ goto give_sigsegv;
+
+ /* Create the ucontext. */
+ err |= __put_user(0, &frame->uc.uc_flags);
+ err |= __put_user(0, &frame->uc.uc_link);
+ err |= __put_user((void *)current->sas_ss_sp,
+ &frame->uc.uc_stack.ss_sp);
+ err |= __put_user(sas_ss_flags(regs->regs[REG_SP]),
+ &frame->uc.uc_stack.ss_flags);
+ err |= __put_user(current->sas_ss_size, &frame->uc.uc_stack.ss_size);
+ err |= setup_sigcontext(&frame->uc.uc_mcontext,
+ regs, set->sig[0]);
+ err |= __copy_to_user(&frame->uc.uc_sigmask, set, sizeof(*set));
+
+ /* Give up early, as i386 does, in case of error */
+ if (err)
+ goto give_sigsegv;
+
+ /* Set up to return from userspace. If provided, use a stub
+ already in userspace. */
+ if (ka->sa.sa_flags & SA_RESTORER) {
+ DEREF_REG_PR = (unsigned long) ka->sa.sa_restorer | 0x1;
+
+ /*
+ * On SH5 all edited pointers are subject to NEFF
+ */
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+ } else {
+ /*
+ * Different approach on SH5.
+ * . Endianness independent asm code gets placed in entry.S .
+ * This is limited to four ASM instructions corresponding
+ * to two long longs in size.
+ * . err checking is done on the else branch only
+ * . flush_icache_range() is called upon __put_user() only
+ * . all edited pointers are subject to NEFF
+ * . being code, linker turns ShMedia bit on, always
+ * dereference index -1.
+ */
+
+ DEREF_REG_PR = (unsigned long) frame->retcode | 0x01;
+ DEREF_REG_PR = (DEREF_REG_PR & NEFF_SIGN) ?
+ (DEREF_REG_PR | NEFF_MASK) : DEREF_REG_PR;
+
+ if (__copy_to_user(frame->retcode,
+ (unsigned long long)sa_default_rt_restorer & (~1), 16) != 0)
+ goto give_sigsegv;
+
+ flush_icache_range(DEREF_REG_PR-1, DEREF_REG_PR-1+15);
+ }
+
+ /*
+ * Set up registers for signal handler.
+ * All edited pointers are subject to NEFF.
+ */
+ regs->regs[REG_SP] = (unsigned long) frame;
+ regs->regs[REG_SP] = (regs->regs[REG_SP] & NEFF_SIGN) ?
+ (regs->regs[REG_SP] | NEFF_MASK) : regs->regs[REG_SP];
+ regs->regs[REG_ARG1] = signal; /* Arg for signal handler */
+ regs->regs[REG_ARG2] = (unsigned long long)(unsigned long)(signed long)&frame->info;
+ regs->regs[REG_ARG3] = (unsigned long long)(unsigned long)(signed long)&frame->uc.uc_mcontext;
+ regs->pc = (unsigned long) ka->sa.sa_handler;
+ regs->pc = (regs->pc & NEFF_SIGN) ? (regs->pc | NEFF_MASK) : regs->pc;
+
+ set_fs(USER_DS);
+
+#if DEBUG_SIG
+ /* Broken %016Lx */
+ printk("SIG deliver (#%d,%s:%d): sp=%p pc=%08Lx%08Lx link=%08Lx%08Lx\n",
+ signal,
+ current->comm, current->pid, frame,
+ regs->pc >> 32, regs->pc & 0xffffffff,
+ DEREF_REG_PR >> 32, DEREF_REG_PR & 0xffffffff);
+#endif
+
+ return;
+
+give_sigsegv:
+ force_sigsegv(sig, current);
+}
+
+/*
+ * OK, we're invoking a handler
+ */
+
+static void
+handle_signal(unsigned long sig, siginfo_t *info, sigset_t *oldset,
+ struct pt_regs * regs)
+{
+ struct k_sigaction *ka = &current->sighand->action[sig-1];
+
+ /* Are we from a system call? */
+ if (regs->syscall_nr >= 0) {
+ /* If so, check system call restarting.. */
+ switch (regs->regs[REG_RET]) {
+ case -ERESTARTNOHAND:
+ regs->regs[REG_RET] = -EINTR;
+ break;
+
+ case -ERESTARTSYS:
+ if (!(ka->sa.sa_flags & SA_RESTART)) {
+ regs->regs[REG_RET] = -EINTR;
+ break;
+ }
+ /* fallthrough */
+ case -ERESTARTNOINTR:
+ /* Decode syscall # */
+ regs->regs[REG_RET] = regs->syscall_nr;
+ regs->pc -= 4;
+ }
+ }
+
+ /* Set up the stack frame */
+ if (ka->sa.sa_flags & SA_SIGINFO)
+ setup_rt_frame(sig, ka, info, oldset, regs);
+ else
+ setup_frame(sig, ka, oldset, regs);
+
+ if (ka->sa.sa_flags & SA_ONESHOT)
+ ka->sa.sa_handler = SIG_DFL;
+
+ if (!(ka->sa.sa_flags & SA_NODEFER)) {
+ spin_lock_irq(&current->sighand->siglock);
+ sigorsets(&current->blocked, &current->blocked, &ka->sa.sa_mask);
+ sigaddset(&current->blocked, sig);
+ recalc_sigpending();
+ spin_unlock_irq(&current->sighand->siglock);
+ }
+}
+
+/*
+ * Note that 'init' is a special process: it doesn't get signals it doesn't
+ * want to handle. Thus you cannot kill init even with a SIGKILL even by
+ * mistake.
+ *
+ * Note that we go through the signals twice: once to check the signals that
+ * the kernel can handle, and then we build all the user-level signal handling
+ * stack-frames in one go after that.
+ */
+int do_signal(struct pt_regs *regs, sigset_t *oldset)
+{
+ siginfo_t info;
+ int signr;
+
+ /*
+ * We want the common case to go fast, which
+ * is why we may in certain cases get here from
+ * kernel mode. Just return without doing anything
+ * if so.
+ */
+ if (!user_mode(regs))
+ return 1;
+
+ if (current->flags & PF_FREEZE) {
+ refrigerator(0);
+ goto no_signal;
+ }
+
+ if (!oldset)
+ oldset = &current->blocked;
+
+ signr = get_signal_to_deliver(&info, regs, 0);
+
+ if (signr > 0) {
+ /* Whee! Actually deliver the signal. */
+ handle_signal(signr, &info, oldset, regs);
+ return 1;
+ }
+
+no_signal:
+ /* Did we come from a system call? */
+ if (regs->syscall_nr >= 0) {
+ /* Restart the system call - no handlers present */
+ if (regs->regs[REG_RET] == -ERESTARTNOHAND ||
+ regs->regs[REG_RET] == -ERESTARTSYS ||
+ regs->regs[REG_RET] == -ERESTARTNOINTR) {
+ /* Decode Syscall # */
+ regs->regs[REG_RET] = regs->syscall_nr;
+ regs->pc -= 4;
+ }
+ }
+ return 0;
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/kernel/time.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003 Richard Curnow
+ *
+ * Original TMU/RTC code taken from sh version.
+ * Copyright (C) 1999 Tetsuya Okada & Niibe Yutaka
+ * Some code taken from i386 version.
+ * Copyright (C) 1991, 1992, 1995 Linus Torvalds
+ */
+
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/time.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/profile.h>
+#include <linux/smp.h>
+
+#include <asm/registers.h> /* required by inline __asm__ stmt. */
+
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/delay.h>
+
+#include <linux/timex.h>
+#include <linux/irq.h>
+#include <asm/hardware.h>
+
+#define TMU_TOCR_INIT 0x00
+#define TMU0_TCR_INIT 0x0020
+#define TMU_TSTR_INIT 1
+
+/* RCR1 Bits */
+#define RCR1_CF 0x80 /* Carry Flag */
+#define RCR1_CIE 0x10 /* Carry Interrupt Enable */
+#define RCR1_AIE 0x08 /* Alarm Interrupt Enable */
+#define RCR1_AF 0x01 /* Alarm Flag */
+
+/* RCR2 Bits */
+#define RCR2_PEF 0x80 /* PEriodic interrupt Flag */
+#define RCR2_PESMASK 0x70 /* Periodic interrupt Set */
+#define RCR2_RTCEN 0x08 /* ENable RTC */
+#define RCR2_ADJ 0x04 /* ADJustment (30-second) */
+#define RCR2_RESET 0x02 /* Reset bit */
+#define RCR2_START 0x01 /* Start bit */
+
+/* Clock, Power and Reset Controller */
+#define CPRC_BLOCK_OFF 0x01010000
+#define CPRC_BASE PHYS_PERIPHERAL_BLOCK + CPRC_BLOCK_OFF
+
+#define FRQCR (cprc_base+0x0)
+#define WTCSR (cprc_base+0x0018)
+#define STBCR (cprc_base+0x0030)
+
+/* Time Management Unit */
+#define TMU_BLOCK_OFF 0x01020000
+#define TMU_BASE PHYS_PERIPHERAL_BLOCK + TMU_BLOCK_OFF
+#define TMU0_BASE tmu_base + 0x8 + (0xc * 0x0)
+#define TMU1_BASE tmu_base + 0x8 + (0xc * 0x1)
+#define TMU2_BASE tmu_base + 0x8 + (0xc * 0x2)
+
+#define TMU_TOCR tmu_base+0x0 /* Byte access */
+#define TMU_TSTR tmu_base+0x4 /* Byte access */
+
+#define TMU0_TCOR TMU0_BASE+0x0 /* Long access */
+#define TMU0_TCNT TMU0_BASE+0x4 /* Long access */
+#define TMU0_TCR TMU0_BASE+0x8 /* Word access */
+
+/* Real Time Clock */
+#define RTC_BLOCK_OFF 0x01040000
+#define RTC_BASE PHYS_PERIPHERAL_BLOCK + RTC_BLOCK_OFF
+
+#define R64CNT rtc_base+0x00
+#define RSECCNT rtc_base+0x04
+#define RMINCNT rtc_base+0x08
+#define RHRCNT rtc_base+0x0c
+#define RWKCNT rtc_base+0x10
+#define RDAYCNT rtc_base+0x14
+#define RMONCNT rtc_base+0x18
+#define RYRCNT rtc_base+0x1c /* 16bit */
+#define RSECAR rtc_base+0x20
+#define RMINAR rtc_base+0x24
+#define RHRAR rtc_base+0x28
+#define RWKAR rtc_base+0x2c
+#define RDAYAR rtc_base+0x30
+#define RMONAR rtc_base+0x34
+#define RCR1 rtc_base+0x38
+#define RCR2 rtc_base+0x3c
+
+#ifndef BCD_TO_BIN
+#define BCD_TO_BIN(val) ((val)=((val)&15) + ((val)>>4)*10)
+#endif
+
+#ifndef BIN_TO_BCD
+#define BIN_TO_BCD(val) ((val)=(((val)/10)<<4) + (val)%10)
+#endif
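+
+/*
+ * Worked example: the RTC stores 59 as BCD 0x59, so
+ * BCD_TO_BIN(0x59) = (0x59 & 15) + (0x59 >> 4) * 10 = 9 + 50 = 59,
+ * and BIN_TO_BCD(59) = ((59 / 10) << 4) + 59 % 10 = 0x59.
+ */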
+
+#define TICK_SIZE (tick_nsec / 1000)
+
+extern unsigned long wall_jiffies;
+
+u64 jiffies_64 = INITIAL_JIFFIES;
+
+static unsigned long tmu_base, rtc_base;
+unsigned long cprc_base;
+
+/* Variables to allow interpolation of time of day to resolution better than a
+ * jiffy. */
+
+/* This is effectively protected by xtime_lock */
+static unsigned long ctc_last_interrupt;
+static unsigned long long usecs_per_jiffy = 1000000/HZ; /* Approximation */
+
+#define CTC_JIFFY_SCALE_SHIFT 40
+
+/* 2**CTC_JIFFY_SCALE_SHIFT / ctc_ticks_per_jiffy */
+static unsigned long long scaled_recip_ctc_ticks_per_jiffy;
+
+/* Estimate number of microseconds that have elapsed since the last timer tick,
+ by scaling the delta that has occurred in the CTC register.
+
+ WARNING WARNING WARNING : This algorithm relies on the CTC decrementing at
+ the CPU clock rate. If the CPU sleeps, the CTC stops counting. Bear this
+ in mind if enabling SLEEP_WORKS in process.c. In that case, this algorithm
+ probably needs to use TMU.TCNT0 instead. This will work even if the CPU is
+ sleeping, though will be coarser.
+
+ FIXME : What if usecs_per_tick is moving around too much, e.g. if an adjtime
+ is running or if the freq or tick arguments of adjtimex are modified after
+ we have calibrated the scaling factor? This will result in either a jump at
+ the end of a tick period, or a wrap backwards at the start of the next one,
+ if the application is reading the time of day often enough. I think we
+ ought to do better than this. For this reason, usecs_per_jiffy is left
+ separated out in the calculation below. This allows some future hook into
+ the adjtime-related stuff in kernel/timer.c to remove this hazard.
+
+*/
+
+static unsigned long usecs_since_tick(void)
+{
+ unsigned long long current_ctc;
+ long ctc_ticks_since_interrupt;
+ unsigned long long ull_ctc_ticks_since_interrupt;
+ unsigned long result;
+
+ unsigned long long mul1_out;
+ unsigned long long mul1_out_high;
+ unsigned long long mul2_out_low, mul2_out_high;
+
+ /* Read CTC register */
+ asm ("getcon cr62, %0" : "=r" (current_ctc));
+ /* Note, the CTC counts down on each CPU clock, not up.
+ Note(2), use long type to get correct wraparound arithmetic when
+ the counter crosses zero. */
+ ctc_ticks_since_interrupt = (long) ctc_last_interrupt - (long) current_ctc;
+ ull_ctc_ticks_since_interrupt = (unsigned long long) ctc_ticks_since_interrupt;
+
+ /* Inline assembly to do 32x32x32->64 multiplier */
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul1_out) :
+ "r" (ull_ctc_ticks_since_interrupt), "r" (usecs_per_jiffy));
+
+ mul1_out_high = mul1_out >> 32;
+
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul2_out_low) :
+ "r" (mul1_out), "r" (scaled_recip_ctc_ticks_per_jiffy));
+
+#if 1
+ asm volatile ("mulu.l %1, %2, %0" :
+ "=r" (mul2_out_high) :
+ "r" (mul1_out_high), "r" (scaled_recip_ctc_ticks_per_jiffy));
+#endif
+
+ result = (unsigned long) (((mul2_out_high << 32) + mul2_out_low) >> CTC_JIFFY_SCALE_SHIFT);
+
+ return result;
+}
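+
+/*
+ * A host-side sketch of the same fixed-point computation (illustrative
+ * only, not part of the kernel build), done with a single 64-bit
+ * multiply chain instead of the split 32x32->64 mulu.l sequence that
+ * SHmedia requires.  Assumed numbers: 400 MHz CPU clock, HZ = 100.
+ *
+ *     uint64_t ticks_per_jiffy = 400000000ULL / 100;   // 4000000
+ *     uint64_t usecs_per_jiffy = 1000000 / 100;        // 10000
+ *     uint64_t recip = (1ULL << 40) / ticks_per_jiffy; // ~274877
+ *     uint64_t delta = 2000000;                        // CTC ticks
+ *     uint64_t usec = (delta * usecs_per_jiffy * recip) >> 40;
+ *     // usec == 4999, i.e. ~5 ms, as expected for 2e6 ticks at 400 MHz
+ */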
+
+void do_gettimeofday(struct timeval *tv)
+{
+ unsigned long flags;
+ unsigned long seq;
+ unsigned long usec, sec;
+
+ do {
+ seq = read_seqbegin_irqsave(&xtime_lock, flags);
+ usec = usecs_since_tick();
+ {
+ unsigned long lost = jiffies - wall_jiffies;
+
+ if (lost)
+ usec += lost * (1000000 / HZ);
+ }
+
+ sec = xtime.tv_sec;
+ usec += xtime.tv_nsec / 1000;
+ } while (read_seqretry_irqrestore(&xtime_lock, seq, flags));
+
+ while (usec >= 1000000) {
+ usec -= 1000000;
+ sec++;
+ }
+
+ tv->tv_sec = sec;
+ tv->tv_usec = usec;
+}
+
+int do_settimeofday(struct timespec *tv)
+{
+ time_t wtm_sec, sec = tv->tv_sec;
+ long wtm_nsec, nsec = tv->tv_nsec;
+
+ if ((unsigned long)tv->tv_nsec >= NSEC_PER_SEC)
+ return -EINVAL;
+
+ write_seqlock_irq(&xtime_lock);
+ /*
+ * This is revolting. We need to set "xtime" correctly. However, the
+ * value in this location is the value at the most recent update of
+ * wall time. Discover what correction gettimeofday() would have
+ * made, and then undo it!
+ */
+ nsec -= 1000 * (usecs_since_tick() +
+ (jiffies - wall_jiffies) * (1000000 / HZ));
+
+ wtm_sec = wall_to_monotonic.tv_sec + (xtime.tv_sec - sec);
+ wtm_nsec = wall_to_monotonic.tv_nsec + (xtime.tv_nsec - nsec);
+
+ set_normalized_timespec(&xtime, sec, nsec);
+ set_normalized_timespec(&wall_to_monotonic, wtm_sec, wtm_nsec);
+
+ time_adjust = 0; /* stop active adjtime() */
+ time_status |= STA_UNSYNC;
+ time_maxerror = NTP_PHASE_LIMIT;
+ time_esterror = NTP_PHASE_LIMIT;
+ write_sequnlock_irq(&xtime_lock);
+ clock_was_set();
+
+ return 0;
+}
+
+static int set_rtc_time(unsigned long nowtime)
+{
+ int retval = 0;
+ int real_seconds, real_minutes, cmos_minutes;
+
+ ctrl_outb(RCR2_RESET, RCR2); /* Reset pre-scaler & stop RTC */
+
+ cmos_minutes = ctrl_inb(RMINCNT);
+ BCD_TO_BIN(cmos_minutes);
+
+ /*
+ * since we're only adjusting minutes and seconds,
+ * don't interfere with hour overflow. This avoids
+ * messing with unknown time zones but requires your
+ * RTC not to be off by more than 15 minutes
+ */
+ real_seconds = nowtime % 60;
+ real_minutes = nowtime / 60;
+ if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
+ real_minutes += 30; /* correct for half hour time zone */
+ real_minutes %= 60;
+
+ if (abs(real_minutes - cmos_minutes) < 30) {
+ BIN_TO_BCD(real_seconds);
+ BIN_TO_BCD(real_minutes);
+ ctrl_outb(real_seconds, RSECCNT);
+ ctrl_outb(real_minutes, RMINCNT);
+ } else {
+ printk(KERN_WARNING
+ "set_rtc_time: can't update from %d to %d\n",
+ cmos_minutes, real_minutes);
+ retval = -1;
+ }
+
+ ctrl_outb(RCR2_RTCEN|RCR2_START, RCR2); /* Start RTC */
+
+ return retval;
+}
+
+/* last time the RTC clock got updated */
+static long last_rtc_update = 0;
+
+/*
+ * timer_interrupt() needs to keep up the real-time clock,
+ * as well as call the "do_timer()" routine every clocktick
+ */
+static inline void do_timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long long current_ctc;
+ asm ("getcon cr62, %0" : "=r" (current_ctc));
+ ctc_last_interrupt = (unsigned long) current_ctc;
+
+ do_timer(regs);
+#ifndef CONFIG_SMP
+ update_process_times(user_mode(regs));
+#endif
+ profile_tick(CPU_PROFILING, regs);
+
+#ifdef CONFIG_HEARTBEAT
+ {
+ extern void heartbeat(void);
+
+ heartbeat();
+ }
+#endif
+
+ /*
+ * If we have an externally synchronized Linux clock, then update
+ * RTC clock accordingly every ~11 minutes. Set_rtc_mmss() has to be
+ * called as close as possible to 500 ms before the new second starts.
+ */
+ if ((time_status & STA_UNSYNC) == 0 &&
+ xtime.tv_sec > last_rtc_update + 660 &&
+ (xtime.tv_nsec / 1000) >= 500000 - ((unsigned) TICK_SIZE) / 2 &&
+ (xtime.tv_nsec / 1000) <= 500000 + ((unsigned) TICK_SIZE) / 2) {
+ if (set_rtc_time(xtime.tv_sec) == 0)
+ last_rtc_update = xtime.tv_sec;
+ else
+ last_rtc_update = xtime.tv_sec - 600; /* do it again in 60 s */
+ }
+}
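+
+/*
+ * Worked numbers for the update window above, assuming HZ = 100:
+ * TICK_SIZE = tick_nsec / 1000 is ~10000 usec, so the test accepts
+ * 495000..505000 usec into the current second, i.e. the write lands
+ * roughly 500 ms before the next second as set_rtc_time wants, and
+ * the 660 s spacing gives the "~11 minutes" cadence.
+ */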
+
+/*
+ * This is the same as the above, except we _also_ save the current
+ * Time Stamp Counter value at the time of the timer interrupt, so that
+ * we later on can estimate the time of day more exactly.
+ */
+static irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long timer_status;
+
+ /* Clear UNF bit */
+ timer_status = ctrl_inw(TMU0_TCR);
+ timer_status &= ~0x100;
+ ctrl_outw(timer_status, TMU0_TCR);
+
+ /*
+ * Here we are in the timer irq handler. We just have irqs locally
+ * disabled but we don't know if the timer_bh is running on the other
+ * CPU. We need to avoid an SMP race with it. NOTE: we don't need
+ * the irq version of write_lock because, as just said, we have irqs
+ * locally disabled. -arca
+ */
+ write_lock(&xtime_lock);
+ do_timer_interrupt(irq, NULL, regs);
+ write_unlock(&xtime_lock);
+
+ return IRQ_HANDLED;
+}
+
+static unsigned long get_rtc_time(void)
+{
+ unsigned int sec, min, hr, wk, day, mon, yr, yr100;
+
+ again:
+ do {
+ ctrl_outb(0, RCR1); /* Clear CF-bit */
+ sec = ctrl_inb(RSECCNT);
+ min = ctrl_inb(RMINCNT);
+ hr = ctrl_inb(RHRCNT);
+ wk = ctrl_inb(RWKCNT);
+ day = ctrl_inb(RDAYCNT);
+ mon = ctrl_inb(RMONCNT);
+ yr = ctrl_inw(RYRCNT);
+ yr100 = (yr >> 8);
+ yr &= 0xff;
+ } while ((ctrl_inb(RCR1) & RCR1_CF) != 0);
+
+ BCD_TO_BIN(yr100);
+ BCD_TO_BIN(yr);
+ BCD_TO_BIN(mon);
+ BCD_TO_BIN(day);
+ BCD_TO_BIN(hr);
+ BCD_TO_BIN(min);
+ BCD_TO_BIN(sec);
+
+ if (yr > 99 || mon < 1 || mon > 12 || day > 31 || day < 1 ||
+ hr > 23 || min > 59 || sec > 59) {
+ printk(KERN_ERR
+ "SH RTC: invalid value, resetting to 1 Jan 2000\n");
+ ctrl_outb(RCR2_RESET, RCR2); /* Reset & Stop */
+ ctrl_outb(0, RSECCNT);
+ ctrl_outb(0, RMINCNT);
+ ctrl_outb(0, RHRCNT);
+ ctrl_outb(6, RWKCNT);
+ ctrl_outb(1, RDAYCNT);
+ ctrl_outb(1, RMONCNT);
+ ctrl_outw(0x2000, RYRCNT);
+ ctrl_outb(RCR2_RTCEN|RCR2_START, RCR2); /* Start */
+ goto again;
+ }
+
+ return mktime(yr100 * 100 + yr, mon, day, hr, min, sec);
+}
+
+static __init unsigned int get_cpu_hz(void)
+{
+ unsigned int count;
+ unsigned long __dummy;
+ unsigned long ctc_val_init, ctc_val;
+
+ /*
+ ** Regardless of the toolchain, force the compiler to use the
+ ** arbitrary register r3 as a clock tick counter.
+ ** NOTE: r3 must be in accordance with rtc_interrupt()
+ */
+ register unsigned long long __rtc_irq_flag __asm__ ("r3");
+
+ sti();
+ do {} while (ctrl_inb(R64CNT) != 0);
+ ctrl_outb(RCR1_CIE, RCR1); /* Enable carry interrupt */
+
+ /*
+ * r3 is arbitrary. CDC does not support "=z".
+ */
+ ctc_val_init = 0xffffffff;
+ ctc_val = ctc_val_init;
+
+ asm volatile("gettr tr0, %1\n\t"
+ "putcon %0, " __CTC "\n\t"
+ "and %2, r63, %2\n\t"
+ "pta $+4, tr0\n\t"
+ "beq/l %2, r63, tr0\n\t"
+ "ptabs %1, tr0\n\t"
+ "getcon " __CTC ", %0\n\t"
+ : "=r"(ctc_val), "=r" (__dummy), "=r" (__rtc_irq_flag)
+ : "0" (0));
+ cli();
+ /*
+ * SH-3:
+ * CPU clock = 4 stages * loop
+ * tst rm,rm if id ex
+ * bt/s 1b if id ex
+ * add #1,rd if id ex
+ * (if) pipe line stole
+ * tst rm,rm if id ex
+ * ....
+ *
+ *
+ * SH-4:
+ * CPU clock = 6 stages * loop
+ * I don't know why.
+ * ....
+ *
+ * SH-5:
+ * Use CTC register to count. This approach returns the right value
+ * even if the I-cache is disabled (e.g. whilst debugging.)
+ *
+ */
+
+ count = ctc_val_init - ctc_val; /* CTC counts down */
+
+#if defined (CONFIG_SH_SIMULATOR)
+ /*
+ * Let's pretend we are a 5MHz SH-5 to avoid too small
+ * a timer interval, and to keep the delay
+ * calibration within a reasonable time.
+ */
+ return 5000000;
+#else
+ /*
+ * This scales the cycle count by the ratio between a complete
+ * R64CNT wrap-around (128) and the point at which the CUI
+ * interrupt is raised (64), hence count * 2.
+ */
+ return count*2;
+#endif
+}
+
+static irqreturn_t rtc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ ctrl_outb(0, RCR1); /* Disable Carry Interrupts */
+ regs->regs[3] = 1; /* Using r3 */
+
+ return IRQ_HANDLED;
+}
+
+static struct irqaction irq0 = { timer_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "timer", NULL, NULL};
+static struct irqaction irq1 = { rtc_interrupt, SA_INTERRUPT, CPU_MASK_NONE, "rtc", NULL, NULL};
+
+void __init time_init(void)
+{
+ unsigned int cpu_clock, master_clock, bus_clock, module_clock;
+ unsigned long interval;
+ unsigned long frqcr, ifc, pfc;
+ static int ifc_table[] = { 2, 4, 6, 8, 10, 12, 16, 24 };
+#define bfc_table ifc_table /* Same */
+#define pfc_table ifc_table /* Same */
+
+ tmu_base = onchip_remap(TMU_BASE, 1024, "TMU");
+ if (!tmu_base) {
+ panic("Unable to remap TMU\n");
+ }
+
+ rtc_base = onchip_remap(RTC_BASE, 1024, "RTC");
+ if (!rtc_base) {
+ panic("Unable to remap RTC\n");
+ }
+
+ cprc_base = onchip_remap(CPRC_BASE, 1024, "CPRC");
+ if (!cprc_base) {
+ panic("Unable to remap CPRC\n");
+ }
+
+ xtime.tv_sec = get_rtc_time();
+ xtime.tv_nsec = 0;
+
+ setup_irq(TIMER_IRQ, &irq0);
+ setup_irq(RTC_IRQ, &irq1);
+
+ /* Check how fast it is.. */
+ cpu_clock = get_cpu_hz();
+
+ /* Note careful order of operations to maintain reasonable precision and avoid overflow. */
+ scaled_recip_ctc_ticks_per_jiffy = ((1ULL << CTC_JIFFY_SCALE_SHIFT) / (unsigned long long)(cpu_clock / HZ));
+
+ disable_irq(RTC_IRQ);
+
+ printk("CPU clock: %d.%02dMHz\n",
+ (cpu_clock / 1000000), (cpu_clock % 1000000)/10000);
+ {
+ unsigned short bfc;
+ frqcr = ctrl_inl(FRQCR);
+ ifc = ifc_table[(frqcr>> 6) & 0x0007];
+ bfc = bfc_table[(frqcr>> 3) & 0x0007];
+ pfc = pfc_table[(frqcr>> 12) & 0x0007];
+ master_clock = cpu_clock * ifc;
+ bus_clock = master_clock/bfc;
+ }
+
+ printk("Bus clock: %d.%02dMHz\n",
+ (bus_clock/1000000), (bus_clock % 1000000)/10000);
+ module_clock = master_clock/pfc;
+ printk("Module clock: %d.%02dMHz\n",
+ (module_clock/1000000), (module_clock % 1000000)/10000);
+ interval = (module_clock/(HZ*4));
+
+ printk("Interval = %ld\n", interval);
+
+ current_cpu_data.cpu_clock = cpu_clock;
+ current_cpu_data.master_clock = master_clock;
+ current_cpu_data.bus_clock = bus_clock;
+ current_cpu_data.module_clock = module_clock;
+
+ /* Start TMU0 */
+ ctrl_outb(TMU_TOCR_INIT, TMU_TOCR);
+ ctrl_outw(TMU0_TCR_INIT, TMU0_TCR);
+ ctrl_outl(interval, TMU0_TCOR);
+ ctrl_outl(interval, TMU0_TCNT);
+ ctrl_outb(TMU_TSTR_INIT, TMU_TSTR);
+}
+
+void enter_deep_standby(void)
+{
+ /* Disable watchdog timer */
+ ctrl_outl(0xa5000000, WTCSR);
+ /* Configure deep standby on sleep */
+ ctrl_outl(0x03, STBCR);
+
+#ifdef CONFIG_SH_ALPHANUMERIC
+ {
+ extern void mach_alphanum(int position, unsigned char value);
+ extern void mach_alphanum_brightness(int setting);
+ char halted[] = "Halted. ";
+ int i;
+ mach_alphanum_brightness(6); /* dimmest setting above off */
+ for (i=0; i<8; i++) {
+ mach_alphanum(i, halted[i]);
+ }
+ asm __volatile__ ("synco");
+ }
+#endif
+
+ asm __volatile__ ("sleep");
+ asm __volatile__ ("synci");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ asm __volatile__ ("nop");
+ panic("Unexpected wakeup!\n");
+}
+
+/*
+ * Scheduler clock - returns current time in nanosec units.
+ */
+unsigned long long sched_clock(void)
+{
+ return (unsigned long long)jiffies * (1000000000 / HZ);
+}
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh5/vmlinux.lds.S
+ *
+ * ld script to make ST50 Linux kernel
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * benedict.gaster@superh.com: 2nd May 2002
+ * Add definition of empty_zero_page to be the first page of kernel image.
+ *
+ * benedict.gaster@superh.com: 3rd May 2002
+ * Added support for ramdisk, removing statically linked romfs at the same time.
+ *
+ * lethal@linux-sh.org: 9th May 2003
+ * Kill off GLOBAL_NAME() usage and other CDC-isms.
+ *
+ * lethal@linux-sh.org: 19th May 2003
+ * Remove support for ancient toolchains.
+ */
+
+#include <linux/config.h>
+#include <asm/page.h>
+#include <asm/cache.h>
+#include <asm/processor.h>
+#include <asm/thread_info.h>
+
+#define LOAD_OFFSET CONFIG_CACHED_MEMORY_OFFSET
+#include <asm-generic/vmlinux.lds.h>
+
+#ifdef NOTDEF
+#ifdef CONFIG_LITTLE_ENDIAN
+OUTPUT_FORMAT("elf32-sh64l-linux", "elf32-sh64l-linux", "elf32-sh64l-linux")
+#else
+OUTPUT_FORMAT("elf32-sh64", "elf32-sh64", "elf32-sh64")
+#endif
+#endif
+
+OUTPUT_ARCH(sh:sh5)
+
+#define C_PHYS(x) AT (ADDR(x) - LOAD_OFFSET)
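+/* C_PHYS makes each section's load address (LMA) its link address minus
+   the cached-mapping offset. Illustrative value only: if LOAD_OFFSET were
+   0x80000000, a section linked at 0x80101000 would be loaded at physical
+   0x00101000. */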
+
+ENTRY(__start)
+SECTIONS
+{
+ . = CONFIG_CACHED_MEMORY_OFFSET + CONFIG_MEMORY_START + PAGE_SIZE;
+ _text = .; /* Text and read-only data */
+ text = .; /* Text and read-only data */
+
+ .empty_zero_page : C_PHYS(.empty_zero_page) {
+ *(.empty_zero_page)
+ } = 0
+
+ .text : C_PHYS(.text) {
+ *(.text)
+ *(.text64)
+ *(.text..SHmedia32)
+ SCHED_TEXT
+ LOCK_TEXT
+ *(.fixup)
+ *(.gnu.warning)
+#ifdef CONFIG_LITTLE_ENDIAN
+ } = 0x6ff0fff0
+#else
+ } = 0xf0fff06f
+#endif
+
+ /* We likely want __ex_table to be Cache Line aligned */
+ . = ALIGN(L1_CACHE_BYTES); /* Exception table */
+ __start___ex_table = .;
+ __ex_table : C_PHYS(__ex_table) { *(__ex_table) }
+ __stop___ex_table = .;
+
+ RODATA
+
+ _etext = .; /* End of text section */
+
+ .data : C_PHYS(.data) { /* Data */
+ *(.data)
+ CONSTRUCTORS
+ }
+
+ . = ALIGN(PAGE_SIZE);
+ .data.page_aligned : C_PHYS(.data.page_aligned) { *(.data.page_aligned) }
+
+ . = ALIGN(L1_CACHE_BYTES);
+ __per_cpu_start = .;
+ .data.percpu : C_PHYS(.data.percpu) { *(.data.percpu) }
+ __per_cpu_end = . ;
+ .data.cacheline_aligned : C_PHYS(.data.cacheline_aligned) { *(.data.cacheline_aligned) }
+
+ _edata = .; /* End of data section */
+
+ . = ALIGN(THREAD_SIZE); /* init_task: structure size aligned */
+ .data.init_task : C_PHYS(.data.init_task) { *(.data.init_task) }
+
+ . = ALIGN(PAGE_SIZE); /* Init code and data */
+ __init_begin = .;
+ _sinittext = .;
+ .init.text : C_PHYS(.init.text) { *(.init.text) }
+ _einittext = .;
+ .init.data : C_PHYS(.init.data) { *(.init.data) }
+ . = ALIGN(L1_CACHE_BYTES); /* Better if Cache Line aligned */
+ __setup_start = .;
+ .init.setup : C_PHYS(.init.setup) { *(.init.setup) }
+ __setup_end = .;
+ __initcall_start = .;
+ .initcall.init : C_PHYS(.initcall.init) {
+ *(.initcall1.init)
+ *(.initcall2.init)
+ *(.initcall3.init)
+ *(.initcall4.init)
+ *(.initcall5.init)
+ *(.initcall6.init)
+ *(.initcall7.init)
+ }
+ __initcall_end = .;
+ __con_initcall_start = .;
+ .con_initcall.init : C_PHYS(.con_initcall.init) { *(.con_initcall.init) }
+ __con_initcall_end = .;
+ SECURITY_INIT
+ __initramfs_start = .;
+ .init.ramfs : C_PHYS(.init.ramfs) { *(.init.ramfs) }
+ __initramfs_end = .;
+ . = ALIGN(PAGE_SIZE);
+ __init_end = .;
+
+ /* Align to the biggest single data representation, head and tail */
+ . = ALIGN(8);
+ __bss_start = .; /* BSS */
+ .bss : C_PHYS(.bss) {
+ *(.bss)
+ }
+ . = ALIGN(8);
+ _end = . ;
+
+ /* Sections to be discarded */
+ /DISCARD/ : {
+ *(.exit.text)
+ *(.exit.data)
+ *(.exitcall.exit)
+ }
+
+ /* Stabs debugging sections. */
+ .stab 0 : C_PHYS(.stab) { *(.stab) }
+ .stabstr 0 : C_PHYS(.stabstr) { *(.stabstr) }
+ .stab.excl 0 : C_PHYS(.stab.excl) { *(.stab.excl) }
+ .stab.exclstr 0 : C_PHYS(.stab.exclstr) { *(.stab.exclstr) }
+ .stab.index 0 : C_PHYS(.stab.index) { *(.stab.index) }
+ .stab.indexstr 0 : C_PHYS(.stab.indexstr) { *(.stab.indexstr) }
+ .comment 0 : C_PHYS(.comment) { *(.comment) }
+ /* DWARF debug sections.
+ Symbols in the DWARF debugging section are relative to the beginning
+ of the section so we begin .debug at 0. */
+ /* DWARF 1 */
+ .debug 0 : C_PHYS(.debug) { *(.debug) }
+ .line 0 : C_PHYS(.line) { *(.line) }
+ /* GNU DWARF 1 extensions */
+ .debug_srcinfo 0 : C_PHYS(.debug_srcinfo) { *(.debug_srcinfo) }
+ .debug_sfnames 0 : C_PHYS(.debug_sfnames) { *(.debug_sfnames) }
+ /* DWARF 1.1 and DWARF 2 */
+ .debug_aranges 0 : C_PHYS(.debug_aranges) { *(.debug_aranges) }
+ .debug_pubnames 0 : C_PHYS(.debug_pubnames) { *(.debug_pubnames) }
+ /* DWARF 2 */
+ .debug_info 0 : C_PHYS(.debug_info) { *(.debug_info) }
+ .debug_abbrev 0 : C_PHYS(.debug_abbrev) { *(.debug_abbrev) }
+ .debug_line 0 : C_PHYS(.debug_line) { *(.debug_line) }
+ .debug_frame 0 : C_PHYS(.debug_frame) { *(.debug_frame) }
+ .debug_str 0 : C_PHYS(.debug_str) { *(.debug_str) }
+ .debug_loc 0 : C_PHYS(.debug_loc) { *(.debug_loc) }
+ .debug_macinfo 0 : C_PHYS(.debug_macinfo) { *(.debug_macinfo) }
+ /* SGI/MIPS DWARF 2 extensions */
+ .debug_weaknames 0 : C_PHYS(.debug_weaknames) { *(.debug_weaknames) }
+ .debug_funcnames 0 : C_PHYS(.debug_funcnames) { *(.debug_funcnames) }
+ .debug_typenames 0 : C_PHYS(.debug_typenames) { *(.debug_typenames) }
+ .debug_varnames 0 : C_PHYS(.debug_varnames) { *(.debug_varnames) }
+ /* These must appear regardless of . */
+}
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/fault.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Richard Curnow (/proc/tlb, bug fixes)
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+#include <linux/signal.h>
+#include <linux/rwsem.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/tlb.h>
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/mmu_context.h>
+#include <asm/registers.h> /* required by inline asm statements */
+
+#if defined(CONFIG_SH64_PROC_TLB)
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+/* Count numbers of tlb refills in each region */
+static unsigned long long calls_to_update_mmu_cache = 0ULL;
+static unsigned long long calls_to_flush_tlb_page = 0ULL;
+static unsigned long long calls_to_flush_tlb_range = 0ULL;
+static unsigned long long calls_to_flush_tlb_mm = 0ULL;
+static unsigned long long calls_to_flush_tlb_all = 0ULL;
+unsigned long long calls_to_do_slow_page_fault = 0ULL;
+unsigned long long calls_to_do_fast_page_fault = 0ULL;
+
+/* Count size of ranges for flush_tlb_range */
+static unsigned long long flush_tlb_range_1 = 0ULL;
+static unsigned long long flush_tlb_range_2 = 0ULL;
+static unsigned long long flush_tlb_range_3_4 = 0ULL;
+static unsigned long long flush_tlb_range_5_7 = 0ULL;
+static unsigned long long flush_tlb_range_8_11 = 0ULL;
+static unsigned long long flush_tlb_range_12_15 = 0ULL;
+static unsigned long long flush_tlb_range_16_up = 0ULL;
+
+static unsigned long long page_not_present = 0ULL;
+
+#endif
+
+extern void die(const char *,struct pt_regs *,long);
+
+#define PFLAG(val,flag) (( (val) & (flag) ) ? #flag : "" )
+#define PPROT(flag) PFLAG(pgprot_val(prot),flag)
+
+static inline void print_prots(pgprot_t prot)
+{
+ printk("prot is 0x%08lx\n",pgprot_val(prot));
+
+ printk("%s %s %s %s %s\n",PPROT(_PAGE_SHARED),PPROT(_PAGE_READ),
+ PPROT(_PAGE_EXECUTE),PPROT(_PAGE_WRITE),PPROT(_PAGE_USER));
+}
+
+static inline void print_vma(struct vm_area_struct *vma)
+{
+ printk("vma start 0x%08lx\n", vma->vm_start);
+ printk("vma end 0x%08lx\n", vma->vm_end);
+
+ print_prots(vma->vm_page_prot);
+ printk("vm_flags 0x%08lx\n", vma->vm_flags);
+}
+
+static inline void print_task(struct task_struct *tsk)
+{
+ printk("Task pid %d\n", tsk->pid);
+}
+
+static pte_t *lookup_pte(struct mm_struct *mm, unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ dir = pgd_offset(mm, address);
+ if (pgd_none(*dir)) {
+ return NULL;
+ }
+
+ pmd = pmd_offset(dir, address);
+ if (pmd_none(*pmd)) {
+ return NULL;
+ }
+
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+
+ if (pte_none(entry)) {
+ return NULL;
+ }
+ if (!pte_present(entry)) {
+ return NULL;
+ }
+
+ return pte;
+}
+
+/*
+ * This routine handles page faults. It determines the address,
+ * and the problem, and then passes it off to one of the appropriate
+ * routines.
+ */
+asmlinkage void do_page_fault(struct pt_regs *regs, unsigned long writeaccess,
+ unsigned long textaccess, unsigned long address)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ struct vm_area_struct * vma;
+ const struct exception_table_entry *fixup;
+ pte_t *pte;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_do_slow_page_fault;
+#endif
+
+ /* SIM
+ * Note this is now called with interrupts still disabled
+ * This is to cope with being called for a missing IO port
+ * address with interrupts disabled. This should be fixed as
+ * soon as we have a better 'fast path' miss handler.
+ *
+ * Plus take care how you try and debug this stuff.
+ * For example, writing debug data to a port which you
+ * have just faulted on is not going to work.
+ */
+
+ tsk = current;
+ mm = tsk->mm;
+
+ /* Not an IO address, so reenable interrupts */
+ sti();
+
+ /*
+ * If we're in an interrupt or have no user
+ * context, we must not take the fault..
+ */
+ if (in_interrupt() || !mm)
+ goto no_context;
+
+ /* TLB misses upon some cache flushes get done under cli() */
+ down_read(&mm->mmap_sem);
+
+ vma = find_vma(mm, address);
+
+ if (!vma) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+#endif
+ goto bad_area;
+ }
+ if (vma->vm_start <= address) {
+ goto good_area;
+ }
+
+ if (!(vma->vm_flags & VM_GROWSDOWN)) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+
+ print_vma(vma);
+#endif
+ goto bad_area;
+ }
+ if (expand_stack(vma, address)) {
+#ifdef DEBUG_FAULT
+ print_task(tsk);
+ printk("%s:%d fault, address is 0x%08x PC %016Lx textaccess %d writeaccess %d\n",
+ __FUNCTION__,__LINE__,
+ address,regs->pc,textaccess,writeaccess);
+ show_regs(regs);
+#endif
+ goto bad_area;
+ }
+/*
+ * Ok, we have a good vm_area for this memory access, so
+ * we can handle it..
+ */
+good_area:
+ if (writeaccess) {
+ if (!(vma->vm_flags & VM_WRITE))
+ goto bad_area;
+ } else {
+ if (!(vma->vm_flags & (VM_READ | VM_EXEC)))
+ goto bad_area;
+ }
+
+ if (textaccess) {
+ if (!(vma->vm_flags & VM_EXEC))
+ goto bad_area;
+ }
+
+ /*
+ * If for any reason at all we couldn't handle the fault,
+ * make sure we exit gracefully rather than endlessly redo
+ * the fault.
+ */
+survive:
+ switch (handle_mm_fault(mm, vma, address, writeaccess)) {
+ case 1:
+ tsk->min_flt++;
+ break;
+ case 2:
+ tsk->maj_flt++;
+ break;
+ case 0:
+ goto do_sigbus;
+ default:
+ goto out_of_memory;
+ }
+ /* If we get here, the page fault has been handled. Do the TLB refill
+ now from the newly-setup PTE, to avoid having to fault again right
+ away on the same instruction. */
+ pte = lookup_pte (mm, address);
+ if (!pte) {
+ /* From empirical evidence, we can get here, due to
+ !pte_present(pte). (e.g. if a swap-in occurs, and the page
+ is swapped back out again before the process that wanted it
+ gets rescheduled?) */
+ goto no_pte;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+no_pte:
+
+ up_read(&mm->mmap_sem);
+ return;
+
+/*
+ * Something tried to access memory that isn't in our memory map..
+ * Fix it, but check if it's kernel or user first..
+ */
+bad_area:
+#ifdef DEBUG_FAULT
+ printk("fault:bad area\n");
+#endif
+ up_read(&mm->mmap_sem);
+
+ if (user_mode(regs)) {
+ printk("user mode bad_area address=%08lx pid=%d (%s) pc=%08lx opcode=%08lx\n",
+ address, current->pid, current->comm,
+ (unsigned long) regs->pc,
+ *(unsigned long *)(u32)(regs->pc & ~3));
+ show_regs(regs);
+ if (tsk->pid == 1) {
+ panic("INIT had user mode bad_area\n");
+ }
+ tsk->thread.address = address;
+ tsk->thread.error_code = writeaccess;
+ force_sig(SIGSEGV, tsk);
+ return;
+ }
+
+no_context:
+#ifdef DEBUG_FAULT
+ printk("fault:No context\n");
+#endif
+ /* Are we prepared to handle this kernel fault? */
+ fixup = search_exception_tables(regs->pc);
+ if (fixup) {
+ regs->pc = fixup->fixup;
+ return;
+ }
+
+/*
+ * Oops. The kernel tried to access some bad page. We'll have to
+ * terminate things with extreme prejudice.
+ *
+ */
+ if (address < PAGE_SIZE)
+ printk(KERN_ALERT "Unable to handle kernel NULL pointer dereference");
+ else
+ printk(KERN_ALERT "Unable to handle kernel paging request");
+ printk(" at virtual address %08lx\n", address);
+ printk(KERN_ALERT "pc = %08Lx%08Lx\n", regs->pc >> 32, regs->pc & 0xffffffff);
+ die("Oops", regs, writeaccess);
+ do_exit(SIGKILL);
+
+/*
+ * We ran out of memory, or some other thing happened to us that made
+ * us unable to handle the page fault gracefully.
+ */
+out_of_memory:
+ if (current->pid == 1) {
+ panic("INIT out of memory\n");
+ /* not reached: panic() does not return */
+ }
+ printk("fault:Out of memory\n");
+ up_read(&mm->mmap_sem);
+ if (current->pid == 1) {
+ yield();
+ down_read(&mm->mmap_sem);
+ goto survive;
+ }
+ printk("VM: killing process %s\n", tsk->comm);
+ if (user_mode(regs))
+ do_exit(SIGKILL);
+ goto no_context;
+
+do_sigbus:
+ printk("fault:Do sigbus\n");
+ up_read(&mm->mmap_sem);
+
+ /*
+ * Send a sigbus, regardless of whether we were in kernel
+ * or user mode.
+ */
+ tsk->thread.address = address;
+ tsk->thread.error_code = writeaccess;
+ tsk->thread.trap_no = 14;
+ force_sig(SIGBUS, tsk);
+
+ /* Kernel mode? Handle exceptions or die */
+ if (!user_mode(regs))
+ goto no_context;
+}
+
+
+void flush_tlb_all(void);
+
+void update_mmu_cache(struct vm_area_struct * vma,
+ unsigned long address, pte_t pte)
+{
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_update_mmu_cache;
+#endif
+
+ /*
+ * This appears to get called once for every pte entry that gets
+ * established => I don't think it's efficient to try refilling the
+ * TLBs with the pages - some may not get accessed even. Also, for
+ * executable pages, it is impossible to determine reliably here which
+ * TLB they should be mapped into (or both even).
+ *
+ * So, just do nothing here and handle faults on demand. In the
+ * TLBMISS handling case, the refill is now done anyway after the pte
+ * has been fixed up, so that deals with most useful cases.
+ */
+}
+
+static void __flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ unsigned long long match, pteh=0, lpage;
+ unsigned long tlb;
+ struct mm_struct *mm;
+
+ mm = vma->vm_mm;
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ /*
+ * Sign-extend based on neff.
+ */
+ lpage = (page & NEFF_SIGN) ? (page | NEFF_MASK) : page;
+ match = ((mm->context & MMU_CONTEXT_ASID_MASK) << PTEH_ASID_SHIFT) | PTEH_VALID;
+ match |= lpage;
+
+ /* Do ITLB : don't bother for pages in non-executable VMAs */
+ if (vma->vm_flags & VM_EXEC) {
+ for_each_itlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ if (pteh == match) {
+ __flush_tlb_slot(tlb);
+ break;
+ }
+
+ }
+ }
+
+ /* Do DTLB : any page could potentially be in here. */
+ for_each_dtlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ if (pteh == match) {
+ __flush_tlb_slot(tlb);
+ break;
+ }
+
+ }
+}
+
+void flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
+{
+ unsigned long flags;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_page;
+#endif
+
+ if (vma->vm_mm) {
+ page &= PAGE_MASK;
+ local_irq_save(flags);
+ __flush_tlb_page(vma, page);
+ local_irq_restore(flags);
+ }
+}
+
+void flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+{
+ unsigned long flags;
+ unsigned long long match, pteh=0, pteh_epn, pteh_low;
+ unsigned long tlb;
+ struct mm_struct *mm;
+
+ mm = vma->vm_mm;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_range;
+
+ {
+ unsigned long size = (end - 1) - start;
+ size >>= 12; /* divide by PAGE_SIZE */
+ size++; /* end=start+4096 => 1 page */
+ switch (size) {
+ case 1 : flush_tlb_range_1++; break;
+ case 2 : flush_tlb_range_2++; break;
+ case 3 ... 4 : flush_tlb_range_3_4++; break;
+ case 5 ... 7 : flush_tlb_range_5_7++; break;
+ case 8 ... 11 : flush_tlb_range_8_11++; break;
+ case 12 ... 15 : flush_tlb_range_12_15++; break;
+ default : flush_tlb_range_16_up++; break;
+ }
+ }
+#endif
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ local_irq_save(flags);
+
+ start &= PAGE_MASK;
+ end &= PAGE_MASK;
+
+ match = ((mm->context & MMU_CONTEXT_ASID_MASK) << PTEH_ASID_SHIFT) | PTEH_VALID;
+
+ /* Flush ITLB */
+ for_each_itlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ pteh_epn = pteh & PAGE_MASK;
+ pteh_low = pteh & ~PAGE_MASK;
+
+ if (pteh_low == match && pteh_epn >= start && pteh_epn <= end)
+ __flush_tlb_slot(tlb);
+ }
+
+ /* Flush DTLB */
+ for_each_dtlb_entry(tlb) {
+ asm volatile ("getcfg %1, 0, %0"
+ : "=r" (pteh)
+ : "r" (tlb) );
+
+ pteh_epn = pteh & PAGE_MASK;
+ pteh_low = pteh & ~PAGE_MASK;
+
+ if (pteh_low == match && pteh_epn >= start && pteh_epn <= end)
+ __flush_tlb_slot(tlb);
+ }
+
+ local_irq_restore(flags);
+}
+
+void flush_tlb_mm(struct mm_struct *mm)
+{
+ unsigned long flags;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_mm;
+#endif
+
+ if (mm->context == NO_CONTEXT)
+ return;
+
+ local_irq_save(flags);
+
+ mm->context=NO_CONTEXT;
+ if(mm==current->mm)
+ activate_context(mm);
+
+ local_irq_restore(flags);
+
+}
+
+void flush_tlb_all(void)
+{
+ /* Invalidate all, including shared pages, excluding fixed TLBs */
+
+ unsigned long flags, tlb;
+
+#if defined(CONFIG_SH64_PROC_TLB)
+ ++calls_to_flush_tlb_all;
+#endif
+
+ local_irq_save(flags);
+
+ /* Flush each ITLB entry */
+ for_each_itlb_entry(tlb) {
+ __flush_tlb_slot(tlb);
+ }
+
+ /* Flush each DTLB entry */
+ for_each_dtlb_entry(tlb) {
+ __flush_tlb_slot(tlb);
+ }
+
+ local_irq_restore(flags);
+}
+
+void flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+ /* FIXME: Optimize this later.. */
+ flush_tlb_all();
+}
+
+#if defined(CONFIG_SH64_PROC_TLB)
+/* Procfs interface to read the performance information */
+
+static int
+tlb_proc_info(char *buf, char **start, off_t fpos, int length, int *eof, void *data)
+{
+ int len=0;
+ len += sprintf(buf+len, "do_fast_page_fault called %12lld times\n", calls_to_do_fast_page_fault);
+ len += sprintf(buf+len, "do_slow_page_fault called %12lld times\n", calls_to_do_slow_page_fault);
+ len += sprintf(buf+len, "update_mmu_cache called %12lld times\n", calls_to_update_mmu_cache);
+ len += sprintf(buf+len, "flush_tlb_page called %12lld times\n", calls_to_flush_tlb_page);
+ len += sprintf(buf+len, "flush_tlb_range called %12lld times\n", calls_to_flush_tlb_range);
+ len += sprintf(buf+len, "flush_tlb_mm called %12lld times\n", calls_to_flush_tlb_mm);
+ len += sprintf(buf+len, "flush_tlb_all called %12lld times\n", calls_to_flush_tlb_all);
+ len += sprintf(buf+len, "flush_tlb_range_sizes\n"
+ " 1 : %12lld\n"
+ " 2 : %12lld\n"
+ " 3 - 4 : %12lld\n"
+ " 5 - 7 : %12lld\n"
+ " 8 - 11 : %12lld\n"
+ "12 - 15 : %12lld\n"
+ "16+ : %12lld\n",
+ flush_tlb_range_1, flush_tlb_range_2, flush_tlb_range_3_4,
+ flush_tlb_range_5_7, flush_tlb_range_8_11, flush_tlb_range_12_15,
+ flush_tlb_range_16_up);
+ len += sprintf(buf+len, "page not present %12lld times\n", page_not_present);
+ *eof = 1;
+ return len;
+}
+
+static int __init register_proc_tlb(void)
+{
+ create_proc_read_entry("tlb", 0, NULL, tlb_proc_info, NULL);
+ return 0;
+}
+
+__initcall(register_proc_tlb);
+
+#endif
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/init.c
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/rwsem.h>
+#include <linux/mm.h>
+#include <linux/swap.h>
+#include <linux/bootmem.h>
+
+#include <asm/mmu_context.h>
+#include <asm/page.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/tlb.h>
+
+#ifdef CONFIG_BLK_DEV_INITRD
+#include <linux/blk.h>
+#endif
+
+DEFINE_PER_CPU(struct mmu_gather, mmu_gathers);
+
+/*
+ * Cache of MMU context last used.
+ */
+unsigned long mmu_context_cache;
+pgd_t * mmu_pdtp_cache;
+int after_bootmem = 0;
+
+/*
+ * BAD_PAGE is the page that is used for page faults when linux
+ * is out-of-memory. Older versions of linux just did a
+ * do_exit(), but using this instead means there is less risk
+ * for a process dying in kernel mode, possibly leaving an inode
+ * unused etc..
+ *
+ * BAD_PAGETABLE is the accompanying page-table: it is initialized
+ * to point to BAD_PAGE entries.
+ *
+ * ZERO_PAGE is a special page that is used for zero-initialized
+ * data and COW.
+ */
+
+extern unsigned char empty_zero_page[PAGE_SIZE];
+extern unsigned char empty_bad_page[PAGE_SIZE];
+extern pte_t empty_bad_pte_table[PTRS_PER_PTE];
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+extern char _text, _etext, _edata, __bss_start, _end;
+extern char __init_begin, __init_end;
+
+/* It'd be good if these lines were in the standard header file. */
+#define START_PFN (NODE_DATA(0)->bdata->node_boot_start >> PAGE_SHIFT)
+#define MAX_LOW_PFN (NODE_DATA(0)->bdata->node_low_pfn)
+
+
+void show_mem(void)
+{
+ int i, total = 0, reserved = 0;
+ int shared = 0, cached = 0;
+
+ printk("Mem-info:\n");
+ show_free_areas();
+ printk("Free swap: %6ldkB\n",nr_swap_pages<<(PAGE_SHIFT-10));
+ i = max_mapnr;
+ while (i-- > 0) {
+ total++;
+ if (PageReserved(mem_map+i))
+ reserved++;
+ else if (PageSwapCache(mem_map+i))
+ cached++;
+ else if (page_count(mem_map+i))
+ shared += page_count(mem_map+i) - 1;
+ }
+ printk("%d pages of RAM\n",total);
+ printk("%d reserved pages\n",reserved);
+ printk("%d pages shared\n",shared);
+ printk("%d pages swap cached\n",cached);
+ printk("%ld pages in page table cache\n",pgtable_cache_size);
+}
+
+/*
+ * paging_init() sets up the page tables.
+ *
+ * head.S already did a lot to set up address translation for the kernel.
+ * Here we come in with:
+ * . MMU enabled
+ * . ASID set (SR)
+ * . some 512MB regions being mapped of which the most relevant here is:
+ * . CACHED segment (ASID 0 [irrelevant], shared AND NOT user)
+ * . possible variable length regions being mapped as:
+ * . UNCACHED segment (ASID 0 [irrelevant], shared AND NOT user)
+ * . All of the memory regions are placed, independently of the platform,
+ *   at high addresses above 0x80000000.
+ * . swapper_pg_dir is already cleared out by the .space directive;
+ *   in any case, swapper does not require a real page directory since
+ *   it is entirely kernel-contained.
+ *
+ * Those pesky NULL-reference errors in the kernel are then
+ * dealt with by not mapping address 0x00000000 at all.
+ *
+ */
+void __init paging_init(void)
+{
+ unsigned long zones_size[MAX_NR_ZONES] = {0, 0, 0};
+
+ pgd_init((unsigned long)swapper_pg_dir);
+ pgd_init((unsigned long)swapper_pg_dir +
+ sizeof(pgd_t) * USER_PTRS_PER_PGD);
+
+ mmu_context_cache = MMU_CONTEXT_FIRST_VERSION;
+
+ /*
+ * All memory is good as ZONE_NORMAL (fall-through) and ZONE_DMA.
+ */
+ zones_size[ZONE_DMA] = MAX_LOW_PFN - START_PFN;
+ NODE_DATA(0)->node_mem_map = NULL;
+ free_area_init_node(0, NODE_DATA(0), zones_size, __MEMORY_START >> PAGE_SHIFT, 0);
+
+ /* XXX: MRB-remove - this doesn't seem sane, should this be done somewhere else ?*/
+ mem_map = NODE_DATA(0)->node_mem_map;
+}
+
+void __init mem_init(void)
+{
+ int codesize, reservedpages, datasize, initsize;
+ int tmp;
+
+ max_mapnr = num_physpages = MAX_LOW_PFN - START_PFN;
+ high_memory = (void *)__va(MAX_LOW_PFN * PAGE_SIZE);
+
+ /*
+ * Clear the zero-page.
+ * This is not required but we might want to re-use
+ * this very page to pass boot parameters, one day.
+ */
+ memset(empty_zero_page, 0, PAGE_SIZE);
+
+ /* this will put all low memory onto the freelists */
+ totalram_pages += free_all_bootmem_node(NODE_DATA(0));
+ reservedpages = 0;
+ for (tmp = 0; tmp < num_physpages; tmp++)
+ /*
+ * Only count reserved RAM pages
+ */
+ if (PageReserved(mem_map+tmp))
+ reservedpages++;
+
+ after_bootmem = 1;
+
+ codesize = (unsigned long) &_etext - (unsigned long) &_text;
+ datasize = (unsigned long) &_edata - (unsigned long) &_etext;
+ initsize = (unsigned long) &__init_end - (unsigned long) &__init_begin;
+
+ printk("Memory: %luk/%luk available (%dk kernel code, %dk reserved, %dk data, %dk init)\n",
+ (unsigned long) nr_free_pages() << (PAGE_SHIFT-10),
+ max_mapnr << (PAGE_SHIFT-10),
+ codesize >> 10,
+ reservedpages << (PAGE_SHIFT-10),
+ datasize >> 10,
+ initsize >> 10);
+}
+
+void free_initmem(void)
+{
+ unsigned long addr;
+
+ addr = (unsigned long)(&__init_begin);
+ for (; addr < (unsigned long)(&__init_end); addr += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(addr));
+ set_page_count(virt_to_page(addr), 1);
+ free_page(addr);
+ totalram_pages++;
+ }
+ printk ("Freeing unused kernel memory: %ldk freed\n", (&__init_end - &__init_begin) >> 10);
+}
+
+#ifdef CONFIG_BLK_DEV_INITRD
+void free_initrd_mem(unsigned long start, unsigned long end)
+{
+ unsigned long p;
+ for (p = start; p < end; p += PAGE_SIZE) {
+ ClearPageReserved(virt_to_page(p));
+ set_page_count(virt_to_page(p), 1);
+ free_page(p);
+ totalram_pages++;
+ }
+ printk ("Freeing initrd memory: %ldk freed\n", (end - start) >> 10);
+}
+#endif
+
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * arch/sh64/mm/tlbmiss.c
+ *
+ * Original code from fault.c
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * Fast PTE->TLB refill path
+ * Copyright (C) 2003 Richard.Curnow@superh.com
+ *
+ * IMPORTANT NOTES :
+ * The do_fast_page_fault function is called from a context in entry.S where very few registers
+ * have been saved. In particular, the code in this file must be compiled not to use ANY
+ * caller-save registers that are not part of the restricted save set. Also, it means that
+ * code in this file must not make calls to functions elsewhere in the kernel, or else the
+ * excepting context will see corruption in its caller-save registers. Plus, the entry.S save
+ * area is non-reentrant, so this code has to run with SR.BL==1, i.e. no interrupts taken inside
+ * it and panic on any exception.
+ *
+ */
+
+#include <linux/signal.h>
+#include <linux/sched.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/types.h>
+#include <linux/ptrace.h>
+#include <linux/mman.h>
+#include <linux/mm.h>
+#include <linux/smp.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+
+#include <asm/system.h>
+#include <asm/tlb.h>
+#include <asm/io.h>
+#include <asm/uaccess.h>
+#include <asm/pgalloc.h>
+#include <asm/mmu_context.h>
+#include <asm/registers.h> /* required by inline asm statements */
+
+/* Callable from fault.c, so not static */
+inline void __do_tlb_refill(unsigned long address,
+ unsigned long long is_text_not_data, pte_t *pte)
+{
+ unsigned long long ptel;
+ unsigned long long pteh=0;
+ struct tlb_info *tlbp;
+ unsigned long long next;
+
+ /* Get PTEL first */
+ ptel = pte_val(*pte);
+
+ /*
+ * Set PTEH register
+ */
+ pteh = address & MMU_VPN_MASK;
+
+ /* Sign extend based on neff. */
+#if (NEFF == 32)
+ /* Faster sign extension */
+ pteh = (unsigned long long)(signed long long)(signed long)pteh;
+#else
+ /* General case */
+ pteh = (pteh & NEFF_SIGN) ? (pteh | NEFF_MASK) : pteh;
+#endif
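+
+ /* The NEFF == 32 fast path relies on 'long' being 32 bits wide in this
+ * ABI: the double cast replicates bit 31 into the upper word, e.g.
+ * 0x80000000 becomes 0xffffffff80000000. */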
+
+ /* Set the ASID. */
+ pteh |= get_asid() << PTEH_ASID_SHIFT;
+ pteh |= PTEH_VALID;
+
+ /* Set PTEL register, set_pte has performed the sign extension */
+ ptel &= _PAGE_FLAGS_HARDWARE_MASK; /* drop software flags */
+ ptel |= _PAGE_FLAGS_HARDWARE_DEFAULT; /* add default flags */
+
+ tlbp = is_text_not_data ? &(cpu_data->itlb) : &(cpu_data->dtlb);
+ next = tlbp->next;
+ __flush_tlb_slot(next);
+ asm volatile ("putcfg %0,1,%2\n\n\t"
+ "putcfg %0,0,%1\n"
+ : : "r" (next), "r" (pteh), "r" (ptel) );
+
+ next += TLB_STEP;
+ if (next > tlbp->last) next = tlbp->first;
+ tlbp->next = next;
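+ /* Refill slots are recycled round-robin from tlbp->first to tlbp->last
+ * in steps of TLB_STEP, so the oldest refill is overwritten first; the
+ * bounds come from the cpu_data tlb_info for the relevant TLB. */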
+
+}
+
+static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long protection_flags,
+ unsigned long long textaccess,
+ unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ dir = pgd_offset_k(address);
+ pmd = pmd_offset(dir, address);
+
+ if (pmd_none(*pmd)) {
+ return 0;
+ }
+
+ if (pmd_bad(*pmd)) {
+ pmd_clear(pmd);
+ return 0;
+ }
+
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+
+ if (pte_none(entry) || !pte_present(entry)) {
+ return 0;
+ }
+
+ if ((pte_val(entry) & protection_flags) != protection_flags) {
+ return 0;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+ return 1;
+}
+
+static int handle_tlbmiss(struct mm_struct *mm, unsigned long long protection_flags,
+ unsigned long long textaccess,
+ unsigned long address)
+{
+ pgd_t *dir;
+ pmd_t *pmd;
+ pte_t *pte;
+ pte_t entry;
+
+ /* NB. The PGD currently only contains a single entry - there is no
+ page table tree stored for the top half of the address space since
+ virtual pages in that region should never be mapped in user mode.
+ (In kernel mode, the only things in that region are the 512Mb super
+ page (locked in), and vmalloc (modules) + I/O device pages (handled
+ by handle_vmalloc_fault), so no PGD for the upper half is required
+ by kernel mode either).
+
+ See how mm->pgd is allocated and initialised in pgd_alloc to see why
+ the next test is necessary. - RPC */
+ if (address >= (unsigned long) TASK_SIZE) {
+ /* upper half - never has page table entries. */
+ return 0;
+ }
+ dir = pgd_offset(mm, address);
+ if (pgd_none(*dir)) {
+ return 0;
+ }
+ if (!pgd_present(*dir)) {
+ return 0;
+ }
+
+ pmd = pmd_offset(dir, address);
+ if (pmd_none(*pmd)) {
+ return 0;
+ }
+ if (!pmd_present(*pmd)) {
+ return 0;
+ }
+ pte = pte_offset_kernel(pmd, address);
+ entry = *pte;
+ if (pte_none(entry)) {
+ return 0;
+ }
+ if (!pte_present(entry)) {
+ return 0;
+ }
+
+ /* If the page doesn't have sufficient protection bits set to service the
+ kind of fault being handled, there's not much point doing the TLB refill.
+ Punt the fault to the general handler. */
+ if ((pte_val(entry) & protection_flags) != protection_flags) {
+ return 0;
+ }
+
+ __do_tlb_refill(address, textaccess, pte);
+
+ return 1;
+}
+
+/* Put all this information into one structure so that everything is just arithmetic
+ relative to a single base address. This reduces the number of movi/shori pairs needed
+ just to load addresses of static data. */
+struct expevt_lookup {
+ unsigned short protection_flags[8];
+ unsigned char is_text_access[8];
+ unsigned char is_write_access[8];
+};
+
+#define PRU (1<<9)
+#define PRW (1<<8)
+#define PRX (1<<7)
+#define PRR (1<<6)
+
+#define DIRTY (_PAGE_DIRTY | _PAGE_ACCESSED)
+#define YOUNG (_PAGE_ACCESSED)
+
+/* Sized as 8 rather than 4 to allow checking the PTE's PRU bit against whether
+ the fault happened in user mode or privileged mode. */
+static struct expevt_lookup expevt_lookup_table = {
+ .protection_flags = {PRX, PRX, 0, 0, PRR, PRR, PRW, PRW},
+ .is_text_access = {1, 1, 0, 0, 0, 0, 0, 0}
+};
+
+/*
+ This routine handles page faults that can be serviced just by refilling a
+ TLB entry from an existing page table entry. (This case represents a very
+ large majority of page faults.) Return 1 if the fault was successfully
+ handled. Return 0 if the fault could not be handled. (This leads into the
+ general fault handling in fault.c which deals with mapping file-backed
+ pages, stack growth, segmentation faults, swapping etc etc)
+ */
+asmlinkage int do_fast_page_fault(unsigned long long ssr_md, unsigned long long expevt,
+ unsigned long address)
+{
+ struct task_struct *tsk;
+ struct mm_struct *mm;
+ unsigned long long textaccess;
+ unsigned long long protection_flags;
+ unsigned long long index;
+ unsigned long long expevt4;
+
+ /* The next few lines implement a way of hashing EXPEVT into a small array index
+ which can be used to lookup parameters specific to the type of TLBMISS being
+ handled. Note:
+ ITLBMISS has EXPEVT==0xa40
+ RTLBMISS has EXPEVT==0x040
+ WTLBMISS has EXPEVT==0x060
+ */
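+ /*
+ * Checking the hash against the EXPEVT values quoted above:
+ *   ITLBMISS: expevt4 = 0xa4, 0xa4 ^ (0xa4 >> 5) = 0xa1, & 7 -> 1  (PRX)
+ *   RTLBMISS: expevt4 = 0x04, 0x04 ^ (0x04 >> 5) = 0x04, & 7 -> 4  (PRR)
+ *   WTLBMISS: expevt4 = 0x06, 0x06 ^ (0x06 >> 5) = 0x06, & 7 -> 6  (PRW)
+ * so the three miss types land on distinct slots of the 8-entry table.
+ */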
+
+ expevt4 = (expevt >> 4);
+ /* TODO : xor ssr_md into this expression too. Then we can check that PRU is set
+ when it needs to be. */
+ index = expevt4 ^ (expevt4 >> 5);
+ index &= 7;
+ protection_flags = expevt_lookup_table.protection_flags[index];
+ textaccess = expevt_lookup_table.is_text_access[index];
+
+#ifdef CONFIG_SH64_PROC_TLB
+ ++calls_to_do_fast_page_fault;
+#endif
+
+ /* SIM
+ * Note this is now called with interrupts still disabled
+ * This is to cope with being called for a missing IO port
+ * address with interrupts disabled. This should be fixed as
+ * soon as we have a better 'fast path' miss handler.
+ *
+ * Plus take care how you try and debug this stuff.
+ * For example, writing debug data to a port which you
+ * have just faulted on is not going to work.
+ */
+
+ tsk = current;
+ mm = tsk->mm;
+
+ if ((address >= VMALLOC_START && address < VMALLOC_END) ||
+ (address >= IOBASE_VADDR && address < IOBASE_END)) {
+ if (ssr_md) {
+ /* Process-contexts can never have this address range mapped */
+ if (handle_vmalloc_fault(mm, protection_flags, textaccess, address)) {
+ return 1;
+ }
+ }
+ } else if (!in_interrupt() && mm) {
+ if (handle_tlbmiss(mm, protection_flags, textaccess, address)) {
+ return 1;
+ }
+ }
+
+ return 0;
+}
+
--- /dev/null
+/*
+ * linux/arch/x86-64/mm/mmap.c
+ *
+ * flexible mmap layout support
+ *
+ * Copyright 2003-2004 Red Hat Inc., Durham, North Carolina.
+ * All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * Started by Ingo Molnar <mingo@elte.hu>
+ */
+
+#include <linux/personality.h>
+#include <linux/mm.h>
+
+/*
+ * Top of mmap area (just below the process stack).
+ *
+ * Leave a hole of at least ~128 MB.
+ */
+#define MIN_GAP (128*1024*1024)
+#define MAX_GAP (TASK_SIZE_3264/6*5)
+
+static inline unsigned long mmap_base(void)
+{
+ unsigned long gap = current->signal->rlim[RLIMIT_STACK].rlim_cur;
+
+ if (gap < MIN_GAP)
+ gap = MIN_GAP;
+ else if (gap > MAX_GAP)
+ gap = MAX_GAP;
+
+ return TASK_SIZE_3264 - (gap & PAGE_MASK);
+}
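+
+/* Illustrative numbers, not from this patch: a common 8 MB stack rlimit is
+   below MIN_GAP, so the gap clamps to 128 MB and the mmap area tops out
+   128 MB below TASK_SIZE_3264. An unlimited stack rlimit never gets here,
+   since mmap_is_legacy() below selects the legacy layout instead. */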
+
+static inline int mmap_is_legacy(void)
+{
+ /*
+ * Force standard allocation for 64 bit programs.
+ */
+ if (!test_thread_flag(TIF_IA32))
+ return 1;
+
+ if (current->personality & ADDR_COMPAT_LAYOUT)
+ return 1;
+
+ if (current->signal->rlim[RLIMIT_STACK].rlim_cur == RLIM_INFINITY)
+ return 1;
+
+ return sysctl_legacy_va_layout;
+}
+
+/*
+ * This function, called very early during the creation of a new
+ * process VM image, sets up which VM layout function to use:
+ */
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+ /*
+ * Fall back to the standard layout if the personality
+ * bit is set, or if the expected stack growth is unlimited:
+ */
+ if (mmap_is_legacy()) {
+ mm->mmap_base = TASK_UNMAPPED_BASE;
+ mm->get_unmapped_area = arch_get_unmapped_area;
+ mm->unmap_area = arch_unmap_area;
+ } else {
+ mm->mmap_base = mmap_base();
+ mm->get_unmapped_area = arch_get_unmapped_area_topdown;
+ mm->unmap_area = arch_unmap_area_topdown;
+ }
+}
--- /dev/null
+const int ksign_def_public_key_size = 0;
+/* automatically generated by bin2hex */
+static unsigned char ksign_def_public_key[] __initdata =
+{
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+};
+
--- /dev/null
+/* ksign-keyring.c: public key cache
+ *
+ * Copyright (C) 2001 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This file is derived from part of GnuPG.
+ *
+ * GnuPG is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * GnuPG is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
+ */
+
+#include <linux/rwsem.h>
+#include "local.h"
+
+static LIST_HEAD(keyring);
+static DECLARE_RWSEM(keyring_sem);
+
+static int add_keyblock_key(struct ksign_public_key *pk, void *data)
+{
+ printk("- Added public key %X%X\n", pk->keyid[0], pk->keyid[1]);
+
+ if (pk->expiredate && pk->expiredate < xtime.tv_sec)
+ printk(" - public key has expired\n");
+
+ if (pk->timestamp > xtime.tv_sec )
+ printk(" - key was been created %lu seconds in future\n",
+ pk->timestamp - xtime.tv_sec);
+
+ atomic_inc(&pk->count);
+
+ down_write(&keyring_sem);
+ list_add_tail(&pk->link, &keyring);
+ up_write(&keyring_sem);
+
+ return 0;
+}
+
+static int add_keyblock_uid(struct ksign_user_id *uid, void *data)
+{
+ printk("- User ID: %s\n", uid->name);
+ return 1;
+}
+
+/*****************************************************************************/
+/*
+ *
+ */
+int ksign_load_keyring_from_buffer(const void *buffer, size_t size)
+{
+ printk("Loading keyring\n");
+
+ return ksign_parse_packets((const uint8_t *) buffer,
+ size,
+ NULL,
+ add_keyblock_key,
+ add_keyblock_uid,
+ NULL);
+} /* end ksign_load_keyring_from_buffer() */
+
+/*****************************************************************************/
+/*
+ *
+ */
+struct ksign_public_key *ksign_get_public_key(const uint32_t *keyid)
+{
+ struct ksign_public_key *pk;
+
+ down_read(&keyring_sem);
+
+ list_for_each_entry(pk, &keyring, link) {
+ if (memcmp(pk->keyid, keyid, sizeof(pk->keyid)) == 0) {
+ atomic_inc(&pk->count);
+ goto found;
+ }
+ }
+
+ /* not found: return NULL rather than a stale iterator value */
+ pk = NULL;
+
+ found:
+ up_read(&keyring_sem);
+
+ return pk;
+} /* end ksign_get_public_key() */
+
+/*****************************************************************************/
+/*
+ * clear the public key keyring
+ */
+void ksign_clear_keyring(void)
+{
+ struct ksign_public_key *pk;
+
+ down_write(&keyring_sem);
+
+ while (!list_empty(&keyring)) {
+ pk = list_entry(keyring.next, struct ksign_public_key, link);
+ list_del(&pk->link);
+
+ ksign_put_public_key(pk);
+ }
+
+ up_write(&keyring_sem);
+} /* end ksign_clear_keyring() */
--- /dev/null
+/* parse-packet.c - read packets
+ * Copyright (C) 1998, 1999, 2000, 2001 Free Software Foundation, Inc.
+ *
+ * This file is part of GnuPG.
+ *
+ * GnuPG is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * GnuPG is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <asm/errno.h>
+#include "local.h"
+
+static inline uint32_t buffer_to_u32(const uint8_t *buffer)
+{
+ uint32_t a;
+ a = *buffer << 24;
+ a |= buffer[1] << 16;
+ a |= buffer[2] << 8;
+ a |= buffer[3];
+ return a;
+}
+
+static inline uint16_t read_16(const uint8_t **datap)
+{
+ uint16_t a;
+ a = *(*datap)++ << 8;
+ a |= *(*datap)++;
+ return a;
+}
+
+static inline uint32_t read_32(const uint8_t **datap)
+{
+ uint32_t a;
+ a = *(*datap)++ << 24;
+ a |= *(*datap)++ << 16;
+ a |= *(*datap)++ << 8;
+ a |= *(*datap)++;
+ return a;
+}
+
+void ksign_free_signature(struct ksign_signature *sig)
+{
+ int i;
+
+ if (!sig)
+ return;
+
+ for (i = 0; i < DSA_NSIG; i++)
+ mpi_free(sig->data[i]);
+ kfree(sig->hashed_data);
+ kfree(sig->unhashed_data);
+ kfree(sig);
+}
+
+void ksign_free_public_key(struct ksign_public_key *pk)
+{
+ int i;
+
+ if (pk) {
+ for (i = 0; i < DSA_NPKEY; i++)
+ mpi_free(pk->pkey[i]);
+ kfree(pk);
+ }
+}
+
+void ksign_free_user_id(struct ksign_user_id *uid)
+{
+ if (uid)
+ kfree(uid);
+}
+
+/*****************************************************************************/
+/*
+ *
+ */
+static void ksign_calc_pk_keyid(struct crypto_tfm *sha1,
+ struct ksign_public_key *pk)
+{
+ unsigned n;
+ unsigned nb[DSA_NPKEY];
+ unsigned nn[DSA_NPKEY];
+ uint8_t *pp[DSA_NPKEY];
+ uint32_t a32;
+ int i;
+ int npkey = DSA_NPKEY;
+
+ crypto_digest_init(sha1);
+
+ n = pk->version < 4 ? 8 : 6;
+ for (i = 0; i < npkey; i++) {
+ nb[i] = mpi_get_nbits(pk->pkey[i]);
+ pp[i] = mpi_get_buffer( pk->pkey[i], nn + i, NULL);
+ n += 2 + nn[i];
+ }
+
+ SHA1_putc(sha1, 0x99); /* ctb */
+ SHA1_putc(sha1, n >> 8); /* 2-byte length header */
+ SHA1_putc(sha1, n);
+
+ if( pk->version < 4)
+ SHA1_putc(sha1, 3);
+ else
+ SHA1_putc(sha1, 4);
+
+ a32 = pk->timestamp;
+ SHA1_putc(sha1, a32 >> 24 );
+ SHA1_putc(sha1, a32 >> 16 );
+ SHA1_putc(sha1, a32 >> 8 );
+ SHA1_putc(sha1, a32 >> 0 );
+
+ if (pk->version < 4) {
+ uint16_t a16;
+
+ if( pk->expiredate )
+ a16 = (uint16_t) ((pk->expiredate - pk->timestamp) / 86400L);
+ else
+ a16 = 0;
+ SHA1_putc(sha1, a16 >> 8);
+ SHA1_putc(sha1, a16 >> 0);
+ }
+
+ SHA1_putc(sha1, PUBKEY_ALGO_DSA);
+
+ for (i = 0; i < npkey; i++) {
+ SHA1_putc(sha1, nb[i] >> 8);
+ SHA1_putc(sha1, nb[i]);
+ SHA1_write(sha1, pp[i], nn[i]);
+ kfree(pp[i]);
+ }
+
+} /* end ksign_calc_pk_keyid() */
+
+/*****************************************************************************/
+/*
+ * parse a user ID embedded in a signature
+ */
+static int ksign_parse_user_id(const uint8_t *datap, const uint8_t *endp,
+ ksign_user_id_actor_t uidfnx, void *fnxdata)
+{
+ struct ksign_user_id *uid;
+ int rc = 0;
+ int n;
+
+ if (!uidfnx)
+ return 0;
+
+ n = endp - datap;
+ uid = kmalloc(sizeof(*uid) + n + 1, GFP_KERNEL);
+ if (!uid)
+ return -ENOMEM;
+ uid->len = n;
+
+ memcpy(uid->name, datap, n);
+ uid->name[n] = 0;
+
+ rc = uidfnx(uid, fnxdata);
+ if (rc == 0)
+ return rc; /* uidfnx keeps the record */
+ if (rc == 1)
+ rc = 0;
+
+ ksign_free_user_id(uid);
+ return rc;
+} /* end ksign_parse_user_id() */
+
+/*****************************************************************************/
+/*
+ * extract a public key embedded in a signature
+ */
+static int ksign_parse_key(const uint8_t *datap, const uint8_t *endp,
+ uint8_t *hdr, int hdrlen,
+ ksign_public_key_actor_t pkfnx, void *fnxdata)
+{
+ struct ksign_public_key *pk;
+ struct crypto_tfm *sha1_tfm;
+ unsigned long timestamp, expiredate;
+ uint8_t sha1[SHA1_DIGEST_SIZE];
+ int i, version;
+ int is_v4 = 0;
+ int rc = 0;
+
+ if (endp - datap < 12) {
+ printk("ksign: public key packet too short\n");
+ return -EBADMSG;
+ }
+
+ version = *datap++;
+ switch (version) {
+ case 4:
+ is_v4 = 1;
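+ /* fall through */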
+ case 2:
+ case 3:
+ break;
+ default:
+ printk("ksign: public key packet with unknown version %d\n",
+ version);
+ return -EBADMSG;
+ }
+
+ timestamp = read_32(&datap);
+ if (is_v4)
+ expiredate = 0; /* have to get it from the selfsignature */
+ else {
+ unsigned short ndays;
+ ndays = read_16(&datap);
+ if (ndays)
+ expiredate = timestamp + ndays * 86400L;
+ else
+ expiredate = 0;
+ }
+
+ if (*datap++ != PUBKEY_ALGO_DSA) {
+ printk("ksign: public key packet with unknown version %d\n",
+ version);
+ return 0;
+ }
+
+ /* extract the stuff from the DSA public key */
+ pk = kmalloc(sizeof(struct ksign_public_key), GFP_KERNEL);
+ if (!pk)
+ return -ENOMEM;
+
+ memset(pk, 0, sizeof(struct ksign_public_key));
+ atomic_set(&pk->count, 1);
+ pk->timestamp = timestamp;
+ pk->expiredate = expiredate;
+ pk->hdrbytes = hdrlen;
+ pk->version = version;
+
+ for (i = 0; i < DSA_NPKEY; i++) {
+ unsigned int remaining = endp - datap;
+ pk->pkey[i] = mpi_read_from_buffer(datap, &remaining);
+ datap += remaining;
+ }
+
+ rc = -ENOMEM;
+
+ sha1_tfm = crypto_alloc_tfm2("sha1", 0, 1);
+ if (!sha1_tfm)
+ goto cleanup;
+
+ ksign_calc_pk_keyid(sha1_tfm, pk);
+ crypto_digest_final(sha1_tfm, sha1);
+ crypto_free_tfm(sha1_tfm);
+
+ pk->keyid[0] = sha1[12] << 24 | sha1[13] << 16 | sha1[14] << 8 | sha1[15];
+ pk->keyid[1] = sha1[16] << 24 | sha1[17] << 16 | sha1[18] << 8 | sha1[19];
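+ /* For a v4 key the 64-bit key ID is the low 8 bytes of the 20-byte
+ * SHA-1 fingerprint computed above (bytes 12..19), as specified by
+ * RFC 2440 (OpenPGP). */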
+
+ rc = 0;
+ if (pkfnx)
+ rc = pkfnx(pk, fnxdata);
+
+ cleanup:
+ ksign_put_public_key(pk);
+ return rc;
+} /* end ksign_parse_key() */
+
+/*****************************************************************************/
+/*
+ *
+ */
+static const uint8_t *ksign_find_sig_issuer(const uint8_t *buffer)
+{
+ size_t buflen;
+ size_t n;
+ int type;
+
+ if (!buffer)
+ return NULL;
+
+ buflen = read_16(&buffer);
+ while (buflen) {
+ n = *buffer++; buflen--;
+ if (n == 255) {
+ if (buflen < 4)
+ goto too_short;
+ n = read_32(&buffer);
+ buflen -= 4;
+ }
+ else if (n >= 192) {
+ if(buflen < 2)
+ goto too_short;
+ n = ((n - 192) << 8) + *buffer + 192;
+ buffer++;
+ buflen--;
+ }
+
+ if (buflen < n)
+ goto too_short;
+
+ type = *buffer & 0x7f;
+ if (type == SIGSUBPKT_ISSUER) { /* found */
+ buffer++;
+ n--;
+ if (n > buflen || n < 8)
+ goto too_short;
+ return buffer;
+ }
+
+ buffer += n;
+ buflen -= n;
+ }
+
+ too_short:
+ return NULL; /* end of subpackets; not found */
+} /* end ksign_find_sig_issuer() */
+
+/*****************************************************************************/
+/*
+ * extract signature data embedded in a signature
+ */
+static int ksign_parse_signature(const uint8_t *datap, const uint8_t *endp,
+ ksign_signature_actor_t sigfnx, void *fnxdata)
+{
+ struct ksign_signature *sig;
+ size_t n;
+ int version, is_v4 = 0;
+ int rc;
+ int i;
+
+ if (endp - datap < 16) {
+ printk("ksign: signature packet too short\n");
+ return -EBADMSG;
+ }
+
+ version = *datap++;
+ switch (version) {
+ case 4:
+ is_v4 = 1;
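+ /* fall through */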
+ case 3:
+ case 2:
+ break;
+ default:
+ printk("ksign: signature packet with unknown version %d\n", version);
+ return 0;
+ }
+
+ /* store information */
+ sig = kmalloc(sizeof(*sig), GFP_KERNEL);
+ if (!sig)
+ return -ENOMEM;
+
+ memset(sig, 0, sizeof(*sig));
+ sig->version = version;
+
+ if (!is_v4)
+ datap++; /* ignore md5 length */
+
+ sig->sig_class = *datap++;
+ if (!is_v4) {
+ sig->timestamp = read_32(&datap);
+ sig->keyid[0] = read_32(&datap);
+ sig->keyid[1] = read_32(&datap);
+ }
+
+ rc = 0;
+ if (*datap++ != PUBKEY_ALGO_DSA) {
+ printk("ksign: ignoring non-DSA signature\n");
+ goto leave;
+ }
+ if (*datap++ != DIGEST_ALGO_SHA1) {
+ printk("ksign: ignoring non-SHA1 signature\n");
+ goto leave;
+ }
+
+ rc = -EBADMSG;
+ if (is_v4) { /* read subpackets */
+ n = read_16(&datap); /* length of hashed data */
+ if (n > 10000) {
+ printk("ksign: signature packet: hashed data too long\n");
+ goto leave;
+ }
+ if (n) {
+ if ((size_t)(endp - datap) < n) {
+ printk("ksign: signature packet: available data too short\n");
+ goto leave;
+ }
+ sig->hashed_data = kmalloc(n + 2, GFP_KERNEL);
+ if (!sig->hashed_data) {
+ rc = -ENOMEM;
+ goto leave;
+ }
+ sig->hashed_data[0] = n >> 8;
+ sig->hashed_data[1] = n;
+ memcpy(sig->hashed_data + 2, datap, n);
+ datap += n;
+ }
+
+ n = read_16(&datap); /* length of unhashed data */
+ if (n > 10000) {
+ printk("ksign: signature packet: unhashed data too long\n");
+ goto leave;
+ }
+ if (n) {
+ if ((size_t) (endp - datap) < n) {
+ printk("ksign: signature packet: available data too short\n");
+ goto leave;
+ }
+ sig->unhashed_data = kmalloc(n + 2, GFP_KERNEL);
+ if (!sig->unhashed_data) {
+ rc = -ENOMEM;
+ goto leave;
+ }
+ sig->unhashed_data[0] = n >> 8;
+ sig->unhashed_data[1] = n;
+ memcpy(sig->unhashed_data + 2, datap, n);
+ datap += n;
+ }
+ }
+
+ if (endp - datap < 5) { /* sanity check */
+ printk("ksign: signature packet too short\n");
+ goto leave;
+ }
+
+ sig->digest_start[0] = *datap++;
+ sig->digest_start[1] = *datap++;
+
+ if (is_v4) {
+ const uint8_t *p;
+
+ p = ksign_find_sig_issuer(sig->hashed_data);
+ if (!p)
+ p = ksign_find_sig_issuer(sig->unhashed_data);
+ if (!p)
+ printk("ksign: signature packet without issuer\n");
+ else {
+ sig->keyid[0] = buffer_to_u32(p);
+ sig->keyid[1] = buffer_to_u32(p + 4);
+ }
+ }
+
+ for (i = 0; i < DSA_NSIG; i++) {
+ unsigned remaining = endp - datap;
+ sig->data[i] = mpi_read_from_buffer(datap, &remaining);
+ datap += remaining;
+ }
+
+ rc = 0;
+ if (sigfnx) {
+ rc = sigfnx(sig, fnxdata);
+ if (rc == 0)
+ return rc; /* sigfnx keeps the signature */
+ if (rc == 1)
+ rc = 0;
+ }
+
+ leave:
+ ksign_free_signature(sig);
+ return rc;
+} /* end ksign_parse_signature() */
+
+/*****************************************************************************/
+/*
+ * parse the next packet and call appropriate handler function for known types
+ * - returns:
+ * 0 on EOF
+ * 1 if there might be more packets
+ * -EBADMSG if the packet is in an invalid format
+ * -ve on other error
+ */
+static int ksign_parse_one_packet(const uint8_t **datap,
+ const uint8_t *endp,
+ ksign_signature_actor_t sigfnx,
+ ksign_public_key_actor_t pkfnx,
+ ksign_user_id_actor_t uidfnx,
+ void *data)
+{
+ int rc, c, ctb, pkttype, lenbytes;
+ unsigned long pktlen;
+ uint8_t hdr[8];
+ int hdrlen;
+
+ /* extract the next packet and dispatch it */
+ rc = 0;
+ if (*datap >= endp)
+ goto leave;
+ ctb = *(*datap)++;
+
+ rc = -EBADMSG;
+
+ hdrlen = 0;
+ hdr[hdrlen++] = ctb;
+ if (!(ctb & 0x80)) {
+ printk("ksign: invalid packet (ctb=%02x)\n", ctb);
+ goto leave;
+ }
+
+ pktlen = 0;
+ if (ctb & 0x40) {
+ pkttype = ctb & 0x3f;
+ if (*datap >= endp) {
+ printk("ksign: 1st length byte missing\n");
+ goto leave;
+ }
+ c = *(*datap)++;
+ hdr[hdrlen++] = c;
+
+ if (c < 192) {
+ pktlen = c;
+ }
+ else if (c < 224) {
+ pktlen = (c - 192) * 256;
+ if (*datap >= endp) {
+ printk("ksign: 2nd length uint8_t missing\n");
+ goto leave;
+ }
+ c = *(*datap)++;
+ hdr[hdrlen++] = c;
+ pktlen += c + 192;
+ }
+ else if (c == 255) {
+ if (*datap + 3 >= endp) {
+ printk("ksign: 4 uint8_t length invalid\n");
+ goto leave;
+ }
+ /* store each raw length byte in hdr[], then assemble the big-endian value */
+ pktlen = (hdr[hdrlen++] = *(*datap)++) << 24;
+ pktlen |= (hdr[hdrlen++] = *(*datap)++) << 16;
+ pktlen |= (hdr[hdrlen++] = *(*datap)++) << 8;
+ pktlen |= (hdr[hdrlen++] = *(*datap)++) << 0;
+ }
+ else {
+ pktlen = 0;/* to indicate partial length */
+ }
+ }
+ else {
+ pkttype = (ctb >> 2) & 0xf;
+ lenbytes = ((ctb & 3) == 3) ? 0 : (1 << (ctb & 3));
+ if (!lenbytes) {
+ pktlen = 0; /* don't know the value */
+ }
+ else {
+ if (*datap + lenbytes > endp) {
+ printk("ksign: length bytes missing\n");
+ goto leave;
+ }
+ for (; lenbytes; lenbytes--) {
+ pktlen <<= 8;
+ pktlen |= hdr[hdrlen++] = *(*datap)++;
+ }
+ }
+ }
+
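+ /*
+ * Worked example for the new-format two-octet length parsed above:
+ * a first octet of 0xc1 (193) followed by 0x05 yields
+ * (193 - 192) * 256 + 5 + 192 = 453, per RFC 2440 section 4.2.2.
+ */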
+ if (*datap + pktlen > endp) {
+ printk("ksign: packet length longer than available data\n");
+ goto leave;
+ }
+
+ /* deal with the next packet appropriately */
+ switch (pkttype) {
+ case PKT_PUBLIC_KEY:
+ rc = ksign_parse_key(*datap, *datap + pktlen, hdr, hdrlen, pkfnx, data);
+ break;
+ case PKT_SIGNATURE:
+ rc = ksign_parse_signature(*datap, *datap + pktlen, sigfnx, data);
+ break;
+ case PKT_USER_ID:
+ rc = ksign_parse_user_id(*datap, *datap + pktlen, uidfnx, data);
+ break;
+ default:
+ rc = 0; /* unknown packet */
+ break;
+ }
+
+ *datap += pktlen;
+ leave:
+ return rc;
+} /* end ksign_parse_one_packet() */
+
+/*****************************************************************************/
+/*
+ * parse the contents of a packet buffer, passing the signature, public key and
+ * user ID to the caller's callback functions
+ */
+int ksign_parse_packets(const uint8_t *buf,
+ size_t size,
+ ksign_signature_actor_t sigfnx,
+ ksign_public_key_actor_t pkfnx,
+ ksign_user_id_actor_t uidfnx,
+ void *data)
+{
+ const uint8_t *datap, *endp;
+ int rc;
+
+ datap = buf;
+ endp = buf + size;
+ do {
+ rc = ksign_parse_one_packet(&datap, endp,
+ sigfnx, pkfnx, uidfnx, data);
+ } while (rc == 0 && datap < endp);
+
+ return rc;
+} /* end ksign_parse_packets() */
--- /dev/null
+#include "local.h"
+
+#include "key.h"
+
+static int __init ksign_init(void)
+{
+ int rc;
+
+ printk("ksign: Installing public key data\n");
+
+ rc = ksign_load_keyring_from_buffer(ksign_def_public_key,
+ ksign_def_public_key_size);
+ if (rc < 0)
+ printk("Unable to load default keyring: error=%d\n", -rc);
+
+ return rc;
+}
+
+module_init(ksign_init)
--- /dev/null
+/*
+ * Cryptographic API.
+ *
+ * TEA and Xtended TEA Algorithms
+ *
+ * The TEA and Xtended TEA algorithms were developed by David Wheeler
+ * and Roger Needham at the Computer Laboratory of Cambridge University.
+ *
+ * Copyright (c) 2004 Aaron Grothe ajgrothe@yahoo.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <linux/crypto.h>
+
+#define TEA_KEY_SIZE 16
+#define TEA_BLOCK_SIZE 8
+#define TEA_ROUNDS 32
+#define TEA_DELTA 0x9e3779b9
+
+#define XTEA_KEY_SIZE 16
+#define XTEA_BLOCK_SIZE 8
+#define XTEA_ROUNDS 32
+#define XTEA_DELTA 0x9e3779b9
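+
+/* The shared delta 0x9e3779b9 is floor(2^32 / golden ratio), the magic
+   constant from the TEA paper; each cipher adds or subtracts it once per
+   round to derive its key schedule. */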
+
+#define u32_in(x) le32_to_cpu(*(const u32 *)(x))
+#define u32_out(to, from) (*(u32 *)(to) = cpu_to_le32(from))
+
+struct tea_ctx {
+ u32 KEY[4];
+};
+
+struct xtea_ctx {
+ u32 KEY[4];
+};
+
+static int tea_setkey(void *ctx_arg, const u8 *in_key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ if (key_len != 16)
+ {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->KEY[0] = u32_in (in_key);
+ ctx->KEY[1] = u32_in (in_key + 4);
+ ctx->KEY[2] = u32_in (in_key + 8);
+ ctx->KEY[3] = u32_in (in_key + 12);
+
+ return 0;
+
+}
+
+static void tea_encrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ u32 y, z, n, sum = 0;
+ u32 k0, k1, k2, k3;
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ k0 = ctx->KEY[0];
+ k1 = ctx->KEY[1];
+ k2 = ctx->KEY[2];
+ k3 = ctx->KEY[3];
+
+ n = TEA_ROUNDS;
+
+ while (n-- > 0) {
+ sum += TEA_DELTA;
+ y += ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
+ z += ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+}
+
+static void tea_decrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+ u32 y, z, n, sum;
+ u32 k0, k1, k2, k3;
+
+ struct tea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ k0 = ctx->KEY[0];
+ k1 = ctx->KEY[1];
+ k2 = ctx->KEY[2];
+ k3 = ctx->KEY[3];
+
+ sum = TEA_DELTA << 5;
+
+ n = TEA_ROUNDS;
+
+ while (n-- > 0) {
+ z -= ((y << 4) + k2) ^ (y + sum) ^ ((y >> 5) + k3);
+ y -= ((z << 4) + k0) ^ (z + sum) ^ ((z >> 5) + k1);
+ sum -= TEA_DELTA;
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static int xtea_setkey(void *ctx_arg, const u8 *in_key,
+ unsigned int key_len, u32 *flags)
+{
+
+ struct xtea_ctx *ctx = ctx_arg;
+
+ if (key_len != XTEA_KEY_SIZE) {
+ *flags |= CRYPTO_TFM_RES_BAD_KEY_LEN;
+ return -EINVAL;
+ }
+
+ ctx->KEY[0] = u32_in (in_key);
+ ctx->KEY[1] = u32_in (in_key + 4);
+ ctx->KEY[2] = u32_in (in_key + 8);
+ ctx->KEY[3] = u32_in (in_key + 12);
+
+ return 0;
+
+}
+
+static void xtea_encrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+
+ u32 y, z, sum = 0;
+ u32 limit = XTEA_DELTA * XTEA_ROUNDS;
+
+ struct xtea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ while (sum != limit) {
+ /* reference XTEA round (Wheeler/Needham): the key word is
+ * mixed with sum, and the data word is added to the shifts */
+ y += ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum & 3]);
+ sum += XTEA_DELTA;
+ z += ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum >> 11 & 3]);
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static void xtea_decrypt(void *ctx_arg, u8 *dst, const u8 *src)
+{
+
+ u32 y, z, sum;
+ struct xtea_ctx *ctx = ctx_arg;
+
+ y = u32_in (src);
+ z = u32_in (src + 4);
+
+ sum = XTEA_DELTA * XTEA_ROUNDS;
+
+ while (sum) {
+ /* inverse of the reference XTEA round above */
+ z -= ((y << 4 ^ y >> 5) + y) ^ (sum + ctx->KEY[sum >> 11 & 3]);
+ sum -= XTEA_DELTA;
+ y -= ((z << 4 ^ z >> 5) + z) ^ (sum + ctx->KEY[sum & 3]);
+ }
+
+ u32_out (dst, y);
+ u32_out (dst + 4, z);
+
+}
+
+static struct crypto_alg tea_alg = {
+ .cra_name = "tea",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = TEA_BLOCK_SIZE,
+ .cra_ctxsize = sizeof (struct tea_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(tea_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = TEA_KEY_SIZE,
+ .cia_max_keysize = TEA_KEY_SIZE,
+ .cia_setkey = tea_setkey,
+ .cia_encrypt = tea_encrypt,
+ .cia_decrypt = tea_decrypt } }
+};
+
+static struct crypto_alg xtea_alg = {
+ .cra_name = "xtea",
+ .cra_flags = CRYPTO_ALG_TYPE_CIPHER,
+ .cra_blocksize = XTEA_BLOCK_SIZE,
+ .cra_ctxsize = sizeof (struct xtea_ctx),
+ .cra_module = THIS_MODULE,
+ .cra_list = LIST_HEAD_INIT(xtea_alg.cra_list),
+ .cra_u = { .cipher = {
+ .cia_min_keysize = XTEA_KEY_SIZE,
+ .cia_max_keysize = XTEA_KEY_SIZE,
+ .cia_setkey = xtea_setkey,
+ .cia_encrypt = xtea_encrypt,
+ .cia_decrypt = xtea_decrypt } }
+};
+
+static int __init init(void)
+{
+ int ret = 0;
+
+ ret = crypto_register_alg(&tea_alg);
+ if (ret < 0)
+ goto out;
+
+ ret = crypto_register_alg(&xtea_alg);
+ if (ret < 0) {
+ crypto_unregister_alg(&tea_alg);
+ goto out;
+ }
+
+out:
+ return ret;
+}
+
+static void __exit fini(void)
+{
+ crypto_unregister_alg(&tea_alg);
+ crypto_unregister_alg(&xtea_alg);
+}
+
+MODULE_ALIAS("xtea");
+
+module_init(init);
+module_exit(fini);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("TEA & XTEA Cryptographic Algorithms");
--- /dev/null
+/*
+ * linux/drivers/block/diskdump.c
+ *
+ * Copyright (C) 2004 FUJITSU LIMITED
+ * Copyright (C) 2002 Red Hat, Inc.
+ * Written by Nobuhiro Tachino (ntachino@jp.fujitsu.com)
+ *
+ * Some code was derived from netdump; its copyright belongs to
+ * Red Hat, Inc.
+ */
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/reboot.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/smp_lock.h>
+#include <linux/nmi.h>
+#include <linux/crc32.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/seq_file.h>
+#include <linux/proc_fs.h>
+#include <linux/diskdump.h>
+#include <asm/diskdump.h>
+
+#define Dbg(x, ...) pr_debug("disk_dump: " x "\n", ## __VA_ARGS__)
+#define Err(x, ...) pr_err ("disk_dump: " x "\n", ## __VA_ARGS__)
+#define Warn(x, ...) pr_warn ("disk_dump: " x "\n", ## __VA_ARGS__)
+#define Info(x, ...) pr_info ("disk_dump: " x "\n", ## __VA_ARGS__)
+
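+/* ceiling division: the number of y-sized units needed to cover x */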
+#define ROUNDUP(x, y) (((x) + ((y)-1))/(y))
+
+/* 512byte sectors to blocks */
+#define SECTOR_BLOCK(s) ((s) >> (DUMP_BLOCK_SHIFT - 9))
+
+/* The block number used for saving format information */
+#define USER_PARAM_BLOCK 2
+
+static int fallback_on_err = 1;
+static int allow_risky_dumps = 1;
+static unsigned int block_order = 2;
+static int sample_rate = 8;
+module_param_named(fallback_on_err, fallback_on_err, bool, S_IRUGO|S_IWUSR);
+module_param_named(allow_risky_dumps, allow_risky_dumps, bool, S_IRUGO|S_IWUSR);
+module_param_named(block_order, block_order, uint, S_IRUGO|S_IWUSR);
+module_param_named(sample_rate, sample_rate, int, S_IRUGO|S_IWUSR);
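+/*
+ * These are ordinary module parameters, so they can be given at load
+ * time (e.g. "modprobe diskdump sample_rate=-1 block_order=4") or,
+ * being marked S_IWUSR, changed at runtime where the kernel exposes
+ * module parameters through sysfs.
+ */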
+
+static unsigned long timestamp_1sec;
+static uint32_t module_crc;
+static char *scratch;
+static struct disk_dump_header dump_header;
+static struct disk_dump_sub_header dump_sub_header;
+
+/* Registered dump devices */
+static LIST_HEAD(disk_dump_devices);
+
+/* Registered dump types, e.g. SCSI, ... */
+static LIST_HEAD(disk_dump_types);
+
+static DECLARE_MUTEX(disk_dump_mutex);
+
+static unsigned int header_blocks; /* The size of all headers */
+static unsigned int bitmap_blocks; /* The size of bitmap header */
+static unsigned int total_ram_blocks; /* The size of memory */
+static unsigned int total_blocks; /* The sum of above */
+/*
+ * This is not really a parameter; it is used to pass the number of
+ * required blocks to userland tools.
+ */
+module_param_named(total_blocks, total_blocks, uint, S_IRUGO);
+
+struct notifier_block *disk_dump_notifier_list;
+EXPORT_SYMBOL_GPL(disk_dump_notifier_list);
+
+unsigned long volatile diskdump_base_jiffies;
+void *diskdump_stack;
+enum disk_dump_states disk_dump_state = DISK_DUMP_INITIAL;
+
+extern int panic_timeout;
+extern unsigned long max_pfn;
+
+static asmlinkage void disk_dump(struct pt_regs *, void *);
+
+
+#ifdef CONFIG_SMP
+static void freeze_cpu(void *dummy)
+{
+ unsigned int cpu = smp_processor_id();
+
+ dump_header.tasks[cpu] = current;
+
+ platform_freeze_cpu();
+}
+#endif
+
+static int lapse = 0; /* 200msec unit */
+
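+/*
+ * Estimate the remaining time: lapse counts 200msec ticks since the
+ * pass started, so the expected total is (maxnr / nr) * lapse ticks
+ * and subtracting the ticks already spent leaves the ETA. The << 8
+ * and >> 8 pair is simple fixed-point scaling that preserves
+ * precision in the division without floating point.
+ */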
+static inline unsigned long eta(unsigned long nr, unsigned long maxnr)
+{
+ unsigned long long eta;
+
+ if (nr == 0)
+ nr = 1;
+
+ eta = ((maxnr << 8) / nr) * (unsigned long long)lapse;
+
+ return (unsigned long)(eta >> 8) - lapse;
+}
+
+static inline void print_status(unsigned int nr, unsigned int maxnr)
+{
+ static char *spinner = "/|\\-";
+ static unsigned long long prev_timestamp = 0;
+ unsigned long long timestamp;
+
+ if (nr == 0)
+ nr++;
+
+ platform_timestamp(timestamp);
+
+ if (timestamp - prev_timestamp > (timestamp_1sec/5)) {
+ prev_timestamp = timestamp;
+ lapse++;
+ printk("%u/%u %lu ETA %c \r",
+ nr, maxnr, eta(nr, maxnr) / 5, spinner[lapse & 3]);
+ }
+}
+
+static inline void clear_status(int nr, int maxnr)
+{
+ printk(" \r");
+ lapse = 0;
+}
+
+/*
+ * Checking the signature on a block. The format is as follows.
+ *
+ * 1st word = 'disk'
+ * 2nd word = 'dump'
+ * 3rd word = block number
+ * 4th word = ((block number + 7) * 11) & 0xffffffff
+ * 5th word = ((4th word + 7)* 11) & 0xffffffff
+ * ..
+ *
+ * Return 1 if the signature is correct, else return 0
+ */
+static int check_block_signature(void *buf, unsigned int block_nr)
+{
+ int word_nr = PAGE_SIZE / sizeof(int);
+ int *words = buf;
+ unsigned int val;
+ int i;
+
+ /*
+ * Block 2 is the area where the formatter saves options such as
+ * the sampling rate or the number of blocks. The kernel does not
+ * check this block.
+ */
+ if (block_nr == USER_PARAM_BLOCK)
+ return 1;
+
+ if (memcmp(buf, DUMP_PARTITION_SIGNATURE, sizeof(*words)))
+ return 0;
+
+ val = block_nr;
+ for (i = 2; i < word_nr; i++) {
+ if (words[i] != val)
+ return 0;
+ val = (val + 7) * 11;
+ }
+
+ return 1;
+}
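+
+/*
+ * Hedged illustration (not used by the driver): the formatter-side
+ * counterpart that would generate the pattern check_block_signature()
+ * verifies. It assumes DUMP_PARTITION_SIGNATURE is the two words
+ * 'disk' 'dump' filling words 0 and 1 of the block.
+ */
+#if 0
+static void fill_block_signature(void *buf, unsigned int block_nr)
+{
+ int word_nr = PAGE_SIZE / sizeof(int);
+ int *words = buf;
+ unsigned int val = block_nr;
+ int i;
+
+ memcpy(buf, DUMP_PARTITION_SIGNATURE, 2 * sizeof(*words));
+ for (i = 2; i < word_nr; i++) {
+ words[i] = val;
+ val = (val + 7) * 11;
+ }
+}
+#endif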
+
+/*
+ * Read blocks from the dump partition
+ */
+static int read_blocks(struct disk_dump_partition *dump_part, unsigned int nr,
+ char *buf, int len)
+{
+ struct disk_dump_device *device = dump_part->device;
+ int ret;
+
+ local_irq_disable();
+ touch_nmi_watchdog();
+ ret = device->ops.rw_block(dump_part, READ, nr, buf, len);
+ if (ret < 0) {
+ Err("read error on block %u", nr);
+ return ret;
+ }
+ return 0;
+}
+
+static int write_blocks(struct disk_dump_partition *dump_part, unsigned int offs, char *buf, int len)
+{
+ struct disk_dump_device *device = dump_part->device;
+ int ret;
+
+ local_irq_disable();
+ touch_nmi_watchdog();
+ ret = device->ops.rw_block(dump_part, WRITE, offs, buf, len);
+ if (ret < 0) {
+ Err("write error on block %u", offs);
+ return ret;
+ }
+ return 0;
+}
+
+/*
+ * Write the common header
+ */
+static int write_header(struct disk_dump_partition *dump_part)
+{
+ memset(scratch, 0, PAGE_SIZE);
+ memcpy(scratch, &dump_header, sizeof(dump_header));
+
+ return write_blocks(dump_part, 1, scratch, 1);
+}
+
+/*
+ * Check the signatures in all blocks of the dump partition.
+ * Return 1 if the signatures are correct, else return 0.
+ */
+static int check_dump_partition(struct disk_dump_partition *dump_part,
+ unsigned int partition_size)
+{
+ unsigned int blk;
+ int ret;
+ unsigned int chunk_blks, skips;
+ int i;
+
+ if (sample_rate < 0) /* No check */
+ return 1;
+
+ /*
+ * If the device has limitations of transfer size, use it.
+ */
+ chunk_blks = 1 << block_order;
+ if (dump_part->device->max_blocks)
+ chunk_blks = min(chunk_blks, dump_part->device->max_blocks);
+ skips = chunk_blks << sample_rate;
+
+ lapse = 0;
+ for (blk = 0; blk < partition_size; blk += skips) {
+ unsigned int len;
+redo:
+ len = min(chunk_blks, partition_size - blk);
+ if ((ret = read_blocks(dump_part, blk, scratch, len)) < 0)
+ return 0;
+ print_status(blk + 1, partition_size);
+ for (i = 0; i < len; i++)
+ if (!check_block_signature(scratch + i * DUMP_BLOCK_SIZE, blk + i)) {
+ Err("bad signature in block %u", blk + i);
+ return 0;
+ }
+ }
+ /* Check the end of the dump partition */
+ if (blk - skips + chunk_blks < partition_size) {
+ blk = partition_size - chunk_blks;
+ goto redo;
+ }
+ clear_status(blk, partition_size);
+ return 1;
+}
+
+/*
+ * Write memory bitmap after location of dump headers.
+ */
+#define PAGE_PER_BLOCK (PAGE_SIZE * 8)
+#define idx_to_pfn(nr, byte, bit) (((nr) * PAGE_SIZE + (byte)) * 8 + (bit))
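+/* e.g. bit 3 of byte 1 in bitmap block 0 describes PFN 11 */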
+
+static int write_bitmap(struct disk_dump_partition *dump_part,
+ unsigned int bitmap_offset, unsigned int bitmap_blocks)
+{
+ unsigned int nr;
+ unsigned long pfn, next_ram_pfn;
+ int bit, byte;
+ int ret = 0;
+ unsigned char val;
+
+ for (nr = 0; nr < bitmap_blocks; nr++) {
+ pfn = idx_to_pfn(nr, 0, 0);
+ next_ram_pfn = next_ram_page(pfn - 1);
+
+ if (pfn + PAGE_PER_BLOCK <= next_ram_pfn)
+ memset(scratch, 0, PAGE_SIZE);
+ else
+ for (byte = 0; byte < PAGE_SIZE; byte++) {
+ val = 0;
+ for (bit = 0; bit < 8; bit++)
+ if (page_is_ram(idx_to_pfn(nr, byte,
+ bit)))
+ val |= (1 << bit);
+ scratch[byte] = (char)val;
+ }
+ if ((ret = write_blocks(dump_part, bitmap_offset + nr,
+ scratch, 1)) < 0) {
+ Err("I/O error %d on block %u", ret, bitmap_offset + nr);
+ break;
+ }
+ }
+ return ret;
+}
+
+/*
+ * Write whole memory to the dump partition. The return value is an
+ * error code; the number of written blocks is passed back through
+ * *blocks_written.
+ */
+static int write_memory(struct disk_dump_partition *dump_part, int offset,
+ unsigned int max_blocks_written,
+ unsigned int *blocks_written)
+{
+ char *kaddr;
+ unsigned int blocks = 0;
+ struct page *page;
+ unsigned long nr;
+ int ret = 0;
+ int blk_in_chunk = 0;
+
+ for (nr = next_ram_page(ULONG_MAX); nr < ULONG_MAX; nr = next_ram_page(nr)) {
+ print_status(blocks, max_blocks_written);
+
+
+ if (blocks >= max_blocks_written) {
+ Warn("dump device is too small. %lu pages were not saved", max_pfn - blocks);
+ goto out;
+ }
+
+ page = pfn_to_page(nr);
+ if (nr != page_to_pfn(page)) {
+ /* page_to_pfn() is called from kmap_atomic().
+ * If page->flags is corrupt, it may indicate the wrong
+ * zone and cause kmap_atomic() to fail.
+ */
+ Err("Bad page. PFN %lu flags %lx\n",
+ nr, (unsigned long)page->flags);
+ memset(scratch + blk_in_chunk * PAGE_SIZE, 0,
+ PAGE_SIZE);
+ sprintf(scratch + blk_in_chunk * PAGE_SIZE,
+ "Bad page. PFN %lu flags %lx\n",
+ nr, (unsigned long)page->flags);
+ goto write;
+ }
+
+ if (!kern_addr_valid((unsigned long)pfn_to_kaddr(nr))) {
+ memset(scratch + blk_in_chunk * PAGE_SIZE, 0,
+ PAGE_SIZE);
+ sprintf(scratch + blk_in_chunk * PAGE_SIZE,
+ "Unmapped page. PFN %lu\n", nr);
+ goto write;
+ }
+
+ kaddr = kmap_atomic(page, KM_CRASHDUMP);
+ /*
+ * need to copy because adapter drivers use
+ * virt_to_bus()
+ */
+ memcpy(scratch + blk_in_chunk * PAGE_SIZE, kaddr, PAGE_SIZE);
+ kunmap_atomic(kaddr, KM_CRASHDUMP);
+
+write:
+ blk_in_chunk++;
+ blocks++;
+
+ if (blk_in_chunk >= (1 << block_order)) {
+ ret = write_blocks(dump_part, offset, scratch,
+ blk_in_chunk);
+ if (ret < 0) {
+ Err("I/O error %d on block %u", ret, offset);
+ break;
+ }
+ offset += blk_in_chunk;
+ blk_in_chunk = 0;
+ }
+ }
+ if (ret >= 0 && blk_in_chunk > 0) {
+ ret = write_blocks(dump_part, offset, scratch, blk_in_chunk);
+ if (ret < 0)
+ Err("I/O error %d on block %u", ret, offset);
+ }
+
+out:
+ clear_status(nr, max_blocks_written);
+
+ *blocks_written = blocks;
+ return ret;
+}
+
+/*
+ * Select the most suitable dump device. sanity_check() returns the
+ * state of each dump device: 0 means OK, a negative value means
+ * broken, and a positive value means it may work.
+ * select_dump_partition() first tries to select a sane device; if
+ * there is none and allow_risky_dumps is set, it picks one of the
+ * "may work" devices.
+ *
+ * XXX We cannot handle multiple partitions yet.
+ */
+static struct disk_dump_partition *select_dump_partition(void)
+{
+ struct disk_dump_device *dump_device;
+ struct disk_dump_partition *dump_part;
+ int sanity;
+ int strict_check = 1;
+
+redo:
+ /*
+ * Select a sane polling driver.
+ */
+ list_for_each_entry(dump_device, &disk_dump_devices, list) {
+ sanity = 0;
+ if (dump_device->ops.sanity_check)
+ sanity = dump_device->ops.sanity_check(dump_device);
+ if (sanity < 0 || (sanity > 0 && strict_check))
+ continue;
+ list_for_each_entry(dump_part, &dump_device->partitions, list)
+ return dump_part;
+ }
+ if (allow_risky_dumps && strict_check) {
+ strict_check = 0;
+ goto redo;
+ }
+ return NULL;
+}
+
+static int dump_err = 0; /* Records an error that occurred in
+ * disk_dump(). It has to be global because
+ * disk_dump() cannot pass the error state
+ * back as a return value.
+ */
+
+static void freeze_other_cpus(void)
+{
+#ifdef CONFIG_SMP
+ int i;
+
+ smp_call_function(freeze_cpu, NULL, 1, -1);
+ diskdump_mdelay(3000);
+ printk("CPU frozen: ");
+ for (i = 0; i < NR_CPUS; i++) {
+ if (dump_header.tasks[i] != NULL)
+ printk("#%d", i);
+
+ }
+ printk("\n");
+ printk("CPU#%d is executing diskdump.\n", smp_processor_id());
+#else
+ diskdump_mdelay(1000);
+#endif
+ dump_header.tasks[smp_processor_id()] = current;
+}
+
+static void start_disk_dump(struct pt_regs *regs)
+{
+ unsigned long flags;
+
+ /* Inhibit interrupt and stop other CPUs */
+ local_irq_save(flags);
+ preempt_disable();
+
+ if (down_trylock(&disk_dump_mutex)) {
+ Err("down_trylock(disk_dump_mutex) failed.");
+ goto done;
+ }
+
+ /*
+ * Check our own checksum to catch memory corruption
+ */
+ if (!check_crc_module()) {
+ Err("checksum error. diskdump common module may be compromised.");
+ goto done;
+ }
+
+ disk_dump_state = DISK_DUMP_RUNNING;
+
+ diskdump_mode = 1;
+
+ Dbg("notify dump start.");
+ notifier_call_chain(&disk_dump_notifier_list, 0, NULL);
+
+ touch_nmi_watchdog();
+ freeze_other_cpus();
+
+ /*
+ * Some platforms may want to execute netdump on its own stack.
+ */
+ platform_start_crashdump(diskdump_stack, disk_dump, regs);
+
+done:
+ /*
+ * If diskdump failed and fallback_on_err is set,
+ * We just return and leave panic to netdump.
+ */
+ if (dump_err) {
+ disk_dump_state = DISK_DUMP_FAILURE;
+ if (fallback_on_err && dump_err)
+ return;
+ } else {
+ disk_dump_state = DISK_DUMP_SUCCESS;
+ }
+
+ Dbg("notify panic.");
+ notifier_call_chain(&panic_notifier_list, 0, NULL);
+
+ if (panic_timeout > 0) {
+ int i;
+ /*
+ * Delay timeout seconds before rebooting the machine.
+ * We can't use the "normal" timers since we just panicked..
+ */
+ printk(KERN_EMERG "Rebooting in %d seconds..",panic_timeout);
+ for (i = 0; i < panic_timeout; i++) {
+ touch_nmi_watchdog();
+ diskdump_mdelay(1000);
+ }
+
+ /*
+ * Should we run the reboot notifiers? For the moment I'm
+ * choosing not to: they might crash, be corrupt, or do
+ * more harm than good for other reasons.
+ */
+ machine_restart(NULL);
+ }
+ printk(KERN_EMERG "halt\n");
+ for (;;) {
+ touch_nmi_watchdog();
+ machine_halt();
+ diskdump_mdelay(1000);
+ }
+}
+
+static asmlinkage void disk_dump(struct pt_regs *regs, void *platform_arg)
+{
+ struct pt_regs myregs;
+ unsigned int max_written_blocks, written_blocks;
+ struct disk_dump_device *dump_device = NULL;
+ struct disk_dump_partition *dump_part = NULL;
+ int ret;
+
+ dump_err = -EIO;
+
+ /*
+ * Setup timer/tasklet
+ */
+ dump_clear_timers();
+ dump_clear_tasklet();
+ dump_clear_workqueue();
+
+ /* Save original jiffies value */
+ diskdump_base_jiffies = jiffies;
+
+ diskdump_setup_timestamp();
+
+ platform_fix_regs();
+
+ if (list_empty(&disk_dump_devices)) {
+ Err("adapter driver is not registered.");
+ goto done;
+ }
+
+ printk("start dumping\n");
+
+ if (!(dump_part = select_dump_partition())) {
+ Err("No sane dump device found");
+ goto done;
+ }
+ dump_device = dump_part->device;
+
+ /*
+ * Stop ongoing I/O with polling driver and make the shift to I/O mode
+ * for dump
+ */
+ Dbg("do quiesce");
+ if (dump_device->ops.quiesce)
+ if ((ret = dump_device->ops.quiesce(dump_device)) < 0) {
+ Err("quiesce failed. error %d", ret);
+ goto done;
+ }
+
+ if (SECTOR_BLOCK(dump_part->nr_sects) < header_blocks + bitmap_blocks) {
+ Warn("dump partition is too small. Aborted");
+ goto done;
+ }
+
+ /* Check dump partition */
+ printk("check dump partition...\n");
+ if (!check_dump_partition(dump_part, total_blocks)) {
+ Err("check partition failed.");
+ goto done;
+ }
+
+ /*
+ * Write the common header
+ */
+ memcpy(dump_header.signature, DISK_DUMP_SIGNATURE,
+ sizeof(dump_header.signature));
+ dump_header.utsname = system_utsname;
+ dump_header.timestamp = xtime;
+ dump_header.status = DUMP_HEADER_INCOMPLETED;
+ dump_header.block_size = PAGE_SIZE;
+ dump_header.sub_hdr_size = size_of_sub_header();
+ dump_header.bitmap_blocks = bitmap_blocks;
+ dump_header.max_mapnr = max_pfn;
+ dump_header.total_ram_blocks = total_ram_blocks;
+ dump_header.device_blocks = SECTOR_BLOCK(dump_part->nr_sects);
+ dump_header.current_cpu = smp_processor_id();
+ dump_header.nr_cpus = num_online_cpus();
+ dump_header.written_blocks = 2;
+
+ write_header(dump_part);
+
+ /*
+ * Write the architecture dependent header
+ */
+ Dbg("write sub header");
+ if ((ret = write_sub_header()) < 0) {
+ Err("writing sub header failed. error %d", ret);
+ goto done;
+ }
+
+ Dbg("writing memory bitmaps..");
+ if ((ret = write_bitmap(dump_part, header_blocks, bitmap_blocks)) < 0)
+ goto done;
+
+ max_written_blocks = total_ram_blocks;
+ if (dump_header.device_blocks < total_blocks) {
+ Warn("dump partition is too small. actual blocks %u. expected blocks %u. whole memory will not be saved",
+ dump_header.device_blocks, total_blocks);
+ max_written_blocks -= (total_blocks - dump_header.device_blocks);
+ }
+
+ dump_header.written_blocks += dump_header.sub_hdr_size;
+ dump_header.written_blocks += dump_header.bitmap_blocks;
+ write_header(dump_part);
+
+ printk("dumping memory..\n");
+ if ((ret = write_memory(dump_part, header_blocks + bitmap_blocks,
+ max_written_blocks, &written_blocks)) < 0)
+ goto done;
+
+ /*
+ * Update the number of blocks written and write the header to
+ * the partition again.
+ */
+ dump_header.written_blocks += written_blocks;
+ dump_header.status = DUMP_HEADER_COMPLETED;
+ write_header(dump_part);
+
+ dump_err = 0;
+
+done:
+ Dbg("do adapter shutdown.");
+ if (dump_device && dump_device->ops.shutdown)
+ if (dump_device->ops.shutdown(dump_device))
+ Err("adapter shutdown failed.");
+}
+
+static struct disk_dump_partition *find_dump_partition(struct block_device *bdev)
+{
+ struct disk_dump_device *dump_device;
+ struct disk_dump_partition *dump_part;
+
+ list_for_each_entry(dump_device, &disk_dump_devices, list)
+ list_for_each_entry(dump_part, &dump_device->partitions, list)
+ if (dump_part->bdev == bdev)
+ return dump_part;
+ return NULL;
+}
+
+static struct disk_dump_device *find_dump_device(struct disk_dump_device *device)
+{
+ struct disk_dump_device *dump_device;
+
+ list_for_each_entry(dump_device, &disk_dump_devices, list)
+ if (device->device == dump_device->device)
+ return dump_device;
+ return NULL;
+}
+
+static void *find_real_device(struct device *dev,
+ struct disk_dump_type **_dump_type)
+{
+ void *real_device;
+ struct disk_dump_type *dump_type;
+
+ list_for_each_entry(dump_type, &disk_dump_types, list)
+ if ((real_device = dump_type->probe(dev)) != NULL) {
+ *_dump_type = dump_type;
+ return real_device;
+ }
+ return NULL;
+}
+
+/*
+ * Add dump partition structure corresponding to file to the dump device
+ * structure.
+ */
+static int add_dump_partition(struct disk_dump_device *dump_device,
+ struct block_device *bdev)
+{
+ struct disk_dump_partition *dump_part;
+ char buffer[BDEVNAME_SIZE];
+
+ /* validate bdev before allocating, so the error path cannot leak */
+ if (!bdev || !bdev->bd_part)
+ return -EINVAL;
+
+ if (!(dump_part = kmalloc(sizeof(*dump_part), GFP_KERNEL)))
+ return -ENOMEM;
+
+ dump_part->device = dump_device;
+ dump_part->bdev = bdev;
+ dump_part->nr_sects = bdev->bd_part->nr_sects;
+ dump_part->start_sect = bdev->bd_part->start_sect;
+
+ if (SECTOR_BLOCK(dump_part->nr_sects) < total_blocks)
+ Warn("%s is too small to save whole system memory\n",
+ bdevname(bdev, buffer));
+
+ list_add(&dump_part->list, &dump_device->partitions);
+
+ return 0;
+}
+
+/*
+ * Add dump device and partition.
+ * Must be called with disk_dump_mutex held.
+ */
+static int add_dump(struct device *dev, struct block_device *bdev)
+{
+ struct disk_dump_type *dump_type = NULL;
+ struct disk_dump_device *dump_device;
+ void *real_device;
+ int ret;
+
+ if ((ret = blkdev_get(bdev, FMODE_READ, 0)) < 0)
+ return ret;
+
+ /* Check whether this block device is already registered */
+ if (find_dump_partition(bdev)) {
+ blkdev_put(bdev);
+ return -EEXIST;
+ }
+
+ /* find dump_type and real device for this inode */
+ if (!(real_device = find_real_device(dev, &dump_type))) {
+ blkdev_put(bdev);
+ return -ENXIO;
+ }
+
+ /* Check whether this device is already registered */
+ dump_device = find_dump_device(real_device);
+ if (dump_device == NULL) {
+ /* real_device is not registered. create new dump_device */
+ if (!(dump_device = kmalloc(sizeof(*dump_device), GFP_KERNEL))) {
+ blkdev_put(bdev);
+ return -ENOMEM;
+ }
+
+ memset(dump_device, 0, sizeof(*dump_device));
+ INIT_LIST_HEAD(&dump_device->partitions);
+
+ dump_device->dump_type = dump_type;
+ dump_device->device = real_device;
+ if ((ret = dump_type->add_device(dump_device)) < 0) {
+ kfree(dump_device);
+ blkdev_put(bdev);
+ return ret;
+ }
+ if (!try_module_get(dump_type->owner)) {
+ dump_type->remove_device(dump_device);
+ kfree(dump_device);
+ blkdev_put(bdev);
+ return -EINVAL;
+ }
+ list_add(&dump_device->list, &disk_dump_devices);
+ }
+
+ ret = add_dump_partition(dump_device, bdev);
+ if (ret < 0 && list_empty(&dump_device->partitions)) {
+ dump_type->remove_device(dump_device);
+ module_put(dump_type->owner);
+ list_del(&dump_device->list);
+ kfree(dump_device);
+ }
+ if (ret < 0)
+ blkdev_put(bdev);
+
+ return ret;
+}
+
+/*
+ * Remove dump partition corresponding to bdev.
+ * Must be called with disk_dump_mutex held.
+ */
+static int remove_dump(struct block_device *bdev)
+{
+ struct disk_dump_device *dump_device;
+ struct disk_dump_partition *dump_part;
+ struct disk_dump_type *dump_type;
+
+ if (!(dump_part = find_dump_partition(bdev))) {
+ bdput(bdev);
+ return -ENOENT;
+ }
+
+ blkdev_put(bdev);
+ dump_device = dump_part->device;
+ list_del(&dump_part->list);
+ kfree(dump_part);
+
+ if (list_empty(&dump_device->partitions)) {
+ dump_type = dump_device->dump_type;
+ dump_type->remove_device(dump_device);
+ module_put(dump_type->owner);
+ list_del(&dump_device->list);
+ kfree(dump_device);
+ }
+
+ return 0;
+}
+
+#ifdef CONFIG_PROC_FS
+static struct disk_dump_partition *dump_part_by_pos(struct seq_file *seq,
+ loff_t pos)
+{
+ struct disk_dump_device *dump_device;
+ struct disk_dump_partition *dump_part;
+
+ list_for_each_entry(dump_device, &disk_dump_devices, list) {
+ seq->private = dump_device;
+ list_for_each_entry(dump_part, &dump_device->partitions, list)
+ if (!pos--)
+ return dump_part;
+ }
+ return NULL;
+}
+
+static void *disk_dump_seq_start(struct seq_file *seq, loff_t *pos)
+{
+ loff_t n = *pos;
+
+ down(&disk_dump_mutex);
+
+ if (!n--)
+ return (void *)1; /* header */
+
+ return dump_part_by_pos(seq, n);
+}
+
+static void *disk_dump_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ struct list_head *partition = v;
+ struct list_head *device = seq->private;
+ struct disk_dump_device *dump_device;
+
+ (*pos)++;
+ if (v == (void *)1)
+ return dump_part_by_pos(seq, 0);
+
+ dump_device = list_entry(device, struct disk_dump_device, list);
+
+ partition = partition->next;
+ if (partition != &dump_device->partitions)
+ return partition;
+
+ device = device->next;
+ seq->private = device;
+ if (device == &disk_dump_devices)
+ return NULL;
+
+ dump_device = list_entry(device, struct disk_dump_device, list);
+
+ return dump_device->partitions.next;
+}
+
+static void disk_dump_seq_stop(struct seq_file *seq, void *v)
+{
+ up(&disk_dump_mutex);
+}
+
+static int disk_dump_seq_show(struct seq_file *seq, void *v)
+{
+ struct disk_dump_partition *dump_part = v;
+ char buf[BDEVNAME_SIZE];
+
+ if (v == (void *)1) { /* header */
+ seq_printf(seq, "# sample_rate: %u\n", sample_rate);
+ seq_printf(seq, "# block_order: %u\n", block_order);
+ seq_printf(seq, "# fallback_on_err: %u\n", fallback_on_err);
+ seq_printf(seq, "# allow_risky_dumps: %u\n", allow_risky_dumps);
+ seq_printf(seq, "# total_blocks: %u\n", total_blocks);
+ seq_printf(seq, "#\n");
+
+ return 0;
+ }
+
+ seq_printf(seq, "%s %lu %lu\n", bdevname(dump_part->bdev, buf),
+ dump_part->start_sect, dump_part->nr_sects);
+ return 0;
+}
+
+static struct seq_operations disk_dump_seq_ops = {
+ .start = disk_dump_seq_start,
+ .next = disk_dump_seq_next,
+ .stop = disk_dump_seq_stop,
+ .show = disk_dump_seq_show,
+};
+
+static int disk_dump_open(struct inode *inode, struct file *file)
+{
+ return seq_open(file, &disk_dump_seq_ops);
+}
+
+static struct file_operations disk_dump_fops = {
+ .owner = THIS_MODULE,
+ .open = disk_dump_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+#endif
+
+int register_disk_dump_device(struct device *dev, struct block_device *bdev)
+{
+ int ret;
+
+ down(&disk_dump_mutex);
+ ret = add_dump(dev, bdev);
+ set_crc_modules();
+ up(&disk_dump_mutex);
+
+ return ret;
+}
+
+int unregister_disk_dump_device(struct block_device *bdev)
+{
+ int ret;
+
+ down(&disk_dump_mutex);
+ ret = remove_dump(bdev);
+ set_crc_modules();
+ up(&disk_dump_mutex);
+
+ return ret;
+}
+
+int find_disk_dump_device(struct block_device *bdev)
+{
+ int ret;
+
+ down(&disk_dump_mutex);
+ ret = (find_dump_partition(bdev) != NULL);
+ up(&disk_dump_mutex);
+
+ return ret;
+}
+
+int register_disk_dump_type(struct disk_dump_type *dump_type)
+{
+ down(&disk_dump_mutex);
+ list_add(&dump_type->list, &disk_dump_types);
+ set_crc_modules();
+ up(&disk_dump_mutex);
+
+ return 0;
+}
+
+EXPORT_SYMBOL_GPL(register_disk_dump_type);
+
+int unregister_disk_dump_type(struct disk_dump_type *dump_type)
+{
+ down(&disk_dump_mutex);
+ list_del(&dump_type->list);
+ set_crc_modules();
+ up(&disk_dump_mutex);
+
+ return 0;
+}
+
+EXPORT_SYMBOL_GPL(unregister_disk_dump_type);
+
+static void compute_total_blocks(void)
+{
+ unsigned long nr;
+
+ /*
+ * the number of blocks for the common header and the
+ * architecture-dependent header
+ *
+ * block 0: dump partition header
+ * block 1: dump header
+ * block 2: dump subheader
+ * block 3..n: memory bitmap
+ * block (n + 1)...: saved memory
+ *
+ * We never overwrite block 0
+ */
+ header_blocks = 2 + size_of_sub_header();
+
+ total_ram_blocks = 0;
+ for (nr = next_ram_page(ULONG_MAX); nr < ULONG_MAX; nr = next_ram_page(nr))
+ total_ram_blocks++;
+
+ bitmap_blocks = ROUNDUP(max_pfn, 8 * PAGE_SIZE);
+
+ /*
+ * The necessary size of area for dump is:
+ * 1 block for common header
+ * m blocks for architecture dependent header
+ * n blocks for memory bitmap
+ * and whole memory
+ */
+ total_blocks = header_blocks + bitmap_blocks + total_ram_blocks;
+
+ Info("total blocks required: %u (header %u + bitmap %u + memory %u)",
+ total_blocks, header_blocks, bitmap_blocks, total_ram_blocks);
+}
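+
+/*
+ * Worked example (assuming 4KB pages): with 1GB of RAM max_pfn is
+ * 262144, each bitmap block covers 8 * PAGE_SIZE = 32768 pages, so
+ * bitmap_blocks = ROUNDUP(262144, 32768) = 8 and total_blocks is
+ * 2 + size_of_sub_header() + 8 + total_ram_blocks.
+ */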
+
+struct disk_dump_ops dump_ops = {
+ .add_dump = register_disk_dump_device,
+ .remove_dump = unregister_disk_dump_device,
+ .find_dump = find_disk_dump_device,
+};
+
+static int init_diskdump(void)
+{
+ unsigned long long t0;
+ unsigned long long t1;
+ struct page *page;
+
+ if (!platform_supports_diskdump) {
+ Err("platform does not support diskdump.");
+ return -1;
+ }
+
+ /* Allocate scratch blocks used temporarily while dumping */
+ for (;;) {
+ page = alloc_pages(GFP_KERNEL, block_order);
+ if (page != NULL)
+ break;
+ /* block_order is unsigned, so guard against wrap-around */
+ if (block_order == 0) {
+ Err("alloc_pages failed.");
+ return -1;
+ }
+ block_order--;
+ }
+ scratch = page_address(page);
+ Info("Maximum block size: %lu", PAGE_SIZE << block_order);
+
+ if (diskdump_register_hook(start_disk_dump)) {
+ Err("failed to register hooks.");
+ return -1;
+ }
+
+ if (diskdump_register_ops(&dump_ops)) {
+ Err("failed to register ops.");
+ return -1;
+ }
+
+ compute_total_blocks();
+
+ platform_timestamp(t0);
+ diskdump_mdelay(1);
+ platform_timestamp(t1);
+ timestamp_1sec = (unsigned long)(t1 - t0) * 1000;
+
+ /*
+ * Allocate a separate stack for diskdump.
+ */
+ platform_init_stack(&diskdump_stack);
+
+ down(&disk_dump_mutex);
+ set_crc_modules();
+ up(&disk_dump_mutex);
+
+#ifdef CONFIG_PROC_FS
+ {
+ struct proc_dir_entry *p;
+
+ p = create_proc_entry("diskdump", S_IRUGO|S_IWUSR, NULL);
+ if (p)
+ p->proc_fops = &disk_dump_fops;
+ }
+#endif
+
+ return 0;
+}
+
+static void cleanup_diskdump(void)
+{
+ Info("shut down.");
+ diskdump_unregister_hook();
+ diskdump_unregister_ops();
+ platform_cleanup_stack(diskdump_stack);
+ free_pages((unsigned long)scratch, block_order);
+#ifdef CONFIG_PROC_FS
+ remove_proc_entry("diskdump", NULL);
+#endif
+}
+
+module_init(init_diskdump);
+module_exit(cleanup_diskdump);
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * sx8.c: Driver for Promise SATA SX8 looks-like-I2O hardware
+ *
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * Author/maintainer: Jeff Garzik <jgarzik@pobox.com>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/blkdev.h>
+#include <linux/sched.h>
+#include <linux/devfs_fs_kernel.h>
+#include <linux/interrupt.h>
+#include <linux/compiler.h>
+#include <linux/workqueue.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/time.h>
+#include <linux/hdreg.h>
+#include <asm/io.h>
+#include <asm/semaphore.h>
+#include <asm/uaccess.h>
+
+MODULE_AUTHOR("Jeff Garzik");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Promise SATA SX8 block driver");
+
+#if 0
+#define CARM_DEBUG
+#define CARM_VERBOSE_DEBUG
+#else
+#undef CARM_DEBUG
+#undef CARM_VERBOSE_DEBUG
+#endif
+#undef CARM_NDEBUG
+
+#define DRV_NAME "sx8"
+#define DRV_VERSION "0.8"
+#define PFX DRV_NAME ": "
+
+#define NEXT_RESP(idx) ((idx + 1) % RMSG_Q_LEN)
+
+/* 0xf is just arbitrary, non-zero noise; this is sorta like poisoning */
+#define TAG_ENCODE(tag) (((tag) << 16) | 0xf)
+#define TAG_DECODE(tag) (((tag) >> 16) & 0x1f)
+#define TAG_VALID(tag) ((((tag) & 0xf) == 0xf) && (TAG_DECODE(tag) < 32))
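+/* e.g. TAG_ENCODE(5) == 0x0005000f; TAG_DECODE of that yields 5, and
+ * TAG_VALID holds because the low nibble carries the 0xf noise.
+ */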
+
+/* note: prints function name for you */
+#ifdef CARM_DEBUG
+#define DPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#ifdef CARM_VERBOSE_DEBUG
+#define VPRINTK(fmt, args...) printk(KERN_ERR "%s: " fmt, __FUNCTION__, ## args)
+#else
+#define VPRINTK(fmt, args...)
+#endif /* CARM_VERBOSE_DEBUG */
+#else
+#define DPRINTK(fmt, args...)
+#define VPRINTK(fmt, args...)
+#endif /* CARM_DEBUG */
+
+#ifdef CARM_NDEBUG
+#define assert(expr)
+#else
+#define assert(expr) \
+ if(unlikely(!(expr))) { \
+ printk(KERN_ERR "Assertion failed! %s,%s,%s,line=%d\n", \
+ #expr,__FILE__,__FUNCTION__,__LINE__); \
+ }
+#endif
+
+/* defines only for the constants which don't work well as enums */
+struct carm_host;
+
+enum {
+ /* adapter-wide limits */
+ CARM_MAX_PORTS = 8,
+ CARM_SHM_SIZE = (4096 << 7),
+ CARM_MINORS_PER_MAJOR = 256 / CARM_MAX_PORTS,
+ CARM_MAX_WAIT_Q = CARM_MAX_PORTS + 1,
+
+ /* command message queue limits */
+ CARM_MAX_REQ = 64, /* max command msgs per host */
+ CARM_MAX_Q = 1, /* one command at a time */
+ CARM_MSG_LOW_WATER = (CARM_MAX_REQ / 4), /* refill mark */
+
+ /* S/G limits, host-wide and per-request */
+ CARM_MAX_REQ_SG = 32, /* max s/g entries per request */
+ CARM_SG_BOUNDARY = 0xffffUL, /* s/g segment boundary */
+ CARM_MAX_HOST_SG = 600, /* max s/g entries per host */
+ CARM_SG_LOW_WATER = (CARM_MAX_HOST_SG / 4), /* re-fill mark */
+
+ /* hardware registers */
+ CARM_IHQP = 0x1c,
+ CARM_INT_STAT = 0x10, /* interrupt status */
+ CARM_INT_MASK = 0x14, /* interrupt mask */
+ CARM_HMUC = 0x18, /* host message unit control */
+ RBUF_ADDR_LO = 0x20, /* response msg DMA buf low 32 bits */
+ RBUF_ADDR_HI = 0x24, /* response msg DMA buf high 32 bits */
+ RBUF_BYTE_SZ = 0x28,
+ CARM_RESP_IDX = 0x2c,
+ CARM_CMS0 = 0x30, /* command message size reg 0 */
+ CARM_LMUC = 0x48,
+ CARM_HMPHA = 0x6c,
+ CARM_INITC = 0xb5,
+
+ /* bits in CARM_INT_{STAT,MASK} */
+ INT_RESERVED = 0xfffffff0,
+ INT_WATCHDOG = (1 << 3), /* watchdog timer */
+ INT_Q_OVERFLOW = (1 << 2), /* cmd msg q overflow */
+ INT_Q_AVAILABLE = (1 << 1), /* cmd msg q has free space */
+ INT_RESPONSE = (1 << 0), /* response msg available */
+ INT_ACK_MASK = INT_WATCHDOG | INT_Q_OVERFLOW,
+ INT_DEF_MASK = INT_RESERVED | INT_Q_OVERFLOW |
+ INT_RESPONSE,
+
+ /* command messages, and related register bits */
+ CARM_HAVE_RESP = 0x01,
+ CARM_MSG_READ = 1,
+ CARM_MSG_WRITE = 2,
+ CARM_MSG_VERIFY = 3,
+ CARM_MSG_GET_CAPACITY = 4,
+ CARM_MSG_FLUSH = 5,
+ CARM_MSG_IOCTL = 6,
+ CARM_MSG_ARRAY = 8,
+ CARM_MSG_MISC = 9,
+ CARM_CME = (1 << 2),
+ CARM_RME = (1 << 1),
+ CARM_WZBC = (1 << 0),
+ CARM_RMI = (1 << 0),
+ CARM_Q_FULL = (1 << 3),
+ CARM_MSG_SIZE = 288,
+ CARM_Q_LEN = 48,
+
+ /* CARM_MSG_IOCTL messages */
+ CARM_IOC_SCAN_CHAN = 5, /* scan channels for devices */
+ CARM_IOC_GET_TCQ = 13, /* get tcq/ncq depth */
+ CARM_IOC_SET_TCQ = 14, /* set tcq/ncq depth */
+
+ IOC_SCAN_CHAN_NODEV = 0x1f,
+ IOC_SCAN_CHAN_OFFSET = 0x40,
+
+ /* CARM_MSG_ARRAY messages */
+ CARM_ARRAY_INFO = 0,
+
+ ARRAY_NO_EXIST = (1 << 31),
+
+ /* response messages */
+ RMSG_SZ = 8, /* sizeof(struct carm_response) */
+ RMSG_Q_LEN = 48, /* resp. msg list length */
+ RMSG_OK = 1, /* bit indicating msg was successful */
+ /* length of entire resp. msg buffer */
+ RBUF_LEN = RMSG_SZ * RMSG_Q_LEN,
+
+ PDC_SHM_SIZE = (4096 << 7), /* length of entire h/w buffer */
+
+ /* CARM_MSG_MISC messages */
+ MISC_GET_FW_VER = 2,
+ MISC_ALLOC_MEM = 3,
+ MISC_SET_TIME = 5,
+
+ /* MISC_GET_FW_VER feature bits */
+ FW_VER_4PORT = (1 << 2), /* 1=4 ports, 0=8 ports */
+ FW_VER_NON_RAID = (1 << 1), /* 1=non-RAID firmware, 0=RAID */
+ FW_VER_ZCR = (1 << 0), /* zero channel RAID (whatever that is) */
+
+ /* carm_host flags */
+ FL_NON_RAID = FW_VER_NON_RAID,
+ FL_4PORT = FW_VER_4PORT,
+ FL_FW_VER_MASK = (FW_VER_NON_RAID | FW_VER_4PORT),
+ FL_DAC = (1 << 16),
+ FL_DYN_MAJOR = (1 << 17),
+};
+
+enum scatter_gather_types {
+ SGT_32BIT = 0,
+ SGT_64BIT = 1,
+};
+
+enum host_states {
+ HST_INVALID, /* invalid state; never used */
+ HST_ALLOC_BUF, /* setting up master SHM area */
+ HST_ERROR, /* we never leave here */
+ HST_PORT_SCAN, /* start dev scan */
+ HST_DEV_SCAN_START, /* start per-device probe */
+ HST_DEV_SCAN, /* continue per-device probe */
+ HST_DEV_ACTIVATE, /* activate devices we found */
+ HST_PROBE_FINISHED, /* probe is complete */
+ HST_PROBE_START, /* initiate probe */
+ HST_SYNC_TIME, /* tell firmware what time it is */
+ HST_GET_FW_VER, /* get firmware version, adapter port cnt */
+};
+
+#ifdef CARM_DEBUG
+static const char *state_name[] = {
+ "HST_INVALID",
+ "HST_ALLOC_BUF",
+ "HST_ERROR",
+ "HST_PORT_SCAN",
+ "HST_DEV_SCAN_START",
+ "HST_DEV_SCAN",
+ "HST_DEV_ACTIVATE",
+ "HST_PROBE_FINISHED",
+ "HST_PROBE_START",
+ "HST_SYNC_TIME",
+ "HST_GET_FW_VER",
+};
+#endif
+
+struct carm_port {
+ unsigned int port_no;
+ unsigned int n_queued;
+ struct gendisk *disk;
+ struct carm_host *host;
+
+ /* attached device characteristics */
+ u64 capacity;
+ char name[41];
+ u16 dev_geom_head;
+ u16 dev_geom_sect;
+ u16 dev_geom_cyl;
+};
+
+struct carm_request {
+ unsigned int tag;
+ int n_elem;
+ unsigned int msg_type;
+ unsigned int msg_subtype;
+ unsigned int msg_bucket;
+ struct request *rq;
+ struct carm_port *port;
+ struct scatterlist sg[CARM_MAX_REQ_SG];
+};
+
+struct carm_host {
+ unsigned long flags;
+ void __iomem *mmio;
+ void *shm;
+ dma_addr_t shm_dma;
+
+ int major;
+ int id;
+ char name[32];
+
+ spinlock_t lock;
+ struct pci_dev *pdev;
+ unsigned int state;
+ u32 fw_ver;
+
+ request_queue_t *oob_q;
+ unsigned int n_oob;
+
+ unsigned int hw_sg_used;
+
+ unsigned int resp_idx;
+
+ unsigned int wait_q_prod;
+ unsigned int wait_q_cons;
+ request_queue_t *wait_q[CARM_MAX_WAIT_Q];
+
+ unsigned int n_msgs;
+ u64 msg_alloc;
+ struct carm_request req[CARM_MAX_REQ];
+ void *msg_base;
+ dma_addr_t msg_dma;
+
+ int cur_scan_dev;
+ unsigned long dev_active;
+ unsigned long dev_present;
+ struct carm_port port[CARM_MAX_PORTS];
+
+ struct work_struct fsm_task;
+
+ struct semaphore probe_sem;
+};
+
+struct carm_response {
+ __le32 ret_handle;
+ __le32 status;
+} __attribute__((packed));
+
+struct carm_msg_sg {
+ __le32 start;
+ __le32 len;
+} __attribute__((packed));
+
+struct carm_msg_rw {
+ u8 type;
+ u8 id;
+ u8 sg_count;
+ u8 sg_type;
+ __le32 handle;
+ __le32 lba;
+ __le16 lba_count;
+ __le16 lba_high;
+ struct carm_msg_sg sg[32];
+} __attribute__((packed));
+
+struct carm_msg_allocbuf {
+ u8 type;
+ u8 subtype;
+ u8 n_sg;
+ u8 sg_type;
+ __le32 handle;
+ __le32 addr;
+ __le32 len;
+ __le32 evt_pool;
+ __le32 n_evt;
+ __le32 rbuf_pool;
+ __le32 n_rbuf;
+ __le32 msg_pool;
+ __le32 n_msg;
+ struct carm_msg_sg sg[8];
+} __attribute__((packed));
+
+struct carm_msg_ioctl {
+ u8 type;
+ u8 subtype;
+ u8 array_id;
+ u8 reserved1;
+ __le32 handle;
+ __le32 data_addr;
+ u32 reserved2;
+} __attribute__((packed));
+
+struct carm_msg_sync_time {
+ u8 type;
+ u8 subtype;
+ u16 reserved1;
+ __le32 handle;
+ u32 reserved2;
+ __le32 timestamp;
+} __attribute__((packed));
+
+struct carm_msg_get_fw_ver {
+ u8 type;
+ u8 subtype;
+ u16 reserved1;
+ __le32 handle;
+ __le32 data_addr;
+ u32 reserved2;
+} __attribute__((packed));
+
+struct carm_fw_ver {
+ __le32 version;
+ u8 features;
+ u8 reserved1;
+ u16 reserved2;
+} __attribute__((packed));
+
+struct carm_array_info {
+ __le32 size;
+
+ __le16 size_hi;
+ __le16 stripe_size;
+
+ __le32 mode;
+
+ __le16 stripe_blk_sz;
+ __le16 reserved1;
+
+ __le16 cyl;
+ __le16 head;
+
+ __le16 sect;
+ u8 array_id;
+ u8 reserved2;
+
+ char name[40];
+
+ __le32 array_status;
+
+ /* device list continues beyond this point? */
+} __attribute__((packed));
+
+static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static void carm_remove_one (struct pci_dev *pdev);
+static int carm_bdev_ioctl(struct inode *ino, struct file *fil,
+ unsigned int cmd, unsigned long arg);
+
+static struct pci_device_id carm_pci_tbl[] = {
+ { PCI_VENDOR_ID_PROMISE, 0x8000, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+ { PCI_VENDOR_ID_PROMISE, 0x8002, PCI_ANY_ID, PCI_ANY_ID, 0, 0, },
+ { } /* terminate list */
+};
+MODULE_DEVICE_TABLE(pci, carm_pci_tbl);
+
+static struct pci_driver carm_driver = {
+ .name = DRV_NAME,
+ .id_table = carm_pci_tbl,
+ .probe = carm_init_one,
+ .remove = carm_remove_one,
+};
+
+static struct block_device_operations carm_bd_ops = {
+ .owner = THIS_MODULE,
+ .ioctl = carm_bdev_ioctl,
+};
+
+static unsigned int carm_host_id;
+static unsigned long carm_major_alloc;
+
+
+
+static int carm_bdev_ioctl(struct inode *ino, struct file *fil,
+ unsigned int cmd, unsigned long arg)
+{
+ void __user *usermem = (void __user *) arg;
+ struct carm_port *port = ino->i_bdev->bd_disk->private_data;
+ struct hd_geometry geom;
+
+ switch (cmd) {
+ case HDIO_GETGEO:
+ if (!usermem)
+ return -EINVAL;
+
+ geom.heads = (u8) port->dev_geom_head;
+ geom.sectors = (u8) port->dev_geom_sect;
+ geom.cylinders = port->dev_geom_cyl;
+ geom.start = get_start_sect(ino->i_bdev);
+
+ if (copy_to_user(usermem, &geom, sizeof(geom)))
+ return -EFAULT;
+ return 0;
+
+ default:
+ break;
+ }
+
+ return -EOPNOTSUPP;
+}
+
+static const u32 msg_sizes[] = { 32, 64, 128, CARM_MSG_SIZE };
+
+static inline int carm_lookup_bucket(u32 msg_size)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+ if (msg_size <= msg_sizes[i])
+ return i;
+
+ return -ENOENT;
+}
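+/* e.g. carm_lookup_bucket(100) returns 2: the 128-byte bucket is the
+ * smallest message size that fits.
+ */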
+
+static void carm_init_buckets(void __iomem *mmio)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(msg_sizes); i++)
+ writel(msg_sizes[i], mmio + CARM_CMS0 + (4 * i));
+}
+
+static inline void *carm_ref_msg(struct carm_host *host,
+ unsigned int msg_idx)
+{
+ return host->msg_base + (msg_idx * CARM_MSG_SIZE);
+}
+
+static inline dma_addr_t carm_ref_msg_dma(struct carm_host *host,
+ unsigned int msg_idx)
+{
+ return host->msg_dma + (msg_idx * CARM_MSG_SIZE);
+}
+
+static int carm_send_msg(struct carm_host *host,
+ struct carm_request *crq)
+{
+ void __iomem *mmio = host->mmio;
+ u32 msg = (u32) carm_ref_msg_dma(host, crq->tag);
+ u32 cm_bucket = crq->msg_bucket;
+ u32 tmp;
+ int rc = 0;
+
+ VPRINTK("ENTER\n");
+
+ tmp = readl(mmio + CARM_HMUC);
+ if (tmp & CARM_Q_FULL) {
+#if 0
+ tmp = readl(mmio + CARM_INT_MASK);
+ tmp |= INT_Q_AVAILABLE;
+ writel(tmp, mmio + CARM_INT_MASK);
+ readl(mmio + CARM_INT_MASK); /* flush */
+#endif
+ DPRINTK("host msg queue full\n");
+ rc = -EBUSY;
+ } else {
+ writel(msg | (cm_bucket << 1), mmio + CARM_IHQP);
+ readl(mmio + CARM_IHQP); /* flush */
+ }
+
+ return rc;
+}
+
+static struct carm_request *carm_get_request(struct carm_host *host)
+{
+ unsigned int i;
+
+ /* obey global hardware limit on S/G entries */
+ if (host->hw_sg_used >= (CARM_MAX_HOST_SG - CARM_MAX_REQ_SG))
+ return NULL;
+
+ for (i = 0; i < CARM_MAX_Q; i++)
+ if ((host->msg_alloc & (1ULL << i)) == 0) {
+ struct carm_request *crq = &host->req[i];
+ crq->port = NULL;
+ crq->n_elem = 0;
+
+ host->msg_alloc |= (1ULL << i);
+ host->n_msgs++;
+
+ assert(host->n_msgs <= CARM_MAX_REQ);
+ return crq;
+ }
+
+ DPRINTK("no request available, returning NULL\n");
+ return NULL;
+}
+
+static int carm_put_request(struct carm_host *host, struct carm_request *crq)
+{
+ assert(crq->tag < CARM_MAX_Q);
+
+ if (unlikely((host->msg_alloc & (1ULL << crq->tag)) == 0))
+ return -EINVAL; /* tried to clear a tag that was not active */
+
+ assert(host->hw_sg_used >= crq->n_elem);
+
+ host->msg_alloc &= ~(1ULL << crq->tag);
+ host->hw_sg_used -= crq->n_elem;
+ host->n_msgs--;
+
+ return 0;
+}
+
+static struct carm_request *carm_get_special(struct carm_host *host)
+{
+ unsigned long flags;
+ struct carm_request *crq = NULL;
+ struct request *rq;
+ int tries = 5000;
+
+ while (tries-- > 0) {
+ spin_lock_irqsave(&host->lock, flags);
+ crq = carm_get_request(host);
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ if (crq)
+ break;
+ msleep(10);
+ }
+
+ if (!crq)
+ return NULL;
+
+ rq = blk_get_request(host->oob_q, WRITE /* bogus */, GFP_KERNEL);
+ if (!rq) {
+ spin_lock_irqsave(&host->lock, flags);
+ carm_put_request(host, crq);
+ spin_unlock_irqrestore(&host->lock, flags);
+ return NULL;
+ }
+
+ crq->rq = rq;
+ return crq;
+}
+
+static int carm_array_info (struct carm_host *host, unsigned int array_idx)
+{
+ struct carm_msg_ioctl *ioc;
+ unsigned int idx;
+ u32 msg_data;
+ dma_addr_t msg_dma;
+ struct carm_request *crq;
+ int rc;
+
+ crq = carm_get_special(host);
+ if (!crq) {
+ rc = -ENOMEM;
+ goto err_out;
+ }
+
+ idx = crq->tag;
+
+ ioc = carm_ref_msg(host, idx);
+ msg_dma = carm_ref_msg_dma(host, idx);
+ msg_data = (u32) (msg_dma + sizeof(struct carm_array_info));
+
+ crq->msg_type = CARM_MSG_ARRAY;
+ crq->msg_subtype = CARM_ARRAY_INFO;
+ rc = carm_lookup_bucket(sizeof(struct carm_msg_ioctl) +
+ sizeof(struct carm_array_info));
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_ARRAY;
+ ioc->subtype = CARM_ARRAY_INFO;
+ ioc->array_id = (u8) array_idx;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ spin_lock_irq(&host->lock);
+ assert(host->state == HST_DEV_SCAN_START ||
+ host->state == HST_DEV_SCAN);
+ spin_unlock_irq(&host->lock);
+
+ DPRINTK("blk_insert_request, tag == %u\n", idx);
+ blk_insert_request(host->oob_q, crq->rq, 1, crq, 0);
+
+ return 0;
+
+err_out:
+ spin_lock_irq(&host->lock);
+ host->state = HST_ERROR;
+ spin_unlock_irq(&host->lock);
+ return rc;
+}
+
+typedef unsigned int (*carm_sspc_t)(struct carm_host *, unsigned int, void *);
+
+static int carm_send_special (struct carm_host *host, carm_sspc_t func)
+{
+ struct carm_request *crq;
+ struct carm_msg_ioctl *ioc;
+ void *mem;
+ unsigned int idx, msg_size;
+ int rc;
+
+ crq = carm_get_special(host);
+ if (!crq)
+ return -ENOMEM;
+
+ idx = crq->tag;
+
+ mem = carm_ref_msg(host, idx);
+
+ msg_size = func(host, idx, mem);
+
+ ioc = mem;
+ crq->msg_type = ioc->type;
+ crq->msg_subtype = ioc->subtype;
+ rc = carm_lookup_bucket(msg_size);
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ DPRINTK("blk_insert_request, tag == %u\n", idx);
+ blk_insert_request(host->oob_q, crq->rq, 1, crq, 0);
+
+ return 0;
+}
+
+static unsigned int carm_fill_sync_time(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct timeval tv;
+ struct carm_msg_sync_time *st = mem;
+
+ do_gettimeofday(&tv);
+
+ memset(st, 0, sizeof(*st));
+ st->type = CARM_MSG_MISC;
+ st->subtype = MISC_SET_TIME;
+ st->handle = cpu_to_le32(TAG_ENCODE(idx));
+ st->timestamp = cpu_to_le32(tv.tv_sec);
+
+ return sizeof(struct carm_msg_sync_time);
+}
+
+static unsigned int carm_fill_alloc_buf(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_allocbuf *ab = mem;
+
+ memset(ab, 0, sizeof(*ab));
+ ab->type = CARM_MSG_MISC;
+ ab->subtype = MISC_ALLOC_MEM;
+ ab->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ab->n_sg = 1;
+ ab->sg_type = SGT_32BIT;
+ ab->addr = cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+ ab->len = cpu_to_le32(PDC_SHM_SIZE >> 1);
+ ab->evt_pool = cpu_to_le32(host->shm_dma + (16 * 1024));
+ ab->n_evt = cpu_to_le32(1024);
+ ab->rbuf_pool = cpu_to_le32(host->shm_dma);
+ ab->n_rbuf = cpu_to_le32(RMSG_Q_LEN);
+ ab->msg_pool = cpu_to_le32(host->shm_dma + RBUF_LEN);
+ ab->n_msg = cpu_to_le32(CARM_Q_LEN);
+ ab->sg[0].start = cpu_to_le32(host->shm_dma + (PDC_SHM_SIZE >> 1));
+ ab->sg[0].len = cpu_to_le32(65536);
+
+ return sizeof(struct carm_msg_allocbuf);
+}
+
+static unsigned int carm_fill_scan_channels(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_ioctl *ioc = mem;
+ u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) +
+ IOC_SCAN_CHAN_OFFSET);
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_IOCTL;
+ ioc->subtype = CARM_IOC_SCAN_CHAN;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ /* fill output data area with "no device" default values */
+ mem += IOC_SCAN_CHAN_OFFSET;
+ memset(mem, IOC_SCAN_CHAN_NODEV, CARM_MAX_PORTS);
+
+ return IOC_SCAN_CHAN_OFFSET + CARM_MAX_PORTS;
+}
+
+static unsigned int carm_fill_get_fw_ver(struct carm_host *host,
+ unsigned int idx, void *mem)
+{
+ struct carm_msg_get_fw_ver *ioc = mem;
+ u32 msg_data = (u32) (carm_ref_msg_dma(host, idx) + sizeof(*ioc));
+
+ memset(ioc, 0, sizeof(*ioc));
+ ioc->type = CARM_MSG_MISC;
+ ioc->subtype = MISC_GET_FW_VER;
+ ioc->handle = cpu_to_le32(TAG_ENCODE(idx));
+ ioc->data_addr = cpu_to_le32(msg_data);
+
+ return sizeof(struct carm_msg_get_fw_ver) +
+ sizeof(struct carm_fw_ver);
+}
+
+static inline void carm_end_request_queued(struct carm_host *host,
+ struct carm_request *crq,
+ int uptodate)
+{
+ struct request *req = crq->rq;
+ int rc;
+
+ rc = end_that_request_first(req, uptodate, req->hard_nr_sectors);
+ assert(rc == 0);
+
+ end_that_request_last(req);
+
+ rc = carm_put_request(host, crq);
+ assert(rc == 0);
+}
+
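+/*
+ * When the host runs out of message slots or S/G entries, the block
+ * queue is stopped and parked in a small ring of waiting queues;
+ * carm_round_robin() restarts them one at a time as carm_end_rq()
+ * frees resources.
+ */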
+static inline void carm_push_q (struct carm_host *host, request_queue_t *q)
+{
+ unsigned int idx = host->wait_q_prod % CARM_MAX_WAIT_Q;
+
+ blk_stop_queue(q);
+ VPRINTK("STOPPED QUEUE %p\n", q);
+
+ host->wait_q[idx] = q;
+ host->wait_q_prod++;
+ BUG_ON(host->wait_q_prod == host->wait_q_cons); /* overrun */
+}
+
+static inline request_queue_t *carm_pop_q(struct carm_host *host)
+{
+ unsigned int idx;
+
+ if (host->wait_q_prod == host->wait_q_cons)
+ return NULL;
+
+ idx = host->wait_q_cons % CARM_MAX_WAIT_Q;
+ host->wait_q_cons++;
+
+ return host->wait_q[idx];
+}
+
+static inline void carm_round_robin(struct carm_host *host)
+{
+ request_queue_t *q = carm_pop_q(host);
+ if (q) {
+ blk_start_queue(q);
+ VPRINTK("STARTED QUEUE %p\n", q);
+ }
+}
+
+static inline void carm_end_rq(struct carm_host *host, struct carm_request *crq,
+ int is_ok)
+{
+ carm_end_request_queued(host, crq, is_ok);
+ if (CARM_MAX_Q == 1)
+ carm_round_robin(host);
+ else if ((host->n_msgs <= CARM_MSG_LOW_WATER) &&
+ (host->hw_sg_used <= CARM_SG_LOW_WATER)) {
+ carm_round_robin(host);
+ }
+}
+
+static void carm_oob_rq_fn(request_queue_t *q)
+{
+ struct carm_host *host = q->queuedata;
+ struct carm_request *crq;
+ struct request *rq;
+ int rc;
+
+ while (1) {
+ DPRINTK("get req\n");
+ rq = elv_next_request(q);
+ if (!rq)
+ break;
+
+ blkdev_dequeue_request(rq);
+
+ crq = rq->special;
+ assert(crq != NULL);
+ assert(crq->rq == rq);
+
+ crq->n_elem = 0;
+
+ DPRINTK("send req\n");
+ rc = carm_send_msg(host, crq);
+ if (rc) {
+ blk_requeue_request(q, rq);
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+ }
+}
+
+static void carm_rq_fn(request_queue_t *q)
+{
+ struct carm_port *port = q->queuedata;
+ struct carm_host *host = port->host;
+ struct carm_msg_rw *msg;
+ struct carm_request *crq;
+ struct request *rq;
+ struct scatterlist *sg;
+ int writing = 0, pci_dir, i, n_elem, rc;
+ u32 tmp;
+ unsigned int msg_size;
+
+queue_one_request:
+ VPRINTK("get req\n");
+ rq = elv_next_request(q);
+ if (!rq)
+ return;
+
+ crq = carm_get_request(host);
+ if (!crq) {
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+ crq->rq = rq;
+
+ blkdev_dequeue_request(rq);
+
+ if (rq_data_dir(rq) == WRITE) {
+ writing = 1;
+ pci_dir = PCI_DMA_TODEVICE;
+ } else {
+ pci_dir = PCI_DMA_FROMDEVICE;
+ }
+
+ /* get scatterlist from block layer */
+ sg = &crq->sg[0];
+ n_elem = blk_rq_map_sg(q, rq, sg);
+ if (n_elem <= 0) {
+ carm_end_rq(host, crq, 0);
+ return; /* request with no s/g entries? */
+ }
+
+ /* map scatterlist to PCI bus addresses */
+ n_elem = pci_map_sg(host->pdev, sg, n_elem, pci_dir);
+ if (n_elem <= 0) {
+ carm_end_rq(host, crq, 0);
+ return; /* request with no s/g entries? */
+ }
+ crq->n_elem = n_elem;
+ crq->port = port;
+ host->hw_sg_used += n_elem;
+
+ /*
+ * build read/write message
+ */
+
+ VPRINTK("build msg\n");
+ msg = (struct carm_msg_rw *) carm_ref_msg(host, crq->tag);
+
+ if (writing) {
+ msg->type = CARM_MSG_WRITE;
+ crq->msg_type = CARM_MSG_WRITE;
+ } else {
+ msg->type = CARM_MSG_READ;
+ crq->msg_type = CARM_MSG_READ;
+ }
+
+ msg->id = port->port_no;
+ msg->sg_count = n_elem;
+ msg->sg_type = SGT_32BIT;
+ msg->handle = cpu_to_le32(TAG_ENCODE(crq->tag));
+ msg->lba = cpu_to_le32(rq->sector & 0xffffffff);
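+ /* the upper 16 LBA bits travel separately; the double 16-bit
+ * shift below stays well-defined even when sector_t is only
+ * 32 bits wide */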
+ tmp = (rq->sector >> 16) >> 16;
+ msg->lba_high = cpu_to_le16( (u16) tmp );
+ msg->lba_count = cpu_to_le16(rq->nr_sectors);
+
+ msg_size = sizeof(struct carm_msg_rw) - sizeof(msg->sg);
+ for (i = 0; i < n_elem; i++) {
+ struct carm_msg_sg *carm_sg = &msg->sg[i];
+ carm_sg->start = cpu_to_le32(sg_dma_address(&crq->sg[i]));
+ carm_sg->len = cpu_to_le32(sg_dma_len(&crq->sg[i]));
+ msg_size += sizeof(struct carm_msg_sg);
+ }
+
+ rc = carm_lookup_bucket(msg_size);
+ BUG_ON(rc < 0);
+ crq->msg_bucket = (u32) rc;
+
+ /*
+ * queue read/write message to hardware
+ */
+
+ VPRINTK("send msg, tag == %u\n", crq->tag);
+ rc = carm_send_msg(host, crq);
+ if (rc) {
+ carm_put_request(host, crq);
+ blk_requeue_request(q, rq);
+ carm_push_q(host, q);
+ return; /* call us again later, eventually */
+ }
+
+ goto queue_one_request;
+}
+
+static void carm_handle_array_info(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+ int is_ok)
+{
+ struct carm_port *port;
+ u8 *msg_data = mem + sizeof(struct carm_array_info);
+ struct carm_array_info *desc = (struct carm_array_info *) msg_data;
+ u64 lo, hi;
+ int cur_port;
+ size_t slen;
+
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ if (!is_ok)
+ goto out;
+ if (le32_to_cpu(desc->array_status) & ARRAY_NO_EXIST)
+ goto out;
+
+ cur_port = host->cur_scan_dev;
+
+ /* should never occur */
+ if ((cur_port < 0) || (cur_port >= CARM_MAX_PORTS)) {
+ printk(KERN_ERR PFX "BUG: cur_scan_dev==%d, array_id==%d\n",
+ cur_port, (int) desc->array_id);
+ goto out;
+ }
+
+ port = &host->port[cur_port];
+
+ lo = (u64) le32_to_cpu(desc->size);
+ hi = (u64) le16_to_cpu(desc->size_hi);
+
+ port->capacity = lo | (hi << 32);
+ port->dev_geom_head = le16_to_cpu(desc->head);
+ port->dev_geom_sect = le16_to_cpu(desc->sect);
+ port->dev_geom_cyl = le16_to_cpu(desc->cyl);
+
+ host->dev_active |= (1 << cur_port);
+
+ strncpy(port->name, desc->name, sizeof(port->name));
+ port->name[sizeof(port->name) - 1] = 0;
+ slen = strlen(port->name);
+ while (slen && (port->name[slen - 1] == ' ')) {
+ port->name[slen - 1] = 0;
+ slen--;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): port %u device %Lu sectors\n",
+ pci_name(host->pdev), port->port_no,
+ (unsigned long long) port->capacity);
+ printk(KERN_INFO DRV_NAME "(%s): port %u device \"%s\"\n",
+ pci_name(host->pdev), port->port_no, port->name);
+
+out:
+ assert(host->state == HST_DEV_SCAN);
+ schedule_work(&host->fsm_task);
+}
+
+static void carm_handle_scan_chan(struct carm_host *host,
+ struct carm_request *crq, u8 *mem,
+ int is_ok)
+{
+ u8 *msg_data = mem + IOC_SCAN_CHAN_OFFSET;
+ unsigned int i, dev_count = 0;
+ int new_state = HST_DEV_SCAN_START;
+
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ if (!is_ok) {
+ new_state = HST_ERROR;
+ goto out;
+ }
+
+ /* TODO: scan and support non-disk devices */
+ for (i = 0; i < 8; i++)
+ if (msg_data[i] == 0) { /* direct-access device (disk) */
+ host->dev_present |= (1 << i);
+ dev_count++;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): found %u interesting devices\n",
+ pci_name(host->pdev), dev_count);
+
+out:
+ assert(host->state == HST_PORT_SCAN);
+ host->state = new_state;
+ schedule_work(&host->fsm_task);
+}
+
+static void carm_handle_generic(struct carm_host *host,
+ struct carm_request *crq, int is_ok,
+ int cur_state, int next_state)
+{
+ DPRINTK("ENTER\n");
+
+ carm_end_rq(host, crq, is_ok);
+
+ assert(host->state == cur_state);
+ if (is_ok)
+ host->state = next_state;
+ else
+ host->state = HST_ERROR;
+ schedule_work(&host->fsm_task);
+}
+
+static inline void carm_handle_rw(struct carm_host *host,
+ struct carm_request *crq, int is_ok)
+{
+ int pci_dir;
+
+ VPRINTK("ENTER\n");
+
+ if (rq_data_dir(crq->rq) == WRITE)
+ pci_dir = PCI_DMA_TODEVICE;
+ else
+ pci_dir = PCI_DMA_FROMDEVICE;
+
+ pci_unmap_sg(host->pdev, &crq->sg[0], crq->n_elem, pci_dir);
+
+ carm_end_rq(host, crq, is_ok);
+}
+
+static inline void carm_handle_resp(struct carm_host *host,
+ __le32 ret_handle_le, u32 status)
+{
+ u32 handle = le32_to_cpu(ret_handle_le);
+ unsigned int msg_idx;
+ struct carm_request *crq;
+ int is_ok = (status == RMSG_OK);
+ u8 *mem;
+
+ VPRINTK("ENTER, handle == 0x%x\n", handle);
+
+ if (unlikely(!TAG_VALID(handle))) {
+ printk(KERN_ERR DRV_NAME "(%s): BUG: invalid tag 0x%x\n",
+ pci_name(host->pdev), handle);
+ return;
+ }
+
+ msg_idx = TAG_DECODE(handle);
+ VPRINTK("tag == %u\n", msg_idx);
+
+ crq = &host->req[msg_idx];
+
+ /* fast path */
+ if (likely(crq->msg_type == CARM_MSG_READ ||
+ crq->msg_type == CARM_MSG_WRITE)) {
+ carm_handle_rw(host, crq, is_ok);
+ return;
+ }
+
+ mem = carm_ref_msg(host, msg_idx);
+
+ switch (crq->msg_type) {
+ case CARM_MSG_IOCTL: {
+ switch (crq->msg_subtype) {
+ case CARM_IOC_SCAN_CHAN:
+ carm_handle_scan_chan(host, crq, mem, is_ok);
+ break;
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ case CARM_MSG_MISC: {
+ switch (crq->msg_subtype) {
+ case MISC_ALLOC_MEM:
+ carm_handle_generic(host, crq, is_ok,
+ HST_ALLOC_BUF, HST_SYNC_TIME);
+ break;
+ case MISC_SET_TIME:
+ carm_handle_generic(host, crq, is_ok,
+ HST_SYNC_TIME, HST_GET_FW_VER);
+ break;
+ case MISC_GET_FW_VER: {
+ struct carm_fw_ver *ver = (struct carm_fw_ver *)
+ mem + sizeof(struct carm_msg_get_fw_ver);
+ if (is_ok) {
+ host->fw_ver = le32_to_cpu(ver->version);
+ host->flags |= (ver->features & FL_FW_VER_MASK);
+ }
+ carm_handle_generic(host, crq, is_ok,
+ HST_GET_FW_VER, HST_PORT_SCAN);
+ break;
+ }
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ case CARM_MSG_ARRAY: {
+ switch (crq->msg_subtype) {
+ case CARM_ARRAY_INFO:
+ carm_handle_array_info(host, crq, mem, is_ok);
+ break;
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+ break;
+ }
+
+ default:
+ /* unknown / invalid response */
+ goto err_out;
+ }
+
+ return;
+
+err_out:
+ printk(KERN_WARNING DRV_NAME "(%s): BUG: unhandled message type %d/%d\n",
+ pci_name(host->pdev), crq->msg_type, crq->msg_subtype);
+ carm_end_rq(host, crq, 0);
+}
+
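+/*
+ * A summary of the response-ring status-word encoding, as inferred from
+ * the handler below (not from vendor documentation):
+ *
+ *   status == 0xffffffff  slot is empty; write the index back to
+ *                         CARM_RESP_IDX and stop scanning
+ *   bit 31 clear          reply to a message we sent; decode the tag from
+ *                         ret_handle and complete that request
+ *   top byte == 0x80      asynchronous hardware event; currently just
+ *                         logged, then the slot is marked empty again
+ */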
+static inline void carm_handle_responses(struct carm_host *host)
+{
+ void __iomem *mmio = host->mmio;
+ struct carm_response *resp = (struct carm_response *) host->shm;
+ unsigned int work = 0;
+ unsigned int idx = host->resp_idx % RMSG_Q_LEN;
+
+ while (1) {
+ u32 status = le32_to_cpu(resp[idx].status);
+
+ if (status == 0xffffffff) {
+ VPRINTK("ending response on index %u\n", idx);
+ writel(idx << 3, mmio + CARM_RESP_IDX);
+ break;
+ }
+
+ /* response to a message we sent */
+ else if ((status & (1 << 31)) == 0) {
+ VPRINTK("handling msg response on index %u\n", idx);
+ carm_handle_resp(host, resp[idx].ret_handle, status);
+ resp[idx].status = cpu_to_le32(0xffffffff);
+ }
+
+ /* asynchronous events the hardware throws our way */
+ else if ((status & 0xff000000) == (1 << 31)) {
+ u8 *evt_type_ptr = (u8 *) &resp[idx];
+ u8 evt_type = *evt_type_ptr;
+ printk(KERN_WARNING DRV_NAME "(%s): unhandled event type %d\n",
+ pci_name(host->pdev), (int) evt_type);
+ resp[idx].status = cpu_to_le32(0xffffffff);
+ }
+
+ idx = NEXT_RESP(idx);
+ work++;
+ }
+
+ VPRINTK("EXIT, work==%u\n", work);
+ host->resp_idx += work;
+}
+
+static irqreturn_t carm_interrupt(int irq, void *__host, struct pt_regs *regs)
+{
+ struct carm_host *host = __host;
+ void __iomem *mmio;
+ u32 mask;
+ int handled = 0;
+ unsigned long flags;
+
+ if (!host) {
+ VPRINTK("no host\n");
+ return IRQ_NONE;
+ }
+
+ spin_lock_irqsave(&host->lock, flags);
+
+ mmio = host->mmio;
+
+ /* reading should also clear interrupts */
+ mask = readl(mmio + CARM_INT_STAT);
+
+ if (mask == 0 || mask == 0xffffffff) {
+ VPRINTK("no work, mask == 0x%x\n", mask);
+ goto out;
+ }
+
+ if (mask & INT_ACK_MASK)
+ writel(mask, mmio + CARM_INT_STAT);
+
+ if (unlikely(host->state == HST_INVALID)) {
+ VPRINTK("not initialized yet, mask = 0x%x\n", mask);
+ goto out;
+ }
+
+ if (mask & CARM_HAVE_RESP) {
+ handled = 1;
+ carm_handle_responses(host);
+ }
+
+out:
+ spin_unlock_irqrestore(&host->lock, flags);
+ VPRINTK("EXIT\n");
+ return IRQ_RETVAL(handled);
+}
+
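+/*
+ * Probe-time state progression driven by the work function below, as
+ * encoded in the response handlers above and the cases that follow:
+ *
+ *   HST_PROBE_START -> HST_ALLOC_BUF -> HST_SYNC_TIME -> HST_GET_FW_VER
+ *     -> HST_PORT_SCAN -> HST_DEV_SCAN_START -> HST_DEV_SCAN (one pass
+ *     per present device) -> HST_DEV_ACTIVATE -> HST_PROBE_FINISHED
+ *
+ * Any failure drops the machine into HST_ERROR, whose handling is still
+ * a FIXME below.
+ */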
+static void carm_fsm_task (void *_data)
+{
+ struct carm_host *host = _data;
+ unsigned long flags;
+ unsigned int state;
+ int rc, i, next_dev;
+ int reschedule = 0;
+ int new_state = HST_INVALID;
+
+ spin_lock_irqsave(&host->lock, flags);
+ state = host->state;
+ spin_unlock_irqrestore(&host->lock, flags);
+
+ DPRINTK("ENTER, state == %s\n", state_name[state]);
+
+ switch (state) {
+ case HST_PROBE_START:
+ new_state = HST_ALLOC_BUF;
+ reschedule = 1;
+ break;
+
+ case HST_ALLOC_BUF:
+ rc = carm_send_special(host, carm_fill_alloc_buf);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_SYNC_TIME:
+ rc = carm_send_special(host, carm_fill_sync_time);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_GET_FW_VER:
+ rc = carm_send_special(host, carm_fill_get_fw_ver);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_PORT_SCAN:
+ rc = carm_send_special(host, carm_fill_scan_channels);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_DEV_SCAN_START:
+ host->cur_scan_dev = -1;
+ new_state = HST_DEV_SCAN;
+ reschedule = 1;
+ break;
+
+ case HST_DEV_SCAN:
+ next_dev = -1;
+ for (i = host->cur_scan_dev + 1; i < CARM_MAX_PORTS; i++)
+ if (host->dev_present & (1 << i)) {
+ next_dev = i;
+ break;
+ }
+
+ if (next_dev >= 0) {
+ host->cur_scan_dev = next_dev;
+ rc = carm_array_info(host, next_dev);
+ if (rc) {
+ new_state = HST_ERROR;
+ reschedule = 1;
+ }
+ } else {
+ new_state = HST_DEV_ACTIVATE;
+ reschedule = 1;
+ }
+ break;
+
+ case HST_DEV_ACTIVATE: {
+ int activated = 0;
+ for (i = 0; i < CARM_MAX_PORTS; i++)
+ if (host->dev_active & (1 << i)) {
+ struct carm_port *port = &host->port[i];
+ struct gendisk *disk = port->disk;
+
+ set_capacity(disk, port->capacity);
+ add_disk(disk);
+ activated++;
+ }
+
+ printk(KERN_INFO DRV_NAME "(%s): %d ports activated\n",
+ pci_name(host->pdev), activated);
+
+ new_state = HST_PROBE_FINISHED;
+ reschedule = 1;
+ break;
+ }
+
+ case HST_PROBE_FINISHED:
+ up(&host->probe_sem);
+ break;
+
+ case HST_ERROR:
+ /* FIXME: TODO */
+ break;
+
+ default:
+ /* should never occur */
+ printk(KERN_ERR PFX "BUG: unknown state %d\n", state);
+ assert(0);
+ break;
+ }
+
+ if (new_state != HST_INVALID) {
+ spin_lock_irqsave(&host->lock, flags);
+ host->state = new_state;
+ spin_unlock_irqrestore(&host->lock, flags);
+ }
+ if (reschedule)
+ schedule_work(&host->fsm_task);
+}
+
+static int carm_init_wait(void __iomem *mmio, u32 bits, unsigned int test_bit)
+{
+ unsigned int i;
+
+ for (i = 0; i < 50000; i++) {
+ u32 tmp = readl(mmio + CARM_LMUC);
+ udelay(100);
+
+ if (test_bit) {
+ if ((tmp & bits) == bits)
+ return 0;
+ } else {
+ if ((tmp & bits) == 0)
+ return 0;
+ }
+
+ cond_resched();
+ }
+
+ printk(KERN_ERR PFX "carm_init_wait timeout, bits == 0x%x, test_bit == %s\n",
+ bits, test_bit ? "yes" : "no");
+ return -EBUSY;
+}
+
+static void carm_init_responses(struct carm_host *host)
+{
+ void __iomem *mmio = host->mmio;
+ unsigned int i;
+ struct carm_response *resp = (struct carm_response *) host->shm;
+
+ for (i = 0; i < RMSG_Q_LEN; i++)
+ resp[i].status = cpu_to_le32(0xffffffff);
+
+ writel(0, mmio + CARM_RESP_IDX);
+}
+
+static int carm_init_host(struct carm_host *host)
+{
+ void __iomem *mmio = host->mmio;
+ u32 tmp;
+ u8 tmp8;
+ int rc;
+
+ DPRINTK("ENTER\n");
+
+ writel(0, mmio + CARM_INT_MASK);
+
+ tmp8 = readb(mmio + CARM_INITC);
+ if (tmp8 & 0x01) {
+ tmp8 &= ~0x01;
+ writeb(tmp8, mmio + CARM_INITC);
+ readb(mmio + CARM_INITC); /* flush */
+
+ DPRINTK("snooze...\n");
+ msleep(5000);
+ }
+
+ tmp = readl(mmio + CARM_HMUC);
+ if (tmp & CARM_CME) {
+ DPRINTK("CME bit present, waiting\n");
+ rc = carm_init_wait(mmio, CARM_CME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 1 failed\n");
+ return rc;
+ }
+ }
+ if (tmp & CARM_RME) {
+ DPRINTK("RME bit present, waiting\n");
+ rc = carm_init_wait(mmio, CARM_RME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 2 failed\n");
+ return rc;
+ }
+ }
+
+ tmp &= ~(CARM_RME | CARM_CME);
+ writel(tmp, mmio + CARM_HMUC);
+ readl(mmio + CARM_HMUC); /* flush */
+
+ rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 0);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 3 failed\n");
+ return rc;
+ }
+
+ carm_init_buckets(mmio);
+
+ writel(host->shm_dma & 0xffffffff, mmio + RBUF_ADDR_LO);
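+	/* two 16-bit shifts: shifting a 32-bit dma_addr_t by 32 would be undefined */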
+ writel((host->shm_dma >> 16) >> 16, mmio + RBUF_ADDR_HI);
+ writel(RBUF_LEN, mmio + RBUF_BYTE_SZ);
+
+ tmp = readl(mmio + CARM_HMUC);
+ tmp |= (CARM_RME | CARM_CME | CARM_WZBC);
+ writel(tmp, mmio + CARM_HMUC);
+ readl(mmio + CARM_HMUC); /* flush */
+
+ rc = carm_init_wait(mmio, CARM_RME | CARM_CME, 1);
+ if (rc) {
+ DPRINTK("EXIT, carm_init_wait 4 failed\n");
+ return rc;
+ }
+
+ writel(0, mmio + CARM_HMPHA);
+ writel(INT_DEF_MASK, mmio + CARM_INT_MASK);
+
+ carm_init_responses(host);
+
+ /* start initialization, probing state machine */
+ spin_lock_irq(&host->lock);
+ assert(host->state == HST_INVALID);
+ host->state = HST_PROBE_START;
+ spin_unlock_irq(&host->lock);
+ schedule_work(&host->fsm_task);
+
+ DPRINTK("EXIT\n");
+ return 0;
+}
+
+static int carm_init_disks(struct carm_host *host)
+{
+ unsigned int i;
+ int rc = 0;
+
+ for (i = 0; i < CARM_MAX_PORTS; i++) {
+ struct gendisk *disk;
+ request_queue_t *q;
+ struct carm_port *port;
+
+ port = &host->port[i];
+ port->host = host;
+ port->port_no = i;
+
+ disk = alloc_disk(CARM_MINORS_PER_MAJOR);
+ if (!disk) {
+ rc = -ENOMEM;
+ break;
+ }
+
+ port->disk = disk;
+ sprintf(disk->disk_name, DRV_NAME "/%u", (host->id * CARM_MAX_PORTS) + i);
+ sprintf(disk->devfs_name, DRV_NAME "/%u_%u", host->id, i);
+ disk->major = host->major;
+ disk->first_minor = i * CARM_MINORS_PER_MAJOR;
+ disk->fops = &carm_bd_ops;
+ disk->private_data = port;
+
+ q = blk_init_queue(carm_rq_fn, &host->lock);
+ if (!q) {
+ rc = -ENOMEM;
+ break;
+ }
+ disk->queue = q;
+ blk_queue_max_hw_segments(q, CARM_MAX_REQ_SG);
+ blk_queue_max_phys_segments(q, CARM_MAX_REQ_SG);
+ blk_queue_segment_boundary(q, CARM_SG_BOUNDARY);
+
+ q->queuedata = port;
+ }
+
+ return rc;
+}
+
+static void carm_free_disks(struct carm_host *host)
+{
+ unsigned int i;
+
+ for (i = 0; i < CARM_MAX_PORTS; i++) {
+ struct gendisk *disk = host->port[i].disk;
+ if (disk) {
+ request_queue_t *q = disk->queue;
+
+ if (disk->flags & GENHD_FL_UP)
+ del_gendisk(disk);
+ if (q)
+ blk_cleanup_queue(q);
+ put_disk(disk);
+ }
+ }
+}
+
+static int carm_init_shm(struct carm_host *host)
+{
+ host->shm = pci_alloc_consistent(host->pdev, CARM_SHM_SIZE,
+ &host->shm_dma);
+ if (!host->shm)
+ return -ENOMEM;
+
+ host->msg_base = host->shm + RBUF_LEN;
+ host->msg_dma = host->shm_dma + RBUF_LEN;
+
+ memset(host->shm, 0xff, RBUF_LEN);
+	memset(host->msg_base, 0, CARM_SHM_SIZE - RBUF_LEN);
+
+ return 0;
+}
+
+static int carm_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static unsigned int printed_version;
+ struct carm_host *host;
+ unsigned int pci_dac;
+ int rc;
+ request_queue_t *q;
+ unsigned int i;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+#if IF_64BIT_DMA_IS_POSSIBLE /* grrrr... */
+ rc = pci_set_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (!rc) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): consistent DMA mask failure\n",
+ pci_name(pdev));
+ goto err_out_regions;
+ }
+ pci_dac = 1;
+ } else {
+#endif
+ rc = pci_set_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): DMA mask failure\n",
+ pci_name(pdev));
+ goto err_out_regions;
+ }
+ pci_dac = 0;
+#if IF_64BIT_DMA_IS_POSSIBLE /* grrrr... */
+ }
+#endif
+
+ host = kmalloc(sizeof(*host), GFP_KERNEL);
+ if (!host) {
+ printk(KERN_ERR DRV_NAME "(%s): memory alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ memset(host, 0, sizeof(*host));
+ host->pdev = pdev;
+ host->flags = pci_dac ? FL_DAC : 0;
+ spin_lock_init(&host->lock);
+ INIT_WORK(&host->fsm_task, carm_fsm_task, host);
+ init_MUTEX_LOCKED(&host->probe_sem);
+
+ for (i = 0; i < ARRAY_SIZE(host->req); i++)
+ host->req[i].tag = i;
+
+ host->mmio = ioremap(pci_resource_start(pdev, 0),
+ pci_resource_len(pdev, 0));
+ if (!host->mmio) {
+ printk(KERN_ERR DRV_NAME "(%s): MMIO alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_kfree;
+ }
+
+ rc = carm_init_shm(host);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): DMA SHM alloc failure\n",
+ pci_name(pdev));
+ goto err_out_iounmap;
+ }
+
+ q = blk_init_queue(carm_oob_rq_fn, &host->lock);
+ if (!q) {
+ printk(KERN_ERR DRV_NAME "(%s): OOB queue alloc failure\n",
+ pci_name(pdev));
+ rc = -ENOMEM;
+ goto err_out_pci_free;
+ }
+ host->oob_q = q;
+ q->queuedata = host;
+
+ /*
+ * Figure out which major to use: 160, 161, or dynamic
+ */
+ if (!test_and_set_bit(0, &carm_major_alloc))
+ host->major = 160;
+ else if (!test_and_set_bit(1, &carm_major_alloc))
+ host->major = 161;
+ else
+ host->flags |= FL_DYN_MAJOR;
+
+ host->id = carm_host_id;
+ sprintf(host->name, DRV_NAME "%d", carm_host_id);
+
+ rc = register_blkdev(host->major, host->name);
+ if (rc < 0)
+ goto err_out_free_majors;
+ if (host->flags & FL_DYN_MAJOR)
+ host->major = rc;
+
+ devfs_mk_dir(DRV_NAME);
+
+ rc = carm_init_disks(host);
+ if (rc)
+ goto err_out_blkdev_disks;
+
+ pci_set_master(pdev);
+
+ rc = request_irq(pdev->irq, carm_interrupt, SA_SHIRQ, DRV_NAME, host);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): irq alloc failure\n",
+ pci_name(pdev));
+ goto err_out_blkdev_disks;
+ }
+
+ rc = carm_init_host(host);
+ if (rc)
+ goto err_out_free_irq;
+
+ DPRINTK("waiting for probe_sem\n");
+ down(&host->probe_sem);
+
+ printk(KERN_INFO "%s: pci %s, ports %d, io %lx, irq %u, major %d\n",
+ host->name, pci_name(pdev), (int) CARM_MAX_PORTS,
+ pci_resource_start(pdev, 0), pdev->irq, host->major);
+
+ carm_host_id++;
+ pci_set_drvdata(pdev, host);
+ return 0;
+
+err_out_free_irq:
+ free_irq(pdev->irq, host);
+err_out_blkdev_disks:
+ carm_free_disks(host);
+ unregister_blkdev(host->major, host->name);
+err_out_free_majors:
+ if (host->major == 160)
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+ blk_cleanup_queue(host->oob_q);
+err_out_pci_free:
+ pci_free_consistent(pdev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+err_out_iounmap:
+ iounmap(host->mmio);
+err_out_kfree:
+ kfree(host);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
+static void carm_remove_one (struct pci_dev *pdev)
+{
+ struct carm_host *host = pci_get_drvdata(pdev);
+
+ if (!host) {
+ printk(KERN_ERR PFX "BUG: no host data for PCI(%s)\n",
+ pci_name(pdev));
+ return;
+ }
+
+ free_irq(pdev->irq, host);
+ carm_free_disks(host);
+ devfs_remove(DRV_NAME);
+ unregister_blkdev(host->major, host->name);
+ if (host->major == 160)
+ clear_bit(0, &carm_major_alloc);
+ else if (host->major == 161)
+ clear_bit(1, &carm_major_alloc);
+ blk_cleanup_queue(host->oob_q);
+ pci_free_consistent(pdev, CARM_SHM_SIZE, host->shm, host->shm_dma);
+ iounmap(host->mmio);
+ kfree(host);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+}
+
+static int __init carm_init(void)
+{
+ return pci_module_init(&carm_driver);
+}
+
+static void __exit carm_exit(void)
+{
+ pci_unregister_driver(&carm_driver);
+}
+
+module_init(carm_init);
+module_exit(carm_exit);
+
+
--- /dev/null
+/**
+ * \file drm_irq.h
+ * IRQ support
+ *
+ * \author Rickard E. (Rik) Faith <faith@valinux.com>
+ * \author Gareth Hughes <gareth@valinux.com>
+ */
+
+/*
+ * Created: Fri Mar 19 14:30:16 1999 by faith@valinux.com
+ *
+ * Copyright 1999, 2000 Precision Insight, Inc., Cedar Park, Texas.
+ * Copyright 2000 VA Linux Systems, Inc., Sunnyvale, California.
+ * All Rights Reserved.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * VA LINUX SYSTEMS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "drmP.h"
+
+#include <linux/interrupt.h> /* For task queue support */
+
+/**
+ * Get interrupt from bus id.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_irq_busid structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Finds the PCI device with the specified bus id and gets its IRQ number.
+ * This IOCTL is deprecated, and will now return EINVAL for any busid not equal
+ * to that of the device that this DRM instance is attached to.
+ */
+int DRM(irq_by_busid)(struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg)
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_irq_busid_t __user *argp = (void __user *)arg;
+ drm_irq_busid_t p;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ if (copy_from_user(&p, argp, sizeof(p)))
+ return -EFAULT;
+
+ if ((p.busnum >> 8) != dev->pci_domain ||
+ (p.busnum & 0xff) != dev->pci_bus ||
+ p.devnum != dev->pci_slot ||
+ p.funcnum != dev->pci_func)
+ return -EINVAL;
+
+ p.irq = dev->irq;
+
+ DRM_DEBUG("%d:%d:%d => IRQ %d\n",
+ p.busnum, p.devnum, p.funcnum, p.irq);
+ if (copy_to_user(argp, &p, sizeof(p)))
+ return -EFAULT;
+ return 0;
+}
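+
+/*
+ * A minimal user-space sketch of the busid-to-IRQ lookup above.  This is
+ * illustrative only: "fd" and the bus/slot/function values are assumed to
+ * come from the caller, and drm_irq_busid_t/DRM_IOCTL_IRQ_BUSID from the
+ * usual DRM headers:
+ *
+ *	drm_irq_busid_t p = {
+ *		.busnum  = (domain << 8) | bus,
+ *		.devnum  = slot,
+ *		.funcnum = func,
+ *	};
+ *	if (ioctl(fd, DRM_IOCTL_IRQ_BUSID, &p) == 0)
+ *		printf("IRQ %d\n", p.irq);
+ */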
+
+/**
+ * Install IRQ handler.
+ *
+ * \param dev DRM device.
+ * \param irq IRQ number.
+ *
+ * Initializes the IRQ-related data and sets up drm_device::vbl_queue, then
+ * installs the handler, calling the driver's \c DRM(driver_irq_preinstall)()
+ * and \c DRM(driver_irq_postinstall)() functions before and after the
+ * installation.
+ */
+int DRM(irq_install)( drm_device_t *dev )
+{
+ int ret;
+ unsigned long sh_flags=0;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ if ( dev->irq == 0 )
+ return -EINVAL;
+
+ down( &dev->struct_sem );
+
+ /* Driver must have been initialized */
+ if ( !dev->dev_private ) {
+ up( &dev->struct_sem );
+ return -EINVAL;
+ }
+
+ if ( dev->irq_enabled ) {
+ up( &dev->struct_sem );
+ return -EBUSY;
+ }
+ dev->irq_enabled = 1;
+ up( &dev->struct_sem );
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+ if (drm_core_check_feature(dev, DRIVER_IRQ_VBL)) {
+ init_waitqueue_head(&dev->vbl_queue);
+
+ spin_lock_init( &dev->vbl_lock );
+
+ INIT_LIST_HEAD( &dev->vbl_sigs.head );
+
+ dev->vbl_pending = 0;
+ }
+
+ /* Before installing handler */
+ dev->fn_tbl.irq_preinstall(dev);
+
+ /* Install handler */
+ if (drm_core_check_feature(dev, DRIVER_IRQ_SHARED))
+ sh_flags = SA_SHIRQ;
+
+ ret = request_irq( dev->irq, dev->fn_tbl.irq_handler,
+ sh_flags, dev->devname, dev );
+ if ( ret < 0 ) {
+ down( &dev->struct_sem );
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+ return ret;
+ }
+
+ /* After installing handler */
+ dev->fn_tbl.irq_postinstall(dev);
+
+ return 0;
+}
+
+/**
+ * Uninstall the IRQ handler.
+ *
+ * \param dev DRM device.
+ *
+ * Calls the driver's \c DRM(driver_irq_uninstall)() function, and stops the irq.
+ */
+int DRM(irq_uninstall)( drm_device_t *dev )
+{
+ int irq_enabled;
+
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return -EINVAL;
+
+ down( &dev->struct_sem );
+ irq_enabled = dev->irq_enabled;
+ dev->irq_enabled = 0;
+ up( &dev->struct_sem );
+
+ if ( !irq_enabled )
+ return -EINVAL;
+
+ DRM_DEBUG( "%s: irq=%d\n", __FUNCTION__, dev->irq );
+
+ dev->fn_tbl.irq_uninstall(dev);
+
+ free_irq( dev->irq, dev );
+
+ return 0;
+}
+
+/**
+ * IRQ control ioctl.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param arg user argument, pointing to a drm_control structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Calls irq_install() or irq_uninstall() according to \p arg.
+ */
+int DRM(control)( struct inode *inode, struct file *filp,
+ unsigned int cmd, unsigned long arg )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_control_t ctl;
+
+	/* If we don't have an IRQ, fall back for compatibility reasons;
+	 * this used to be a separate function in drm_dma.h. */
+
+ if ( copy_from_user( &ctl, (drm_control_t __user *)arg, sizeof(ctl) ) )
+ return -EFAULT;
+
+ switch ( ctl.func ) {
+ case DRM_INST_HANDLER:
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return 0;
+ if (dev->if_version < DRM_IF_VERSION(1, 2) &&
+ ctl.irq != dev->irq)
+ return -EINVAL;
+ return DRM(irq_install)( dev );
+ case DRM_UNINST_HANDLER:
+ if (!drm_core_check_feature(dev, DRIVER_HAVE_IRQ))
+ return 0;
+ return DRM(irq_uninstall)( dev );
+ default:
+ return -EINVAL;
+ }
+}
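+
+/*
+ * Illustrative user-space call for the ioctl above ("fd" and "irq" are
+ * assumed inputs; the request name comes from the DRM headers):
+ *
+ *	drm_control_t ctl = { .func = DRM_INST_HANDLER, .irq = irq };
+ *	ioctl(fd, DRM_IOCTL_CONTROL, &ctl);
+ */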
+
+/**
+ * Wait for VBLANK.
+ *
+ * \param inode device inode.
+ * \param filp file pointer.
+ * \param cmd command.
+ * \param data user argument, pointing to a drm_wait_vblank structure.
+ * \return zero on success or a negative number on failure.
+ *
+ * Verifies the IRQ is installed.
+ *
+ * If a signal is requested, checks whether this task has already scheduled
+ * the same signal for the same vblank sequence number; nothing needs to be
+ * done in that case. If the number of tasks waiting for the interrupt
+ * exceeds 100, the function fails. Otherwise it adds a new entry to
+ * drm_device::vbl_sigs for this task.
+ *
+ * If a signal is not requested, then calls vblank_wait().
+ */
+int DRM(wait_vblank)( DRM_IOCTL_ARGS )
+{
+ drm_file_t *priv = filp->private_data;
+ drm_device_t *dev = priv->dev;
+ drm_wait_vblank_t __user *argp = (void __user *)data;
+ drm_wait_vblank_t vblwait;
+ struct timeval now;
+ int ret = 0;
+ unsigned int flags;
+
+ if (!drm_core_check_feature(dev, DRIVER_IRQ_VBL))
+ return -EINVAL;
+
+ if (!dev->irq)
+ return -EINVAL;
+
+ DRM_COPY_FROM_USER_IOCTL( vblwait, argp, sizeof(vblwait) );
+
+ switch ( vblwait.request.type & ~_DRM_VBLANK_FLAGS_MASK ) {
+ case _DRM_VBLANK_RELATIVE:
+ vblwait.request.sequence += atomic_read( &dev->vbl_received );
+ vblwait.request.type &= ~_DRM_VBLANK_RELATIVE;
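+		/* fall through: the relative request is now absolute */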
+ case _DRM_VBLANK_ABSOLUTE:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ flags = vblwait.request.type & _DRM_VBLANK_FLAGS_MASK;
+
+ if ( flags & _DRM_VBLANK_SIGNAL ) {
+ unsigned long irqflags;
+ drm_vbl_sig_t *vbl_sig;
+
+ vblwait.reply.sequence = atomic_read( &dev->vbl_received );
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ /* Check if this task has already scheduled the same signal
+ * for the same vblank sequence number; nothing to be done in
+ * that case
+ */
+ list_for_each_entry( vbl_sig, &dev->vbl_sigs.head, head ) {
+ if (vbl_sig->sequence == vblwait.request.sequence
+ && vbl_sig->info.si_signo == vblwait.request.signal
+ && vbl_sig->task == current)
+ {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ goto done;
+ }
+ }
+
+ if ( dev->vbl_pending >= 100 ) {
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ return -EBUSY;
+ }
+
+ dev->vbl_pending++;
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+
+ if ( !( vbl_sig = DRM_MALLOC( sizeof( drm_vbl_sig_t ) ) ) ) {
+ return -ENOMEM;
+ }
+
+ memset( (void *)vbl_sig, 0, sizeof(*vbl_sig) );
+
+ vbl_sig->sequence = vblwait.request.sequence;
+ vbl_sig->info.si_signo = vblwait.request.signal;
+ vbl_sig->task = current;
+
+ spin_lock_irqsave( &dev->vbl_lock, irqflags );
+
+ list_add_tail( (struct list_head *) vbl_sig, &dev->vbl_sigs.head );
+
+ spin_unlock_irqrestore( &dev->vbl_lock, irqflags );
+ } else {
+ if (dev->fn_tbl.vblank_wait)
+ ret = dev->fn_tbl.vblank_wait( dev, &vblwait.request.sequence );
+
+ do_gettimeofday( &now );
+ vblwait.reply.tval_sec = now.tv_sec;
+ vblwait.reply.tval_usec = now.tv_usec;
+ }
+
+done:
+ DRM_COPY_TO_USER_IOCTL( argp, vblwait, sizeof(vblwait) );
+
+ return ret;
+}
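+
+/*
+ * Hedged usage sketch for the ioctl above, assuming the standard
+ * drm_wait_vblank_t layout and an already-open DRM fd.  Waiting for the
+ * next vblank without requesting a signal:
+ *
+ *	drm_wait_vblank_t vbl;
+ *	vbl.request.type = _DRM_VBLANK_RELATIVE;
+ *	vbl.request.sequence = 1;
+ *	ioctl(fd, DRM_IOCTL_WAIT_VBLANK, &vbl);
+ *
+ * On return, vbl.reply.sequence holds the current vblank count and
+ * vbl.reply.tval_sec/tval_usec the time of day recorded above.
+ */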
+
+/**
+ * Send the VBLANK signals.
+ *
+ * \param dev DRM device.
+ *
+ * Sends a signal for each task in drm_device::vbl_sigs and empties the list.
+ */
+void DRM(vbl_send_signals)( drm_device_t *dev )
+{
+ struct list_head *list, *tmp;
+ drm_vbl_sig_t *vbl_sig;
+ unsigned int vbl_seq = atomic_read( &dev->vbl_received );
+ unsigned long flags;
+
+ spin_lock_irqsave( &dev->vbl_lock, flags );
+
+ list_for_each_safe( list, tmp, &dev->vbl_sigs.head ) {
+ vbl_sig = list_entry( list, drm_vbl_sig_t, head );
+ if ( ( vbl_seq - vbl_sig->sequence ) <= (1<<23) ) {
+ vbl_sig->info.si_code = vbl_seq;
+ send_sig_info( vbl_sig->info.si_signo, &vbl_sig->info, vbl_sig->task );
+
+ list_del( list );
+
+ DRM_FREE( vbl_sig, sizeof(*vbl_sig) );
+
+ dev->vbl_pending--;
+ }
+ }
+
+ spin_unlock_irqrestore( &dev->vbl_lock, flags );
+}
+
+
--- /dev/null
+/*
+ This file is auto-generated from the drm_pciids.txt in the DRM CVS
+ Please contact dri-devel@lists.sf.net to add new cards to this list
+*/
+#define radeon_PCI_IDS \
+ {0x1002, 0x4136, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4137, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4237, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4242, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4336, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4337, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4437, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4965, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4966, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4967, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C57, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C58, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C59, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C5A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C65, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C66, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4C67, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5144, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5145, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5146, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5147, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5148, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5149, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x514F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5157, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5158, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5159, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x515A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5168, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5169, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x516C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5834, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5835, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5836, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5837, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5960, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5961, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5962, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5963, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5964, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5968, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5969, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x596A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x596B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c61, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c62, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c63, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5c64, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define r128_PCI_IDS \
+ {0x1002, 0x4c45, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4d46, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4d4c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5041, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5042, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5043, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5044, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5045, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5046, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5047, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5048, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5049, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504E, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x504F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5051, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5052, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5053, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5054, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5055, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5056, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5057, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5058, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5245, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5246, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5247, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x524b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x524c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x534d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5446, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x544C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x5452, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define mga_PCI_IDS \
+ {0x102b, 0x0521, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x102b, 0x0525, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x102b, 0x2527, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define mach64_PCI_IDS \
+ {0x1002, 0x4749, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4750, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4751, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4742, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4744, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c49, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c50, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c51, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c42, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c44, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4752, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4753, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x474e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c52, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c53, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c4d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1002, 0x4c4e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define sisdrv_PCI_IDS \
+ {0x1039, 0x0300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x5300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x6300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1039, 0x7300, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define tdfx_PCI_IDS \
+ {0x121a, 0x0003, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0004, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0005, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0007, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x0009, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x121a, 0x000b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define viadrv_PCI_IDS \
+ {0x1106, 0x3022, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x3122, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x7205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x1106, 0x7204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define i810_PCI_IDS \
+ {0x8086, 0x7121, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x7123, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x7125, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x1132, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define i830_PCI_IDS \
+ {0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define gamma_PCI_IDS \
+ {0x3d3d, 0x0008, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define savage_PCI_IDS \
+ {0x5333, 0x8a22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a23, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c10, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c11, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c12, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c13, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c20, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c21, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c22, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c24, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2a, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2b, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2d, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2e, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8c2f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a25, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8a26, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d01, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d02, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x5333, 0x8d04, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
+#define ffb_PCI_IDS \
+ {0, 0, 0}
+
+#define i915_PCI_IDS \
+ {0x8086, 0x3577, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2562, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x3582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2572, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0x8086, 0x2582, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, \
+ {0, 0, 0}
+
--- /dev/null
+/*
+ * Intel & MS High Precision Event Timer Implementation.
+ *
+ * Copyright (C) 2003 Intel Corporation
+ * Venki Pallipadi
+ * (c) Copyright 2004 Hewlett-Packard Development Company, L.P.
+ * Bob Picco <robert.picco@hp.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/config.h>
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/miscdevice.h>
+#include <linux/major.h>
+#include <linux/ioport.h>
+#include <linux/fcntl.h>
+#include <linux/init.h>
+#include <linux/poll.h>
+#include <linux/proc_fs.h>
+#include <linux/spinlock.h>
+#include <linux/sysctl.h>
+#include <linux/wait.h>
+#include <linux/bcd.h>
+#include <linux/seq_file.h>
+#include <linux/bitops.h>
+
+#include <asm/current.h>
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/div64.h>
+
+#include <linux/acpi.h>
+#include <acpi/acpi_bus.h>
+#include <linux/hpet.h>
+
+/*
+ * The High Precision Event Timer driver.
+ * This driver is closely modelled after the rtc.c driver.
+ * http://www.intel.com/labs/platcomp/hpet/hpetspec.htm
+ */
+#define HPET_USER_FREQ (64)
+#define HPET_DRIFT (500)
+
+static u32 hpet_ntimer, hpet_nhpet, hpet_max_freq = HPET_USER_FREQ;
+
+/* A lock for concurrent access by app and isr hpet activity. */
+static spinlock_t hpet_lock = SPIN_LOCK_UNLOCKED;
+/* A lock for concurrent intermodule access to hpet and isr hpet activity. */
+static spinlock_t hpet_task_lock = SPIN_LOCK_UNLOCKED;
+
+#define HPET_DEV_NAME (7)
+
+struct hpet_dev {
+ struct hpets *hd_hpets;
+ struct hpet __iomem *hd_hpet;
+ struct hpet_timer __iomem *hd_timer;
+ unsigned long hd_ireqfreq;
+ unsigned long hd_irqdata;
+ wait_queue_head_t hd_waitqueue;
+ struct fasync_struct *hd_async_queue;
+ struct hpet_task *hd_task;
+ unsigned int hd_flags;
+ unsigned int hd_irq;
+ unsigned int hd_hdwirq;
+ char hd_name[HPET_DEV_NAME];
+};
+
+struct hpets {
+ struct hpets *hp_next;
+ struct hpet __iomem *hp_hpet;
+ struct time_interpolator *hp_interpolator;
+ unsigned long hp_period;
+ unsigned long hp_delta;
+ unsigned int hp_ntimer;
+ unsigned int hp_which;
+ struct hpet_dev hp_dev[1];
+};
+
+static struct hpets *hpets;
+
+#define HPET_OPEN 0x0001
+#define HPET_IE 0x0002 /* interrupt enabled */
+#define HPET_PERIODIC 0x0004
+
+#if BITS_PER_LONG == 64
+#define write_counter(V, MC) writeq(V, MC)
+#define read_counter(MC) readq(MC)
+#else
+#define write_counter(V, MC) writel(V, MC)
+#define read_counter(MC) readl(MC)
+#endif
+
+#ifndef readq
+static inline unsigned long long readq(void __iomem *addr)
+{
+ return readl(addr) | (((unsigned long long)readl(addr + 4)) << 32LL);
+}
+#endif
+
+#ifndef writeq
+static inline void writeq(unsigned long long v, void __iomem *addr)
+{
+ writel(v & 0xffffffff, addr);
+ writel(v >> 32, addr + 4);
+}
+#endif
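+
+/*
+ * Note that the readq() fallback above composes a 64-bit value from two
+ * separate 32-bit reads, so it is not atomic with respect to a changing
+ * register.  The driver uses it for the capability, configuration and ISR
+ * registers; the free-running main counter goes through read_counter(),
+ * which is a single readl() on 32-bit builds.
+ */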
+
+static irqreturn_t hpet_interrupt(int irq, void *data, struct pt_regs *regs)
+{
+ struct hpet_dev *devp;
+ unsigned long isr;
+
+ devp = data;
+
+ spin_lock(&hpet_lock);
+ devp->hd_irqdata++;
+
+	/*
+	 * For non-periodic timers, advance the comparator by the requested
+	 * interval.  This has the effect of treating a non-periodic timer
+	 * as if it were periodic.
+	 */
+ if ((devp->hd_flags & (HPET_IE | HPET_PERIODIC)) == HPET_IE) {
+ unsigned long m, t;
+
+ t = devp->hd_ireqfreq;
+ m = read_counter(&devp->hd_hpet->hpet_mc);
+ write_counter(t + m + devp->hd_hpets->hp_delta,
+ &devp->hd_timer->hpet_compare);
+ }
+
+ isr = (1 << (devp - devp->hd_hpets->hp_dev));
+ writeq(isr, &devp->hd_hpet->hpet_isr);
+ spin_unlock(&hpet_lock);
+
+ spin_lock(&hpet_task_lock);
+ if (devp->hd_task)
+ devp->hd_task->ht_func(devp->hd_task->ht_data);
+ spin_unlock(&hpet_task_lock);
+
+ wake_up_interruptible(&devp->hd_waitqueue);
+
+ kill_fasync(&devp->hd_async_queue, SIGIO, POLL_IN);
+
+ return IRQ_HANDLED;
+}
+
+static int hpet_open(struct inode *inode, struct file *file)
+{
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+ int i;
+
+ if (file->f_mode & FMODE_WRITE)
+ return -EINVAL;
+
+ spin_lock_irq(&hpet_lock);
+
+ for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
+ for (i = 0; i < hpetp->hp_ntimer; i++)
+ if (hpetp->hp_dev[i].hd_flags & HPET_OPEN
+ || hpetp->hp_dev[i].hd_task)
+ continue;
+ else {
+ devp = &hpetp->hp_dev[i];
+ break;
+ }
+
+ if (!devp) {
+ spin_unlock_irq(&hpet_lock);
+ return -EBUSY;
+ }
+
+ file->private_data = devp;
+ devp->hd_irqdata = 0;
+ devp->hd_flags |= HPET_OPEN;
+ spin_unlock_irq(&hpet_lock);
+
+ return 0;
+}
+
+static ssize_t
+hpet_read(struct file *file, char __user *buf, size_t count, loff_t * ppos)
+{
+ DECLARE_WAITQUEUE(wait, current);
+ unsigned long data;
+ ssize_t retval;
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+ if (!devp->hd_ireqfreq)
+ return -EIO;
+
+ if (count < sizeof(unsigned long))
+ return -EINVAL;
+
+ add_wait_queue(&devp->hd_waitqueue, &wait);
+
+ for ( ; ; ) {
+ set_current_state(TASK_INTERRUPTIBLE);
+
+ spin_lock_irq(&hpet_lock);
+ data = devp->hd_irqdata;
+ devp->hd_irqdata = 0;
+ spin_unlock_irq(&hpet_lock);
+
+ if (data)
+ break;
+ else if (file->f_flags & O_NONBLOCK) {
+ retval = -EAGAIN;
+ goto out;
+ } else if (signal_pending(current)) {
+ retval = -ERESTARTSYS;
+ goto out;
+ }
+ schedule();
+ }
+
+ retval = put_user(data, (unsigned long __user *)buf);
+ if (!retval)
+ retval = sizeof(unsigned long);
+out:
+ __set_current_state(TASK_RUNNING);
+ remove_wait_queue(&devp->hd_waitqueue, &wait);
+
+ return retval;
+}
+
+static unsigned int hpet_poll(struct file *file, poll_table * wait)
+{
+ unsigned long v;
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+
+ if (!devp->hd_ireqfreq)
+ return 0;
+
+ poll_wait(file, &devp->hd_waitqueue, wait);
+
+ spin_lock_irq(&hpet_lock);
+ v = devp->hd_irqdata;
+ spin_unlock_irq(&hpet_lock);
+
+ if (v != 0)
+ return POLLIN | POLLRDNORM;
+
+ return 0;
+}
+
+static int hpet_mmap(struct file *file, struct vm_area_struct *vma)
+{
+#ifdef CONFIG_HPET_MMAP
+ struct hpet_dev *devp;
+ unsigned long addr;
+
+ if (((vma->vm_end - vma->vm_start) != PAGE_SIZE) || vma->vm_pgoff)
+ return -EINVAL;
+
+ devp = file->private_data;
+ addr = (unsigned long)devp->hd_hpet;
+
+ if (addr & (PAGE_SIZE - 1))
+ return -ENOSYS;
+
+ vma->vm_flags |= VM_IO;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
+ addr = __pa(addr);
+
+ if (remap_pfn_range(vma, vma->vm_start, addr >> PAGE_SHIFT,
+ PAGE_SIZE, vma->vm_page_prot)) {
+ printk(KERN_ERR "remap_pfn_range failed in hpet.c\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+#else
+ return -ENOSYS;
+#endif
+}
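+
+/*
+ * A hedged user-space sketch of the mapping above (valid only when
+ * CONFIG_HPET_MMAP is set; the checks above allow exactly one page at
+ * offset zero, and only if the registers are page-aligned):
+ *
+ *	void *regs = mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED,
+ *			  fd, 0);
+ */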
+
+static int hpet_fasync(int fd, struct file *file, int on)
+{
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+
+ if (fasync_helper(fd, file, on, &devp->hd_async_queue) >= 0)
+ return 0;
+ else
+ return -EIO;
+}
+
+static int hpet_release(struct inode *inode, struct file *file)
+{
+ struct hpet_dev *devp;
+ struct hpet_timer __iomem *timer;
+ int irq = 0;
+
+ devp = file->private_data;
+ timer = devp->hd_timer;
+
+ spin_lock_irq(&hpet_lock);
+
+ writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
+ &timer->hpet_config);
+
+ irq = devp->hd_irq;
+ devp->hd_irq = 0;
+
+ devp->hd_ireqfreq = 0;
+
+ if (devp->hd_flags & HPET_PERIODIC
+ && readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
+ unsigned long v;
+
+ v = readq(&timer->hpet_config);
+ v ^= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ }
+
+ devp->hd_flags &= ~(HPET_OPEN | HPET_IE | HPET_PERIODIC);
+ spin_unlock_irq(&hpet_lock);
+
+ if (irq)
+ free_irq(irq, devp);
+
+ if (file->f_flags & FASYNC)
+ hpet_fasync(-1, file, 0);
+
+ file->private_data = NULL;
+ return 0;
+}
+
+static int hpet_ioctl_common(struct hpet_dev *, int, unsigned long, int);
+
+static int
+hpet_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct hpet_dev *devp;
+
+ devp = file->private_data;
+ return hpet_ioctl_common(devp, cmd, arg, 0);
+}
+
+static int hpet_ioctl_ieon(struct hpet_dev *devp)
+{
+ struct hpet_timer __iomem *timer;
+ struct hpet __iomem *hpet;
+ struct hpets *hpetp;
+ int irq;
+ unsigned long g, v, t, m;
+ unsigned long flags, isr;
+
+ timer = devp->hd_timer;
+ hpet = devp->hd_hpet;
+ hpetp = devp->hd_hpets;
+
+ v = readq(&timer->hpet_config);
+ spin_lock_irq(&hpet_lock);
+
+ if (devp->hd_flags & HPET_IE) {
+ spin_unlock_irq(&hpet_lock);
+ return -EBUSY;
+ }
+
+ devp->hd_flags |= HPET_IE;
+ spin_unlock_irq(&hpet_lock);
+
+ t = readq(&timer->hpet_config);
+ irq = devp->hd_hdwirq;
+
+ if (irq) {
+ sprintf(devp->hd_name, "hpet%d", (int)(devp - hpetp->hp_dev));
+
+ if (request_irq
+ (irq, hpet_interrupt, SA_INTERRUPT, devp->hd_name, (void *)devp)) {
+ printk(KERN_ERR "hpet: IRQ %d is not free\n", irq);
+ irq = 0;
+ }
+ }
+
+ if (irq == 0) {
+ spin_lock_irq(&hpet_lock);
+ devp->hd_flags ^= HPET_IE;
+ spin_unlock_irq(&hpet_lock);
+ return -EIO;
+ }
+
+ devp->hd_irq = irq;
+ t = devp->hd_ireqfreq;
+ v = readq(&timer->hpet_config);
+ g = v | Tn_INT_ENB_CNF_MASK;
+
+ if (devp->hd_flags & HPET_PERIODIC) {
+ write_counter(t, &timer->hpet_compare);
+ g |= Tn_TYPE_CNF_MASK;
+ v |= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ v |= Tn_VAL_SET_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ local_irq_save(flags);
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ } else {
+ local_irq_save(flags);
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ }
+
+ isr = (1 << (devp - hpets->hp_dev));
+ writeq(isr, &hpet->hpet_isr);
+ writeq(g, &timer->hpet_config);
+ local_irq_restore(flags);
+
+ return 0;
+}
+
+static inline unsigned long hpet_time_div(unsigned long dis)
+{
+ unsigned long long m = 1000000000000000ULL;
+
+ do_div(m, dis);
+
+ return (unsigned long)m;
+}
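+
+/*
+ * Worked example for hpet_time_div(): it converts a period expressed in
+ * femtoseconds (10^-15 s) into a frequency in Hz.  For an illustrative
+ * (assumed) period of 69841279 fs:
+ *
+ *	10^15 / 69841279 ~= 14318180 Hz, i.e. the familiar ~14.318 MHz.
+ */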
+
+static int
+hpet_ioctl_common(struct hpet_dev *devp, int cmd, unsigned long arg, int kernel)
+{
+ struct hpet_timer __iomem *timer;
+ struct hpet __iomem *hpet;
+ struct hpets *hpetp;
+ int err;
+ unsigned long v;
+
+ switch (cmd) {
+ case HPET_IE_OFF:
+ case HPET_INFO:
+ case HPET_EPI:
+ case HPET_DPI:
+ case HPET_IRQFREQ:
+ timer = devp->hd_timer;
+ hpet = devp->hd_hpet;
+ hpetp = devp->hd_hpets;
+ break;
+ case HPET_IE_ON:
+ return hpet_ioctl_ieon(devp);
+ default:
+ return -EINVAL;
+ }
+
+ err = 0;
+
+ switch (cmd) {
+ case HPET_IE_OFF:
+ if ((devp->hd_flags & HPET_IE) == 0)
+ break;
+ v = readq(&timer->hpet_config);
+ v &= ~Tn_INT_ENB_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ if (devp->hd_irq) {
+ free_irq(devp->hd_irq, devp);
+ devp->hd_irq = 0;
+ }
+ devp->hd_flags ^= HPET_IE;
+ break;
+ case HPET_INFO:
+ {
+ struct hpet_info info;
+
+ info.hi_ireqfreq = hpet_time_div(hpetp->hp_period *
+ devp->hd_ireqfreq);
+ info.hi_flags =
+ readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK;
+ info.hi_hpet = devp->hd_hpets->hp_which;
+ info.hi_timer = devp - devp->hd_hpets->hp_dev;
+ if (copy_to_user((void __user *)arg, &info, sizeof(info)))
+ err = -EFAULT;
+ break;
+ }
+ case HPET_EPI:
+ v = readq(&timer->hpet_config);
+ if ((v & Tn_PER_INT_CAP_MASK) == 0) {
+ err = -ENXIO;
+ break;
+ }
+ devp->hd_flags |= HPET_PERIODIC;
+ break;
+ case HPET_DPI:
+ v = readq(&timer->hpet_config);
+ if ((v & Tn_PER_INT_CAP_MASK) == 0) {
+ err = -ENXIO;
+ break;
+ }
+ if (devp->hd_flags & HPET_PERIODIC &&
+ readq(&timer->hpet_config) & Tn_TYPE_CNF_MASK) {
+ v = readq(&timer->hpet_config);
+ v ^= Tn_TYPE_CNF_MASK;
+ writeq(v, &timer->hpet_config);
+ }
+ devp->hd_flags &= ~HPET_PERIODIC;
+ break;
+ case HPET_IRQFREQ:
+ if (!kernel && (arg > hpet_max_freq) &&
+ !capable(CAP_SYS_RESOURCE)) {
+ err = -EACCES;
+ break;
+ }
+
+ if (arg & (arg - 1)) {
+ err = -EINVAL;
+ break;
+ }
+
+ devp->hd_ireqfreq = hpet_time_div(hpetp->hp_period * arg);
+ }
+
+ return err;
+}
+
+static struct file_operations hpet_fops = {
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .read = hpet_read,
+ .poll = hpet_poll,
+ .ioctl = hpet_ioctl,
+ .open = hpet_open,
+ .release = hpet_release,
+ .fasync = hpet_fasync,
+ .mmap = hpet_mmap,
+};
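+
+/*
+ * A minimal user-space sketch of the interface wired up above (hedged:
+ * the ioctl names come from linux/hpet.h and the 64 Hz value is purely
+ * illustrative).  HPET_IRQFREQ rejects frequencies that are not powers
+ * of two, and read() blocks until at least one interrupt has fired,
+ * returning the count accumulated in hd_irqdata:
+ *
+ *	int fd = open("/dev/hpet", O_RDONLY);
+ *	unsigned long data;
+ *
+ *	ioctl(fd, HPET_IRQFREQ, 64);
+ *	ioctl(fd, HPET_IE_ON, 0);
+ *	read(fd, &data, sizeof(data));
+ */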
+
+EXPORT_SYMBOL(hpet_alloc);
+EXPORT_SYMBOL(hpet_register);
+EXPORT_SYMBOL(hpet_unregister);
+EXPORT_SYMBOL(hpet_control);
+
+int hpet_register(struct hpet_task *tp, int periodic)
+{
+ unsigned int i;
+ u64 mask;
+ struct hpet_timer __iomem *timer;
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+
+ switch (periodic) {
+ case 1:
+ mask = Tn_PER_INT_CAP_MASK;
+ break;
+ case 0:
+ mask = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ spin_lock_irq(&hpet_task_lock);
+ spin_lock(&hpet_lock);
+
+ for (devp = NULL, hpetp = hpets; hpetp && !devp; hpetp = hpetp->hp_next)
+ for (timer = hpetp->hp_hpet->hpet_timers, i = 0;
+ i < hpetp->hp_ntimer; i++, timer++) {
+ if ((readq(&timer->hpet_config) & Tn_PER_INT_CAP_MASK)
+ != mask)
+ continue;
+
+ devp = &hpetp->hp_dev[i];
+
+ if (devp->hd_flags & HPET_OPEN || devp->hd_task) {
+ devp = NULL;
+ continue;
+ }
+
+ tp->ht_opaque = devp;
+ devp->hd_task = tp;
+ break;
+ }
+
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+
+ if (tp->ht_opaque)
+ return 0;
+ else
+ return -EBUSY;
+}
+
+static inline int hpet_tpcheck(struct hpet_task *tp)
+{
+ struct hpet_dev *devp;
+ struct hpets *hpetp;
+
+ devp = tp->ht_opaque;
+
+ if (!devp)
+ return -ENXIO;
+
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (devp >= hpetp->hp_dev
+ && devp < (hpetp->hp_dev + hpetp->hp_ntimer)
+ && devp->hd_hpet == hpetp->hp_hpet)
+ return 0;
+
+ return -ENXIO;
+}
+
+int hpet_unregister(struct hpet_task *tp)
+{
+ struct hpet_dev *devp;
+ struct hpet_timer __iomem *timer;
+ int err;
+
+ if ((err = hpet_tpcheck(tp)))
+ return err;
+
+ spin_lock_irq(&hpet_task_lock);
+ spin_lock(&hpet_lock);
+
+ devp = tp->ht_opaque;
+ if (devp->hd_task != tp) {
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+ return -ENXIO;
+ }
+
+ timer = devp->hd_timer;
+ writeq((readq(&timer->hpet_config) & ~Tn_INT_ENB_CNF_MASK),
+ &timer->hpet_config);
+ devp->hd_flags &= ~(HPET_IE | HPET_PERIODIC);
+ devp->hd_task = NULL;
+ spin_unlock(&hpet_lock);
+ spin_unlock_irq(&hpet_task_lock);
+
+ return 0;
+}
+
+int hpet_control(struct hpet_task *tp, unsigned int cmd, unsigned long arg)
+{
+ struct hpet_dev *devp;
+ int err;
+
+ if ((err = hpet_tpcheck(tp)))
+ return err;
+
+ spin_lock_irq(&hpet_lock);
+ devp = tp->ht_opaque;
+ if (devp->hd_task != tp) {
+ spin_unlock_irq(&hpet_lock);
+ return -ENXIO;
+ }
+ spin_unlock_irq(&hpet_lock);
+ return hpet_ioctl_common(devp, cmd, arg, 1);
+}
+
+static ctl_table hpet_table[] = {
+ {
+ .ctl_name = 1,
+ .procname = "max-user-freq",
+ .data = &hpet_max_freq,
+ .maxlen = sizeof(int),
+ .mode = 0644,
+ .proc_handler = &proc_dointvec,
+ },
+ {.ctl_name = 0}
+};
+
+static ctl_table hpet_root[] = {
+ {
+ .ctl_name = 1,
+ .procname = "hpet",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = hpet_table,
+ },
+ {.ctl_name = 0}
+};
+
+static ctl_table dev_root[] = {
+ {
+ .ctl_name = CTL_DEV,
+ .procname = "dev",
+ .maxlen = 0,
+ .mode = 0555,
+ .child = hpet_root,
+ },
+ {.ctl_name = 0}
+};
+
+static struct ctl_table_header *sysctl_header;
+
+static void hpet_register_interpolator(struct hpets *hpetp)
+{
+#ifdef CONFIG_TIME_INTERPOLATION
+ struct time_interpolator *ti;
+
+ ti = kmalloc(sizeof(*ti), GFP_KERNEL);
+ if (!ti)
+ return;
+
+ memset(ti, 0, sizeof(*ti));
+ ti->source = TIME_SOURCE_MMIO64;
+ ti->shift = 10;
+ ti->addr = &hpetp->hp_hpet->hpet_mc;
+ ti->frequency = hpet_time_div(hpets->hp_period);
+ ti->drift = ti->frequency * HPET_DRIFT / 1000000;
+ ti->mask = -1;
+
+ hpetp->hp_interpolator = ti;
+ register_time_interpolator(ti);
+#endif
+}
+
+/*
+ * Adjustment for arming the timer with its initial conditions: it accounts
+ * for the main-counter ticks that expire before interrupts are enabled.
+ */
+#define TICK_CALIBRATE (1000UL)
+
+static unsigned long hpet_calibrate(struct hpets *hpetp)
+{
+ struct hpet_timer __iomem *timer = NULL;
+ unsigned long t, m, count, i, flags, start;
+ struct hpet_dev *devp;
+ int j;
+ struct hpet __iomem *hpet;
+
+ for (j = 0, devp = hpetp->hp_dev; j < hpetp->hp_ntimer; j++, devp++)
+ if ((devp->hd_flags & HPET_OPEN) == 0) {
+ timer = devp->hd_timer;
+ break;
+ }
+
+ if (!timer)
+ return 0;
+
+ hpet = hpets->hp_hpet;
+ t = read_counter(&timer->hpet_compare);
+
+ i = 0;
+ count = hpet_time_div(hpetp->hp_period * TICK_CALIBRATE);
+
+ local_irq_save(flags);
+
+ start = read_counter(&hpet->hpet_mc);
+
+ do {
+ m = read_counter(&hpet->hpet_mc);
+ write_counter(t + m + hpetp->hp_delta, &timer->hpet_compare);
+ } while (i++, (m - start) < count);
+
+ local_irq_restore(flags);
+
+ return (m - start) / i;
+}
+
+int hpet_alloc(struct hpet_data *hdp)
+{
+ u64 cap, mcfg;
+ struct hpet_dev *devp;
+ u32 i, ntimer;
+ struct hpets *hpetp;
+ size_t siz;
+ struct hpet __iomem *hpet;
+	static struct hpets *last = NULL;
+ unsigned long ns;
+
+	/*
+	 * hpet_alloc can be called by platform-dependent code.  If that code
+	 * has already allocated the hpet and ACPI also reports it, we catch
+	 * the duplicate here.
+	 */
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (hpetp->hp_hpet == hdp->hd_address)
+ return 0;
+
+ siz = sizeof(struct hpets) + ((hdp->hd_nirqs - 1) *
+ sizeof(struct hpet_dev));
+
+ hpetp = kmalloc(siz, GFP_KERNEL);
+
+ if (!hpetp)
+ return -ENOMEM;
+
+ memset(hpetp, 0, siz);
+
+ hpetp->hp_which = hpet_nhpet++;
+ hpetp->hp_hpet = hdp->hd_address;
+
+ hpetp->hp_ntimer = hdp->hd_nirqs;
+
+ for (i = 0; i < hdp->hd_nirqs; i++)
+ hpetp->hp_dev[i].hd_hdwirq = hdp->hd_irq[i];
+
+ hpet = hpetp->hp_hpet;
+
+ cap = readq(&hpet->hpet_cap);
+
+ ntimer = ((cap & HPET_NUM_TIM_CAP_MASK) >> HPET_NUM_TIM_CAP_SHIFT) + 1;
+
+ if (hpetp->hp_ntimer != ntimer) {
+ printk(KERN_WARNING "hpet: number irqs doesn't agree"
+ " with number of timers\n");
+ kfree(hpetp);
+ return -ENODEV;
+ }
+
+ if (last)
+ last->hp_next = hpetp;
+ else
+ hpets = hpetp;
+
+ last = hpetp;
+
+ hpetp->hp_period = (cap & HPET_COUNTER_CLK_PERIOD_MASK) >>
+ HPET_COUNTER_CLK_PERIOD_SHIFT;
+
+ printk(KERN_INFO "hpet%d: at MMIO 0x%lx, IRQ%s",
+ hpetp->hp_which, hdp->hd_phys_address,
+ hpetp->hp_ntimer > 1 ? "s" : "");
+ for (i = 0; i < hpetp->hp_ntimer; i++)
+ printk("%s %d", i > 0 ? "," : "", hdp->hd_irq[i]);
+ printk("\n");
+
+	ns = hpetp->hp_period;	/* femtoseconds, 10^-15 */
+ do_div(ns, 1000000); /* convert to nanoseconds, 10^-9 */
+ printk(KERN_INFO "hpet%d: %ldns tick, %d %d-bit timers\n",
+ hpetp->hp_which, ns, hpetp->hp_ntimer,
+ cap & HPET_COUNTER_SIZE_MASK ? 64 : 32);
+
+ mcfg = readq(&hpet->hpet_config);
+ if ((mcfg & HPET_ENABLE_CNF_MASK) == 0) {
+ write_counter(0L, &hpet->hpet_mc);
+ mcfg |= HPET_ENABLE_CNF_MASK;
+ writeq(mcfg, &hpet->hpet_config);
+ }
+
+ for (i = 0, devp = hpetp->hp_dev; i < hpetp->hp_ntimer;
+ i++, hpet_ntimer++, devp++) {
+ unsigned long v;
+ struct hpet_timer __iomem *timer;
+
+ timer = &hpet->hpet_timers[devp - hpetp->hp_dev];
+ v = readq(&timer->hpet_config);
+
+ devp->hd_hpets = hpetp;
+ devp->hd_hpet = hpet;
+ devp->hd_timer = timer;
+
+ /*
+ * If the timer was reserved by platform code,
+ * then make timer unavailable for opens.
+ */
+ if (hdp->hd_state & (1 << i)) {
+ devp->hd_flags = HPET_OPEN;
+ continue;
+ }
+
+ init_waitqueue_head(&devp->hd_waitqueue);
+ }
+
+ hpetp->hp_delta = hpet_calibrate(hpetp);
+ hpet_register_interpolator(hpetp);
+
+ return 0;
+}
+
+static acpi_status hpet_resources(struct acpi_resource *res, void *data)
+{
+ struct hpet_data *hdp;
+ acpi_status status;
+ struct acpi_resource_address64 addr;
+ struct hpets *hpetp;
+
+ hdp = data;
+
+ status = acpi_resource_to_address64(res, &addr);
+
+ if (ACPI_SUCCESS(status)) {
+ unsigned long size;
+
+ size = addr.max_address_range - addr.min_address_range + 1;
+ hdp->hd_phys_address = addr.min_address_range;
+ hdp->hd_address = ioremap(addr.min_address_range, size);
+
+ for (hpetp = hpets; hpetp; hpetp = hpetp->hp_next)
+ if (hpetp->hp_hpet == hdp->hd_address)
+ return -EBUSY;
+ } else if (res->id == ACPI_RSTYPE_EXT_IRQ) {
+ struct acpi_resource_ext_irq *irqp;
+ int i;
+
+ irqp = &res->data.extended_irq;
+
+ if (irqp->number_of_interrupts > 0) {
+ hdp->hd_nirqs = irqp->number_of_interrupts;
+
+ for (i = 0; i < hdp->hd_nirqs; i++)
+ hdp->hd_irq[i] =
+ acpi_register_gsi(irqp->interrupts[i],
+ irqp->edge_level,
+ irqp->active_high_low);
+ }
+ }
+
+ return AE_OK;
+}
+
+static int hpet_acpi_add(struct acpi_device *device)
+{
+ acpi_status result;
+ struct hpet_data data;
+
+ memset(&data, 0, sizeof(data));
+
+ result =
+ acpi_walk_resources(device->handle, METHOD_NAME__CRS,
+ hpet_resources, &data);
+
+ if (ACPI_FAILURE(result))
+ return -ENODEV;
+
+ if (!data.hd_address || !data.hd_nirqs) {
+ printk("%s: no address or irqs in _CRS\n", __FUNCTION__);
+ return -ENODEV;
+ }
+
+ return hpet_alloc(&data);
+}
+
+static int hpet_acpi_remove(struct acpi_device *device, int type)
+{
+ /* XXX need to unregister interpolator, dealloc mem, etc */
+ return -EINVAL;
+}
+
+static struct acpi_driver hpet_acpi_driver = {
+ .name = "hpet",
+ .ids = "PNP0103",
+ .ops = {
+ .add = hpet_acpi_add,
+ .remove = hpet_acpi_remove,
+ },
+};
+
+static struct miscdevice hpet_misc = { HPET_MINOR, "hpet", &hpet_fops };
+
+static int __init hpet_init(void)
+{
+ int result;
+
+ result = misc_register(&hpet_misc);
+ if (result < 0)
+ return -ENODEV;
+
+ sysctl_header = register_sysctl_table(dev_root, 0);
+
+ result = acpi_bus_register_driver(&hpet_acpi_driver);
+ if (result < 0) {
+ if (sysctl_header)
+ unregister_sysctl_table(sysctl_header);
+ misc_deregister(&hpet_misc);
+ return result;
+ }
+
+ return 0;
+}
+
+static void __exit hpet_exit(void)
+{
+ acpi_bus_unregister_driver(&hpet_acpi_driver);
+
+ if (sysctl_header)
+ unregister_sysctl_table(sysctl_header);
+ misc_deregister(&hpet_misc);
+
+ return;
+}
+
+module_init(hpet_init);
+module_exit(hpet_exit);
+MODULE_AUTHOR("Bob Picco <Robert.Picco@hp.com>");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * IBM eServer Hypervisor Virtual Console Server Device Driver
+ * Copyright (C) 2003, 2004 IBM Corp.
+ * Ryan S. Arnold (rsa@us.ibm.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Author(s) : Ryan S. Arnold <rsa@us.ibm.com>
+ *
+ * This is the device driver for the IBM Hypervisor Virtual Console Server,
+ * "hvcs". The IBM hvcs provides a tty driver interface to allow Linux
+ * user space applications access to the system consoles of logically
+ * partitioned operating systems, e.g. Linux, running on the same partitioned
+ * Power5 ppc64 system. Physical hardware consoles per partition are not
+ * practical on this hardware so system consoles are accessed by this driver
+ * using inter-partition firmware interfaces to virtual terminal devices.
+ *
+ * A vty is known to the HMC as a "virtual serial server adapter". It is a
+ * virtual terminal device that is created by firmware upon partition creation
+ * to act as a partitioned OS's console device.
+ *
+ * Firmware dynamically (via hotplug) exposes vty-servers to a running ppc64
+ * Linux system upon their creation by the HMC or their exposure during boot.
+ * The non-user interactive backend of this driver is implemented as a vio
+ * device driver so that it can receive notification of vty-server lifetimes
+ * after it registers with the vio bus to handle vty-server probe and remove
+ * callbacks.
+ *
+ * Many vty-servers can be configured to connect to one vty, but a vty can
+ * only be actively connected to by a single vty-server, in any manner, at one
+ * time. If the HMC is currently hosting the console for a target Linux
+ * partition; attempts to open the tty device to the partition's console using
+ * the hvcs on any partition will return -EBUSY with every open attempt until
+ * the HMC frees the connection between its vty-server and the desired
+ * partition's vty device. Conversely, a vty-server may only be connected to
+ * a single vty at one time even though it may have several configured vty
+ * partner possibilities.
+ *
+ * Firmware does not provide notification of vty partner changes to this
+ * driver. This means that an HMC Super Admin may add or remove partner vtys
+ * from a vty-server's partner list but the changes will not be signaled to
+ * the vty-server. Firmware only notifies the driver when a vty-server is
+ * added or removed from the system. To compensate for this deficiency, this
+ * driver implements a sysfs update attribute which provides a method for
+ * rescanning partner information upon a user's request.
+ *
+ * Each vty-server, prior to being exposed to this driver is reference counted
+ * using the 2.6 Linux kernel kobject construct. This kobject is also used by
+ * the vio bus to provide a vio device sysfs entry that this driver attaches
+ * device specific attributes to, including partner information. The vio bus
+ * framework also provides a sysfs entry for each vio driver. The hvcs driver
+ * provides driver attributes in this entry.
+ *
+ * For direction on installation and usage of this driver please reference
+ * Documentation/powerpc/hvcs.txt.
+ */
+
+#include <linux/device.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/kobject.h>
+#include <linux/kthread.h>
+#include <linux/list.h>
+#include <linux/major.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/sched.h>
+#include <linux/spinlock.h>
+#include <linux/stat.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <asm/hvconsole.h>
+#include <asm/hvcserver.h>
+#include <asm/uaccess.h>
+#include <asm/vio.h>
+
+/*
+ * 1.3.0 -> 1.3.1 In hvcs_open memset(..,0x00,..) instead of memset(..,0x3F,00).
+ * Removed braces around single statements following conditionals. Removed '=
+ * 0' after static int declarations since these default to zero. Removed
+ * list_for_each_safe() and replaced with list_for_each_entry() in
+ * hvcs_get_by_index(). The 'safe' version is unneeded now that the driver is
+ * using spinlocks. Changed spin_lock_irqsave() to spin_lock() when locking
+ * hvcs_structs_lock and hvcs_pi_lock since these are not touched in an int
+ * handler. Initialized hvcs_structs_lock and hvcs_pi_lock to
+ * SPIN_LOCK_UNLOCKED at declaration time rather than in hvcs_module_init().
+ * Added spin_lock around list_del() in destroy_hvcs_struct() to protect the
+ * list traversals from a deletion. Removed '= NULL' from pointer declaration
+ * statements since they are initialized NULL by default. Removed wmb()
+ * instances from hvcs_try_write(). They probably aren't needed with locking in
+ * place. Added check and cleanup for hvcs_pi_buff = kmalloc() in
+ * hvcs_module_init(). Exposed hvcs_struct.index via a sysfs attribute so that
+ * the coupling between /dev/hvcs* and a vty-server can be automatically
+ * determined. Moved kobject_put() in hvcs_open outside of the
+ * spin_unlock_irqrestore().
+ *
+ * 1.3.1 -> 1.3.2 Changed method for determining hvcs_struct->index and had it
+ * align with how the tty layer always assigns the lowest index available. This
+ * change resulted in a list of ints that denotes which indexes are available.
+ * Device additions and removals use the new hvcs_get_index() and
+ * hvcs_return_index() helper functions. The list is created with
+ * hvcs_alloc_index_list() and it is destroyed with hvcs_free_index_list().
+ * Without these fixes hotplug vty-server adapter support goes crazy with this
+ * driver if the user removes a vty-server adapter. Moved free_irq() outside of
+ * the hvcs_final_close() function in order to get it out of the spinlock.
+ * Rearranged hvcs_close(). Cleaned up some printks and did some housekeeping
+ * on the changelog. Removed local CLC_LENGTH and used HVCS_CLC_LENGTH from
+ * arch/ppc64/hvcserver.h.
+ *
+ * 1.3.2 -> 1.3.3 Replaced yield() in hvcs_close() with tty_wait_until_sent() to
+ * prevent a possible lockup with realtime scheduling, as similarly pointed out
+ * by akpm in hvc_console. The change resulted in the removal of hvcs_final_close()
+ * to reorder cleanup operations and prevent discarding of pending data during
+ * an hvcs_close(). Removed spinlock protection of hvcs_struct data members in
+ * hvcs_write_room() and hvcs_chars_in_buffer() because they aren't needed.
+ */
+
+#define HVCS_DRIVER_VERSION "1.3.3"
+
+MODULE_AUTHOR("Ryan S. Arnold <rsa@us.ibm.com>");
+MODULE_DESCRIPTION("IBM hvcs (Hypervisor Virtual Console Server) Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(HVCS_DRIVER_VERSION);
+
+/*
+ * Wait this long per iteration while trying to push buffered data to the
+ * hypervisor before allowing the tty to complete a close operation.
+ */
+#define HVCS_CLOSE_WAIT (HZ/100) /* 1/100 of a second */
+
+/*
+ * Since the Linux TTY code does not currently (2-04-2004) support dynamic
+ * addition of tty derived devices and we shouldn't allocate thousands of
+ * tty_device pointers when the number of vty-server & vty partner connections
+ * will most often be much lower than this, we'll arbitrarily allocate
+ * HVCS_DEFAULT_SERVER_ADAPTERS tty_structs and cdev's by default when we
+ * register the tty_driver. This can be overridden using an insmod parameter.
+ */
+#define HVCS_DEFAULT_SERVER_ADAPTERS 64
+
+/*
+ * The user can't insmod with more than HVCS_MAX_SERVER_ADAPTERS hvcs device
+ * nodes as a sanity check. Theoretically there can be over one billion
+ * vty-server & vty partner connections.
+ */
+#define HVCS_MAX_SERVER_ADAPTERS 1024
+
+/*
+ * We let Linux assign us a major number and we start the minors at zero. There
+ * is no intuitive mapping between minor number and the target vty-server
+ * adapter except that each new vty-server adapter is always assigned to the
+ * smallest minor number available.
+ */
+#define HVCS_MINOR_START 0
+
+/*
+ * The hcall interface involves putting 8 chars into each of two registers.
+ * We load up those 2 registers (in arch/ppc64/hvconsole.c) by casting char[16]
+ * to long[2]. It would work without __ALIGNED__, just a little (tiny) bit
+ * slower, because an unaligned load is slower than an aligned load.
+ */
+#define __ALIGNED__ __attribute__((__aligned__(8)))
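+/* Used below as: char buf[HVCS_BUFF_LEN] __ALIGNED__; (see hvcs_io()) */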
+
+/*
+ * How much data can firmware send with each hvc_put_chars()? Maybe this
+ * should be moved into an architecture specific area.
+ */
+#define HVCS_BUFF_LEN 16
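+/*
+ * Note that HVCS_BUFF_LEN matches the 16 bytes (two 8-byte registers)
+ * a single hcall can carry, per the __ALIGNED__ comment above.
+ */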
+
+/*
+ * This is the maximum amount of data we'll let the user send us (hvcs_write) at
+ * once in a chunk as a sanity check.
+ */
+#define HVCS_MAX_FROM_USER 4096
+
+/*
+ * Be careful when adding flags to this line discipline. Don't add anything
+ * that will cause echoing or we'll go into recursive loop echoing chars back
+ * and forth with the console drivers.
+ */
+static struct termios hvcs_tty_termios = {
+ .c_iflag = IGNBRK | IGNPAR,
+ .c_oflag = OPOST,
+ .c_cflag = B38400 | CS8 | CREAD | HUPCL,
+ .c_cc = INIT_C_CC
+};
+
+/*
+ * This value is used to take the place of a command line parameter when the
+ * module is inserted. It starts as -1 and stays as such if the user doesn't
+ * specify a module insmod parameter. If they DO specify one then it is set to
+ * the value of the integer passed in.
+ */
+static int hvcs_parm_num_devs = -1;
+module_param(hvcs_parm_num_devs, int, 0);
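+/*
+ * Example (hypothetical values): "insmod hvcs.ko hvcs_parm_num_devs=128"
+ * would request 128 device nodes instead of the default 64.
+ */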
+
+char hvcs_driver_name[] = "hvcs";
+char hvcs_device_node[] = "hvcs";
+char hvcs_driver_string[]
+ = "IBM hvcs (Hypervisor Virtual Console Server) Driver";
+
+/* Status of partner info rescan triggered via sysfs. */
+static int hvcs_rescan_status;
+
+static struct tty_driver *hvcs_tty_driver;
+
+/*
+ * In order to be somewhat sane this driver always associates the hvcs_struct
+ * index element with the numerically equal tty->index. This means that a
+ * hotplugged vty-server adapter will always map to the lowest index valued
+ * device node. If vty-servers were hotplug removed from the system and then
+ * new ones added the new vty-server may have the largest slot number of all
+ * the vty-server adapters in the partition but it may have the lowest dev node
+ * index of all the adapters due to the hole left by the hotplug removed
+ * adapter. There are a set of functions provided to get the lowest index for
+ * a new device as well as return the index to the list. This list is allocated
+ * with a number of elements equal to the number of device nodes requested when
+ * the module was inserted.
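+ * For example, if adapters hold indexes {0, 1, 2} and the adapter at
+ * index 1 is hot-removed, the next adapter probed receives index 1,
+ * not 3, regardless of its slot number.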
+ */
+static int *hvcs_index_list;
+
+/*
+ * How large is the list? This is kept for traversal since the list is
+ * dynamically created.
+ */
+static int hvcs_index_count;
+
+/*
+ * Used by the khvcsd to pick up I/O operations when the kernel_thread is
+ * already awake but potentially shifted to TASK_INTERRUPTIBLE state.
+ */
+static int hvcs_kicked;
+
+/*
+ * Used by the kthread construct for task operations like waking the sleeping
+ * thread and stopping the kthread.
+ */
+static struct task_struct *hvcs_task;
+
+/*
+ * We allocate this for the use of all of the hvcs_structs when they fetch
+ * partner info.
+ */
+static unsigned long *hvcs_pi_buff;
+
+/* Only allow one hvcs_struct to use the hvcs_pi_buff at a time. */
+static spinlock_t hvcs_pi_lock = SPIN_LOCK_UNLOCKED;
+
+/* One vty-server per hvcs_struct */
+struct hvcs_struct {
+ spinlock_t lock;
+
+ /*
+ * This index identifies this hvcs device as the complement to a
+ * specific tty index.
+ */
+ unsigned int index;
+
+ struct tty_struct *tty;
+ unsigned int open_count;
+
+ /*
+ * Used to tell the driver kernel_thread what operations need to take
+ * place upon this hvcs_struct instance.
+ */
+ int todo_mask;
+
+ /*
+ * This buffer is required so that when hvcs_write_room() reports that
+ * it can send HVCS_BUFF_LEN characters that it will buffer the full
+ * HVCS_BUFF_LEN characters if need be. This is essential for opost
+ * writes since they do not do high level buffering and expect to be
+ * able to send what the driver commits to sending buffering
+ * [e.g. tab to space conversions in n_tty.c opost()].
+ */
+ char buffer[HVCS_BUFF_LEN];
+ int chars_in_buffer;
+
+ /*
+ * Any variable below the kobject is valid before a tty is connected and
+ * stays valid after the tty is disconnected. These shouldn't be
+ * whacked until the kobject refcount reaches zero though some entries
+ * may be changed via sysfs initiatives.
+ */
+ struct kobject kobj; /* ref count & hvcs_struct lifetime */
+ int connected; /* is the vty-server currently connected to a vty? */
+ uint32_t p_unit_address; /* partner unit address */
+ uint32_t p_partition_ID; /* partner partition ID */
+ char p_location_code[HVCS_CLC_LENGTH + 1]; /* CLC + Null Term */
+ struct list_head next; /* list management */
+ struct vio_dev *vdev;
+};
+
+/* Required to back map a kobject to its containing object */
+#define from_kobj(kobj) container_of(kobj, struct hvcs_struct, kobj)
+
+static struct list_head hvcs_structs = LIST_HEAD_INIT(hvcs_structs);
+static spinlock_t hvcs_structs_lock = SPIN_LOCK_UNLOCKED;
+
+static void hvcs_unthrottle(struct tty_struct *tty);
+static void hvcs_throttle(struct tty_struct *tty);
+static irqreturn_t hvcs_handle_interrupt(int irq, void *dev_instance,
+ struct pt_regs *regs);
+
+static int hvcs_write(struct tty_struct *tty,
+ const unsigned char *buf, int count);
+static int hvcs_write_room(struct tty_struct *tty);
+static int hvcs_chars_in_buffer(struct tty_struct *tty);
+
+static int hvcs_has_pi(struct hvcs_struct *hvcsd);
+static void hvcs_set_pi(struct hvcs_partner_info *pi,
+ struct hvcs_struct *hvcsd);
+static int hvcs_get_pi(struct hvcs_struct *hvcsd);
+static int hvcs_rescan_devices_list(void);
+
+static int hvcs_partner_connect(struct hvcs_struct *hvcsd);
+static void hvcs_partner_free(struct hvcs_struct *hvcsd);
+
+static int hvcs_enable_device(struct hvcs_struct *hvcsd,
+ uint32_t unit_address, unsigned int irq, struct vio_dev *dev);
+
+static void destroy_hvcs_struct(struct kobject *kobj);
+static int hvcs_open(struct tty_struct *tty, struct file *filp);
+static void hvcs_close(struct tty_struct *tty, struct file *filp);
+static void hvcs_hangup(struct tty_struct * tty);
+
+static void hvcs_create_device_attrs(struct hvcs_struct *hvcsd);
+static void hvcs_remove_device_attrs(struct vio_dev *vdev);
+static void hvcs_create_driver_attrs(void);
+static void hvcs_remove_driver_attrs(void);
+
+static int __devinit hvcs_probe(struct vio_dev *dev,
+ const struct vio_device_id *id);
+static int __devexit hvcs_remove(struct vio_dev *dev);
+static int __init hvcs_module_init(void);
+static void __exit hvcs_module_exit(void);
+
+#define HVCS_SCHED_READ 0x00000001
+#define HVCS_QUICK_READ 0x00000002
+#define HVCS_TRY_WRITE 0x00000004
+#define HVCS_READ_MASK (HVCS_SCHED_READ | HVCS_QUICK_READ)
+
+static void hvcs_kick(void)
+{
+ hvcs_kicked = 1;
+ wmb();
+ wake_up_process(hvcs_task);
+}
+
+static void hvcs_unthrottle(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ hvcs_kick();
+}
+
+static void hvcs_throttle(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ vio_disable_interrupts(hvcsd->vdev);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+}
+
+/*
+ * If the device is being removed we don't have to worry about this interrupt
+ * handler taking any further interrupts because they are disabled which means
+ * the hvcs_struct will always be valid in this handler.
+ */
+static irqreturn_t hvcs_handle_interrupt(int irq, void *dev_instance,
+ struct pt_regs *regs)
+{
+ struct hvcs_struct *hvcsd = dev_instance;
+
+ spin_lock(&hvcsd->lock);
+ vio_disable_interrupts(hvcsd->vdev);
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock(&hvcsd->lock);
+ hvcs_kick();
+
+ return IRQ_HANDLED;
+}
+
+/* This function must be called with the hvcsd->lock held */
+static void hvcs_try_write(struct hvcs_struct *hvcsd)
+{
+ uint32_t unit_address = hvcsd->vdev->unit_address;
+ struct tty_struct *tty = hvcsd->tty;
+ int sent;
+
+ if (hvcsd->todo_mask & HVCS_TRY_WRITE) {
+ /* won't send partial writes */
+ sent = hvc_put_chars(unit_address,
+ &hvcsd->buffer[0],
+				hvcsd->chars_in_buffer);
+ if (sent > 0) {
+ hvcsd->chars_in_buffer = 0;
+ /* wmb(); */
+ hvcsd->todo_mask &= ~(HVCS_TRY_WRITE);
+ /* wmb(); */
+
+ /*
+ * We are still obligated to deliver the data to the
+ * hypervisor even if the tty has been closed because
+			 * we committed to delivering it. But don't try to wake
+ * a non-existent tty.
+ */
+ if (tty) {
+ tty_wakeup(tty);
+ }
+ }
+ }
+}
+
+static int hvcs_io(struct hvcs_struct *hvcsd)
+{
+ uint32_t unit_address;
+ struct tty_struct *tty;
+ char buf[HVCS_BUFF_LEN] __ALIGNED__;
+ unsigned long flags;
+ int got = 0;
+ int i;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ unit_address = hvcsd->vdev->unit_address;
+ tty = hvcsd->tty;
+
+ hvcs_try_write(hvcsd);
+
+ if (!tty || test_bit(TTY_THROTTLED, &tty->flags)) {
+ hvcsd->todo_mask &= ~(HVCS_READ_MASK);
+ goto bail;
+ } else if (!(hvcsd->todo_mask & (HVCS_READ_MASK)))
+ goto bail;
+
+ /* remove the read masks */
+ hvcsd->todo_mask &= ~(HVCS_READ_MASK);
+
+ if ((tty->flip.count + HVCS_BUFF_LEN) < TTY_FLIPBUF_SIZE) {
+ got = hvc_get_chars(unit_address,
+ &buf[0],
+ HVCS_BUFF_LEN);
+		for (i = 0; got && i < got; i++)
+ tty_insert_flip_char(tty, buf[i], TTY_NORMAL);
+ }
+
+ /* Give the TTY time to process the data we just sent. */
+ if (got)
+ hvcsd->todo_mask |= HVCS_QUICK_READ;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ if (tty->flip.count) {
+ /* This is synch because tty->low_latency == 1 */
+ tty_flip_buffer_push(tty);
+ }
+
+ if (!got) {
+ /* Do this _after_ the flip_buffer_push */
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ vio_enable_interrupts(hvcsd->vdev);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+
+ return hvcsd->todo_mask;
+
+ bail:
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return hvcsd->todo_mask;
+}
+
+static int khvcsd(void *unused)
+{
+ struct hvcs_struct *hvcsd;
+ int hvcs_todo_mask;
+
+ __set_current_state(TASK_RUNNING);
+
+ do {
+ hvcs_todo_mask = 0;
+ hvcs_kicked = 0;
+ wmb();
+
+ spin_lock(&hvcs_structs_lock);
+ list_for_each_entry(hvcsd, &hvcs_structs, next) {
+ hvcs_todo_mask |= hvcs_io(hvcsd);
+ }
+ spin_unlock(&hvcs_structs_lock);
+
+ /*
+ * If any of the hvcs adapters want to try a write or quick read
+ * don't schedule(), yield a smidgen then execute the hvcs_io
+ * thread again for those that want the write.
+ */
+ if (hvcs_todo_mask & (HVCS_TRY_WRITE | HVCS_QUICK_READ)) {
+ yield();
+ continue;
+ }
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ if (!hvcs_kicked)
+ schedule();
+ __set_current_state(TASK_RUNNING);
+ } while (!kthread_should_stop());
+
+ return 0;
+}
+
+static struct vio_device_id hvcs_driver_table[] __devinitdata= {
+ {"serial-server", "hvterm2"},
+ { NULL, }
+};
+MODULE_DEVICE_TABLE(vio, hvcs_driver_table);
+
+static void hvcs_return_index(int index)
+{
+ /* Paranoia check */
+ if (!hvcs_index_list)
+ return;
+ if (index < 0 || index >= hvcs_index_count)
+ return;
+ if (hvcs_index_list[index] == -1)
+ return;
+ else
+ hvcs_index_list[index] = -1;
+}
+
+/* callback when the kobject ref count reaches zero */
+static void destroy_hvcs_struct(struct kobject *kobj)
+{
+ struct hvcs_struct *hvcsd = from_kobj(kobj);
+ struct vio_dev *vdev;
+ unsigned long flags;
+
+ spin_lock(&hvcs_structs_lock);
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ /* the list_del poisons the pointers */
+ list_del(&(hvcsd->next));
+
+ if (hvcsd->connected == 1) {
+ hvcs_partner_free(hvcsd);
+ printk(KERN_INFO "HVCS: Closed vty-server@%X and"
+ " partner vty@%X:%d connection.\n",
+ hvcsd->vdev->unit_address,
+ hvcsd->p_unit_address,
+ (uint32_t)hvcsd->p_partition_ID);
+ }
+ printk(KERN_INFO "HVCS: Destroyed hvcs_struct for vty-server@%X.\n",
+ hvcsd->vdev->unit_address);
+
+ vdev = hvcsd->vdev;
+ hvcsd->vdev = NULL;
+
+ hvcsd->p_unit_address = 0;
+ hvcsd->p_partition_ID = 0;
+ hvcs_return_index(hvcsd->index);
+ memset(&hvcsd->p_location_code[0], 0x00, HVCS_CLC_LENGTH + 1);
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ spin_unlock(&hvcs_structs_lock);
+
+ hvcs_remove_device_attrs(vdev);
+
+ kfree(hvcsd);
+}
+
+static struct kobj_type hvcs_kobj_type = {
+ .release = destroy_hvcs_struct,
+};
+
+static int hvcs_get_index(void)
+{
+ int i;
+ /* Paranoia check */
+ if (!hvcs_index_list) {
+ printk(KERN_ERR "HVCS: hvcs_index_list NOT valid!.\n");
+ return -EFAULT;
+ }
+ /* Find the numerically lowest first free index. */
+	for (i = 0; i < hvcs_index_count; i++) {
+ if (hvcs_index_list[i] == -1) {
+ hvcs_index_list[i] = 0;
+ return i;
+ }
+ }
+ return -1;
+}
+
+static int __devinit hvcs_probe(
+ struct vio_dev *dev,
+ const struct vio_device_id *id)
+{
+ struct hvcs_struct *hvcsd;
+ int index;
+
+ if (!dev || !id) {
+ printk(KERN_ERR "HVCS: probed with invalid parameter.\n");
+ return -EPERM;
+ }
+
+ /* early to avoid cleanup on failure */
+ index = hvcs_get_index();
+ if (index < 0) {
+ return -EFAULT;
+ }
+
+ hvcsd = kmalloc(sizeof(*hvcsd), GFP_KERNEL);
+ if (!hvcsd)
+ return -ENODEV;
+
+ /* hvcsd->tty is zeroed out with the memset */
+ memset(hvcsd, 0x00, sizeof(*hvcsd));
+
+ hvcsd->lock = SPIN_LOCK_UNLOCKED;
+ /* Automatically incs the refcount the first time */
+ kobject_init(&hvcsd->kobj);
+ /* Set up the callback for terminating the hvcs_struct's life */
+ hvcsd->kobj.ktype = &hvcs_kobj_type;
+
+ hvcsd->vdev = dev;
+ dev->dev.driver_data = hvcsd;
+
+ hvcsd->index = index;
+
+ /* hvcsd->index = ++hvcs_struct_count; */
+ hvcsd->chars_in_buffer = 0;
+ hvcsd->todo_mask = 0;
+ hvcsd->connected = 0;
+
+ /*
+ * This will populate the hvcs_struct's partner info fields for the
+ * first time.
+ */
+ if (hvcs_get_pi(hvcsd)) {
+ printk(KERN_ERR "HVCS: Failed to fetch partner"
+ " info for vty-server@%X on device probe.\n",
+ hvcsd->vdev->unit_address);
+ }
+
+ /*
+ * If a user app opens a tty that corresponds to this vty-server before
+ * the hvcs_struct has been added to the devices list then the user app
+ * will get -ENODEV.
+ */
+
+ spin_lock(&hvcs_structs_lock);
+
+ list_add_tail(&(hvcsd->next), &hvcs_structs);
+
+ spin_unlock(&hvcs_structs_lock);
+
+ hvcs_create_device_attrs(hvcsd);
+
+ printk(KERN_INFO "HVCS: vty-server@%X added to the vio bus.\n", dev->unit_address);
+
+ /*
+ * DON'T enable interrupts here because there is no user to receive the
+ * data.
+ */
+ return 0;
+}
+
+static int __devexit hvcs_remove(struct vio_dev *dev)
+{
+ struct hvcs_struct *hvcsd = dev->dev.driver_data;
+ unsigned long flags;
+ struct kobject *kobjp;
+ struct tty_struct *tty;
+
+ if (!hvcsd)
+ return -ENODEV;
+
+	/* By this time the vty-server won't be getting any more interrupts */
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ tty = hvcsd->tty;
+
+ kobjp = &hvcsd->kobj;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * Let the last holder of this object cause it to be removed, which
+ * would probably be tty_hangup below.
+ */
+ kobject_put (kobjp);
+
+ /*
+ * The hangup is a scheduled function which will auto chain call
+ * hvcs_hangup. The tty should always be valid at this time unless a
+ * simultaneous tty close already cleaned up the hvcs_struct.
+ */
+ if (tty)
+ tty_hangup(tty);
+
+ printk(KERN_INFO "HVCS: vty-server@%X removed from the"
+ " vio bus.\n", dev->unit_address);
+ return 0;
+}
+
+static struct vio_driver hvcs_vio_driver = {
+ .name = hvcs_driver_name,
+ .id_table = hvcs_driver_table,
+ .probe = hvcs_probe,
+ .remove = hvcs_remove,
+};
+
+/* Only called from hvcs_get_pi please */
+static void hvcs_set_pi(struct hvcs_partner_info *pi, struct hvcs_struct *hvcsd)
+{
+ int clclength;
+
+ hvcsd->p_unit_address = pi->unit_address;
+ hvcsd->p_partition_ID = pi->partition_ID;
+ clclength = strlen(&pi->location_code[0]);
+ if (clclength > HVCS_CLC_LENGTH)
+ clclength = HVCS_CLC_LENGTH;
+
+ /* copy the null-term char too */
+ strncpy(&hvcsd->p_location_code[0],
+ &pi->location_code[0], clclength + 1);
+}
+
+/*
+ * Traverse the list and add the partner info that is found to the hvcs_struct
+ * struct entry. NOTE: At this time I know that partner info will return a
+ * single entry but in the future there may be multiple partner info entries per
+ * vty-server and you'll want to zero out that list and reset it. If for some
+ * reason you have an old version of this driver but there IS more than one
+ * partner info then hvcsd->p_* will hold the last partner info data from the
+ * firmware query. A good way to update this code would be to replace the three
+ * partner info fields in hvcs_struct with a list of hvcs_partner_info
+ * instances.
+ *
+ * This function must be called with the hvcsd->lock held.
+ */
+static int hvcs_get_pi(struct hvcs_struct *hvcsd)
+{
+ struct hvcs_partner_info *pi;
+ uint32_t unit_address = hvcsd->vdev->unit_address;
+ struct list_head head;
+ int retval;
+
+ spin_lock(&hvcs_pi_lock);
+ if (!hvcs_pi_buff) {
+ spin_unlock(&hvcs_pi_lock);
+ return -EFAULT;
+ }
+ retval = hvcs_get_partner_info(unit_address, &head, hvcs_pi_buff);
+ spin_unlock(&hvcs_pi_lock);
+ if (retval) {
+ printk(KERN_ERR "HVCS: Failed to fetch partner"
+ " info for vty-server@%x.\n", unit_address);
+ return retval;
+ }
+
+ /* nixes the values if the partner vty went away */
+ hvcsd->p_unit_address = 0;
+ hvcsd->p_partition_ID = 0;
+
+ list_for_each_entry(pi, &head, node)
+ hvcs_set_pi(pi, hvcsd);
+
+ hvcs_free_partner_info(&head);
+ return 0;
+}
+
+/*
+ * This function is executed by the driver "rescan" sysfs entry. It shouldn't
+ * be executed elsewhere, in order to prevent deadlock issues.
+ */
+static int hvcs_rescan_devices_list(void)
+{
+ struct hvcs_struct *hvcsd;
+ unsigned long flags;
+
+ spin_lock(&hvcs_structs_lock);
+
+ list_for_each_entry(hvcsd, &hvcs_structs, next) {
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcs_get_pi(hvcsd);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+
+ spin_unlock(&hvcs_structs_lock);
+
+ return 0;
+}
+
+/*
+ * Farm this off into its own function because it could be more complex once
+ * multiple partners support is added. This function should be called with
+ * the hvcsd->lock held.
+ */
+static int hvcs_has_pi(struct hvcs_struct *hvcsd)
+{
+ if ((!hvcsd->p_unit_address) || (!hvcsd->p_partition_ID))
+ return 0;
+ return 1;
+}
+
+/*
+ * NOTE: It is possible that the super admin removed a partner vty and then
+ * added a different vty as the new partner.
+ *
+ * This function must be called with the hvcsd->lock held.
+ */
+static int hvcs_partner_connect(struct hvcs_struct *hvcsd)
+{
+ int retval;
+ unsigned int unit_address = hvcsd->vdev->unit_address;
+
+ /*
+	 * If there wasn't any pi when the device was added it doesn't mean
+ * there isn't any now. This driver isn't notified when a new partner
+ * vty is added to a vty-server so we discover changes on our own.
+ * Please see comments in hvcs_register_connection() for justification
+ * of this bizarre code.
+ */
+ retval = hvcs_register_connection(unit_address,
+ hvcsd->p_partition_ID,
+ hvcsd->p_unit_address);
+ if (!retval) {
+ hvcsd->connected = 1;
+ return 0;
+ } else if (retval != -EINVAL)
+ return retval;
+
+ /*
+ * As per the spec re-get the pi and try again if -EINVAL after the
+ * first connection attempt.
+ */
+ if (hvcs_get_pi(hvcsd))
+ return -ENOMEM;
+
+ if (!hvcs_has_pi(hvcsd))
+ return -ENODEV;
+
+ retval = hvcs_register_connection(unit_address,
+ hvcsd->p_partition_ID,
+ hvcsd->p_unit_address);
+ if (retval != -EINVAL) {
+ hvcsd->connected = 1;
+ return retval;
+ }
+
+ /*
+ * EBUSY is the most likely scenario though the vty could have been
+ * removed or there really could be an hcall error due to the parameter
+ * data but thanks to ambiguous firmware return codes we can't really
+ * tell.
+ */
+ printk(KERN_INFO "HVCS: vty-server or partner"
+ " vty is busy. Try again later.\n");
+ return -EBUSY;
+}
+
+/* This function must be called with the hvcsd->lock held */
+static void hvcs_partner_free(struct hvcs_struct *hvcsd)
+{
+ int retval;
+ do {
+ retval = hvcs_free_connection(hvcsd->vdev->unit_address);
+ } while (retval == -EBUSY);
+ hvcsd->connected = 0;
+}
+
+/* This helper function must be called WITHOUT the hvcsd->lock held */
+static int hvcs_enable_device(struct hvcs_struct *hvcsd, uint32_t unit_address,
+ unsigned int irq, struct vio_dev *vdev)
+{
+ unsigned long flags;
+ int rc;
+
+ /*
+ * It is possible that the vty-server was removed between the time that
+ * the conn was registered and now.
+ */
+ if (!(rc = request_irq(irq, &hvcs_handle_interrupt,
+ SA_INTERRUPT, "ibmhvcs", hvcsd))) {
+ /*
+ * It is possible the vty-server was removed after the irq was
+ * requested but before we have time to enable interrupts.
+ */
+ if (vio_enable_interrupts(vdev) == H_Success)
+ return 0;
+ else {
+ printk(KERN_ERR "HVCS: int enable failed for"
+ " vty-server@%X.\n", unit_address);
+ free_irq(irq, hvcsd);
+ }
+ } else
+ printk(KERN_ERR "HVCS: irq req failed for"
+ " vty-server@%X.\n", unit_address);
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ hvcs_partner_free(hvcsd);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ return rc;
+
+}
+
+/*
+ * This always increments the kobject ref count if the call is successful.
+ * Please remember to dec when you are done with the instance.
+ *
+ * NOTICE: Do NOT hold either the hvcs_struct.lock or hvcs_structs_lock when
+ * calling this function or you will get deadlock.
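+ *
+ * Typical use (see hvcs_open): hvcs_get_by_index(tty->index), then
+ * kobject_put(&hvcsd->kobj) when the reference is no longer needed.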
+ */
+struct hvcs_struct *hvcs_get_by_index(int index)
+{
+ struct hvcs_struct *hvcsd = NULL;
+ unsigned long flags;
+
+ spin_lock(&hvcs_structs_lock);
+ /* We can immediately discard OOB requests */
+ if (index >= 0 && index < HVCS_MAX_SERVER_ADAPTERS) {
+ list_for_each_entry(hvcsd, &hvcs_structs, next) {
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (hvcsd->index == index) {
+ kobject_get(&hvcsd->kobj);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ spin_unlock(&hvcs_structs_lock);
+ return hvcsd;
+ }
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ }
+ hvcsd = NULL;
+ }
+
+ spin_unlock(&hvcs_structs_lock);
+ return hvcsd;
+}
+
+/*
+ * This is invoked via the tty_open interface when a user app connects to the
+ * /dev node.
+ */
+static int hvcs_open(struct tty_struct *tty, struct file *filp)
+{
+ struct hvcs_struct *hvcsd;
+ int rc, retval = 0;
+ unsigned long flags;
+ unsigned int irq;
+ struct vio_dev *vdev;
+ unsigned long unit_address;
+ struct kobject *kobjp;
+
+ if (tty->driver_data)
+ goto fast_open;
+
+ /*
+ * Is there a vty-server that shares the same index?
+ * This function increments the kobject index.
+ */
+ if (!(hvcsd = hvcs_get_by_index(tty->index))) {
+ printk(KERN_WARNING "HVCS: open failed, no device associated"
+ " with tty->index %d.\n", tty->index);
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ if (hvcsd->connected == 0)
+ if ((retval = hvcs_partner_connect(hvcsd)))
+ goto error_release;
+
+ hvcsd->open_count = 1;
+ hvcsd->tty = tty;
+ tty->driver_data = hvcsd;
+
+ /*
+ * Set this driver to low latency so that we actually have a chance at
+ * catching a throttled TTY after we flip_buffer_push. Otherwise the
+ * flush_to_async may not execute until after the kernel_thread has
+ * yielded and resumed the next flip_buffer_push resulting in data
+ * loss.
+ */
+ tty->low_latency = 1;
+
+ memset(&hvcsd->buffer[0], 0x00, HVCS_BUFF_LEN);
+
+ /*
+ * Save these in the spinlock for the enable operations that need them
+ * outside of the spinlock.
+ */
+ irq = hvcsd->vdev->irq;
+ vdev = hvcsd->vdev;
+ unit_address = hvcsd->vdev->unit_address;
+
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ /*
+ * This must be done outside of the spinlock because it requests irqs
+ * and will grab the spinlock and free the connection if it fails.
+ */
+ if (((rc = hvcs_enable_device(hvcsd, unit_address, irq, vdev)))) {
+ kobject_put(&hvcsd->kobj);
+ printk(KERN_WARNING "HVCS: enable device failed.\n");
+ return rc;
+ }
+
+ goto open_success;
+
+fast_open:
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ if (!kobject_get(&hvcsd->kobj)) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_ERR "HVCS: Kobject of open"
+ " hvcs doesn't exist.\n");
+ return -EFAULT; /* Is this the right return value? */
+ }
+
+ hvcsd->open_count++;
+
+ hvcsd->todo_mask |= HVCS_SCHED_READ;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+open_success:
+ hvcs_kick();
+
+ printk(KERN_INFO "HVCS: vty-server@%X connection opened.\n",
+		hvcsd->vdev->unit_address);
+
+ return 0;
+
+error_release:
+ kobjp = &hvcsd->kobj;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ kobject_put(&hvcsd->kobj);
+
+ printk(KERN_WARNING "HVCS: partner connect failed.\n");
+ return retval;
+}
+
+static void hvcs_close(struct tty_struct *tty, struct file *filp)
+{
+ struct hvcs_struct *hvcsd;
+ unsigned long flags;
+ struct kobject *kobjp;
+ int irq = NO_IRQ;
+
+ /*
+ * Is someone trying to close the file associated with this device after
+ * we have hung up? If so tty->driver_data wouldn't be valid.
+ */
+ if (tty_hung_up_p(filp))
+ return;
+
+ /*
+ * No driver_data means that this close was probably issued after a
+ * failed hvcs_open by the tty layer's release_dev() api and we can just
+ * exit cleanly.
+ */
+ if (!tty->driver_data)
+ return;
+
+ hvcsd = tty->driver_data;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ kobjp = &hvcsd->kobj;
+ if (--hvcsd->open_count == 0) {
+
+ vio_disable_interrupts(hvcsd->vdev);
+
+ /*
+ * NULL this early so that the kernel_thread doesn't try to
+ * execute any operations on the TTY even though it is obligated
+ * to deliver any pending I/O to the hypervisor.
+ */
+ hvcsd->tty = NULL;
+
+ irq = hvcsd->vdev->irq;
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ tty_wait_until_sent(tty, HVCS_CLOSE_WAIT);
+
+ /*
+ * This line is important because it tells hvcs_open that this
+ * device needs to be re-configured the next time hvcs_open is
+ * called.
+ */
+ tty->driver_data = NULL;
+
+ free_irq(irq, hvcsd);
+ kobject_put(kobjp);
+ return;
+ } else if (hvcsd->open_count < 0) {
+ printk(KERN_ERR "HVCS: vty-server@%X open_count: %d"
+ " is missmanaged.\n",
+ hvcsd->vdev->unit_address, hvcsd->open_count);
+ }
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ kobject_put(kobjp);
+}
+
+static void hvcs_hangup(struct tty_struct * tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned long flags;
+ int temp_open_count;
+ struct kobject *kobjp;
+ int irq = NO_IRQ;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ /* Preserve this so that we know how many kobject refs to put */
+ temp_open_count = hvcsd->open_count;
+
+ /*
+ * Don't kobject put inside the spinlock because the destruction
+ * callback may use the spinlock and it may get called before the
+ * spinlock has been released. Get a pointer to the kobject and
+ * kobject_put on that after releasing the spinlock.
+ */
+ kobjp = &hvcsd->kobj;
+
+ vio_disable_interrupts(hvcsd->vdev);
+
+ hvcsd->todo_mask = 0;
+
+ /* I don't think the tty needs the hvcs_struct pointer after a hangup */
+ hvcsd->tty->driver_data = NULL;
+ hvcsd->tty = NULL;
+
+ hvcsd->open_count = 0;
+
+	/*
+	 * This will drop any buffered data on the floor, which is OK in a
+	 * hangup scenario.
+	 */
+ memset(&hvcsd->buffer[0], 0x00, HVCS_BUFF_LEN);
+ hvcsd->chars_in_buffer = 0;
+
+ irq = hvcsd->vdev->irq;
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ free_irq(irq, hvcsd);
+
+ /*
+ * We need to kobject_put() for every open_count we have since the
+ * tty_hangup() function doesn't invoke a close per open connection on a
+ * non-console device.
+ */
+	while (temp_open_count) {
+ --temp_open_count;
+ /*
+ * The final put will trigger destruction of the hvcs_struct.
+ * NOTE: If this hangup was signaled from user space then the
+ * final put will never happen.
+ */
+ kobject_put(kobjp);
+ }
+}
+
+/*
+ * NOTE: This is almost always from_user since user level apps interact with the
+ * /dev nodes. I'm trusting that if hvcs_write gets called and interrupted by
+ * hvcs_remove (which removes the target device and executes tty_hangup()) that
+ * tty_hangup will allow hvcs_write time to complete execution before it
+ * terminates our device.
+ */
+static int hvcs_write(struct tty_struct *tty,
+ const unsigned char *buf, int count)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+ unsigned int unit_address;
+ const unsigned char *charbuf;
+ unsigned long flags;
+ int total_sent = 0;
+ int tosend = 0;
+ int result = 0;
+
+ /*
+ * If they don't check the return code off of their open they may
+ * attempt this even if there is no connected device.
+ */
+ if (!hvcsd)
+ return -ENODEV;
+
+ /* Reasonable size to prevent user level flooding */
+ if (count > HVCS_MAX_FROM_USER) {
+ printk(KERN_WARNING "HVCS write: count being truncated to"
+ " HVCS_MAX_FROM_USER.\n");
+ count = HVCS_MAX_FROM_USER;
+ }
+
+ charbuf = buf;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ /*
+	 * Somehow an open succeeded but the device was removed or the
+ * connection terminated between the vty-server and partner vty during
+ * the middle of a write operation? This is a crummy place to do this
+ * but we want to keep it all in the spinlock.
+ */
+ if (hvcsd->open_count <= 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return -ENODEV;
+ }
+
+ unit_address = hvcsd->vdev->unit_address;
+
+ while (count > 0) {
+ tosend = min(count, (HVCS_BUFF_LEN - hvcsd->chars_in_buffer));
+ /*
+ * No more space, this probably means that the last call to
+ * hvcs_write() didn't succeed and the buffer was filled up.
+ */
+ if (!tosend)
+ break;
+
+ memcpy(&hvcsd->buffer[hvcsd->chars_in_buffer],
+ &charbuf[total_sent],
+ tosend);
+
+ hvcsd->chars_in_buffer += tosend;
+
+ result = 0;
+
+ /*
+ * If this is true then we don't want to try writing to the
+ * hypervisor because that is the kernel_threads job now. We'll
+ * just add to the buffer.
+ */
+ if (!(hvcsd->todo_mask & HVCS_TRY_WRITE))
+ /* won't send partial writes */
+ result = hvc_put_chars(unit_address,
+ &hvcsd->buffer[0],
+ hvcsd->chars_in_buffer);
+
+ /*
+ * Since we know we have enough room in hvcsd->buffer for
+ * tosend we record that it was sent regardless of whether the
+ * hypervisor actually took it because we have it buffered.
+ */
+		total_sent += tosend;
+		count -= tosend;
+ if (result == 0) {
+ hvcsd->todo_mask |= HVCS_TRY_WRITE;
+ hvcs_kick();
+ break;
+ }
+
+ hvcsd->chars_in_buffer = 0;
+ /*
+ * Test after the chars_in_buffer reset otherwise this could
+ * deadlock our writes if hvc_put_chars fails.
+ */
+ if (result < 0)
+ break;
+ }
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+
+ if (result == -1)
+ return -EIO;
+ else
+ return total_sent;
+}
+
+/*
+ * This is really asking how much we can guarantee that we can send or that we
+ * absolutely WILL BUFFER if we can't send it. This driver MUST honor the
+ * return value, hence the reason for hvcs_struct buffering.
+ */
+static int hvcs_write_room(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+
+ if (!hvcsd || hvcsd->open_count <= 0)
+ return 0;
+
+ return HVCS_BUFF_LEN - hvcsd->chars_in_buffer;
+}
+
+static int hvcs_chars_in_buffer(struct tty_struct *tty)
+{
+ struct hvcs_struct *hvcsd = tty->driver_data;
+
+ return hvcsd->chars_in_buffer;
+}
+
+static struct tty_operations hvcs_ops = {
+ .open = hvcs_open,
+ .close = hvcs_close,
+ .hangup = hvcs_hangup,
+ .write = hvcs_write,
+ .write_room = hvcs_write_room,
+ .chars_in_buffer = hvcs_chars_in_buffer,
+ .unthrottle = hvcs_unthrottle,
+ .throttle = hvcs_throttle,
+};
+
+static int hvcs_alloc_index_list(int n)
+{
+ int i;
+	hvcs_index_list = kmalloc(n * sizeof(*hvcs_index_list), GFP_KERNEL);
+ if (!hvcs_index_list)
+ return -ENOMEM;
+ hvcs_index_count = n;
+	for (i = 0; i < hvcs_index_count; i++)
+ hvcs_index_list[i] = -1;
+ return 0;
+}
+
+static void hvcs_free_index_list(void)
+{
+ /* Paranoia check to be thorough. */
+ if (hvcs_index_list) {
+ kfree(hvcs_index_list);
+ hvcs_index_list = NULL;
+ hvcs_index_count = 0;
+ }
+}
+
+static int __init hvcs_module_init(void)
+{
+ int rc;
+ int num_ttys_to_alloc;
+
+ printk(KERN_INFO "Initializing %s\n", hvcs_driver_string);
+
+ /* Has the user specified an overload with an insmod param? */
+ if (hvcs_parm_num_devs <= 0 ||
+ (hvcs_parm_num_devs > HVCS_MAX_SERVER_ADAPTERS)) {
+ num_ttys_to_alloc = HVCS_DEFAULT_SERVER_ADAPTERS;
+ } else
+ num_ttys_to_alloc = hvcs_parm_num_devs;
+
+ hvcs_tty_driver = alloc_tty_driver(num_ttys_to_alloc);
+ if (!hvcs_tty_driver)
+ return -ENOMEM;
+
+ if (hvcs_alloc_index_list(num_ttys_to_alloc))
+ return -ENOMEM;
+
+ hvcs_tty_driver->owner = THIS_MODULE;
+
+ hvcs_tty_driver->driver_name = hvcs_driver_name;
+ hvcs_tty_driver->name = hvcs_device_node;
+
+ /*
+ * We'll let the system assign us a major number, indicated by leaving
+ * it blank.
+ */
+
+ hvcs_tty_driver->minor_start = HVCS_MINOR_START;
+ hvcs_tty_driver->type = TTY_DRIVER_TYPE_SYSTEM;
+
+ /*
+	 * We roll our own so that we DON'T ECHO. We can't echo because the
+ * device we are connecting to already echoes by default and this would
+ * throw us into a horrible recursive echo-echo-echo loop.
+ */
+ hvcs_tty_driver->init_termios = hvcs_tty_termios;
+ hvcs_tty_driver->flags = TTY_DRIVER_REAL_RAW;
+
+ tty_set_operations(hvcs_tty_driver, &hvcs_ops);
+
+ /*
+ * The following call will result in sysfs entries that denote the
+ * dynamically assigned major and minor numbers for our devices.
+ */
+ if (tty_register_driver(hvcs_tty_driver)) {
+ printk(KERN_ERR "HVCS: registration "
+ " as a tty driver failed.\n");
+ hvcs_free_index_list();
+ put_tty_driver(hvcs_tty_driver);
+ return -EIO;
+ }
+
+ hvcs_pi_buff = kmalloc(PAGE_SIZE, GFP_KERNEL);
+ if (!hvcs_pi_buff) {
+ tty_unregister_driver(hvcs_tty_driver);
+ hvcs_free_index_list();
+ put_tty_driver(hvcs_tty_driver);
+ return -ENOMEM;
+ }
+
+ hvcs_task = kthread_run(khvcsd, NULL, "khvcsd");
+ if (IS_ERR(hvcs_task)) {
+ printk(KERN_ERR "HVCS: khvcsd creation failed. Driver not loaded.\n");
+ kfree(hvcs_pi_buff);
+ tty_unregister_driver(hvcs_tty_driver);
+ hvcs_free_index_list();
+ put_tty_driver(hvcs_tty_driver);
+ return -EIO;
+ }
+
+ rc = vio_register_driver(&hvcs_vio_driver);
+
+ /*
+ * This needs to be done AFTER the vio_register_driver() call or else
+ * the kobjects won't be initialized properly.
+ */
+ hvcs_create_driver_attrs();
+
+ printk(KERN_INFO "HVCS: driver module inserted.\n");
+
+ return rc;
+}
+
+static void __exit hvcs_module_exit(void)
+{
+ /*
+ * This driver receives hvcs_remove callbacks for each device upon
+ * module removal.
+ */
+
+ /*
+ * This synchronous operation will wake the khvcsd kthread if it is
+ * asleep and will return when khvcsd has terminated.
+ */
+ kthread_stop(hvcs_task);
+
+ spin_lock(&hvcs_pi_lock);
+ kfree(hvcs_pi_buff);
+ hvcs_pi_buff = NULL;
+ spin_unlock(&hvcs_pi_lock);
+
+ hvcs_remove_driver_attrs();
+
+ vio_unregister_driver(&hvcs_vio_driver);
+
+ tty_unregister_driver(hvcs_tty_driver);
+
+ hvcs_free_index_list();
+
+ put_tty_driver(hvcs_tty_driver);
+
+ printk(KERN_INFO "HVCS: driver module removed.\n");
+}
+
+module_init(hvcs_module_init);
+module_exit(hvcs_module_exit);
+
+static inline struct hvcs_struct *from_vio_dev(struct vio_dev *viod)
+{
+ return viod->dev.driver_data;
+}
+/* The sysfs interface for the driver and devices */
+
+static ssize_t hvcs_partner_vtys_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%X\n", hvcsd->p_unit_address);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(partner_vtys, S_IRUGO, hvcs_partner_vtys_show, NULL);
+
+static ssize_t hvcs_partner_clcs_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%s\n", &hvcsd->p_location_code[0]);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(partner_clcs, S_IRUGO, hvcs_partner_clcs_show, NULL);
+
+static ssize_t hvcs_current_vty_store(struct device *dev, const char * buf,
+ size_t count)
+{
+ /*
+ * Don't need this feature at the present time because firmware doesn't
+ * yet support multiple partners.
+ */
+ printk(KERN_INFO "HVCS: Denied current_vty change: -EPERM.\n");
+ return -EPERM;
+}
+
+static ssize_t hvcs_current_vty_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%s\n", &hvcsd->p_location_code[0]);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+
+static DEVICE_ATTR(current_vty,
+ S_IRUGO | S_IWUSR, hvcs_current_vty_show, hvcs_current_vty_store);
+
+static ssize_t hvcs_vterm_state_store(struct device *dev, const char *buf,
+ size_t count)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+
+ /* writing a '0' to this sysfs entry will result in the disconnect. */
+ if (simple_strtol(buf, NULL, 0) != 0)
+ return -EINVAL;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+
+ if (hvcsd->open_count > 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_INFO "HVCS: vterm state unchanged. "
+ "The hvcs device node is still in use.\n");
+ return -EPERM;
+ }
+
+ if (hvcsd->connected == 0) {
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ printk(KERN_INFO "HVCS: vterm state unchanged. The"
+ " vty-server is not connected to a vty.\n");
+ return -EPERM;
+ }
+
+ hvcs_partner_free(hvcsd);
+ printk(KERN_INFO "HVCS: Closed vty-server@%X and"
+ " partner vty@%X:%d connection.\n",
+ hvcsd->vdev->unit_address,
+ hvcsd->p_unit_address,
+ (uint32_t)hvcsd->p_partition_ID);
+
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return count;
+}
+
+static ssize_t hvcs_vterm_state_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%d\n", hvcsd->connected);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+static DEVICE_ATTR(vterm_state, S_IRUGO | S_IWUSR,
+ hvcs_vterm_state_show, hvcs_vterm_state_store);
+
+static ssize_t hvcs_index_show(struct device *dev, char *buf)
+{
+ struct vio_dev *viod = to_vio_dev(dev);
+ struct hvcs_struct *hvcsd = from_vio_dev(viod);
+ unsigned long flags;
+ int retval;
+
+ spin_lock_irqsave(&hvcsd->lock, flags);
+ retval = sprintf(buf, "%d\n", hvcsd->index);
+ spin_unlock_irqrestore(&hvcsd->lock, flags);
+ return retval;
+}
+
+static DEVICE_ATTR(index, S_IRUGO, hvcs_index_show, NULL);
+
+static struct attribute *hvcs_attrs[] = {
+ &dev_attr_partner_vtys.attr,
+ &dev_attr_partner_clcs.attr,
+ &dev_attr_current_vty.attr,
+ &dev_attr_vterm_state.attr,
+ &dev_attr_index.attr,
+ NULL,
+};
+
+static struct attribute_group hvcs_attr_group = {
+ .attrs = hvcs_attrs,
+};
+
+static void hvcs_create_device_attrs(struct hvcs_struct *hvcsd)
+{
+ struct vio_dev *vdev = hvcsd->vdev;
+ sysfs_create_group(&vdev->dev.kobj, &hvcs_attr_group);
+}
+
+static void hvcs_remove_device_attrs(struct vio_dev *vdev)
+{
+ sysfs_remove_group(&vdev->dev.kobj, &hvcs_attr_group);
+}
+
+static ssize_t hvcs_rescan_show(struct device_driver *ddp, char *buf)
+{
+ /* A 1 means it is updating, a 0 means it is done updating */
+ return snprintf(buf, PAGE_SIZE, "%d\n", hvcs_rescan_status);
+}
+
+static ssize_t hvcs_rescan_store(struct device_driver *ddp, const char * buf,
+ size_t count)
+{
+ if ((simple_strtol(buf, NULL, 0) != 1)
+ && (hvcs_rescan_status != 0))
+ return -EINVAL;
+
+ hvcs_rescan_status = 1;
+ printk(KERN_INFO "HVCS: rescanning partner info for all"
+ " vty-servers.\n");
+ hvcs_rescan_devices_list();
+ hvcs_rescan_status = 0;
+ return count;
+}
+static DRIVER_ATTR(rescan,
+ S_IRUGO | S_IWUSR, hvcs_rescan_show, hvcs_rescan_store);
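+/*
+ * With the vio bus this attribute typically surfaces as
+ * /sys/bus/vio/drivers/hvcs/rescan; writing '1' to it runs
+ * hvcs_rescan_devices_list() above.
+ */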
+
+static void hvcs_create_driver_attrs(void)
+{
+ struct device_driver *driverfs = &(hvcs_vio_driver.driver);
+ driver_create_file(driverfs, &driver_attr_rescan);
+}
+
+static void hvcs_remove_driver_attrs(void)
+{
+ struct device_driver *driverfs = &(hvcs_vio_driver.driver);
+ driver_remove_file(driverfs, &driver_attr_rescan);
+}
--- /dev/null
+/*
+ * drivers/watchdog/ixp2000_wdt.c
+ *
+ * Watchdog driver for Intel IXP2000 network processors
+ *
+ * Adapted from the IXP4xx watchdog driver by Lennert Buytenhek.
+ * The original version carries these notices:
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright 2004 (c) MontaVista Software, Inc.
+ * Based on sa1100 driver, Copyright (C) 2000 Oleg Drokin <green@crimea.edu>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/init.h>
+#include <linux/bitops.h>
+
+#include <asm/hardware.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_WATCHDOG_NOWAYOUT
+static int nowayout = 1;
+#else
+static int nowayout = 0;
+#endif
+static unsigned int heartbeat = 60; /* (secs) Default is 1 minute */
+static unsigned long wdt_status;
+
+#define WDT_IN_USE 0
+#define WDT_OK_TO_CLOSE 1
+
+static unsigned long wdt_tick_rate;
+
+static void
+wdt_enable(void)
+{
+ ixp2000_reg_write(IXP2000_RESET0, *(IXP2000_RESET0) | WDT_RESET_ENABLE);
+ ixp2000_reg_write(IXP2000_TWDE, WDT_ENABLE);
+ ixp2000_reg_write(IXP2000_T4_CLD, heartbeat * wdt_tick_rate);
+ ixp2000_reg_write(IXP2000_T4_CTL, TIMER_DIVIDER_256 | TIMER_ENABLE);
+}
+
+static void
+wdt_disable(void)
+{
+ ixp2000_reg_write(IXP2000_T4_CTL, 0);
+}
+
+static void
+wdt_keepalive(void)
+{
+ ixp2000_reg_write(IXP2000_T4_CLD, heartbeat * wdt_tick_rate);
+}
+
+static int
+ixp2000_wdt_open(struct inode *inode, struct file *file)
+{
+ if (test_and_set_bit(WDT_IN_USE, &wdt_status))
+ return -EBUSY;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ wdt_enable();
+
+ return nonseekable_open(inode, file);
+}
+
+static ssize_t
+ixp2000_wdt_write(struct file *file, const char *data, size_t len, loff_t *ppos)
+{
+ if (len) {
+ if (!nowayout) {
+ size_t i;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ for (i = 0; i != len; i++) {
+ char c;
+
+ if (get_user(c, data + i))
+ return -EFAULT;
+ if (c == 'V')
+ set_bit(WDT_OK_TO_CLOSE, &wdt_status);
+ }
+ }
+ wdt_keepalive();
+ }
+
+ return len;
+}
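+
+/*
+ * Userspace example: any write serves as a keepalive ping; when nowayout
+ * is clear, writing the magic character 'V' (e.g. "echo V > /dev/watchdog")
+ * marks the device safe to close so the release handler may disable the
+ * timer.
+ */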
+
+
+static struct watchdog_info ident = {
+ .options = WDIOF_MAGICCLOSE | WDIOF_SETTIMEOUT |
+ WDIOF_KEEPALIVEPING,
+ .identity = "IXP2000 Watchdog",
+};
+
+static int
+ixp2000_wdt_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = -ENOIOCTLCMD;
+ int time;
+
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ ret = copy_to_user((struct watchdog_info *)arg, &ident,
+ sizeof(ident)) ? -EFAULT : 0;
+ break;
+
+ case WDIOC_GETSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_GETBOOTSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_SETTIMEOUT:
+ ret = get_user(time, (int *)arg);
+ if (ret)
+ break;
+
+ if (time <= 0 || time > 60) {
+ ret = -EINVAL;
+ break;
+ }
+
+ heartbeat = time;
+ wdt_keepalive();
+ /* Fall through */
+
+ case WDIOC_GETTIMEOUT:
+ ret = put_user(heartbeat, (int *)arg);
+ break;
+
+ case WDIOC_KEEPALIVE:
+ wdt_enable();
+ ret = 0;
+ break;
+ }
+
+ return ret;
+}
+
+static int
+ixp2000_wdt_release(struct inode *inode, struct file *file)
+{
+ if (test_bit(WDT_OK_TO_CLOSE, &wdt_status)) {
+ wdt_disable();
+ } else {
+ printk(KERN_CRIT "WATCHDOG: Device closed unexpectdly - "
+ "timer will not stop\n");
+ }
+
+ clear_bit(WDT_IN_USE, &wdt_status);
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ return 0;
+}
+
+
+static struct file_operations ixp2000_wdt_fops =
+{
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = ixp2000_wdt_write,
+ .ioctl = ixp2000_wdt_ioctl,
+ .open = ixp2000_wdt_open,
+ .release = ixp2000_wdt_release,
+};
+
+static struct miscdevice ixp2000_wdt_miscdev =
+{
+ .minor = WATCHDOG_MINOR,
+ .name = "IXP2000 Watchdog",
+ .fops = &ixp2000_wdt_fops,
+};
+
+static int __init ixp2000_wdt_init(void)
+{
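+	/*
+	 * T1's reload value times HZ recovers the timer input clock
+	 * (assuming T1 is programmed for HZ interrupts per second); the
+	 * watchdog timer runs with the /256 divider selected in
+	 * wdt_enable(), hence the division by 256.
+	 */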
+	wdt_tick_rate = (*IXP2000_T1_CLD * HZ) / 256;
+
+ return misc_register(&ixp2000_wdt_miscdev);
+}
+
+static void __exit ixp2000_wdt_exit(void)
+{
+ misc_deregister(&ixp2000_wdt_miscdev);
+}
+
+module_init(ixp2000_wdt_init);
+module_exit(ixp2000_wdt_exit);
+
+MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net">);
+MODULE_DESCRIPTION("IXP2000 Network Processor Watchdog");
+
+module_param(heartbeat, int, 0);
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 60s)");
+
+module_param(nowayout, int, 0);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started");
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+
--- /dev/null
+/*
+ * drivers/watchdog/ixp4xx_wdt.c
+ *
+ * Watchdog driver for Intel IXP4xx network processors
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright 2004 (c) MontaVista Software, Inc.
+ * Based on sa1100 driver, Copyright (C) 2000 Oleg Drokin <green@crimea.edu>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/fs.h>
+#include <linux/miscdevice.h>
+#include <linux/watchdog.h>
+#include <linux/init.h>
+#include <linux/bitops.h>
+
+#include <asm/hardware.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_WATCHDOG_NOWAYOUT
+static int nowayout = 1;
+#else
+static int nowayout = 0;
+#endif
+static int heartbeat = 60; /* (secs) Default is 1 minute */
+static unsigned long wdt_status;
+static unsigned long boot_status;
+
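+/* IXP4XX_PERIPHERAL_BUS_CLOCK is presumably given in MHz; scale to Hz. */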
+#define WDT_TICK_RATE (IXP4XX_PERIPHERAL_BUS_CLOCK * 1000000UL)
+
+#define WDT_IN_USE 0
+#define WDT_OK_TO_CLOSE 1
+
+static void
+wdt_enable(void)
+{
+ *IXP4XX_OSWK = IXP4XX_WDT_KEY;
+ *IXP4XX_OSWE = 0;
+ *IXP4XX_OSWT = WDT_TICK_RATE * heartbeat;
+ *IXP4XX_OSWE = IXP4XX_WDT_COUNT_ENABLE | IXP4XX_WDT_RESET_ENABLE;
+ *IXP4XX_OSWK = 0;
+}
+
+static void
+wdt_disable(void)
+{
+ *IXP4XX_OSWK = IXP4XX_WDT_KEY;
+ *IXP4XX_OSWE = 0;
+ *IXP4XX_OSWK = 0;
+}
+
+static int
+ixp4xx_wdt_open(struct inode *inode, struct file *file)
+{
+ if (test_and_set_bit(WDT_IN_USE, &wdt_status))
+ return -EBUSY;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ wdt_enable();
+
+ return nonseekable_open(inode, file);
+}
+
+static ssize_t
+ixp4xx_wdt_write(struct file *file, const char *data, size_t len, loff_t *ppos)
+{
+ if (len) {
+ if (!nowayout) {
+ size_t i;
+
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ for (i = 0; i != len; i++) {
+ char c;
+
+ if (get_user(c, data + i))
+ return -EFAULT;
+ if (c == 'V')
+ set_bit(WDT_OK_TO_CLOSE, &wdt_status);
+ }
+ }
+ wdt_enable();
+ }
+
+ return len;
+}
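+
+/*
+ * Illustrative userspace sequence (a sketch, not part of this
+ * driver): any write pings the watchdog; a write containing the
+ * magic character 'V' just before close() sets WDT_OK_TO_CLOSE so
+ * that the release handler may stop the timer:
+ *
+ *	int fd = open("/dev/watchdog", O_WRONLY);
+ *	write(fd, "\0", 1);	// ping
+ *	write(fd, "V", 1);	// allow a clean close
+ *	close(fd);		// stops the timer unless nowayout is set
+ */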
+
+static struct watchdog_info ident = {
+ .options = WDIOF_CARDRESET | WDIOF_MAGICCLOSE |
+ WDIOF_SETTIMEOUT | WDIOF_KEEPALIVEPING,
+ .identity = "IXP4xx Watchdog",
+};
+
+
+static int
+ixp4xx_wdt_ioctl(struct inode *inode, struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ int ret = -ENOIOCTLCMD;
+ int time;
+
+ switch (cmd) {
+ case WDIOC_GETSUPPORT:
+ ret = copy_to_user((struct watchdog_info *)arg, &ident,
+ sizeof(ident)) ? -EFAULT : 0;
+ break;
+
+ case WDIOC_GETSTATUS:
+ ret = put_user(0, (int *)arg);
+ break;
+
+ case WDIOC_GETBOOTSTATUS:
+ ret = put_user(boot_status, (int *)arg);
+ break;
+
+ case WDIOC_SETTIMEOUT:
+ ret = get_user(time, (int *)arg);
+ if (ret)
+ break;
+
+ if (time <= 0 || time > 60) {
+ ret = -EINVAL;
+ break;
+ }
+
+ heartbeat = time;
+ wdt_enable();
+ /* Fall through */
+
+ case WDIOC_GETTIMEOUT:
+ ret = put_user(heartbeat, (int *)arg);
+ break;
+
+ case WDIOC_KEEPALIVE:
+ wdt_enable();
+ ret = 0;
+ break;
+ }
+ return ret;
+}
+
+static int
+ixp4xx_wdt_release(struct inode *inode, struct file *file)
+{
+ if (test_bit(WDT_OK_TO_CLOSE, &wdt_status)) {
+ wdt_disable();
+ } else {
+ printk(KERN_CRIT "WATCHDOG: Device closed unexpectdly - "
+ "timer will not stop\n");
+ }
+
+ clear_bit(WDT_IN_USE, &wdt_status);
+ clear_bit(WDT_OK_TO_CLOSE, &wdt_status);
+
+ return 0;
+}
+
+
+static struct file_operations ixp4xx_wdt_fops =
+{
+ .owner = THIS_MODULE,
+ .llseek = no_llseek,
+ .write = ixp4xx_wdt_write,
+ .ioctl = ixp4xx_wdt_ioctl,
+ .open = ixp4xx_wdt_open,
+ .release = ixp4xx_wdt_release,
+};
+
+static struct miscdevice ixp4xx_wdt_miscdev =
+{
+ .minor = WATCHDOG_MINOR,
+ .name = "IXP4xx Watchdog",
+ .fops = &ixp4xx_wdt_fops,
+};
+
+static int __init ixp4xx_wdt_init(void)
+{
+ int ret;
+ unsigned long processor_id;
+
+ asm("mrc p15, 0, %0, cr0, cr0, 0;" : "=r"(processor_id) :);
+ if (!(processor_id & 0xf)) {
+ printk("IXP4XXX Watchdog: Rev. A0 CPU detected - "
+ "watchdog disabled\n");
+
+ return -ENODEV;
+ }
+
+ ret = misc_register(&ixp4xx_wdt_miscdev);
+ if (ret == 0)
+ printk("IXP4xx Watchdog Timer: heartbeat %d sec\n", heartbeat);
+
+ boot_status = (*IXP4XX_OSST & IXP4XX_OSST_TIMER_WARM_RESET) ?
+ WDIOF_CARDRESET : 0;
+
+ return ret;
+}
+
+static void __exit ixp4xx_wdt_exit(void)
+{
+ misc_deregister(&ixp4xx_wdt_miscdev);
+}
+
+
+module_init(ixp4xx_wdt_init);
+module_exit(ixp4xx_wdt_exit);
+
+MODULE_AUTHOR("Deepak Saxena <dsaxena@plexity.net>");
+MODULE_DESCRIPTION("IXP4xx Network Processor Watchdog");
+
+module_param(heartbeat, int, 0);
+MODULE_PARM_DESC(heartbeat, "Watchdog heartbeat in seconds (default 60s)");
+
+module_param(nowayout, int, 0);
+MODULE_PARM_DESC(nowayout, "Watchdog cannot be stopped once started");
+
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_MISCDEV(WATCHDOG_MINOR);
+
--- /dev/null
+/*
+ * Parse the EFI PCDP table to locate the console device.
+ *
+ * (c) Copyright 2002, 2003, 2004 Hewlett-Packard Development Company, L.P.
+ * Khalid Aziz <khalid.aziz@hp.com>
+ * Alex Williamson <alex.williamson@hp.com>
+ * Bjorn Helgaas <bjorn.helgaas@hp.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/acpi.h>
+#include <linux/console.h>
+#include <linux/efi.h>
+#include <linux/serial.h>
+#include "pcdp.h"
+
+static int __init
+setup_serial_console(struct pcdp_uart *uart)
+{
+#ifdef CONFIG_SERIAL_8250_CONSOLE
+ int mmio;
+ static char options[64];
+
+ mmio = (uart->addr.address_space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY);
+ snprintf(options, sizeof(options), "console=uart,%s,0x%lx,%lun%d",
+ mmio ? "mmio" : "io", uart->addr.address, uart->baud,
+ uart->bits ? uart->bits : 8);
+
+ return early_serial_console_init(options);
+#else
+ return -ENODEV;
+#endif
+}
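+
+/*
+ * For illustration (hypothetical values): a memory-mapped PCDP UART
+ * at 0xf8030000 running at 9600 baud with 8 data bits yields the
+ * option string "console=uart,mmio,0xf8030000,9600n8", which is then
+ * handed to early_serial_console_init().
+ */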
+
+static int __init
+setup_vga_console(struct pcdp_vga *vga)
+{
+#if defined(CONFIG_VT) && defined(CONFIG_VGA_CONSOLE)
+ if (efi_mem_type(0xA0000) == EFI_CONVENTIONAL_MEMORY) {
+ printk(KERN_ERR "PCDP: VGA selected, but frame buffer is not MMIO!\n");
+ return -ENODEV;
+ }
+
+ conswitchp = &vga_con;
+ printk(KERN_INFO "PCDP: VGA console\n");
+ return 0;
+#else
+ return -ENODEV;
+#endif
+}
+
+int __init
+efi_setup_pcdp_console(char *cmdline)
+{
+ struct pcdp *pcdp;
+ struct pcdp_uart *uart;
+ struct pcdp_device *dev, *end;
+ int i, serial = 0;
+
+ pcdp = efi.hcdp;
+ if (!pcdp)
+ return -ENODEV;
+
+ printk(KERN_INFO "PCDP: v%d at 0x%lx\n", pcdp->rev, __pa(pcdp));
+
+ if (strstr(cmdline, "console=hcdp")) {
+ if (pcdp->rev < 3)
+ serial = 1;
+ } else if (strstr(cmdline, "console=")) {
+ printk(KERN_INFO "Explicit \"console=\"; ignoring PCDP\n");
+ return -ENODEV;
+ }
+
+ if (pcdp->rev < 3 && efi_uart_console_only())
+ serial = 1;
+
+ for (i = 0, uart = pcdp->uart; i < pcdp->num_uarts; i++, uart++) {
+ if (uart->flags & PCDP_UART_PRIMARY_CONSOLE || serial) {
+ if (uart->type == PCDP_CONSOLE_UART) {
+ return setup_serial_console(uart);
+ }
+ }
+ }
+
+ end = (struct pcdp_device *) ((u8 *) pcdp + pcdp->length);
+ for (dev = (struct pcdp_device *) (pcdp->uart + pcdp->num_uarts);
+ dev < end;
+ dev = (struct pcdp_device *) ((u8 *) dev + dev->length)) {
+ if (dev->flags & PCDP_PRIMARY_CONSOLE) {
+ if (dev->type == PCDP_CONSOLE_VGA) {
+ return setup_vga_console((struct pcdp_vga *) dev);
+ }
+ }
+ }
+
+ return -ENODEV;
+}
+
+#ifdef CONFIG_IA64_EARLY_PRINTK_UART
+unsigned long
+hcdp_early_uart (void)
+{
+ efi_system_table_t *systab;
+ efi_config_table_t *config_tables;
+ unsigned long addr = 0;
+ struct pcdp *pcdp = NULL;
+ struct pcdp_uart *uart;
+ int i;
+
+ systab = (efi_system_table_t *) ia64_boot_param->efi_systab;
+ if (!systab)
+ return 0;
+ systab = __va(systab);
+
+ config_tables = (efi_config_table_t *) systab->tables;
+ if (!config_tables)
+ return 0;
+ config_tables = __va(config_tables);
+
+ for (i = 0; i < systab->nr_tables; i++) {
+ if (efi_guidcmp(config_tables[i].guid, HCDP_TABLE_GUID) == 0) {
+ pcdp = (struct pcdp *) config_tables[i].table;
+ break;
+ }
+ }
+ if (!pcdp)
+ return 0;
+ pcdp = __va(pcdp);
+
+ for (i = 0, uart = pcdp->uart; i < pcdp->num_uarts; i++, uart++) {
+ if (uart->type == PCDP_CONSOLE_UART) {
+ addr = uart->addr.address;
+ break;
+ }
+ }
+ return addr;
+}
+#endif /* CONFIG_IA64_EARLY_PRINTK_UART */
--- /dev/null
+/*
+ * adm1025.c
+ *
+ * Copyright (C) 2000 Chen-Yuan Wu <gwu@esoft.com>
+ * Copyright (C) 2003-2004 Jean Delvare <khali@linux-fr.org>
+ *
+ * The ADM1025 is a sensor chip made by Analog Devices. It reports up to 6
+ * voltages (including its own power source) and up to two temperatures
+ * (its own plus up to one external one). Voltages are scaled internally
+ * (which is not the common way) with ratios such that the nominal value
+ * of each voltage correspond to a register value of 192 (which means a
+ * resolution of about 0.5% of the nominal value). Temperature values are
+ * reported with a 1 deg resolution and a 3 deg accuracy. Complete
+ * datasheet can be obtained from Analog's website at:
+ * http://www.analog.com/Analog_Root/productPage/productHome/0,2121,ADM1025,00.html
+ *
+ * This driver also supports the ADM1025A, which differs from the ADM1025
+ * only in that it has "open-drain VID inputs while the ADM1025 has
+ * on-chip 100k pull-ups on the VID inputs". It doesn't make any
+ * difference for us.
+ *
+ * This driver also supports the NE1619, a sensor chip made by Philips.
+ * That chip is similar to the ADM1025A, with a few differences. The only
+ * difference that matters to us is that the NE1619 has only two possible
+ * addresses while the ADM1025A has a third one. Complete datasheet can be
+ * obtained from Philips's website at:
+ * http://www.semiconductors.philips.com/pip/NE1619DS.html
+ *
+ * Since the ADM1025 was the first chipset supported by this driver, most
+ * comments will refer to this chipset, but are actually general and
+ * concern all supported chipsets, unless mentioned otherwise.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+#include <linux/i2c-vid.h>
+
+/*
+ * Addresses to scan
+ * ADM1025 and ADM1025A have three possible addresses: 0x2c, 0x2d and 0x2e.
+ * NE1619 has two possible addresses: 0x2c and 0x2d.
+ */
+
+static unsigned short normal_i2c[] = { 0x2c, 0x2d, 0x2e, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+
+/*
+ * Insmod parameters
+ */
+
+SENSORS_INSMOD_2(adm1025, ne1619);
+
+/*
+ * The ADM1025 registers
+ */
+
+#define ADM1025_REG_MAN_ID 0x3E
+#define ADM1025_REG_CHIP_ID 0x3F
+#define ADM1025_REG_CONFIG 0x40
+#define ADM1025_REG_STATUS1 0x41
+#define ADM1025_REG_STATUS2 0x42
+#define ADM1025_REG_IN(nr) (0x20 + (nr))
+#define ADM1025_REG_IN_MAX(nr) (0x2B + (nr) * 2)
+#define ADM1025_REG_IN_MIN(nr) (0x2C + (nr) * 2)
+#define ADM1025_REG_TEMP(nr) (0x26 + (nr))
+#define ADM1025_REG_TEMP_HIGH(nr) (0x37 + (nr) * 2)
+#define ADM1025_REG_TEMP_LOW(nr) (0x38 + (nr) * 2)
+#define ADM1025_REG_VID 0x47
+#define ADM1025_REG_VID4 0x49
+
+/*
+ * Conversions and various macros
+ * The ADM1025 uses signed 8-bit values for temperatures.
+ */
+
+static int in_scale[6] = { 2500, 2250, 3300, 5000, 12000, 3300 };
+
+#define IN_FROM_REG(reg,scale) (((reg) * (scale) + 96) / 192)
+#define IN_TO_REG(val,scale) ((val) <= 0 ? 0 : \
+ (val) * 192 >= (scale) * 255 ? 255 : \
+ ((val) * 192 + (scale)/2) / (scale))
+
+#define TEMP_FROM_REG(reg) ((reg) * 1000)
+#define TEMP_TO_REG(val) ((val) <= -127500 ? -128 : \
+ (val) >= 126500 ? 127 : \
+ (((val) < 0 ? (val)-500 : (val)+500) / 1000))
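+
+/*
+ * Worked examples (illustrative only): a nominal reading maps to
+ * register value 192, so IN_FROM_REG(192, 2500) = (192 * 2500 + 96)
+ * / 192 = 2500mV on in0.  Temperatures are in millidegrees:
+ * TEMP_TO_REG(25500) rounds to 26, and TEMP_FROM_REG(26) = 26000.
+ */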
+
+/*
+ * Functions declaration
+ */
+
+static int adm1025_attach_adapter(struct i2c_adapter *adapter);
+static int adm1025_detect(struct i2c_adapter *adapter, int address, int kind);
+static void adm1025_init_client(struct i2c_client *client);
+static int adm1025_detach_client(struct i2c_client *client);
+static struct adm1025_data *adm1025_update_device(struct device *dev);
+
+/*
+ * Driver data (common to all clients)
+ */
+
+static struct i2c_driver adm1025_driver = {
+ .owner = THIS_MODULE,
+ .name = "adm1025",
+ .id = I2C_DRIVERID_ADM1025,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = adm1025_attach_adapter,
+ .detach_client = adm1025_detach_client,
+};
+
+/*
+ * Client data (each client gets its own)
+ */
+
+struct adm1025_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid; /* zero until following fields are valid */
+ unsigned long last_updated; /* in jiffies */
+
+ u8 in[6]; /* register value */
+ u8 in_max[6]; /* register value */
+ u8 in_min[6]; /* register value */
+ s8 temp[2]; /* register value */
+ s8 temp_min[2]; /* register value */
+ s8 temp_max[2]; /* register value */
+ u16 alarms; /* register values, combined */
+ u8 vid; /* register values, combined */
+ u8 vrm;
+};
+
+/*
+ * Internal variables
+ */
+
+static int adm1025_id;
+
+/*
+ * Sysfs stuff
+ */
+
+#define show_in(offset) \
+static ssize_t show_in##offset(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in[offset], \
+ in_scale[offset])); \
+} \
+static ssize_t show_in##offset##_min(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in_min[offset], \
+ in_scale[offset])); \
+} \
+static ssize_t show_in##offset##_max(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%u\n", IN_FROM_REG(data->in_max[offset], \
+ in_scale[offset])); \
+} \
+static DEVICE_ATTR(in##offset##_input, S_IRUGO, show_in##offset, NULL);
+show_in(0);
+show_in(1);
+show_in(2);
+show_in(3);
+show_in(4);
+show_in(5);
+
+#define show_temp(offset) \
+static ssize_t show_temp##offset(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp[offset-1])); \
+} \
+static ssize_t show_temp##offset##_min(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_min[offset-1])); \
+} \
+static ssize_t show_temp##offset##_max(struct device *dev, char *buf) \
+{ \
+ struct adm1025_data *data = adm1025_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_max[offset-1])); \
+}\
+static DEVICE_ATTR(temp##offset##_input, S_IRUGO, show_temp##offset, NULL);
+show_temp(1);
+show_temp(2);
+
+#define set_in(offset) \
+static ssize_t set_in##offset##_min(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ long val = simple_strtol(buf, NULL, 10); \
+ data->in_min[offset] = IN_TO_REG(val, in_scale[offset]); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_IN_MIN(offset), \
+ data->in_min[offset]); \
+ return count; \
+} \
+static ssize_t set_in##offset##_max(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ long val = simple_strtol(buf, NULL, 10); \
+ data->in_max[offset] = IN_TO_REG(val, in_scale[offset]); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_IN_MAX(offset), \
+ data->in_max[offset]); \
+ return count; \
+} \
+static DEVICE_ATTR(in##offset##_min, S_IWUSR | S_IRUGO, \
+ show_in##offset##_min, set_in##offset##_min); \
+static DEVICE_ATTR(in##offset##_max, S_IWUSR | S_IRUGO, \
+ show_in##offset##_max, set_in##offset##_max);
+set_in(0);
+set_in(1);
+set_in(2);
+set_in(3);
+set_in(4);
+set_in(5);
+
+#define set_temp(offset) \
+static ssize_t set_temp##offset##_min(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ long val = simple_strtol(buf, NULL, 10); \
+ data->temp_min[offset-1] = TEMP_TO_REG(val); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_TEMP_LOW(offset-1), \
+ data->temp_min[offset-1]); \
+ return count; \
+} \
+static ssize_t set_temp##offset##_max(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct adm1025_data *data = i2c_get_clientdata(client); \
+ long val = simple_strtol(buf, NULL, 10); \
+ data->temp_max[offset-1] = TEMP_TO_REG(val); \
+ i2c_smbus_write_byte_data(client, ADM1025_REG_TEMP_HIGH(offset-1), \
+ data->temp_max[offset-1]); \
+ return count; \
+} \
+static DEVICE_ATTR(temp##offset##_min, S_IWUSR | S_IRUGO, \
+ show_temp##offset##_min, set_temp##offset##_min); \
+static DEVICE_ATTR(temp##offset##_max, S_IWUSR | S_IRUGO, \
+ show_temp##offset##_max, set_temp##offset##_max);
+set_temp(1);
+set_temp(2);
+
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", data->alarms);
+}
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+static ssize_t show_vid(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", vid_from_reg(data->vid, data->vrm));
+}
+static DEVICE_ATTR(in1_ref, S_IRUGO, show_vid, NULL);
+
+static ssize_t show_vrm(struct device *dev, char *buf)
+{
+ struct adm1025_data *data = adm1025_update_device(dev);
+ return sprintf(buf, "%u\n", data->vrm);
+}
+static ssize_t set_vrm(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1025_data *data = i2c_get_clientdata(client);
+ data->vrm = simple_strtoul(buf, NULL, 10);
+ return count;
+}
+static DEVICE_ATTR(vrm, S_IRUGO | S_IWUSR, show_vrm, set_vrm);
+
+/*
+ * Real code
+ */
+
+static int adm1025_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, adm1025_detect);
+}
+
+/*
+ * The following function does more than just detection. If detection
+ * succeeds, it also registers the new chip.
+ */
+static int adm1025_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct adm1025_data *data;
+ int err = 0;
+ const char *name = "";
+ u8 config;
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct adm1025_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct adm1025_data));
+
+ /* The common I2C client data is placed right before the
+ ADM1025-specific data. */
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &adm1025_driver;
+ new_client->flags = 0;
+
+ /*
+ * Now we do the remaining detection. A negative kind means that
+ * the driver was loaded with no force parameter (default), so we
+ * must both detect and identify the chip. A zero kind means that
+ * the driver was loaded with the force parameter, the detection
+ * step shall be skipped. A positive kind means that the driver
+ * was loaded with the force parameter and a given kind of chip is
+ * requested, so both the detection and the identification steps
+ * are skipped.
+ */
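+ /*
+ * For example (parameter names assumed from SENSORS_INSMOD_2):
+ * loading with force_adm1025=0,0x2d yields a positive kind and
+ * skips both steps, while a bare force=0,0x2d yields kind == 0
+ * and skips only the detection step.
+ */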
+ config = i2c_smbus_read_byte_data(new_client, ADM1025_REG_CONFIG);
+ if (kind < 0) { /* detection */
+ if ((config & 0x80) != 0x00
+ || (i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_STATUS1) & 0xC0) != 0x00
+ || (i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_STATUS2) & 0xBC) != 0x00) {
+ dev_dbg(&adapter->dev,
+ "ADM1025 detection failed at 0x%02x.\n",
+ address);
+ goto exit_free;
+ }
+ }
+
+ if (kind <= 0) { /* identification */
+ u8 man_id, chip_id;
+
+ man_id = i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_MAN_ID);
+ chip_id = i2c_smbus_read_byte_data(new_client,
+ ADM1025_REG_CHIP_ID);
+
+ if (man_id == 0x41) { /* Analog Devices */
+ if ((chip_id & 0xF0) == 0x20) { /* ADM1025/ADM1025A */
+ kind = adm1025;
+ }
+ } else
+ if (man_id == 0xA1) { /* Philips */
+ if (address != 0x2E
+ && (chip_id & 0xF0) == 0x20) { /* NE1619 */
+ kind = ne1619;
+ }
+ }
+
+ if (kind <= 0) { /* identification failed */
+ dev_info(&adapter->dev,
+ "Unsupported chip (man_id=0x%02X, "
+ "chip_id=0x%02X).\n", man_id, chip_id);
+ goto exit_free;
+ }
+ }
+
+ if (kind == adm1025) {
+ name = "adm1025";
+ } else if (kind == ne1619) {
+ name = "ne1619";
+ }
+
+ /* We can fill in the remaining client fields */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+ new_client->id = adm1025_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the ADM1025 chip */
+ adm1025_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_in0_input);
+ device_create_file(&new_client->dev, &dev_attr_in1_input);
+ device_create_file(&new_client->dev, &dev_attr_in2_input);
+ device_create_file(&new_client->dev, &dev_attr_in3_input);
+ device_create_file(&new_client->dev, &dev_attr_in5_input);
+ device_create_file(&new_client->dev, &dev_attr_in0_min);
+ device_create_file(&new_client->dev, &dev_attr_in1_min);
+ device_create_file(&new_client->dev, &dev_attr_in2_min);
+ device_create_file(&new_client->dev, &dev_attr_in3_min);
+ device_create_file(&new_client->dev, &dev_attr_in5_min);
+ device_create_file(&new_client->dev, &dev_attr_in0_max);
+ device_create_file(&new_client->dev, &dev_attr_in1_max);
+ device_create_file(&new_client->dev, &dev_attr_in2_max);
+ device_create_file(&new_client->dev, &dev_attr_in3_max);
+ device_create_file(&new_client->dev, &dev_attr_in5_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+ device_create_file(&new_client->dev, &dev_attr_in1_ref);
+ device_create_file(&new_client->dev, &dev_attr_vrm);
+
+ /* Pin 11 is either in4 (+12V) or VID4 */
+ if (!(config & 0x20)) {
+ device_create_file(&new_client->dev, &dev_attr_in4_input);
+ device_create_file(&new_client->dev, &dev_attr_in4_min);
+ device_create_file(&new_client->dev, &dev_attr_in4_max);
+ }
+
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static void adm1025_init_client(struct i2c_client *client)
+{
+ u8 reg;
+ struct adm1025_data *data = i2c_get_clientdata(client);
+ int i;
+
+ data->vrm = i2c_which_vrm();
+
+ /*
+ * Set high limits
+ * Usually we avoid setting limits on driver init, but it happens
+ * that the ADM1025 comes with stupid default limits (all registers
+ * set to 0). In case the chip has not gone through any limit
+ * setting yet, we'd better set the high limits to the max so that
+ * no alarm triggers.
+ */
+ for (i=0; i<6; i++) {
+ reg = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MAX(i));
+ if (reg == 0)
+ i2c_smbus_write_byte_data(client,
+ ADM1025_REG_IN_MAX(i),
+ 0xFF);
+ }
+ for (i=0; i<2; i++) {
+ reg = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i));
+ if (reg == 0)
+ i2c_smbus_write_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i),
+ 0x7F);
+ }
+
+ /*
+ * Start the conversions
+ */
+ reg = i2c_smbus_read_byte_data(client, ADM1025_REG_CONFIG);
+ if (!(reg & 0x01))
+ i2c_smbus_write_byte_data(client, ADM1025_REG_CONFIG,
+ (reg&0x7E)|0x01);
+}
+
+static int adm1025_detach_client(struct i2c_client *client)
+{
+ int err;
+
+ if ((err = i2c_detach_client(client))) {
+ dev_err(&client->dev, "Client deregistration failed, "
+ "client not detached.\n");
+ return err;
+ }
+
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static struct adm1025_data *adm1025_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1025_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ * 2) ||
+ (jiffies < data->last_updated) ||
+ !data->valid) {
+ int i;
+
+ dev_dbg(&client->dev, "Updating data.\n");
+ for (i=0; i<6; i++) {
+ data->in[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN(i));
+ data->in_min[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MIN(i));
+ data->in_max[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_IN_MAX(i));
+ }
+ for (i=0; i<2; i++) {
+ data->temp[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP(i));
+ data->temp_min[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_LOW(i));
+ data->temp_max[i] = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_TEMP_HIGH(i));
+ }
+ data->alarms = i2c_smbus_read_byte_data(client,
+ ADM1025_REG_STATUS1)
+ | (i2c_smbus_read_byte_data(client,
+ ADM1025_REG_STATUS2) << 8);
+ data->vid = (i2c_smbus_read_byte_data(client,
+ ADM1025_REG_VID) & 0x0f)
+ | ((i2c_smbus_read_byte_data(client,
+ ADM1025_REG_VID4) & 0x01) << 4);
+
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_adm1025_init(void)
+{
+ return i2c_add_driver(&adm1025_driver);
+}
+
+static void __exit sensors_adm1025_exit(void)
+{
+ i2c_del_driver(&adm1025_driver);
+}
+
+MODULE_AUTHOR("Jean Delvare <khali@linux-fr.org>");
+MODULE_DESCRIPTION("ADM1025 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_adm1025_init);
+module_exit(sensors_adm1025_exit);
--- /dev/null
+/*
+ adm1031.c - Part of lm_sensors, Linux kernel modules for hardware
+ monitoring
+ Based on lm75.c and lm85.c
+ Supports adm1030 / adm1031
+ Copyright (C) 2004 Alexandre d'Alton <alex@alexdalton.org>
+ Reworked by Jean Delvare <khali@linux-fr.org>
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+/* The following macros take a channel parameter ranging from 0 to 2 */
+#define ADM1031_REG_FAN_SPEED(nr) (0x08 + (nr))
+#define ADM1031_REG_FAN_DIV(nr) (0x20 + (nr))
+#define ADM1031_REG_PWM (0x22)
+#define ADM1031_REG_FAN_MIN(nr) (0x10 + (nr))
+
+#define ADM1031_REG_TEMP_MAX(nr) (0x14 + 4*(nr))
+#define ADM1031_REG_TEMP_MIN(nr) (0x15 + 4*(nr))
+#define ADM1031_REG_TEMP_CRIT(nr) (0x16 + 4*(nr))
+
+#define ADM1031_REG_TEMP(nr) (0xa + (nr))
+#define ADM1031_REG_AUTO_TEMP(nr) (0x24 + (nr))
+
+#define ADM1031_REG_STATUS(nr) (0x2 + (nr))
+
+#define ADM1031_REG_CONF1 0x0
+#define ADM1031_REG_CONF2 0x1
+#define ADM1031_REG_EXT_TEMP 0x6
+
+#define ADM1031_CONF1_MONITOR_ENABLE 0x01 /* Monitoring enable */
+#define ADM1031_CONF1_PWM_INVERT 0x08 /* PWM Invert */
+#define ADM1031_CONF1_AUTO_MODE 0x80 /* Auto FAN */
+
+#define ADM1031_CONF2_PWM1_ENABLE 0x01
+#define ADM1031_CONF2_PWM2_ENABLE 0x02
+#define ADM1031_CONF2_TACH1_ENABLE 0x04
+#define ADM1031_CONF2_TACH2_ENABLE 0x08
+#define ADM1031_CONF2_TEMP_ENABLE(chan) (0x10 << (chan))
+
+/* Addresses to scan */
+static unsigned short normal_i2c[] = { 0x2c, 0x2d, 0x2e, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+
+/* Insmod parameters */
+SENSORS_INSMOD_2(adm1030, adm1031);
+
+typedef u8 auto_chan_table_t[8][2];
+
+/* Each client has this additional data */
+struct adm1031_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ int chip_type;
+ char valid; /* !=0 if following fields are valid */
+ unsigned long last_updated; /* In jiffies */
+ /* The chan_select_table contains the possible configurations for
+ * auto fan control.
+ */
+ auto_chan_table_t *chan_select_table;
+ u16 alarm;
+ u8 conf1;
+ u8 conf2;
+ u8 fan[2];
+ u8 fan_div[2];
+ u8 fan_min[2];
+ u8 pwm[2];
+ u8 old_pwm[2];
+ s8 temp[3];
+ u8 ext_temp[3];
+ u8 auto_temp[3];
+ u8 auto_temp_min[3];
+ u8 auto_temp_off[3];
+ u8 auto_temp_max[3];
+ s8 temp_min[3];
+ s8 temp_max[3];
+ s8 temp_crit[3];
+};
+
+static int adm1031_attach_adapter(struct i2c_adapter *adapter);
+static int adm1031_detect(struct i2c_adapter *adapter, int address, int kind);
+static void adm1031_init_client(struct i2c_client *client);
+static int adm1031_detach_client(struct i2c_client *client);
+static struct adm1031_data *adm1031_update_device(struct device *dev);
+
+/* This is the driver that will be inserted */
+static struct i2c_driver adm1031_driver = {
+ .owner = THIS_MODULE,
+ .name = "adm1031",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = adm1031_attach_adapter,
+ .detach_client = adm1031_detach_client,
+};
+
+static int adm1031_id;
+
+static inline u8 adm1031_read_value(struct i2c_client *client, u8 reg)
+{
+ return i2c_smbus_read_byte_data(client, reg);
+}
+
+static inline int
+adm1031_write_value(struct i2c_client *client, u8 reg, unsigned int value)
+{
+ return i2c_smbus_write_byte_data(client, reg, value);
+}
+
+
+#define TEMP_TO_REG(val) (((val) < 0 ? ((val - 500) / 1000) : \
+ ((val + 500) / 1000)))
+
+#define TEMP_FROM_REG(val) ((val) * 1000)
+
+#define TEMP_FROM_REG_EXT(val, ext) (TEMP_FROM_REG(val) + (ext) * 125)
+
+#define FAN_FROM_REG(reg, div) ((reg) ? (11250 * 60) / ((reg) * (div)) : 0)
+
+static int FAN_TO_REG(int reg, int div)
+{
+ int tmp;
+ tmp = FAN_FROM_REG(SENSORS_LIMIT(reg, 0, 65535), div);
+ return tmp > 255 ? 255 : tmp;
+}
+
+#define FAN_DIV_FROM_REG(reg) (1<<(((reg)&0xc0)>>6))
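+
+/*
+ * Worked example (illustrative): with a register reading of 105 and
+ * a divider of 2, FAN_FROM_REG(105, 2) = (11250 * 60) / (105 * 2) =
+ * 3214 RPM; a reading of 0 is reported as 0 RPM.
+ */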
+
+#define PWM_TO_REG(val) (SENSORS_LIMIT((val), 0, 255) >> 4)
+#define PWM_FROM_REG(val) ((val) << 4)
+
+#define FAN_CHAN_FROM_REG(reg) (((reg) >> 5) & 7)
+#define FAN_CHAN_TO_REG(val, reg) \
+ (((reg) & 0x1F) | (((val) << 5) & 0xe0))
+
+#define AUTO_TEMP_MIN_TO_REG(val, reg) \
+ ((((val)/500) & 0xf8)|((reg) & 0x7))
+#define AUTO_TEMP_RANGE_FROM_REG(reg) (5000 * (1<< ((reg)&0x7)))
+#define AUTO_TEMP_MIN_FROM_REG(reg) (1000 * ((((reg) >> 3) & 0x1f) << 2))
+
+#define AUTO_TEMP_MIN_FROM_REG_DEG(reg) ((((reg) >> 3) & 0x1f) << 2)
+
+#define AUTO_TEMP_OFF_FROM_REG(reg) \
+ (AUTO_TEMP_MIN_FROM_REG(reg) - 5000)
+
+#define AUTO_TEMP_MAX_FROM_REG(reg) \
+ (AUTO_TEMP_RANGE_FROM_REG(reg) + \
+ AUTO_TEMP_MIN_FROM_REG(reg))
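+
+/*
+ * Worked example (illustrative): for an auto_temp register value of
+ * 0x48, AUTO_TEMP_MIN_FROM_REG gives 36000 (36 degrees C), the range
+ * bits (0x48 & 7) == 0 select a 5000 range, so AUTO_TEMP_OFF_FROM_REG
+ * is 31000 and AUTO_TEMP_MAX_FROM_REG is 41000.
+ */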
+
+static int AUTO_TEMP_MAX_TO_REG(int val, int reg, int pwm)
+{
+ int ret;
+ int range = ((val - AUTO_TEMP_MIN_FROM_REG(reg)) * 10) / (16 - pwm);
+
+ ret = ((reg & 0xf8) |
+ (range < 10000 ? 0 :
+ range < 20000 ? 1 :
+ range < 40000 ? 2 : range < 80000 ? 3 : 4));
+ return ret;
+}
+
+/* FAN auto control */
+#define GET_FAN_AUTO_BITFIELD(data, idx) \
+ (*(data)->chan_select_table)[FAN_CHAN_FROM_REG((data)->conf1)][idx%2]
+
+/* The tables below contain the possible values for the auto fan
+ * control bitfields. The index in the table is the register value.
+ * The MSB is the auto fan control enable bit, so the first four
+ * entries in the table disable auto fan control when both bitfields
+ * are zero.
+ */
+static auto_chan_table_t auto_channel_select_table_adm1031 = {
+ {0, 0}, {0, 0}, {0, 0}, {0, 0},
+ {2 /*0b010 */ , 4 /*0b100 */ },
+ {2 /*0b010 */ , 2 /*0b010 */ },
+ {4 /*0b100 */ , 4 /*0b100 */ },
+ {7 /*0b111 */ , 7 /*0b111 */ },
+};
+
+static auto_chan_table_t auto_channel_select_table_adm1030 = {
+ {0, 0}, {0, 0}, {0, 0}, {0, 0},
+ {2 /*0b10 */ , 0},
+ {0xff /*invalid */ , 0},
+ {0xff /*invalid */ , 0},
+ {3 /*0b11 */ , 0},
+};
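+
+/*
+ * Reading the ADM1031 table above, for example: a conf1 fan-channel
+ * field of 5 (0b101) selects row {2, 2}, i.e. remote temp1 drives
+ * both fans, while 7 (0b111) selects {7, 7}, the maximum of all
+ * temperature channels.  Rows 0-3 have the MSB clear, so auto fan
+ * control stays disabled there.
+ */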
+
+/* This function checks whether a bitfield is valid. It returns an
+ * exact match (both channels agree) when one exists, and otherwise
+ * falls back to the first entry matching this channel's bitfield.
+ */
+static int
+get_fan_auto_nearest(struct adm1031_data *data,
+ int chan, u8 val, u8 reg, u8 * new_reg)
+{
+ int i;
+ int first_match = -1, exact_match = -1;
+ u8 other_reg_val =
+ (*data->chan_select_table)[FAN_CHAN_FROM_REG(reg)][chan ? 0 : 1];
+
+ if (val == 0) {
+ *new_reg = 0;
+ return 0;
+ }
+
+ for (i = 0; i < 8; i++) {
+ if ((val == (*data->chan_select_table)[i][chan]) &&
+ ((*data->chan_select_table)[i][chan ? 0 : 1] ==
+ other_reg_val)) {
+ /* We found an exact match */
+ exact_match = i;
+ break;
+ } else if (val == (*data->chan_select_table)[i][chan] &&
+ first_match == -1) {
+ /* Save the first match in case an exact match is
+ * not found
+ */
+ first_match = i;
+ }
+ }
+
+ if (exact_match >= 0) {
+ *new_reg = exact_match;
+ } else if (first_match >= 0) {
+ *new_reg = first_match;
+ } else {
+ return -EINVAL;
+ }
+ return 0;
+}
+
+static ssize_t show_fan_auto_channel(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", GET_FAN_AUTO_BITFIELD(data, nr));
+}
+
+static ssize_t
+set_fan_auto_channel(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ u8 reg;
+ int ret;
+ u8 old_fan_mode;
+
+ old_fan_mode = data->conf1;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+
+ if ((ret = get_fan_auto_nearest(data, nr, val, data->conf1, &reg))) {
+ up(&data->update_lock);
+ return ret;
+ }
+ if (((data->conf1 = FAN_CHAN_TO_REG(reg, data->conf1)) & ADM1031_CONF1_AUTO_MODE) ^
+ (old_fan_mode & ADM1031_CONF1_AUTO_MODE)) {
+ if (data->conf1 & ADM1031_CONF1_AUTO_MODE){
+ /* Switch to Auto Fan Mode
+ * Save PWM registers
+ * Set PWM registers to 33% Both */
+ data->old_pwm[0] = data->pwm[0];
+ data->old_pwm[1] = data->pwm[1];
+ adm1031_write_value(client, ADM1031_REG_PWM, 0x55);
+ } else {
+ /* Switch to Manual Mode */
+ data->pwm[0] = data->old_pwm[0];
+ data->pwm[1] = data->old_pwm[1];
+ /* Restore PWM registers */
+ adm1031_write_value(client, ADM1031_REG_PWM,
+ data->pwm[0] | (data->pwm[1] << 4));
+ }
+ }
+ data->conf1 = FAN_CHAN_TO_REG(reg, data->conf1);
+ adm1031_write_value(client, ADM1031_REG_CONF1, data->conf1);
+ up(&data->update_lock);
+ return count;
+}
+
+#define fan_auto_channel_offset(offset) \
+static ssize_t show_fan_auto_channel_##offset (struct device *dev, char *buf) \
+{ \
+ return show_fan_auto_channel(dev, buf, offset - 1); \
+} \
+static ssize_t set_fan_auto_channel_##offset (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_auto_channel(dev, buf, count, offset - 1); \
+} \
+static DEVICE_ATTR(auto_fan##offset##_channel, S_IRUGO | S_IWUSR, \
+ show_fan_auto_channel_##offset, \
+ set_fan_auto_channel_##offset)
+
+fan_auto_channel_offset(1);
+fan_auto_channel_offset(2);
+
+/* Auto Temps */
+static ssize_t show_auto_temp_off(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_OFF_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t show_auto_temp_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_MIN_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t
+set_auto_temp_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ data->auto_temp[nr] = AUTO_TEMP_MIN_TO_REG(val, data->auto_temp[nr]);
+ adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr),
+ data->auto_temp[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t show_auto_temp_max(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ AUTO_TEMP_MAX_FROM_REG(data->auto_temp[nr]));
+}
+static ssize_t
+set_auto_temp_max(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ data->temp_max[nr] = AUTO_TEMP_MAX_TO_REG(val, data->auto_temp[nr], data->pwm[nr]);
+ adm1031_write_value(client, ADM1031_REG_AUTO_TEMP(nr),
+ data->temp_max[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define auto_temp_reg(offset) \
+static ssize_t show_auto_temp_##offset##_off (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_off(dev, buf, offset - 1); \
+} \
+static ssize_t show_auto_temp_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_min(dev, buf, offset - 1); \
+} \
+static ssize_t show_auto_temp_##offset##_max (struct device *dev, char *buf) \
+{ \
+ return show_auto_temp_max(dev, buf, offset - 1); \
+} \
+static ssize_t set_auto_temp_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_auto_temp_min(dev, buf, count, offset - 1); \
+} \
+static ssize_t set_auto_temp_##offset##_max (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_auto_temp_max(dev, buf, count, offset - 1); \
+} \
+static DEVICE_ATTR(auto_temp##offset##_off, S_IRUGO, \
+ show_auto_temp_##offset##_off, NULL); \
+static DEVICE_ATTR(auto_temp##offset##_min, S_IRUGO | S_IWUSR, \
+ show_auto_temp_##offset##_min, set_auto_temp_##offset##_min);\
+static DEVICE_ATTR(auto_temp##offset##_max, S_IRUGO | S_IWUSR, \
+ show_auto_temp_##offset##_max, set_auto_temp_##offset##_max)
+
+auto_temp_reg(1);
+auto_temp_reg(2);
+auto_temp_reg(3);
+
+/* pwm */
+static ssize_t show_pwm(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", PWM_FROM_REG(data->pwm[nr]));
+}
+static ssize_t
+set_pwm(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ int reg;
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ if ((data->conf1 & ADM1031_CONF1_AUTO_MODE) &&
+ (((val>>4) & 0xf) != 5)) {
+ /* In automatic mode, the only PWM accepted is 33% */
+ up(&data->update_lock);
+ return -EINVAL;
+ }
+ data->pwm[nr] = PWM_TO_REG(val);
+ reg = adm1031_read_value(client, ADM1031_REG_PWM);
+ adm1031_write_value(client, ADM1031_REG_PWM,
+ nr ? ((data->pwm[nr] << 4) & 0xf0) | (reg & 0xf)
+ : (data->pwm[nr] & 0xf) | (reg & 0xf0));
+ up(&data->update_lock);
+ return count;
+}
+
+#define pwm_reg(offset) \
+static ssize_t show_pwm_##offset (struct device *dev, char *buf) \
+{ \
+ return show_pwm(dev, buf, offset - 1); \
+} \
+static ssize_t set_pwm_##offset (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_pwm(dev, buf, count, offset - 1); \
+} \
+static DEVICE_ATTR(pwm##offset, S_IRUGO | S_IWUSR, \
+ show_pwm_##offset, set_pwm_##offset)
+
+pwm_reg(1);
+pwm_reg(2);
+
+/* Fans */
+
+/*
+ * This function checks the cases where the fan reading is not
+ * relevant. It is used to report a fan speed of 0 when the fan is
+ * not supposed to run.
+ */
+static int trust_fan_readings(struct adm1031_data *data, int chan)
+{
+ int res = 0;
+
+ if (data->conf1 & ADM1031_CONF1_AUTO_MODE) {
+ switch (data->conf1 & 0x60) {
+ case 0x00: /* remote temp1 controls fan1, remote temp2 controls fan2 */
+ res = data->temp[chan+1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[chan+1]);
+ break;
+ case 0x20: /* remote temp1 controls both fans */
+ res =
+ data->temp[1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[1]);
+ break;
+ case 0x40: /* remote temp2 controls both fans */
+ res =
+ data->temp[2] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[2]);
+ break;
+ case 0x60: /* max controls both fans */
+ res =
+ data->temp[0] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[0])
+ || data->temp[1] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[1])
+ || (data->chip_type == adm1031
+ && data->temp[2] >=
+ AUTO_TEMP_MIN_FROM_REG_DEG(data->auto_temp[2]));
+ break;
+ }
+ } else {
+ res = data->pwm[chan] > 0;
+ }
+ return res;
+}
+
+
+static ssize_t show_fan(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ int value;
+
+ value = trust_fan_readings(data, nr) ? FAN_FROM_REG(data->fan[nr],
+ FAN_DIV_FROM_REG(data->fan_div[nr])) : 0;
+ return sprintf(buf, "%d\n", value);
+}
+
+static ssize_t show_fan_div(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", FAN_DIV_FROM_REG(data->fan_div[nr]));
+}
+static ssize_t show_fan_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n",
+ FAN_FROM_REG(data->fan_min[nr],
+ FAN_DIV_FROM_REG(data->fan_div[nr])));
+}
+static ssize_t
+set_fan_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ down(&data->update_lock);
+ val = simple_strtol(buf, NULL, 10);
+ if (val) {
+ data->fan_min[nr] =
+ FAN_TO_REG(val, FAN_DIV_FROM_REG(data->fan_div[nr]));
+ } else {
+ data->fan_min[nr] = 0xff;
+ }
+ adm1031_write_value(client, ADM1031_REG_FAN_MIN(nr), data->fan_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_fan_div(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+ u8 tmp;
+ int old_div = FAN_DIV_FROM_REG(data->fan_div[nr]);
+ int new_min;
+
+ val = simple_strtol(buf, NULL, 10);
+ tmp = val == 8 ? 0xc0 :
+ val == 4 ? 0x80 :
+ val == 2 ? 0x40 :
+ val == 1 ? 0x00 :
+ 0xff;
+ if (tmp == 0xff)
+ return -EINVAL;
+ down(&data->update_lock);
+ data->fan_div[nr] = (tmp & 0xC0) | (0x3f & data->fan_div[nr]);
+ new_min = data->fan_min[nr] * old_div /
+ FAN_DIV_FROM_REG(data->fan_div[nr]);
+ data->fan_min[nr] = new_min > 0xff ? 0xff : new_min;
+ data->fan[nr] = data->fan[nr] * old_div /
+ FAN_DIV_FROM_REG(data->fan_div[nr]);
+
+ adm1031_write_value(client, ADM1031_REG_FAN_DIV(nr),
+ data->fan_div[nr]);
+ adm1031_write_value(client, ADM1031_REG_FAN_MIN(nr),
+ data->fan_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define fan_offset(offset) \
+static ssize_t show_fan_##offset (struct device *dev, char *buf) \
+{ \
+ return show_fan(dev, buf, offset - 1); \
+} \
+static ssize_t show_fan_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_fan_min(dev, buf, offset - 1); \
+} \
+static ssize_t show_fan_##offset##_div (struct device *dev, char *buf) \
+{ \
+ return show_fan_div(dev, buf, offset - 1); \
+} \
+static ssize_t set_fan_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_min(dev, buf, count, offset - 1); \
+} \
+static ssize_t set_fan_##offset##_div (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_fan_div(dev, buf, count, offset - 1); \
+} \
+static DEVICE_ATTR(fan##offset##_input, S_IRUGO, show_fan_##offset, \
+ NULL); \
+static DEVICE_ATTR(fan##offset##_min, S_IRUGO | S_IWUSR, \
+ show_fan_##offset##_min, set_fan_##offset##_min); \
+static DEVICE_ATTR(fan##offset##_div, S_IRUGO | S_IWUSR, \
+ show_fan_##offset##_div, set_fan_##offset##_div); \
+static DEVICE_ATTR(auto_fan##offset##_min_pwm, S_IRUGO | S_IWUSR, \
+ show_pwm_##offset, set_pwm_##offset)
+
+fan_offset(1);
+fan_offset(2);
+
+
+/* Temps */
+static ssize_t show_temp(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ int ext;
+ ext = nr == 0 ?
+ ((data->ext_temp[nr] >> 6) & 0x3) * 2 :
+ (((data->ext_temp[nr] >> ((nr - 1) * 3)) & 7));
+ return sprintf(buf, "%d\n", TEMP_FROM_REG_EXT(data->temp[nr], ext));
+}
+static ssize_t show_temp_min(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_min[nr]));
+}
+static ssize_t show_temp_max(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_max[nr]));
+}
+static ssize_t show_temp_crit(struct device *dev, char *buf, int nr)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->temp_crit[nr]));
+}
+static ssize_t
+set_temp_min(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_min[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_MIN(nr),
+ data->temp_min[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_temp_max(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_max[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_MAX(nr),
+ data->temp_max[nr]);
+ up(&data->update_lock);
+ return count;
+}
+static ssize_t
+set_temp_crit(struct device *dev, const char *buf, size_t count, int nr)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int val;
+
+ val = simple_strtol(buf, NULL, 10);
+ val = SENSORS_LIMIT(val, -55000, nr == 0 ? 127750 : 127875);
+ down(&data->update_lock);
+ data->temp_crit[nr] = TEMP_TO_REG(val);
+ adm1031_write_value(client, ADM1031_REG_TEMP_CRIT(nr),
+ data->temp_crit[nr]);
+ up(&data->update_lock);
+ return count;
+}
+
+#define temp_reg(offset) \
+static ssize_t show_temp_##offset (struct device *dev, char *buf) \
+{ \
+ return show_temp(dev, buf, offset - 1); \
+} \
+static ssize_t show_temp_##offset##_min (struct device *dev, char *buf) \
+{ \
+ return show_temp_min(dev, buf, offset - 1); \
+} \
+static ssize_t show_temp_##offset##_max (struct device *dev, char *buf) \
+{ \
+ return show_temp_max(dev, buf, offset - 1); \
+} \
+static ssize_t show_temp_##offset##_crit (struct device *dev, char *buf) \
+{ \
+ return show_temp_crit(dev, buf, offset - 1); \
+} \
+static ssize_t set_temp_##offset##_min (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_min(dev, buf, count, offset - 1); \
+} \
+static ssize_t set_temp_##offset##_max (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_max(dev, buf, count, offset - 1); \
+} \
+static ssize_t set_temp_##offset##_crit (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ return set_temp_crit(dev, buf, count, offset - 1); \
+} \
+static DEVICE_ATTR(temp##offset##_input, S_IRUGO, show_temp_##offset, \
+ NULL); \
+static DEVICE_ATTR(temp##offset##_min, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_min, set_temp_##offset##_min); \
+static DEVICE_ATTR(temp##offset##_max, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_max, set_temp_##offset##_max); \
+static DEVICE_ATTR(temp##offset##_crit, S_IRUGO | S_IWUSR, \
+ show_temp_##offset##_crit, set_temp_##offset##_crit)
+
+temp_reg(1);
+temp_reg(2);
+temp_reg(3);
+
+/* Alarms */
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct adm1031_data *data = adm1031_update_device(dev);
+ return sprintf(buf, "%d\n", data->alarm);
+}
+
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+
+static int adm1031_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, adm1031_detect);
+}
+
+/* This function is called by i2c_detect */
+static int adm1031_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct adm1031_data *data;
+ int err = 0;
+ const char *name = "";
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct adm1031_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct adm1031_data));
+
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &adm1031_driver;
+ new_client->flags = 0;
+
+ if (kind < 0) {
+ int id, co;
+ id = i2c_smbus_read_byte_data(new_client, 0x3d);
+ co = i2c_smbus_read_byte_data(new_client, 0x3e);
+
+ if (!((id == 0x31 || id == 0x30) && co == 0x41))
+ goto exit_free;
+ kind = (id == 0x30) ? adm1030 : adm1031;
+ }
+
+ if (kind <= 0)
+ kind = adm1031;
+
+ /* Given the detected chip type, set the chip name and the
+ * auto fan control helper table. */
+ if (kind == adm1030) {
+ name = "adm1030";
+ data->chan_select_table = &auto_channel_select_table_adm1030;
+ } else if (kind == adm1031) {
+ name = "adm1031";
+ data->chan_select_table = &auto_channel_select_table_adm1031;
+ }
+ data->chip_type = kind;
+
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+
+ new_client->id = adm1031_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the ADM1031 chip */
+ adm1031_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_fan1_input);
+ device_create_file(&new_client->dev, &dev_attr_fan1_div);
+ device_create_file(&new_client->dev, &dev_attr_fan1_min);
+ device_create_file(&new_client->dev, &dev_attr_pwm1);
+ device_create_file(&new_client->dev, &dev_attr_auto_fan1_channel);
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp1_max);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp2_max);
+
+ device_create_file(&new_client->dev, &dev_attr_auto_fan1_min_pwm);
+
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+
+ if (kind == adm1031) {
+ device_create_file(&new_client->dev, &dev_attr_fan2_input);
+ device_create_file(&new_client->dev, &dev_attr_fan2_div);
+ device_create_file(&new_client->dev, &dev_attr_fan2_min);
+ device_create_file(&new_client->dev, &dev_attr_pwm2);
+ device_create_file(&new_client->dev,
+ &dev_attr_auto_fan2_channel);
+ device_create_file(&new_client->dev, &dev_attr_temp3_input);
+ device_create_file(&new_client->dev, &dev_attr_temp3_min);
+ device_create_file(&new_client->dev, &dev_attr_temp3_max);
+ device_create_file(&new_client->dev, &dev_attr_temp3_crit);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_off);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_min);
+ device_create_file(&new_client->dev, &dev_attr_auto_temp3_max);
+ device_create_file(&new_client->dev, &dev_attr_auto_fan2_min_pwm);
+ }
+
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static int adm1031_detach_client(struct i2c_client *client)
+{
+ int ret;
+ if ((ret = i2c_detach_client(client)) != 0) {
+ return ret;
+ }
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static void adm1031_init_client(struct i2c_client *client)
+{
+ unsigned int read_val;
+ unsigned int mask;
+ struct adm1031_data *data = i2c_get_clientdata(client);
+
+ mask = (ADM1031_CONF2_PWM1_ENABLE | ADM1031_CONF2_TACH1_ENABLE);
+ if (data->chip_type == adm1031) {
+ mask |= (ADM1031_CONF2_PWM2_ENABLE |
+ ADM1031_CONF2_TACH2_ENABLE);
+ }
+ /* Initialize the ADM1031 chip (enables fan speed reading ) */
+ read_val = adm1031_read_value(client, ADM1031_REG_CONF2);
+ if ((read_val | mask) != read_val) {
+ adm1031_write_value(client, ADM1031_REG_CONF2, read_val | mask);
+ }
+
+ read_val = adm1031_read_value(client, ADM1031_REG_CONF1);
+ if ((read_val | ADM1031_CONF1_MONITOR_ENABLE) != read_val) {
+ adm1031_write_value(client, ADM1031_REG_CONF1, read_val |
+ ADM1031_CONF1_MONITOR_ENABLE);
+ }
+
+}
+
+static struct adm1031_data *adm1031_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct adm1031_data *data = i2c_get_clientdata(client);
+ int chan;
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ + HZ / 2) ||
+ (jiffies < data->last_updated) || !data->valid) {
+
+ dev_dbg(&client->dev, "Starting adm1031 update\n");
+ for (chan = 0;
+ chan < ((data->chip_type == adm1031) ? 3 : 2); chan++) {
+ u8 oldh, newh;
+
+ oldh =
+ adm1031_read_value(client, ADM1031_REG_TEMP(chan));
+ data->ext_temp[chan] =
+ adm1031_read_value(client, ADM1031_REG_EXT_TEMP);
+ newh =
+ adm1031_read_value(client, ADM1031_REG_TEMP(chan));
+ if (newh != oldh) {
+ data->ext_temp[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_EXT_TEMP);
+#ifdef DEBUG
+ oldh =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP(chan));
+
+ /* oldh is actually newer */
+ if (newh != oldh)
+ dev_warn(&client->dev,
+ "Remote temperature may be "
+ "wrong.\n");
+#endif
+ }
+ data->temp[chan] = newh;
+
+ data->temp_min[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_MIN(chan));
+ data->temp_max[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_MAX(chan));
+ data->temp_crit[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_TEMP_CRIT(chan));
+ data->auto_temp[chan] =
+ adm1031_read_value(client,
+ ADM1031_REG_AUTO_TEMP(chan));
+
+ }
+
+ data->conf1 = adm1031_read_value(client, ADM1031_REG_CONF1);
+ data->conf2 = adm1031_read_value(client, ADM1031_REG_CONF2);
+
+ data->alarm = adm1031_read_value(client, ADM1031_REG_STATUS(0))
+ | (adm1031_read_value(client, ADM1031_REG_STATUS(1))
+ << 8);
+ if (data->chip_type == adm1030) {
+ data->alarm &= 0xc0ff;
+ }
+
+ for (chan=0; chan<(data->chip_type == adm1030 ? 1 : 2); chan++) {
+ data->fan_div[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_DIV(chan));
+ data->fan_min[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_MIN(chan));
+ data->fan[chan] =
+ adm1031_read_value(client, ADM1031_REG_FAN_SPEED(chan));
+ data->pwm[chan] =
+ 0xf & (adm1031_read_value(client, ADM1031_REG_PWM) >>
+ (4*chan));
+ }
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_adm1031_init(void)
+{
+ return i2c_add_driver(&adm1031_driver);
+}
+
+static void __exit sensors_adm1031_exit(void)
+{
+ i2c_del_driver(&adm1031_driver);
+}
+
+MODULE_AUTHOR("Alexandre d'Alton <alex@alexdalton.org>");
+MODULE_DESCRIPTION("ADM1031/ADM1030 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_adm1031_init);
+module_exit(sensors_adm1031_exit);
--- /dev/null
+/*
+ lm77.c - Part of lm_sensors, Linux kernel modules for hardware
+ monitoring
+
+ Copyright (c) 2004 Andras BALI <drewie@freemail.hu>
+
+ Heavily based on lm75.c by Frodo Looijaard <frodol@dds.nl>. The LM77
+ is a temperature sensor and thermal window comparator with 0.5 deg
+ resolution made by National Semiconductor. Complete datasheet can be
+ obtained at their site:
+ http://www.national.com/pf/LM/LM77.html
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+*/
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+
+/* Addresses to scan */
+static unsigned short normal_i2c[] = { 0x48, 0x49, 0x4a, 0x4b, I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+
+/* Insmod parameters */
+SENSORS_INSMOD_1(lm77);
+
+/* The LM77 registers */
+#define LM77_REG_TEMP 0x00
+#define LM77_REG_CONF 0x01
+#define LM77_REG_TEMP_HYST 0x02
+#define LM77_REG_TEMP_CRIT 0x03
+#define LM77_REG_TEMP_MIN 0x04
+#define LM77_REG_TEMP_MAX 0x05
+
+/* Each client has this additional data */
+struct lm77_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid;
+ unsigned long last_updated; /* In jiffies */
+ int temp_input; /* Temperatures */
+ int temp_crit;
+ int temp_min;
+ int temp_max;
+ int temp_hyst;
+ u8 alarms;
+};
+
+static int lm77_attach_adapter(struct i2c_adapter *adapter);
+static int lm77_detect(struct i2c_adapter *adapter, int address, int kind);
+static void lm77_init_client(struct i2c_client *client);
+static int lm77_detach_client(struct i2c_client *client);
+static u16 lm77_read_value(struct i2c_client *client, u8 reg);
+static int lm77_write_value(struct i2c_client *client, u8 reg, u16 value);
+
+static struct lm77_data *lm77_update_device(struct device *dev);
+
+/* This is the driver that will be inserted */
+static struct i2c_driver lm77_driver = {
+ .owner = THIS_MODULE,
+ .name = "lm77",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = lm77_attach_adapter,
+ .detach_client = lm77_detach_client,
+};
+
+static int lm77_id;
+
+/* straight from the datasheet */
+#define LM77_TEMP_MIN (-55000)
+#define LM77_TEMP_MAX 125000
+
+/* In the temperature registers, the low 3 bits are not part of the
+ temperature values; they are the status bits. */
+static inline u16 LM77_TEMP_TO_REG(int temp)
+{
+ int ntemp = SENSORS_LIMIT(temp, LM77_TEMP_MIN, LM77_TEMP_MAX);
+ return (u16)((ntemp / 500) * 8);
+}
+
+static inline int LM77_TEMP_FROM_REG(u16 reg)
+{
+	/* the register is two's complement, so sign-extend before scaling */
+	return ((s16)reg / 8) * 500;
+}
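+
+/* Worked example (a sanity check, not from the datasheet): 25500
+   (25.5 degrees C in millidegrees) -> 25500 / 500 * 8 = 408 = 0x198,
+   and LM77_TEMP_FROM_REG(408) -> 408 / 8 * 500 = 25500 again. */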
+
+/* sysfs stuff */
+
+/* read routines for temperature limits */
+#define show(value) \
+static ssize_t show_##value(struct device *dev, char *buf) \
+{ \
+ struct lm77_data *data = lm77_update_device(dev); \
+ return sprintf(buf, "%d\n", data->value); \
+}
+
+show(temp_input);
+show(temp_crit);
+show(temp_min);
+show(temp_max);
+show(alarms);
+
+/* read routines for hysteresis values */
+static ssize_t show_temp_crit_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_crit - data->temp_hyst);
+}
+static ssize_t show_temp_min_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_min + data->temp_hyst);
+}
+static ssize_t show_temp_max_hyst(struct device *dev, char *buf)
+{
+ struct lm77_data *data = lm77_update_device(dev);
+ return sprintf(buf, "%d\n", data->temp_max - data->temp_hyst);
+}
+
+/* write routines */
+#define set(value, reg) \
+static ssize_t set_##value(struct device *dev, const char *buf, size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct lm77_data *data = i2c_get_clientdata(client); \
+	data->value = simple_strtol(buf, NULL, 10); \
+ lm77_write_value(client, reg, LM77_TEMP_TO_REG(data->value)); \
+ return count; \
+}
+
+set(temp_min, LM77_REG_TEMP_MIN);
+set(temp_max, LM77_REG_TEMP_MAX);
+
+/* hysteresis is stored as a relative value on the chip, so it has to be
+ converted first */
+static ssize_t set_temp_crit_hyst(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+	data->temp_hyst = data->temp_crit - simple_strtol(buf, NULL, 10);
+ lm77_write_value(client, LM77_REG_TEMP_HYST,
+ LM77_TEMP_TO_REG(data->temp_hyst));
+ return count;
+}
+
+/* preserve hysteresis when setting T_crit */
+static ssize_t set_temp_crit(struct device *dev, const char *buf, size_t count)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+ int oldcrithyst = data->temp_crit - data->temp_hyst;
+	data->temp_crit = simple_strtol(buf, NULL, 10);
+ data->temp_hyst = data->temp_crit - oldcrithyst;
+ lm77_write_value(client, LM77_REG_TEMP_CRIT,
+ LM77_TEMP_TO_REG(data->temp_crit));
+ lm77_write_value(client, LM77_REG_TEMP_HYST,
+ LM77_TEMP_TO_REG(data->temp_hyst));
+ return count;
+}
+
+static DEVICE_ATTR(temp1_input, S_IRUGO,
+ show_temp_input, NULL);
+static DEVICE_ATTR(temp1_crit, S_IWUSR | S_IRUGO,
+ show_temp_crit, set_temp_crit);
+static DEVICE_ATTR(temp1_min, S_IWUSR | S_IRUGO,
+ show_temp_min, set_temp_min);
+static DEVICE_ATTR(temp1_max, S_IWUSR | S_IRUGO,
+ show_temp_max, set_temp_max);
+
+static DEVICE_ATTR(temp1_crit_hyst, S_IWUSR | S_IRUGO,
+ show_temp_crit_hyst, set_temp_crit_hyst);
+static DEVICE_ATTR(temp1_min_hyst, S_IRUGO,
+ show_temp_min_hyst, NULL);
+static DEVICE_ATTR(temp1_max_hyst, S_IRUGO,
+ show_temp_max_hyst, NULL);
+
+static DEVICE_ATTR(alarms, S_IRUGO,
+ show_alarms, NULL);
+
+static int lm77_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, lm77_detect);
+}
+
+/* This function is called by i2c_detect */
+static int lm77_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct lm77_data *data;
+ int err = 0;
+ const char *name = "";
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+ I2C_FUNC_SMBUS_WORD_DATA))
+ goto exit;
+
+ /* OK. For now, we presume we have a valid client. We now create the
+ client structure, even though we cannot fill it completely yet.
+ But it allows us to access lm77_{read,write}_value. */
+ if (!(data = kmalloc(sizeof(struct lm77_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct lm77_data));
+
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &lm77_driver;
+ new_client->flags = 0;
+
+ /* Here comes the remaining detection. Since the LM77 has no
+ register dedicated to identification, we have to rely on the
+ following tricks:
+
+ 1. the high 4 bits represent the sign and thus they should
+ always be the same
+ 2. the high 3 bits are unused in the configuration register
+ 3. addresses 0x06 and 0x07 return the last read value
+	   4. registers cycle over 8-address boundaries
+
+ Word-sized registers are high-byte first. */
+ if (kind < 0) {
+ int i, cur, conf, hyst, crit, min, max;
+
+ /* addresses cycling */
+ cur = i2c_smbus_read_word_data(new_client, 0);
+ conf = i2c_smbus_read_byte_data(new_client, 1);
+ hyst = i2c_smbus_read_word_data(new_client, 2);
+ crit = i2c_smbus_read_word_data(new_client, 3);
+ min = i2c_smbus_read_word_data(new_client, 4);
+ max = i2c_smbus_read_word_data(new_client, 5);
+ for (i = 8; i <= 0xff; i += 8)
+ if (i2c_smbus_read_byte_data(new_client, i + 1) != conf
+ || i2c_smbus_read_word_data(new_client, i + 2) != hyst
+ || i2c_smbus_read_word_data(new_client, i + 3) != crit
+ || i2c_smbus_read_word_data(new_client, i + 4) != min
+ || i2c_smbus_read_word_data(new_client, i + 5) != max)
+ goto exit_free;
+
+ /* sign bits */
+ if (((cur & 0x00f0) != 0xf0 && (cur & 0x00f0) != 0x0)
+ || ((hyst & 0x00f0) != 0xf0 && (hyst & 0x00f0) != 0x0)
+ || ((crit & 0x00f0) != 0xf0 && (crit & 0x00f0) != 0x0)
+ || ((min & 0x00f0) != 0xf0 && (min & 0x00f0) != 0x0)
+ || ((max & 0x00f0) != 0xf0 && (max & 0x00f0) != 0x0))
+ goto exit_free;
+
+ /* unused bits */
+ if (conf & 0xe0)
+ goto exit_free;
+
+ /* 0x06 and 0x07 return the last read value */
+ cur = i2c_smbus_read_word_data(new_client, 0);
+ if (i2c_smbus_read_word_data(new_client, 6) != cur
+ || i2c_smbus_read_word_data(new_client, 7) != cur)
+ goto exit_free;
+ hyst = i2c_smbus_read_word_data(new_client, 2);
+ if (i2c_smbus_read_word_data(new_client, 6) != hyst
+ || i2c_smbus_read_word_data(new_client, 7) != hyst)
+ goto exit_free;
+ min = i2c_smbus_read_word_data(new_client, 4);
+ if (i2c_smbus_read_word_data(new_client, 6) != min
+ || i2c_smbus_read_word_data(new_client, 7) != min)
+ goto exit_free;
+
+ }
+
+ /* Determine the chip type - only one kind supported! */
+ if (kind <= 0)
+ kind = lm77;
+
+ if (kind == lm77) {
+ name = "lm77";
+ }
+
+ /* Fill in the remaining client fields and put it into the global list */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+
+ new_client->id = lm77_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the LM77 chip */
+ lm77_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max);
+ device_create_file(&new_client->dev, &dev_attr_temp1_crit_hyst);
+ device_create_file(&new_client->dev, &dev_attr_temp1_min_hyst);
+ device_create_file(&new_client->dev, &dev_attr_temp1_max_hyst);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static int lm77_detach_client(struct i2c_client *client)
+{
+ i2c_detach_client(client);
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+/* All registers are word-sized, except for the configuration register.
+ The LM77 uses the high-byte first convention. */
+static u16 lm77_read_value(struct i2c_client *client, u8 reg)
+{
+ if (reg == LM77_REG_CONF)
+ return i2c_smbus_read_byte_data(client, reg);
+ else
+ return swab16(i2c_smbus_read_word_data(client, reg));
+}
+
+static int lm77_write_value(struct i2c_client *client, u8 reg, u16 value)
+{
+ if (reg == LM77_REG_CONF)
+ return i2c_smbus_write_byte_data(client, reg, value);
+ else
+ return i2c_smbus_write_word_data(client, reg, swab16(value));
+}
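+
+/* Note (explanatory, not from the datasheet): SMBus word transfers put
+   the first byte on the wire into the low byte of the result, while the
+   LM77 sends its high byte first, hence the swab16() calls above. */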
+
+static void lm77_init_client(struct i2c_client *client)
+{
+ /* Initialize the LM77 chip - turn off shutdown mode */
+ int conf = lm77_read_value(client, LM77_REG_CONF);
+ if (conf & 1)
+ lm77_write_value(client, LM77_REG_CONF, conf & 0xfe);
+}
+
+static struct lm77_data *lm77_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct lm77_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ + HZ / 2) ||
+ (jiffies < data->last_updated) || !data->valid) {
+ dev_dbg(&client->dev, "Starting lm77 update\n");
+ data->temp_input =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP));
+ data->temp_hyst =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_HYST));
+ data->temp_crit =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_CRIT));
+ data->temp_min =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_MIN));
+ data->temp_max =
+ LM77_TEMP_FROM_REG(lm77_read_value(client,
+ LM77_REG_TEMP_MAX));
+ data->alarms =
+ lm77_read_value(client, LM77_REG_TEMP) & 0x0007;
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_lm77_init(void)
+{
+ return i2c_add_driver(&lm77_driver);
+}
+
+static void __exit sensors_lm77_exit(void)
+{
+ i2c_del_driver(&lm77_driver);
+}
+
+MODULE_AUTHOR("Andras BALI <drewie@freemail.hu>");
+MODULE_DESCRIPTION("LM77 driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_lm77_init);
+module_exit(sensors_lm77_exit);
--- /dev/null
+/*
+ * max1619.c - Part of lm_sensors, Linux kernel modules for hardware
+ * monitoring
+ * Copyright (C) 2003-2004 Alexey Fisher <fishor@mail.ru>
+ * Jean Delvare <khali@linux-fr.org>
+ *
+ * Based on the lm90 driver. The MAX1619 is a sensor chip made by Maxim.
+ * It reports up to two temperatures (its own plus up to
+ * one external one). Complete datasheet can be
+ * obtained from Maxim's website at:
+ * http://pdfserv.maxim-ic.com/en/ds/MAX1619.pdf
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/i2c.h>
+#include <linux/i2c-sensor.h>
+
+static unsigned short normal_i2c[] = { 0x18, 0x19, 0x1a,
+ 0x29, 0x2a, 0x2b,
+ 0x4c, 0x4d, 0x4e,
+ I2C_CLIENT_END };
+static unsigned int normal_isa[] = { I2C_CLIENT_ISA_END };
+
+/*
+ * Insmod parameters
+ */
+
+SENSORS_INSMOD_1(max1619);
+
+/*
+ * The MAX1619 registers
+ */
+
+#define MAX1619_REG_R_MAN_ID 0xFE
+#define MAX1619_REG_R_CHIP_ID 0xFF
+#define MAX1619_REG_R_CONFIG 0x03
+#define MAX1619_REG_W_CONFIG 0x09
+#define MAX1619_REG_R_CONVRATE 0x04
+#define MAX1619_REG_W_CONVRATE 0x0A
+#define MAX1619_REG_R_STATUS 0x02
+#define MAX1619_REG_R_LOCAL_TEMP 0x00
+#define MAX1619_REG_R_REMOTE_TEMP 0x01
+#define MAX1619_REG_R_REMOTE_HIGH 0x07
+#define MAX1619_REG_W_REMOTE_HIGH 0x0D
+#define MAX1619_REG_R_REMOTE_LOW 0x08
+#define MAX1619_REG_W_REMOTE_LOW 0x0E
+#define MAX1619_REG_R_REMOTE_CRIT 0x10
+#define MAX1619_REG_W_REMOTE_CRIT 0x12
+#define MAX1619_REG_R_TCRIT_HYST 0x11
+#define MAX1619_REG_W_TCRIT_HYST 0x13
+
+/*
+ * Conversions and various macros
+ */
+
+#define TEMP_FROM_REG(val)	((((val) & 0x80) ? ((val) - 0x100) : (val)) * 1000)
+#define TEMP_TO_REG(val)	(((val) < 0 ? (val) + 0x100 * 1000 : (val)) / 1000)
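+
+/* Worked example (illustrative only): the register holds whole degrees
+   in 8-bit two's complement, so 0x19 -> 25000 (+25 degrees C) and
+   0xF6 -> (0xF6 - 0x100) * 1000 = -10000 (-10 degrees C); conversely
+   TEMP_TO_REG(-10000) = (-10000 + 256000) / 1000 = 246 = 0xF6. */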
+
+/*
+ * Functions declaration
+ */
+
+static int max1619_attach_adapter(struct i2c_adapter *adapter);
+static int max1619_detect(struct i2c_adapter *adapter, int address,
+ int kind);
+static void max1619_init_client(struct i2c_client *client);
+static int max1619_detach_client(struct i2c_client *client);
+static struct max1619_data *max1619_update_device(struct device *dev);
+
+/*
+ * Driver data (common to all clients)
+ */
+
+static struct i2c_driver max1619_driver = {
+ .owner = THIS_MODULE,
+ .name = "max1619",
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = max1619_attach_adapter,
+ .detach_client = max1619_detach_client,
+};
+
+/*
+ * Client data (each client gets its own)
+ */
+
+struct max1619_data {
+ struct i2c_client client;
+ struct semaphore update_lock;
+ char valid; /* zero until following fields are valid */
+ unsigned long last_updated; /* in jiffies */
+
+	/* register values */
+ u8 temp_input1; /* local */
+ u8 temp_input2, temp_low2, temp_high2; /* remote */
+ u8 temp_crit2;
+ u8 temp_hyst2;
+ u8 alarms;
+};
+
+/*
+ * Internal variables
+ */
+
+static int max1619_id;
+
+/*
+ * Sysfs stuff
+ */
+
+#define show_temp(value) \
+static ssize_t show_##value(struct device *dev, char *buf) \
+{ \
+ struct max1619_data *data = max1619_update_device(dev); \
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(data->value)); \
+}
+show_temp(temp_input1);
+show_temp(temp_input2);
+show_temp(temp_low2);
+show_temp(temp_high2);
+show_temp(temp_crit2);
+show_temp(temp_hyst2);
+
+#define set_temp2(value, reg) \
+static ssize_t set_##value(struct device *dev, const char *buf, \
+ size_t count) \
+{ \
+ struct i2c_client *client = to_i2c_client(dev); \
+ struct max1619_data *data = i2c_get_clientdata(client); \
+ long val = simple_strtol(buf, NULL, 10); \
+ data->value = TEMP_TO_REG(val); \
+ i2c_smbus_write_byte_data(client, reg, data->value); \
+ return count; \
+}
+
+set_temp2(temp_low2, MAX1619_REG_W_REMOTE_LOW);
+set_temp2(temp_high2, MAX1619_REG_W_REMOTE_HIGH);
+set_temp2(temp_crit2, MAX1619_REG_W_REMOTE_CRIT);
+set_temp2(temp_hyst2, MAX1619_REG_W_TCRIT_HYST);
+
+static ssize_t show_alarms(struct device *dev, char *buf)
+{
+ struct max1619_data *data = max1619_update_device(dev);
+ return sprintf(buf, "%d\n", data->alarms);
+}
+
+static DEVICE_ATTR(temp1_input, S_IRUGO, show_temp_input1, NULL);
+static DEVICE_ATTR(temp2_input, S_IRUGO, show_temp_input2, NULL);
+static DEVICE_ATTR(temp2_min, S_IWUSR | S_IRUGO, show_temp_low2,
+ set_temp_low2);
+static DEVICE_ATTR(temp2_max, S_IWUSR | S_IRUGO, show_temp_high2,
+ set_temp_high2);
+static DEVICE_ATTR(temp2_crit, S_IWUSR | S_IRUGO, show_temp_crit2,
+ set_temp_crit2);
+static DEVICE_ATTR(temp2_crit_hyst, S_IWUSR | S_IRUGO, show_temp_hyst2,
+ set_temp_hyst2);
+static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
+
+/*
+ * Real code
+ */
+
+static int max1619_attach_adapter(struct i2c_adapter *adapter)
+{
+ if (!(adapter->class & I2C_CLASS_HWMON))
+ return 0;
+ return i2c_detect(adapter, &addr_data, max1619_detect);
+}
+
+/*
+ * The following function does more than just detection. If detection
+ * succeeds, it also registers the new chip.
+ */
+static int max1619_detect(struct i2c_adapter *adapter, int address, int kind)
+{
+ struct i2c_client *new_client;
+ struct max1619_data *data;
+ int err = 0;
+ const char *name = "";
+	u8 reg_config = 0, reg_convrate = 0, reg_status = 0;
+ u8 man_id, chip_id;
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
+ goto exit;
+
+ if (!(data = kmalloc(sizeof(struct max1619_data), GFP_KERNEL))) {
+ err = -ENOMEM;
+ goto exit;
+ }
+ memset(data, 0, sizeof(struct max1619_data));
+
+ /* The common I2C client data is placed right before the
+ MAX1619-specific data. */
+ new_client = &data->client;
+ i2c_set_clientdata(new_client, data);
+ new_client->addr = address;
+ new_client->adapter = adapter;
+ new_client->driver = &max1619_driver;
+ new_client->flags = 0;
+
+ /*
+ * Now we do the remaining detection. A negative kind means that
+ * the driver was loaded with no force parameter (default), so we
+ * must both detect and identify the chip. A zero kind means that
+ * the driver was loaded with the force parameter, the detection
+ * step shall be skipped. A positive kind means that the driver
+ * was loaded with the force parameter and a given kind of chip is
+ * requested, so both the detection and the identification steps
+ * are skipped.
+ */
+ if (kind < 0) { /* detection */
+ reg_config = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CONFIG);
+ reg_convrate = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CONVRATE);
+ reg_status = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_STATUS);
+		if ((reg_config & 0x03) != 0x00
+		 || reg_convrate > 0x07 || (reg_status & 0x61) != 0x00) {
+ dev_dbg(&adapter->dev,
+ "MAX1619 detection failed at 0x%02x.\n",
+ address);
+ goto exit_free;
+ }
+ }
+
+	if (kind <= 0) { /* identification */
+ man_id = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_MAN_ID);
+ chip_id = i2c_smbus_read_byte_data(new_client,
+ MAX1619_REG_R_CHIP_ID);
+
+		if ((man_id == 0x4D) && (chip_id == 0x04)) {
+ kind = max1619;
+ }
+ }
+
+ if (kind <= 0) { /* identification failed */
+ dev_info(&adapter->dev,
+ "Unsupported chip (man_id=0x%02X, "
+ "chip_id=0x%02X).\n", man_id, chip_id);
+ goto exit_free;
+ }
+
+	if (kind == max1619) {
+ name = "max1619";
+ }
+
+ /* We can fill in the remaining client fields */
+ strlcpy(new_client->name, name, I2C_NAME_SIZE);
+ new_client->id = max1619_id++;
+ data->valid = 0;
+ init_MUTEX(&data->update_lock);
+
+ /* Tell the I2C layer a new client has arrived */
+ if ((err = i2c_attach_client(new_client)))
+ goto exit_free;
+
+ /* Initialize the MAX1619 chip */
+ max1619_init_client(new_client);
+
+ /* Register sysfs hooks */
+ device_create_file(&new_client->dev, &dev_attr_temp1_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_input);
+ device_create_file(&new_client->dev, &dev_attr_temp2_min);
+ device_create_file(&new_client->dev, &dev_attr_temp2_max);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit);
+ device_create_file(&new_client->dev, &dev_attr_temp2_crit_hyst);
+ device_create_file(&new_client->dev, &dev_attr_alarms);
+
+ return 0;
+
+exit_free:
+ kfree(data);
+exit:
+ return err;
+}
+
+static void max1619_init_client(struct i2c_client *client)
+{
+ u8 config;
+
+ /*
+ * Start the conversions.
+ */
+ i2c_smbus_write_byte_data(client, MAX1619_REG_W_CONVRATE,
+ 5); /* 2 Hz */
+ config = i2c_smbus_read_byte_data(client, MAX1619_REG_R_CONFIG);
+ if (config & 0x40)
+ i2c_smbus_write_byte_data(client, MAX1619_REG_W_CONFIG,
+ config & 0xBF); /* run */
+}
+
+static int max1619_detach_client(struct i2c_client *client)
+{
+ int err;
+
+ if ((err = i2c_detach_client(client))) {
+ dev_err(&client->dev, "Client deregistration failed, "
+ "client not detached.\n");
+ return err;
+ }
+
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static struct max1619_data *max1619_update_device(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct max1619_data *data = i2c_get_clientdata(client);
+
+ down(&data->update_lock);
+
+ if ((jiffies - data->last_updated > HZ * 2) ||
+ (jiffies < data->last_updated) ||
+ !data->valid) {
+
+ dev_dbg(&client->dev, "Updating max1619 data.\n");
+ data->temp_input1 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_LOCAL_TEMP);
+ data->temp_input2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_TEMP);
+ data->temp_high2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_HIGH);
+ data->temp_low2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_LOW);
+ data->temp_crit2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_REMOTE_CRIT);
+ data->temp_hyst2 = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_TCRIT_HYST);
+ data->alarms = i2c_smbus_read_byte_data(client,
+ MAX1619_REG_R_STATUS);
+
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ up(&data->update_lock);
+
+ return data;
+}
+
+static int __init sensors_max1619_init(void)
+{
+ return i2c_add_driver(&max1619_driver);
+}
+
+static void __exit sensors_max1619_exit(void)
+{
+ i2c_del_driver(&max1619_driver);
+}
+
+MODULE_AUTHOR("Alexey Fisher <fishor@mail.ru> and "
+	      "Jean Delvare <khali@linux-fr.org>");
+MODULE_DESCRIPTION("MAX1619 sensor driver");
+MODULE_LICENSE("GPL");
+
+module_init(sensors_max1619_init);
+module_exit(sensors_max1619_exit);
--- /dev/null
+/*
+ * linux/drivers/i2c/chips/rtc8564.c
+ *
+ * Copyright (C) 2002-2004 Stefan Eletzhofer
+ *
+ * based on linux/drivers/acorn/char/pcf8583.c
+ * Copyright (C) 2000 Russell King
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Driver for system3's EPSON RTC 8564 chip
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/rtc.h> /* get the user-level API */
+#include <linux/init.h>
+
+#include "rtc8564.h"
+
+#ifdef DEBUG
+# define _DBG(x, fmt, args...) do { if (debug >= x) printk(KERN_DEBUG "%s: " fmt "\n", __FUNCTION__, ##args); } while (0)
+#else
+# define _DBG(x, fmt, args...) do { } while (0)
+#endif
+
+#define _DBGRTCTM(x, rtctm) do { if (debug >= x) printk("%s: secs=%d, mins=%d, hours=%d, mday=%d, " \
+	"mon=%d, year=%d, wday=%d VL=%d\n", __FUNCTION__, \
+	(rtctm).secs, (rtctm).mins, (rtctm).hours, (rtctm).mday, \
+	(rtctm).mon, (rtctm).year, (rtctm).wday, (rtctm).vl); } while (0)
+
+struct rtc8564_data {
+ struct i2c_client client;
+ u16 ctrl;
+};
+
+static inline u8 _rtc8564_ctrl1(struct i2c_client *client)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ return data->ctrl & 0xff;
+}
+static inline u8 _rtc8564_ctrl2(struct i2c_client *client)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ return (data->ctrl & 0xff00) >> 8;
+}
+
+#define CTRL1(c) _rtc8564_ctrl1(c)
+#define CTRL2(c) _rtc8564_ctrl2(c)
+
+#define BCD_TO_BIN(val) (((val)&15) + ((val)>>4)*10)
+#define BIN_TO_BCD(val) ((((val)/10)<<4) + (val)%10)
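+
+/* Example (illustrative only): BCD 0x59 -> 5 * 10 + 9 = 59 decimal,
+   and BIN_TO_BCD(59) -> (5 << 4) + 9 = 0x59 again. */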
+
+static int debug;
+module_param(debug, int, S_IRUGO | S_IWUSR);
+
+static struct i2c_driver rtc8564_driver;
+
+static unsigned short ignore[] = { I2C_CLIENT_END };
+static unsigned short normal_addr[] = { 0x51, I2C_CLIENT_END };
+
+static struct i2c_client_address_data addr_data = {
+ .normal_i2c = normal_addr,
+ .normal_i2c_range = ignore,
+ .probe = ignore,
+ .probe_range = ignore,
+ .ignore = ignore,
+ .ignore_range = ignore,
+ .force = ignore,
+};
+
+static int rtc8564_read_mem(struct i2c_client *client, struct mem *mem);
+static int rtc8564_write_mem(struct i2c_client *client, struct mem *mem);
+
+static int rtc8564_read(struct i2c_client *client, unsigned char adr,
+ unsigned char *buf, unsigned char len)
+{
+ int ret = -EIO;
+ unsigned char addr[1] = { adr };
+ struct i2c_msg msgs[2] = {
+ {client->addr, 0, 1, addr},
+ {client->addr, I2C_M_RD, len, buf}
+ };
+
+ _DBG(1, "client=%p, adr=%d, buf=%p, len=%d", client, adr, buf, len);
+
+ if (!buf || !client) {
+ ret = -EINVAL;
+ goto done;
+ }
+
+ ret = i2c_transfer(client->adapter, msgs, 2);
+ if (ret == 2) {
+ ret = 0;
+ }
+
+done:
+ return ret;
+}
+
+static int rtc8564_write(struct i2c_client *client, unsigned char adr,
+ unsigned char *data, unsigned char len)
+{
+ int ret = 0;
+ unsigned char _data[16];
+ struct i2c_msg wr;
+ int i;
+
+ if (!client || !data || len > 15) {
+ ret = -EINVAL;
+ goto done;
+ }
+
+ _DBG(1, "client=%p, adr=%d, buf=%p, len=%d", client, adr, data, len);
+
+ _data[0] = adr;
+ for (i = 0; i < len; i++) {
+ _data[i + 1] = data[i];
+ _DBG(5, "data[%d] = 0x%02x (%d)", i, data[i], data[i]);
+ }
+
+ wr.addr = client->addr;
+ wr.flags = 0;
+ wr.len = len + 1;
+ wr.buf = _data;
+
+ ret = i2c_transfer(client->adapter, &wr, 1);
+ if (ret == 1) {
+ ret = 0;
+ }
+
+done:
+ return ret;
+}
+
+static int rtc8564_attach(struct i2c_adapter *adap, int addr, int kind)
+{
+ int ret;
+ struct i2c_client *new_client;
+ struct rtc8564_data *d;
+ unsigned char data[10];
+ unsigned char ad[1] = { 0 };
+ struct i2c_msg ctrl_wr[1] = {
+ {addr, 0, 2, data}
+ };
+ struct i2c_msg ctrl_rd[2] = {
+ {addr, 0, 1, ad},
+ {addr, I2C_M_RD, 2, data}
+ };
+
+ d = kmalloc(sizeof(struct rtc8564_data), GFP_KERNEL);
+ if (!d) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ memset(d, 0, sizeof(struct rtc8564_data));
+ new_client = &d->client;
+
+ strlcpy(new_client->name, "RTC8564", I2C_NAME_SIZE);
+ i2c_set_clientdata(new_client, d);
+ new_client->id = rtc8564_driver.id;
+ new_client->flags = I2C_CLIENT_ALLOW_USE | I2C_DF_NOTIFY;
+ new_client->addr = addr;
+ new_client->adapter = adap;
+ new_client->driver = &rtc8564_driver;
+
+ _DBG(1, "client=%p", new_client);
+ _DBG(1, "client.id=%d", new_client->id);
+
+ /* init ctrl1 reg */
+ data[0] = 0;
+ data[1] = 0;
+ ret = i2c_transfer(new_client->adapter, ctrl_wr, 1);
+ if (ret != 1) {
+		printk(KERN_INFO "rtc8564: can't init ctrl1\n");
+ ret = -ENODEV;
+ goto done;
+ }
+
+ /* read back ctrl1 and ctrl2 */
+ ret = i2c_transfer(new_client->adapter, ctrl_rd, 2);
+ if (ret != 2) {
+		printk(KERN_INFO "rtc8564: can't read ctrl\n");
+ ret = -ENODEV;
+ goto done;
+ }
+
+ d->ctrl = data[0] | (data[1] << 8);
+
+ _DBG(1, "RTC8564_REG_CTRL1=%02x, RTC8564_REG_CTRL2=%02x",
+ data[0], data[1]);
+
+ ret = i2c_attach_client(new_client);
+done:
+ if (ret) {
+ kfree(d);
+ }
+ return ret;
+}
+
+static int rtc8564_probe(struct i2c_adapter *adap)
+{
+ return i2c_probe(adap, &addr_data, rtc8564_attach);
+}
+
+static int rtc8564_detach(struct i2c_client *client)
+{
+ i2c_detach_client(client);
+ kfree(i2c_get_clientdata(client));
+ return 0;
+}
+
+static int rtc8564_get_datetime(struct i2c_client *client, struct rtc_tm *dt)
+{
+ int ret = -EIO;
+ unsigned char buf[15];
+
+ _DBG(1, "client=%p, dt=%p", client, dt);
+
+ if (!dt || !client)
+ return -EINVAL;
+
+ memset(buf, 0, sizeof(buf));
+
+ ret = rtc8564_read(client, 0, buf, 15);
+ if (ret)
+ return ret;
+
+ /* century stored in minute alarm reg */
+ dt->year = BCD_TO_BIN(buf[RTC8564_REG_YEAR]);
+ dt->year += 100 * BCD_TO_BIN(buf[RTC8564_REG_AL_MIN] & 0x3f);
+ dt->mday = BCD_TO_BIN(buf[RTC8564_REG_DAY] & 0x3f);
+ dt->wday = BCD_TO_BIN(buf[RTC8564_REG_WDAY] & 7);
+ dt->mon = BCD_TO_BIN(buf[RTC8564_REG_MON_CENT] & 0x1f);
+
+ dt->secs = BCD_TO_BIN(buf[RTC8564_REG_SEC] & 0x7f);
+ dt->vl = (buf[RTC8564_REG_SEC] & 0x80) == 0x80;
+ dt->mins = BCD_TO_BIN(buf[RTC8564_REG_MIN] & 0x7f);
+ dt->hours = BCD_TO_BIN(buf[RTC8564_REG_HR] & 0x3f);
+
+ _DBGRTCTM(2, *dt);
+
+ return 0;
+}
+
+static int
+rtc8564_set_datetime(struct i2c_client *client, struct rtc_tm *dt, int datetoo)
+{
+ int ret, len = 5;
+ unsigned char buf[15];
+
+ _DBG(1, "client=%p, dt=%p", client, dt);
+
+ if (!dt || !client)
+ return -EINVAL;
+
+ _DBGRTCTM(2, *dt);
+
+ buf[RTC8564_REG_CTRL1] = CTRL1(client) | RTC8564_CTRL1_STOP;
+ buf[RTC8564_REG_CTRL2] = CTRL2(client);
+ buf[RTC8564_REG_SEC] = BIN_TO_BCD(dt->secs);
+ buf[RTC8564_REG_MIN] = BIN_TO_BCD(dt->mins);
+ buf[RTC8564_REG_HR] = BIN_TO_BCD(dt->hours);
+
+ if (datetoo) {
+ len += 5;
+ buf[RTC8564_REG_DAY] = BIN_TO_BCD(dt->mday);
+ buf[RTC8564_REG_WDAY] = BIN_TO_BCD(dt->wday);
+ buf[RTC8564_REG_MON_CENT] = BIN_TO_BCD(dt->mon) & 0x1f;
+ /* century stored in minute alarm reg */
+ buf[RTC8564_REG_YEAR] = BIN_TO_BCD(dt->year % 100);
+ buf[RTC8564_REG_AL_MIN] = BIN_TO_BCD(dt->year / 100);
+ }
+
+ ret = rtc8564_write(client, 0, buf, len);
+ if (ret) {
+ _DBG(1, "error writing data! %d", ret);
+ }
+
+ buf[RTC8564_REG_CTRL1] = CTRL1(client);
+ ret = rtc8564_write(client, 0, buf, 1);
+ if (ret) {
+ _DBG(1, "error writing data! %d", ret);
+ }
+
+ return ret;
+}
+
+static int rtc8564_get_ctrl(struct i2c_client *client, unsigned int *ctrl)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+
+ if (!ctrl || !client)
+ return -1;
+
+ *ctrl = data->ctrl;
+ return 0;
+}
+
+static int rtc8564_set_ctrl(struct i2c_client *client, unsigned int *ctrl)
+{
+ struct rtc8564_data *data = i2c_get_clientdata(client);
+ unsigned char buf[2];
+
+ if (!ctrl || !client)
+ return -1;
+
+ buf[0] = *ctrl & 0xff;
+ buf[1] = (*ctrl & 0xff00) >> 8;
+ data->ctrl = *ctrl;
+
+ return rtc8564_write(client, 0, buf, 2);
+}
+
+static int rtc8564_read_mem(struct i2c_client *client, struct mem *mem)
+{
+ if (!mem || !client)
+ return -EINVAL;
+
+ return rtc8564_read(client, mem->loc, mem->data, mem->nr);
+}
+
+static int rtc8564_write_mem(struct i2c_client *client, struct mem *mem)
+{
+ if (!mem || !client)
+ return -EINVAL;
+
+ return rtc8564_write(client, mem->loc, mem->data, mem->nr);
+}
+
+static int
+rtc8564_command(struct i2c_client *client, unsigned int cmd, void *arg)
+{
+ _DBG(1, "cmd=%d", cmd);
+
+ switch (cmd) {
+ case RTC_GETDATETIME:
+ return rtc8564_get_datetime(client, arg);
+
+ case RTC_SETTIME:
+ return rtc8564_set_datetime(client, arg, 0);
+
+ case RTC_SETDATETIME:
+ return rtc8564_set_datetime(client, arg, 1);
+
+ case RTC_GETCTRL:
+ return rtc8564_get_ctrl(client, arg);
+
+ case RTC_SETCTRL:
+ return rtc8564_set_ctrl(client, arg);
+
+ case MEM_READ:
+ return rtc8564_read_mem(client, arg);
+
+ case MEM_WRITE:
+ return rtc8564_write_mem(client, arg);
+
+ default:
+ return -EINVAL;
+ }
+}
+
+static struct i2c_driver rtc8564_driver = {
+ .owner = THIS_MODULE,
+ .name = "RTC8564",
+ .id = I2C_DRIVERID_RTC8564,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = rtc8564_probe,
+ .detach_client = rtc8564_detach,
+ .command = rtc8564_command
+};
+
+static __init int rtc8564_init(void)
+{
+ return i2c_add_driver(&rtc8564_driver);
+}
+
+static __exit void rtc8564_exit(void)
+{
+ i2c_del_driver(&rtc8564_driver);
+}
+
+MODULE_AUTHOR("Stefan Eletzhofer <Stefan.Eletzhofer@eletztrick.de>");
+MODULE_DESCRIPTION("EPSON RTC8564 Driver");
+MODULE_LICENSE("GPL");
+
+module_init(rtc8564_init);
+module_exit(rtc8564_exit);
--- /dev/null
+/*
+ * linux/drivers/ide/pci/delkin_cb.c
+ *
+ * Created 20 Oct 2004 by Mark Lord
+ *
+ * Basic support for Delkin/ASKA/Workbit Cardbus CompactFlash adapter
+ *
+ * Modeled after the 16-bit PCMCIA driver: ide-cs.c
+ *
+ * This is slightly peculiar, in that it is a PCI driver,
+ * but is NOT an IDE PCI driver -- the IDE layer does not directly
+ * support hot insertion/removal of PCI interfaces, so this driver
+ * is unable to use the IDE PCI interfaces. Instead, it uses the
+ * same interfaces as the ide-cs (PCMCIA) driver uses.
+ * On the plus side, the driver is also smaller/simpler this way.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ */
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/ide.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+/*
+ * No chip documentation has yet been found,
+ * so these configuration values were pulled from
+ * a running Win98 system using "debug".
+ * This gives around 3MByte/second read performance,
+ * which is about 2/3 of what the chip is capable of.
+ *
+ * There is also a 4KByte mmio region on the card,
+ * but its purpose has yet to be reverse-engineered.
+ */
+static const u8 setup[] = {
+ 0x00, 0x05, 0xbe, 0x01, 0x20, 0x8f, 0x00, 0x00,
+ 0xa4, 0x1f, 0xb3, 0x1b, 0x00, 0x00, 0x00, 0x80,
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ 0x00, 0x00, 0x00, 0x00, 0xa4, 0x83, 0x02, 0x13,
+};
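+
+/* Only the nonzero bytes above are actually written to the chip; see
+   the setup loop in delkin_cb_probe() below. */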
+
+static int __devinit
+delkin_cb_probe (struct pci_dev *dev, const struct pci_device_id *id)
+{
+ unsigned long base;
+ hw_regs_t hw;
+ ide_hwif_t *hwif = NULL;
+ ide_drive_t *drive;
+ int i, rc;
+
+ rc = pci_enable_device(dev);
+ if (rc) {
+ printk(KERN_ERR "delkin_cb: pci_enable_device failed (%d)\n", rc);
+ return rc;
+ }
+ rc = pci_request_regions(dev, "delkin_cb");
+ if (rc) {
+ printk(KERN_ERR "delkin_cb: pci_request_regions failed (%d)\n", rc);
+ pci_disable_device(dev);
+ return rc;
+ }
+ base = pci_resource_start(dev, 0);
+ outb(0x02, base + 0x1e); /* set nIEN to block interrupts */
+ inb(base + 0x17); /* read status to clear interrupts */
+ for (i = 0; i < sizeof(setup); ++i) {
+ if (setup[i])
+ outb(setup[i], base + i);
+ }
+ pci_release_regions(dev); /* IDE layer handles regions itself */
+
+ memset(&hw, 0, sizeof(hw));
+ ide_std_init_ports(&hw, base + 0x10, base + 0x1e);
+ hw.irq = dev->irq;
+ hw.chipset = ide_pci; /* this enables IRQ sharing */
+
+ rc = ide_register_hw_with_fixup(&hw, &hwif, ide_undecoded_slave);
+ if (rc < 0) {
+		printk(KERN_ERR "delkin_cb: ide_register_hw_with_fixup failed (%d)\n", rc);
+ pci_disable_device(dev);
+ return -ENODEV;
+ }
+ pci_set_drvdata(dev, hwif);
+ hwif->pci_dev = dev;
+ drive = &hwif->drives[0];
+ if (drive->present) {
+ drive->io_32bit = 1;
+ drive->unmask = 1;
+ }
+ return 0;
+}
+
+static void
+delkin_cb_remove (struct pci_dev *dev)
+{
+ ide_hwif_t *hwif = pci_get_drvdata(dev);
+
+ if (hwif)
+ ide_unregister_hwif(hwif);
+ pci_disable_device(dev);
+}
+
+static struct pci_device_id delkin_cb_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_WORKBIT, PCI_DEVICE_ID_WORKBIT_CB, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { 0, },
+};
+MODULE_DEVICE_TABLE(pci, delkin_cb_pci_tbl);
+
+static struct pci_driver driver = {
+ .name = "Delkin/ASKA/Workbit Cardbus IDE",
+ .id_table = delkin_cb_pci_tbl,
+ .probe = delkin_cb_probe,
+ .remove = delkin_cb_remove,
+};
+
+static int
+delkin_cb_init (void)
+{
+ return pci_module_init(&driver);
+}
+
+static void
+delkin_cb_exit (void)
+{
+ pci_unregister_driver(&driver);
+}
+
+module_init(delkin_cb_init);
+module_exit(delkin_cb_exit);
+
+MODULE_AUTHOR("Mark Lord");
+MODULE_DESCRIPTION("Basic support for Delkin/ASKA/Workbit Cardbus IDE");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/ide/pci/it821x.c Version 0.09 December 2004
+ *
+ * Copyright (C) 2004 Red Hat <alan@redhat.com>
+ *
+ * May be copied or modified under the terms of the GNU General Public License
+ * Based in part on the ITE vendor provided SCSI driver.
+ *
+ * Documentation available from
+ * http://www.ite.com.tw/pc/IT8212F_V04.pdf
+ * Some other documents are NDA.
+ *
+ * The ITE8212 isn't exactly a standard IDE controller. It has two
+ * modes. In pass through mode it is an IDE controller. In its smart
+ * mode it's actually quite a capable hardware raid controller disguised
+ * as an IDE controller. Smart mode only understands DMA read/write and
+ * identify; none of the fancier commands apply. The IT8211 is identical
+ * in other respects but lacks the raid mode.
+ *
+ * Errata:
+ * o Rev 0x10 also requires master/slave hold the same DMA timings and
+ * cannot do ATAPI MWDMA.
+ * o The identify data for raid volumes lacks CHS info (technically ok)
+ * but also fails to set the LBA28 and other bits. We fix these in
+ * the IDE probe quirk code.
+ *	o	If you write LBA48 sized I/Os (i.e. > 256 sectors) in smart mode
+ *		raid then the controller firmware dies
+ * o Smart mode without RAID doesn't clear all the necessary identify
+ * bits to reduce the command set to the one used
+ *
+ * This has a few impacts on the driver
+ * - In pass through mode we do all the work you would expect
+ * - In smart mode the clocking set up is done by the controller generally
+ * but we must watch the other limits and filter.
+ * - There are a few extra vendor commands that actually talk to the
+ * controller but only work PIO with no IRQ.
+ *
+ * Vendor areas of the identify block in smart mode are used for the
+ * timing and policy set up. Each HDD in raid mode also has a serial
+ * block on the disk. The hardware extra commands are get/set chip status,
+ * rebuild, get rebuild status.
+ *
+ * In Linux the driver supports pass through mode as if the device was
+ * just another IDE controller. If the smart mode is running then
+ * volumes are managed by the controller firmware and each IDE "disk"
+ * is a raid volume. Even more cute - the controller can do automated
+ * hotplug and rebuild.
+ *
+ * The pass through controller itself is a little demented. It has a
+ * flaw that it has a single set of PIO/MWDMA timings per channel so
+ * non-UDMA devices restrict each other's performance. It also has a
+ * single clock source per channel so mixed UDMA100/133 performance
+ * isn't perfect and we have to pick a clock. Thankfully none of this
+ * matters in smart mode. ATAPI DMA is not currently supported.
+ *
+ * It seems the smart mode is a win for RAID1/RAID10 but otherwise not.
+ *
+ * TODO
+ * - ATAPI UDMA is ok but not MWDMA it seems
+ * - RAID configuration ioctls
+ * - Move to libata once it grows up
+ */
+
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include <linux/hdreg.h>
+#include <linux/ide.h>
+#include <linux/init.h>
+
+#include <asm/io.h>
+
+struct it821x_dev
+{
+ unsigned int smart:1, /* Are we in smart raid mode */
+ timing10:1; /* Rev 0x10 */
+ u8 clock_mode; /* 0, ATA_50 or ATA_66 */
+ u8 want[2][2]; /* Mode/Pri log for master slave */
+ /* We need these for switching the clock when DMA goes on/off
+	   The high byte is the 66MHz timing */
+ u16 pio[2]; /* Cached PIO values */
+ u16 mwdma[2]; /* Cached MWDMA values */
+ u16 udma[2]; /* Cached UDMA values (per drive) */
+};
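+
+/* want[][] layout, as set up by the tuning routines below:
+   want[unit][0] is the priority of the mode last requested for that
+   drive (1 = PIO, 2 = MWDMA, 3 = UDMA) and want[unit][1] is its clock
+   preference (ATA_66, ATA_50 or ATA_ANY); it821x_clock_strategy()
+   honours the clock preference of the higher priority drive. */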
+
+#define ATA_66 0
+#define ATA_50 1
+#define ATA_ANY 2
+
+#define UDMA_OFF 0
+#define MWDMA_OFF 0
+
+/*
+ * We allow users to force the card into non raid mode without
+ * flashing the alternative BIOS. This is also necessary right now
+ * for embedded platforms that cannot run a PC BIOS but are using this
+ * device.
+ */
+
+static int it8212_noraid;
+
+/**
+ * it821x_program - program the PIO/MWDMA registers
+ * @drive: drive to tune
+ * @drive: drive to tune
+ * @timing: timing word to program
+ * Program the PIO/MWDMA timing for this channel according to the
+ * current clock.
+ */
+
+static void it821x_program(ide_drive_t *drive, u16 timing)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int channel = hwif->channel;
+ u8 conf;
+
+ /* Program PIO/MWDMA timing bits */
+ if(itdev->clock_mode == ATA_66)
+ conf = timing >> 8;
+ else
+ conf = timing & 0xFF;
+ pci_write_config_byte(hwif->pci_dev, 0x54 + 4 * channel, conf);
+}
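+
+/* For reference: 0x54 + 4 * channel above puts the shared PIO/MWDMA
+   timing byte at PCI config 0x54 for channel 0 and 0x58 for channel 1. */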
+
+/**
+ * it821x_program_udma - program the UDMA registers
+ * @drive: drive to tune
+ * @timing: timing word to program
+ *
+ * Program the UDMA timing for this drive according to the
+ * current clock.
+ */
+
+static void it821x_program_udma(ide_drive_t *drive, u16 timing)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int channel = hwif->channel;
+ int unit = drive->select.b.unit;
+ u8 conf;
+
+ /* Program UDMA timing bits */
+ if(itdev->clock_mode == ATA_66)
+ conf = timing >> 8;
+ else
+ conf = timing & 0xFF;
+ if(itdev->timing10 == 0)
+ pci_write_config_byte(hwif->pci_dev, 0x56 + 4 * channel + unit, conf);
+ else {
+ pci_write_config_byte(hwif->pci_dev, 0x56 + 4 * channel, conf);
+ pci_write_config_byte(hwif->pci_dev, 0x56 + 4 * channel + 1, conf);
+ }
+}
+
+/**
+ * it821x_clock_strategy
+ * @hwif: hardware interface
+ *
+ * Select between the 50 and 66MHz base clocks to get the best
+ * results for this interface.
+ */
+
+static void it821x_clock_strategy(ide_drive_t *drive)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+
+ u8 unit = drive->select.b.unit;
+ ide_drive_t *pair = &hwif->drives[1-unit];
+
+ int clock, altclock;
+ u8 v;
+ int sel = 0;
+
+ if(itdev->want[0][0] > itdev->want[1][0]) {
+ clock = itdev->want[0][1];
+ altclock = itdev->want[1][1];
+ } else {
+ clock = itdev->want[1][1];
+ altclock = itdev->want[0][1];
+ }
+
+	/* Master doesn't care; does the slave? */
+ if(clock == ATA_ANY)
+ clock = altclock;
+
+ /* Nobody cares - keep the same clock */
+ if(clock == ATA_ANY)
+ return;
+ /* No change */
+ if(clock == itdev->clock_mode)
+ return;
+
+ /* Load this into the controller ? */
+ if(clock == ATA_66)
+ itdev->clock_mode = ATA_66;
+ else {
+ itdev->clock_mode = ATA_50;
+ sel = 1;
+ }
+ pci_read_config_byte(hwif->pci_dev, 0x50, &v);
+ v &= ~(1 << (1 + hwif->channel));
+ v |= sel << (1 + hwif->channel);
+ pci_write_config_byte(hwif->pci_dev, 0x50, v);
+
+ /*
+ * Reprogram the UDMA/PIO of the pair drive for the switch
+ * MWDMA will be dealt with by the dma switcher
+ */
+ if(pair && itdev->udma[1-unit] != UDMA_OFF) {
+ it821x_program_udma(pair, itdev->udma[1-unit]);
+ it821x_program(pair, itdev->pio[1-unit]);
+ }
+ /*
+ * Reprogram the UDMA/PIO of our drive for the switch.
+ * MWDMA will be dealt with by the dma switcher
+ */
+ if(itdev->udma[unit] != UDMA_OFF) {
+ it821x_program_udma(drive, itdev->udma[unit]);
+ it821x_program(drive, itdev->pio[unit]);
+ }
+}
+
+/**
+ * it821x_ratemask - Compute available modes
+ * @drive: IDE drive
+ *
+ * Compute the available speeds for the devices on the interface. This
+ * is all modes to ATA133 clipped by drive cable setup.
+ */
+
+static u8 it821x_ratemask (ide_drive_t *drive)
+{
+ u8 mode = 4;
+ if (!eighty_ninty_three(drive))
+ mode = min(mode, (u8)1);
+ return mode;
+}
+
+/**
+ * it821x_tuneproc - tune a drive
+ * @drive: drive to tune
+ * @mode_wanted: the target operating mode
+ *
+ * Load the timing settings for this device mode into the
+ * controller. By the time we are called the mode has been
+ * modified as necessary to handle the absence of separate
+ * master/slave timers for MWDMA/PIO.
+ *
+ * This code is only used in pass through mode.
+ */
+
+static void it821x_tuneproc (ide_drive_t *drive, byte mode_wanted)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int unit = drive->select.b.unit;
+
+	/* The spec says 89; the reference driver uses 88 */
+ static u16 pio[] = { 0xAA88, 0xA382, 0xA181, 0x3332, 0x3121 };
+ static u8 pio_want[] = { ATA_66, ATA_66, ATA_66, ATA_66, ATA_ANY };
+
+ if(itdev->smart)
+ return;
+
+	/* We prefer 66MHz clock for PIO 0-3, don't care for PIO4 */
+ itdev->want[unit][1] = pio_want[mode_wanted];
+ itdev->want[unit][0] = 1; /* PIO is lowest priority */
+ itdev->pio[unit] = pio[mode_wanted];
+ it821x_clock_strategy(drive);
+ it821x_program(drive, itdev->pio[unit]);
+}
+
+/**
+ * it821x_tune_mwdma - tune a channel for MWDMA
+ * @drive: drive to set up
+ * @mode_wanted: the target operating mode
+ *
+ * Load the timing settings for this device mode into the
+ * controller when doing MWDMA in pass through mode. The caller
+ * must manage the whole lack of per device MWDMA/PIO timings and
+ * the shared MWDMA/PIO timing register.
+ */
+
+static void it821x_tune_mwdma (ide_drive_t *drive, byte mode_wanted)
+{
+ ide_hwif_t *hwif = drive->hwif;
+	struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int unit = drive->select.b.unit;
+ int channel = hwif->channel;
+ u8 conf;
+
+ static u16 dma[] = { 0x8866, 0x3222, 0x3121 };
+ static u8 mwdma_want[] = { ATA_ANY, ATA_66, ATA_ANY };
+
+ itdev->want[unit][1] = mwdma_want[mode_wanted];
+ itdev->want[unit][0] = 2; /* MWDMA is low priority */
+ itdev->mwdma[unit] = dma[mode_wanted];
+ itdev->udma[unit] = UDMA_OFF;
+
+ /* UDMA bits off - Revision 0x10 do them in pairs */
+ pci_read_config_byte(hwif->pci_dev, 0x50, &conf);
+ if(itdev->timing10)
+ conf |= channel ? 0x60: 0x18;
+ else
+ conf |= 1 << (3 + 2 * channel + unit);
+ pci_write_config_byte(hwif->pci_dev, 0x50, conf);
+
+ it821x_clock_strategy(drive);
+ /* FIXME: do we need to program this ? */
+ /* it821x_program(drive, itdev->mwdma[unit]); */
+}
+
+/**
+ * it821x_tune_udma - tune a channel for UDMA
+ * @drive: drive to set up
+ * @mode_wanted: the target operating mode
+ *
+ * Load the timing settings for this device mode into the
+ * controller when doing UDMA modes in pass through.
+ */
+
+static void it821x_tune_udma (ide_drive_t *drive, byte mode_wanted)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int unit = drive->select.b.unit;
+ int channel = hwif->channel;
+ u8 conf;
+
+ static u16 udma[] = { 0x4433, 0x4231, 0x3121, 0x2121, 0x1111, 0x2211, 0x1111 };
+ static u8 udma_want[] = { ATA_ANY, ATA_50, ATA_ANY, ATA_66, ATA_66, ATA_50, ATA_66 };
+
+ itdev->want[unit][1] = udma_want[mode_wanted];
+ itdev->want[unit][0] = 3; /* UDMA is high priority */
+ itdev->mwdma[unit] = MWDMA_OFF;
+ itdev->udma[unit] = udma[mode_wanted];
+ if(mode_wanted >= 5)
+ itdev->udma[unit] |= 0x8080; /* UDMA 5/6 select on */
+
+ /* UDMA on. Again revision 0x10 must do the pair */
+ pci_read_config_byte(hwif->pci_dev, 0x50, &conf);
+ if(itdev->timing10)
+ conf &= channel ? 0x9F: 0xE7;
+ else
+ conf &= ~ (1 << (3 + 2 * channel + unit));
+ pci_write_config_byte(hwif->pci_dev, 0x50, conf);
+
+ it821x_clock_strategy(drive);
+ it821x_program_udma(drive, itdev->udma[unit]);
+}
+
+/**
+ * config_it821x_chipset_for_pio - set drive timings
+ * @drive: drive to tune
+ * @set_speed: if set, program the chosen speed into the drive as well
+ *
+ * Compute the best pio mode we can for a given device. We must
+ * pick a speed that does not cause problems with the other device
+ * on the cable.
+ */
+
+static void config_it821x_chipset_for_pio (ide_drive_t *drive, byte set_speed)
+{
+ u8 unit = drive->select.b.unit;
+ ide_hwif_t *hwif = drive->hwif;
+ ide_drive_t *pair = &hwif->drives[1-unit];
+ u8 speed = 0, set_pio = ide_get_best_pio_mode(drive, 255, 5, NULL);
+ u8 pair_pio;
+
+ /* We have to deal with this mess in pairs */
+ if(pair != NULL) {
+ pair_pio = ide_get_best_pio_mode(pair, 255, 5, NULL);
+ /* Trim PIO to the slowest of the master/slave */
+ if(pair_pio < set_pio)
+ set_pio = pair_pio;
+ }
+ it821x_tuneproc(drive, set_pio);
+ speed = XFER_PIO_0 + set_pio;
+ /* XXX - We trim to the lowest of the pair so the other drive
+ will always be fine at this point until we do hotplug passthru */
+
+ if (set_speed)
+ (void) ide_config_drive_speed(drive, speed);
+}
+
+/**
+ * it821x_dma_start - DMA start hook
+ * @drive: drive for DMA
+ *
+ * The IT821x has a single timing register for MWDMA and for PIO
+ * operations. As we flip back and forth we have to reload the
+ * clock. In addition the rev 0x10 device only works if the same
+ * timing value is loaded into the master and slave UDMA clock
+ * so we must also reload that.
+ *
+ * FIXME: we could figure out in advance if we need to do reloads
+ */
+
+static void it821x_dma_start(ide_drive_t *drive)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int unit = drive->select.b.unit;
+ if(itdev->mwdma[unit] != MWDMA_OFF)
+ it821x_program(drive, itdev->mwdma[unit]);
+ else if(itdev->udma[unit] != UDMA_OFF && itdev->timing10)
+ it821x_program_udma(drive, itdev->udma[unit]);
+ ide_dma_start(drive);
+}
+
+/**
+ * it821x_dma_end - DMA stop hook
+ * @drive: drive for DMA stop
+ *
+ * The IT821x has a single timing register for MWDMA and for PIO
+ * operations. As we flip back and forth we have to reload the
+ * clock.
+ */
+
+static int it821x_dma_end(ide_drive_t *drive)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ int unit = drive->select.b.unit;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int ret = __ide_dma_end(drive);
+ if(itdev->mwdma[unit] != MWDMA_OFF)
+ it821x_program(drive, itdev->pio[unit]);
+ return ret;
+}
+
+/**
+ * it821x_tune_chipset - set controller timings
+ * @drive: Drive to set up
+ * @xferspeed: speed we want to achieve
+ *
+ * Tune the ITE chipset for the desired mode. If we can't achieve
+ * the desired mode then tune for a lower one, but ultimately
+ * make the thing work.
+ */
+
+static int it821x_tune_chipset (ide_drive_t *drive, byte xferspeed)
+{
+ ide_hwif_t *hwif = drive->hwif;
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ u8 speed = ide_rate_filter(it821x_ratemask(drive), xferspeed);
+
+ if(!itdev->smart) {
+ switch(speed) {
+ case XFER_PIO_4:
+ case XFER_PIO_3:
+ case XFER_PIO_2:
+ case XFER_PIO_1:
+ case XFER_PIO_0:
+ it821x_tuneproc(drive, (speed - XFER_PIO_0));
+ break;
+ /* MWDMA tuning is really hard because our MWDMA and PIO
+ timings are kept in the same place. We can switch in the
+ host dma on/off callbacks */
+ case XFER_MW_DMA_2:
+ case XFER_MW_DMA_1:
+ case XFER_MW_DMA_0:
+ it821x_tune_mwdma(drive, (speed - XFER_MW_DMA_0));
+ break;
+ case XFER_UDMA_6:
+ case XFER_UDMA_5:
+ case XFER_UDMA_4:
+ case XFER_UDMA_3:
+ case XFER_UDMA_2:
+ case XFER_UDMA_1:
+ case XFER_UDMA_0:
+ it821x_tune_udma(drive, (speed - XFER_UDMA_0));
+ break;
+ default:
+ return 1;
+ }
+ }
+ /*
+ * In smart mode the clocking is done by the host controller
+ * snooping the mode we picked. The rest of it is not our problem
+ */
+ return ide_config_drive_speed(drive, speed);
+}
+
+/**
+ * config_chipset_for_dma - configure for DMA
+ * @drive: drive to configure
+ *
+ * Called by the IDE layer when it wants the timings set up.
+ */
+
+static int config_chipset_for_dma (ide_drive_t *drive)
+{
+ u8 speed = ide_dma_speed(drive, it821x_ratemask(drive));
+
+ config_it821x_chipset_for_pio(drive, !speed);
+ it821x_tune_chipset(drive, speed);
+ return ide_dma_enable(drive);
+}
+
+/**
+ * it821x_configure_drive_for_dma - set up for DMA transfers
+ * @drive: drive we are going to set up
+ *
+ * Set up the drive for DMA, tune the controller and drive as
+ * required. If the drive isn't suitable for DMA or we hit
+ * other problems then we will drop down to PIO and set up
+ * PIO appropriately.
+ */
+
+static int it821x_config_drive_for_dma (ide_drive_t *drive)
+{
+ ide_hwif_t *hwif = drive->hwif;
+
+ if (ide_use_dma(drive)) {
+ if (config_chipset_for_dma(drive))
+ return hwif->ide_dma_on(drive);
+ }
+ config_it821x_chipset_for_pio(drive, 1);
+ return hwif->ide_dma_off_quietly(drive);
+}
+
+/**
+ * ata66_it821x - check for 80 pin cable
+ * @hwif: interface to check
+ *
+ * Check for the presence of an ATA66 capable cable on the
+ * interface. Problematic as it seems some cards don't have
+ * the needed logic onboard.
+ */
+
+static unsigned int __devinit ata66_it821x(ide_hwif_t *hwif)
+{
+ /* The reference driver also only does disk side */
+ return 1;
+}
+
+/**
+ * it821x_fixups - post init callback
+ * @hwif: interface
+ *
+ * This callback is run after the drives have been probed but
+ * before anything gets attached. It allows drivers to do any
+ * final tuning that is needed, or fixups to work around bugs.
+ */
+
+static void __devinit it821x_fixups(ide_hwif_t *hwif)
+{
+ struct it821x_dev *itdev = ide_get_hwifdata(hwif);
+ int i;
+
+ if(!itdev->smart) {
+ /*
+ * If we are in pass through mode then not much
+ * needs to be done, but we do bother to clear the
+ * IRQ mask as we may well be in PIO (eg rev 0x10)
+ * for now and we know unmasking is safe on this chipset.
+ */
+ for (i = 0; i < 2; i++) {
+ ide_drive_t *drive = &hwif->drives[i];
+ if(drive->present)
+ drive->unmask = 1;
+ }
+ return;
+ }
+ /*
+ * Perform fixups on smart mode. We need to "lose" some
+ * capabilities the firmware lacks but does not filter, and
+ * also patch up some capability bits that it forgets to set
+ * in RAID mode.
+ */
+
+ for(i = 0; i < 2; i++) {
+ ide_drive_t *drive = &hwif->drives[i];
+ struct hd_driveid *id;
+ u16 *idbits;
+
+ if(!drive->present)
+ continue;
+ id = drive->id;
+ idbits = (u16 *)drive->id;
+
+ /* Check for RAID v native */
+ if(strstr(id->model, "Integrated Technology Express")) {
+			/* In raid mode the ident block is slightly buggy.
+			   We need to set the bits so that the IDE layer knows
+			   LBA28, LBA48 and DMA are valid */
+ id->capability |= 3; /* LBA28, DMA */
+ id->command_set_2 |= 0x0400; /* LBA48 valid */
+ id->cfs_enable_2 |= 0x0400; /* LBA48 on */
+ /* Reporting logic */
+ printk(KERN_INFO "%s: IT8212 %sRAID %d volume",
+ drive->name,
+ idbits[147] ? "Bootable ":"",
+ idbits[129]);
+ if(idbits[129] != 1)
+ printk("(%dK stripe)", idbits[146]);
+ printk(".\n");
+ /* Now the core code will have wrongly decided no DMA
+ so we need to fix this */
+ hwif->ide_dma_off_quietly(drive);
+#ifdef CONFIG_IDEDMA_ONLYDISK
+ if (drive->media == ide_disk)
+#endif
+ hwif->ide_dma_check(drive);
+ } else {
+			/* Non-RAID volume. Fixups to stop the core code
+			   from doing unsupported things */
+ id->field_valid &= 1;
+ id->queue_depth = 0;
+ id->command_set_1 = 0;
+ id->command_set_2 &= 0xC400;
+ id->cfsse &= 0xC000;
+ id->cfs_enable_1 = 0;
+ id->cfs_enable_2 &= 0xC400;
+ id->csf_default &= 0xC000;
+ id->word127 = 0;
+ id->dlf = 0;
+ id->csfo = 0;
+ id->cfa_power = 0;
+ printk(KERN_INFO "%s: Performing identify fixups.\n",
+ drive->name);
+ }
+ }
+}
+
+/**
+ * init_hwif_it821x - set up hwif structs
+ * @hwif: interface to set up
+ *
+ * We do the basic set up of the interface structure. The IT8212
+ * requires several custom handlers so we override the default
+ * IDE DMA handlers appropriately.
+ */
+
+static void __devinit init_hwif_it821x(ide_hwif_t *hwif)
+{
+ struct it821x_dev *idev = kmalloc(sizeof(struct it821x_dev), GFP_KERNEL);
+ u8 conf;
+
+ if(idev == NULL) {
+ printk(KERN_ERR "it821x: out of memory, falling back to legacy behaviour.\n");
+ goto fallback;
+ }
+ memset(idev, 0, sizeof(struct it821x_dev));
+ ide_set_hwifdata(hwif, idev);
+
+ pci_read_config_byte(hwif->pci_dev, 0x50, &conf);
+ if(conf & 1) {
+ idev->smart = 1;
+ hwif->atapi_dma = 0;
+		/* Long I/Os, although allowed in LBA48 space, cause the
+		   onboard firmware to enter the twilight zone */
+ hwif->rqsize = 256;
+ }
+
+ /* Pull the current clocks from 0x50 also */
+ if (conf & (1 << (1 + hwif->channel)))
+ idev->clock_mode = ATA_50;
+ else
+ idev->clock_mode = ATA_66;
+
+ idev->want[0][1] = ATA_ANY;
+ idev->want[1][1] = ATA_ANY;
+
+ /*
+ * Not in the docs but according to the reference driver
+	 * this is necessary.
+ */
+
+ pci_read_config_byte(hwif->pci_dev, 0x08, &conf);
+ if(conf == 0x10) {
+ idev->timing10 = 1;
+ hwif->atapi_dma = 0;
+ if(!idev->smart)
+ printk(KERN_WARNING "it821x: Revision 0x10, workarounds activated.\n");
+ }
+
+ hwif->speedproc = &it821x_tune_chipset;
+ hwif->tuneproc = &it821x_tuneproc;
+
+ /* MWDMA/PIO clock switching for pass through mode */
+ if(!idev->smart) {
+ hwif->ide_dma_start = &it821x_dma_start;
+ hwif->ide_dma_end = &it821x_dma_end;
+ }
+
+ hwif->drives[0].autotune = 1;
+ hwif->drives[1].autotune = 1;
+
+ if (!hwif->dma_base)
+ goto fallback;
+
+ hwif->ultra_mask = 0x7f;
+ hwif->mwdma_mask = 0x07;
+ hwif->swdma_mask = 0x07;
+
+ hwif->ide_dma_check = &it821x_config_drive_for_dma;
+ if (!(hwif->udma_four))
+ hwif->udma_four = ata66_it821x(hwif);
+
+ /*
+ * The BIOS often doesn't set up DMA on this controller
+ * so we always do it.
+ */
+
+ hwif->autodma = 1;
+ hwif->drives[0].autodma = hwif->autodma;
+ hwif->drives[1].autodma = hwif->autodma;
+ return;
+fallback:
+ hwif->autodma = 0;
+ return;
+}
+
+static void __devinit it8212_disable_raid(struct pci_dev *dev)
+{
+ /* Reset local CPU, and set BIOS not ready */
+ pci_write_config_byte(dev, 0x5E, 0x01);
+
+ /* Set to bypass mode, and reset PCI bus */
+ pci_write_config_byte(dev, 0x50, 0x00);
+ pci_write_config_word(dev, PCI_COMMAND,
+ PCI_COMMAND_PARITY | PCI_COMMAND_IO |
+ PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER);
+ pci_write_config_word(dev, 0x40, 0xA0F3);
+
+ pci_write_config_dword(dev,0x4C, 0x02040204);
+ pci_write_config_byte(dev, 0x42, 0x36);
+ pci_write_config_byte(dev, PCI_LATENCY_TIMER, 0);
+}
+
+static unsigned int __devinit init_chipset_it821x(struct pci_dev *dev, const char *name)
+{
+ u8 conf;
+ static char *mode[2] = { "pass through", "smart" };
+
+ /* Force the card into bypass mode if so requested */
+ if (it8212_noraid) {
+ printk(KERN_INFO "it8212: forcing bypass mode.\n");
+ it8212_disable_raid(dev);
+ }
+ pci_read_config_byte(dev, 0x50, &conf);
+ printk(KERN_INFO "it821x: controller in %s mode.\n", mode[conf & 1]);
+ return 0;
+}
+
+#define DECLARE_ITE_DEV(name_str) \
+ { \
+ .name = name_str, \
+ .init_chipset = init_chipset_it821x, \
+ .init_hwif = init_hwif_it821x, \
+ .channels = 2, \
+ .autodma = AUTODMA, \
+ .bootable = ON_BOARD, \
+ .fixup = it821x_fixups \
+ }
+
+static ide_pci_device_t it821x_chipsets[] __devinitdata = {
+ /* 0 */ DECLARE_ITE_DEV("IT8212"),
+};
+
+/**
+ * it821x_init_one - pci layer discovery entry
+ * @dev: PCI device
+ * @id: ident table entry
+ *
+ * Called by the PCI code when it finds an ITE821x controller.
+ * We then use the IDE PCI generic helper to do most of the work.
+ */
+
+static int __devinit it821x_init_one(struct pci_dev *dev, const struct pci_device_id *id)
+{
+ ide_setup_pci_device(dev, &it821x_chipsets[id->driver_data]);
+ return 0;
+}
+
+static struct pci_device_id it821x_pci_tbl[] = {
+ { PCI_VENDOR_ID_ITE, PCI_DEVICE_ID_ITE_8211, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { PCI_VENDOR_ID_ITE, PCI_DEVICE_ID_ITE_8212, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { 0, },
+};
+
+MODULE_DEVICE_TABLE(pci, it821x_pci_tbl);
+
+static struct pci_driver driver = {
+ .name = "ITE821x IDE",
+ .id_table = it821x_pci_tbl,
+ .probe = it821x_init_one,
+};
+
+static int __init it821x_ide_init(void)
+{
+ return ide_pci_register_driver(&driver);
+}
+
+module_init(it821x_ide_init);
+
+module_param_named(noraid, it8212_noraid, int, S_IRUGO);
+MODULE_PARM_DESC(noraid, "Force card into bypass mode");
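+/* Usage note (illustrative): loading the module with "modprobe it821x
+ noraid=1" forces the card into bypass mode before drives are probed. */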
+
+MODULE_AUTHOR("Alan Cox");
+MODULE_DESCRIPTION("PCI driver module for the ITE 821x");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm-io.h"
+
+#include <linux/bio.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+
+#define BIO_POOL_SIZE 256
+
+
+/*-----------------------------------------------------------------
+ * Bio set, move this to bio.c
+ *---------------------------------------------------------------*/
+#define BV_NAME_SIZE 16
+struct biovec_pool {
+ int nr_vecs;
+ char name[BV_NAME_SIZE];
+ kmem_cache_t *slab;
+ mempool_t *pool;
+ atomic_t allocated; /* FIXME: debug */
+};
+
+#define BIOVEC_NR_POOLS 6
+struct bio_set {
+ char name[BV_NAME_SIZE];
+ kmem_cache_t *bio_slab;
+ mempool_t *bio_pool;
+ struct biovec_pool pools[BIOVEC_NR_POOLS];
+};
+
+static void bio_set_exit(struct bio_set *bs)
+{
+ unsigned i;
+ struct biovec_pool *bp;
+
+ if (bs->bio_pool)
+ mempool_destroy(bs->bio_pool);
+
+ if (bs->bio_slab)
+ kmem_cache_destroy(bs->bio_slab);
+
+ for (i = 0; i < BIOVEC_NR_POOLS; i++) {
+ bp = bs->pools + i;
+ if (bp->pool)
+ mempool_destroy(bp->pool);
+
+ if (bp->slab)
+ kmem_cache_destroy(bp->slab);
+ }
+}
+
+static void mk_name(char *str, size_t len, const char *prefix, unsigned count)
+{
+ snprintf(str, len, "%s-%u", prefix, count);
+}
+
+static int bio_set_init(struct bio_set *bs, const char *slab_prefix,
+ unsigned pool_entries, unsigned scale)
+{
+ /* FIXME: this must match bvec_index(), why not go the
+ * whole hog and have a pool per power of 2 ? */
+ static unsigned _vec_lengths[BIOVEC_NR_POOLS] = {
+ 1, 4, 16, 64, 128, BIO_MAX_PAGES
+ };
+
+
+ unsigned i, size;
+ struct biovec_pool *bp;
+
+ /* zero the bs so we can tear down properly on error */
+ memset(bs, 0, sizeof(*bs));
+
+ /*
+ * Set up the bio pool.
+ */
+ snprintf(bs->name, sizeof(bs->name), "%s-bio", slab_prefix);
+
+ bs->bio_slab = kmem_cache_create(bs->name, sizeof(struct bio), 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (!bs->bio_slab) {
+ DMWARN("can't init bio slab");
+ goto bad;
+ }
+
+ bs->bio_pool = mempool_create(pool_entries, mempool_alloc_slab,
+ mempool_free_slab, bs->bio_slab);
+ if (!bs->bio_pool) {
+ DMWARN("can't init bio pool");
+ goto bad;
+ }
+
+ /*
+ * Set up the biovec pools.
+ */
+ for (i = 0; i < BIOVEC_NR_POOLS; i++) {
+ bp = bs->pools + i;
+ bp->nr_vecs = _vec_lengths[i];
+ atomic_set(&bp->allocated, 1); /* FIXME: debug */
+
+
+ size = bp->nr_vecs * sizeof(struct bio_vec);
+
+ mk_name(bp->name, sizeof(bp->name), slab_prefix, i);
+ bp->slab = kmem_cache_create(bp->name, size, 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (!bp->slab) {
+ DMWARN("can't init biovec slab cache");
+ goto bad;
+ }
+
+ if (i >= scale)
+ pool_entries >>= 1;
+
+ bp->pool = mempool_create(pool_entries, mempool_alloc_slab,
+ mempool_free_slab, bp->slab);
+ if (!bp->pool) {
+ DMWARN("can't init biovec mempool");
+ goto bad;
+ }
+ }
+
+ return 0;
+
+ bad:
+ bio_set_exit(bs);
+ return -ENOMEM;
+}
+
+/* FIXME: blech */
+static inline unsigned bvec_index(unsigned nr)
+{
+ switch (nr) {
+ case 1: return 0;
+ case 2 ... 4: return 1;
+ case 5 ... 16: return 2;
+ case 17 ... 64: return 3;
+ case 65 ... 128:return 4;
+ case 129 ... BIO_MAX_PAGES: return 5;
+ }
+
+ BUG();
+ return 0;
+}
+
+static inline void bs_bio_init(struct bio *bio)
+{
+ bio->bi_next = NULL;
+ bio->bi_flags = 1 << BIO_UPTODATE;
+ bio->bi_rw = 0;
+ bio->bi_vcnt = 0;
+ bio->bi_idx = 0;
+ bio->bi_phys_segments = 0;
+ bio->bi_hw_segments = 0;
+ bio->bi_size = 0;
+ bio->bi_max_vecs = 0;
+ bio->bi_end_io = NULL;
+ atomic_set(&bio->bi_cnt, 1);
+ bio->bi_private = NULL;
+}
+
+static unsigned _bio_count = 0;
+struct bio *bio_set_alloc(struct bio_set *bs, int gfp_mask, int nr_iovecs)
+{
+ struct biovec_pool *bp;
+ struct bio_vec *bv = NULL;
+ unsigned long idx;
+ struct bio *bio;
+
+ bio = mempool_alloc(bs->bio_pool, gfp_mask);
+ if (unlikely(!bio))
+ return NULL;
+
+ bio_init(bio);
+
+ if (likely(nr_iovecs)) {
+ idx = bvec_index(nr_iovecs);
+ bp = bs->pools + idx;
+ bv = mempool_alloc(bp->pool, gfp_mask);
+ if (!bv) {
+ mempool_free(bio, bs->bio_pool);
+ return NULL;
+ }
+
+ memset(bv, 0, bp->nr_vecs * sizeof(*bv));
+ bio->bi_flags |= idx << BIO_POOL_OFFSET;
+ bio->bi_max_vecs = bp->nr_vecs;
+ atomic_inc(&bp->allocated);
+ }
+
+ bio->bi_io_vec = bv;
+ return bio;
+}
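+/*
+ * Note: the pool index stashed in bi_flags above is what lets
+ * bio_set_free() below find the owning biovec pool again via
+ * BIO_POOL_IDX().
+ */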
+
+static void bio_set_free(struct bio_set *bs, struct bio *bio)
+{
+ struct biovec_pool *bp = bs->pools + BIO_POOL_IDX(bio);
+
+ if (atomic_dec_and_test(&bp->allocated))
+ BUG();
+
+ mempool_free(bio->bi_io_vec, bp->pool);
+ mempool_free(bio, bs->bio_pool);
+}
+
+/*-----------------------------------------------------------------
+ * dm-io proper
+ *---------------------------------------------------------------*/
+static struct bio_set _bios;
+
+/* FIXME: can we shrink this ? */
+struct io {
+ unsigned long error;
+ atomic_t count;
+ struct task_struct *sleeper;
+ io_notify_fn callback;
+ void *context;
+};
+
+/*
+ * io contexts are only dynamically allocated for asynchronous
+ * io. Since async io is likely to be the majority of io, we'll
+ * have the same number of io contexts as buffer heads! (FIXME:
+ * must reduce this).
+ */
+static unsigned _num_ios;
+static mempool_t *_io_pool;
+
+static void *alloc_io(int gfp_mask, void *pool_data)
+{
+ return kmalloc(sizeof(struct io), gfp_mask);
+}
+
+static void free_io(void *element, void *pool_data)
+{
+ kfree(element);
+}
+
+static unsigned int pages_to_ios(unsigned int pages)
+{
+ return 4 * pages; /* too many ? */
+}
+
+static int resize_pool(unsigned int new_ios)
+{
+ int r = 0;
+
+ if (_io_pool) {
+ if (new_ios == 0) {
+ /* free off the pool */
+ mempool_destroy(_io_pool);
+ _io_pool = NULL;
+ bio_set_exit(&_bios);
+
+ } else {
+ /* resize the pool */
+ r = mempool_resize(_io_pool, new_ios, GFP_KERNEL);
+ }
+
+ } else {
+ /* create new pool */
+ _io_pool = mempool_create(new_ios, alloc_io, free_io, NULL);
+ if (!_io_pool)
+ return -ENOMEM;
+
+ r = bio_set_init(&_bios, "dm-io", 512, 1);
+ if (r) {
+ mempool_destroy(_io_pool);
+ _io_pool = NULL;
+ }
+ }
+
+ if (!r)
+ _num_ios = new_ios;
+
+ return r;
+}
+
+int dm_io_get(unsigned int num_pages)
+{
+ return resize_pool(_num_ios + pages_to_ios(num_pages));
+}
+
+void dm_io_put(unsigned int num_pages)
+{
+ resize_pool(_num_ios - pages_to_ios(num_pages));
+}
+
+/*-----------------------------------------------------------------
+ * We need to keep track of which region a bio is doing io for.
+ * In order to save a memory allocation we store this in the
+ * last bvec, which we know is unused (blech).
+ *---------------------------------------------------------------*/
+static inline void bio_set_region(struct bio *bio, unsigned region)
+{
+ bio->bi_io_vec[bio->bi_max_vecs - 1].bv_len = region;
+}
+
+static inline unsigned bio_get_region(struct bio *bio)
+{
+ return bio->bi_io_vec[bio->bi_max_vecs - 1].bv_len;
+}
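+/*
+ * The "unused" bvec is real: do_region() below allocates each bio with
+ * one bvec more than the data needs, precisely so the region number can
+ * be stashed here without another memory allocation.
+ */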
+
+/*-----------------------------------------------------------------
+ * We need an io object to keep track of the number of bios that
+ * have been dispatched for a particular io.
+ *---------------------------------------------------------------*/
+static void dec_count(struct io *io, unsigned int region, int error)
+{
+ if (error)
+ set_bit(region, &io->error);
+
+ if (atomic_dec_and_test(&io->count)) {
+ if (io->sleeper)
+ wake_up_process(io->sleeper);
+
+ else {
+ int r = io->error;
+ io_notify_fn fn = io->callback;
+ void *context = io->context;
+
+ mempool_free(io, _io_pool);
+ fn(r, context);
+ }
+ }
+}
+
+/* FIXME Move this to bio.h? */
+static void zero_fill_bio(struct bio *bio)
+{
+ unsigned long flags;
+ struct bio_vec *bv;
+ int i;
+
+ bio_for_each_segment(bv, bio, i) {
+ char *data = bvec_kmap_irq(bv, &flags);
+ memset(data, 0, bv->bv_len);
+ flush_dcache_page(bv->bv_page);
+ bvec_kunmap_irq(data, &flags);
+ }
+}
+
+static int endio(struct bio *bio, unsigned int done, int error)
+{
+ struct io *io = (struct io *) bio->bi_private;
+
+ /* keep going until we've finished */
+ if (bio->bi_size)
+ return 1;
+
+ if (error && bio_data_dir(bio) == READ)
+ zero_fill_bio(bio);
+
+ dec_count(io, bio_get_region(bio), error);
+ bio_put(bio);
+
+ return 0;
+}
+
+static void bio_dtr(struct bio *bio)
+{
+ _bio_count--;
+ bio_set_free(&_bios, bio);
+}
+
+/*-----------------------------------------------------------------
+ * These little objects provide an abstraction for getting a new
+ * destination page for io.
+ *---------------------------------------------------------------*/
+struct dpages {
+ void (*get_page)(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset);
+ void (*next_page)(struct dpages *dp);
+
+ unsigned context_u;
+ void *context_ptr;
+};
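+/*
+ * A dpages client iterates roughly like this (a sketch; do_region()
+ * below is the real consumer):
+ *
+ * dp->get_page(dp, &page, &len, &offset);  current page + usable span
+ * ... consume up to len bytes at offset within the page ...
+ * dp->next_page(dp);                       advance to the next page
+ *
+ * context_u and context_ptr are private to the three implementations.
+ */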
+
+/*
+ * Functions for getting the pages from a list.
+ */
+static void list_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ unsigned o = dp->context_u;
+ struct page_list *pl = (struct page_list *) dp->context_ptr;
+
+ *p = pl->page;
+ *len = PAGE_SIZE - o;
+ *offset = o;
+}
+
+static void list_next_page(struct dpages *dp)
+{
+ struct page_list *pl = (struct page_list *) dp->context_ptr;
+ dp->context_ptr = pl->next;
+ dp->context_u = 0;
+}
+
+static void list_dp_init(struct dpages *dp, struct page_list *pl, unsigned offset)
+{
+ dp->get_page = list_get_page;
+ dp->next_page = list_next_page;
+ dp->context_u = offset;
+ dp->context_ptr = pl;
+}
+
+/*
+ * Functions for getting the pages from a bvec.
+ */
+static void bvec_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr;
+ *p = bvec->bv_page;
+ *len = bvec->bv_len;
+ *offset = bvec->bv_offset;
+}
+
+static void bvec_next_page(struct dpages *dp)
+{
+ struct bio_vec *bvec = (struct bio_vec *) dp->context_ptr;
+ dp->context_ptr = bvec + 1;
+}
+
+static void bvec_dp_init(struct dpages *dp, struct bio_vec *bvec)
+{
+ dp->get_page = bvec_get_page;
+ dp->next_page = bvec_next_page;
+ dp->context_ptr = bvec;
+}
+
+static void vm_get_page(struct dpages *dp,
+ struct page **p, unsigned long *len, unsigned *offset)
+{
+ *p = vmalloc_to_page(dp->context_ptr);
+ *offset = dp->context_u;
+ *len = PAGE_SIZE - dp->context_u;
+}
+
+static void vm_next_page(struct dpages *dp)
+{
+ dp->context_ptr += PAGE_SIZE - dp->context_u;
+ dp->context_u = 0;
+}
+
+static void vm_dp_init(struct dpages *dp, void *data)
+{
+ dp->get_page = vm_get_page;
+ dp->next_page = vm_next_page;
+ dp->context_u = ((unsigned long) data) & (PAGE_SIZE - 1);
+ dp->context_ptr = data;
+}
+
+/*-----------------------------------------------------------------
+ * IO routines that accept a list of pages.
+ *---------------------------------------------------------------*/
+static void do_region(int rw, unsigned int region, struct io_region *where,
+ struct dpages *dp, struct io *io)
+{
+ struct bio *bio;
+ struct page *page;
+ unsigned long len;
+ unsigned offset;
+ unsigned num_bvecs;
+ sector_t remaining = where->count;
+
+ while (remaining) {
+ /*
+ * Allocate a suitably sized bio, we add an extra
+ * bvec for bio_get/set_region().
+ */
+ num_bvecs = (remaining / (PAGE_SIZE >> 9)) + 2;
+ _bio_count++;
+ bio = bio_set_alloc(&_bios, GFP_NOIO, num_bvecs);
+ bio->bi_sector = where->sector + (where->count - remaining);
+ bio->bi_bdev = where->bdev;
+ bio->bi_end_io = endio;
+ bio->bi_private = io;
+ bio->bi_destructor = bio_dtr;
+ bio_set_region(bio, region);
+
+ /*
+ * Try and add as many pages as possible.
+ */
+ while (remaining) {
+ dp->get_page(dp, &page, &len, &offset);
+ len = min(len, to_bytes(remaining));
+ if (!bio_add_page(bio, page, len, offset))
+ break;
+
+ offset = 0;
+ remaining -= to_sector(len);
+ dp->next_page(dp);
+ }
+
+ atomic_inc(&io->count);
+ submit_bio(rw, bio);
+ }
+}
+
+static void dispatch_io(int rw, unsigned int num_regions,
+ struct io_region *where, struct dpages *dp,
+ struct io *io, int sync)
+{
+ int i;
+ struct dpages old_pages = *dp;
+
+ if (sync)
+ rw |= (1 << BIO_RW_SYNC);
+
+ /*
+ * For multiple regions we need to be careful to rewind
+ * the dp object for each call to do_region.
+ */
+ for (i = 0; i < num_regions; i++) {
+ *dp = old_pages;
+ if (where[i].count)
+ do_region(rw, i, where + i, dp, io);
+ }
+
+ /*
+ * Drop the extra reference that we were holding to avoid
+ * the io being completed too early.
+ */
+ dec_count(io, 0, 0);
+}
+
+static int sync_io(unsigned int num_regions, struct io_region *where,
+ int rw, struct dpages *dp, unsigned long *error_bits)
+{
+ struct io io;
+
+ if (num_regions > 1 && rw != WRITE) {
+ WARN_ON(1);
+ return -EIO;
+ }
+
+ io.error = 0;
+ atomic_set(&io.count, 1); /* see dispatch_io() */
+ io.sleeper = current;
+
+ dispatch_io(rw, num_regions, where, dp, &io, 1);
+
+ while (1) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+
+ if (!atomic_read(&io.count) || signal_pending(current))
+ break;
+
+ io_schedule();
+ }
+ set_current_state(TASK_RUNNING);
+
+ if (atomic_read(&io.count))
+ return -EINTR;
+
+ *error_bits = io.error;
+ return io.error ? -EIO : 0;
+}
+
+static int async_io(unsigned int num_regions, struct io_region *where, int rw,
+ struct dpages *dp, io_notify_fn fn, void *context)
+{
+ struct io *io;
+
+ if (num_regions > 1 && rw != WRITE) {
+ WARN_ON(1);
+ fn(1, context);
+ return -EIO;
+ }
+
+ io = mempool_alloc(_io_pool, GFP_NOIO);
+ io->error = 0;
+ atomic_set(&io->count, 1); /* see dispatch_io() */
+ io->sleeper = NULL;
+ io->callback = fn;
+ io->context = context;
+
+ dispatch_io(rw, num_regions, where, dp, io, 0);
+ return 0;
+}
+
+int dm_io_sync(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ unsigned long *error_bits)
+{
+ struct dpages dp;
+ list_dp_init(&dp, pl, offset);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_sync_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, unsigned long *error_bits)
+{
+ struct dpages dp;
+ bvec_dp_init(&dp, bvec);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_sync_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, unsigned long *error_bits)
+{
+ struct dpages dp;
+ vm_dp_init(&dp, data);
+ return sync_io(num_regions, where, rw, &dp, error_bits);
+}
+
+int dm_io_async(unsigned int num_regions, struct io_region *where, int rw,
+ struct page_list *pl, unsigned int offset,
+ io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ list_dp_init(&dp, pl, offset);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
+
+int dm_io_async_bvec(unsigned int num_regions, struct io_region *where, int rw,
+ struct bio_vec *bvec, io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ bvec_dp_init(&dp, bvec);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
+
+int dm_io_async_vm(unsigned int num_regions, struct io_region *where, int rw,
+ void *data, io_notify_fn fn, void *context)
+{
+ struct dpages dp;
+ vm_dp_init(&dp, data);
+ return async_io(num_regions, where, rw, &dp, fn, context);
+}
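+/*
+ * Example (illustrative): synchronously write a single sector held in a
+ * vmalloc'd buffer to one region, much as the disk dirty log below does
+ * for its header:
+ *
+ * struct io_region where;
+ * unsigned long ebits;
+ *
+ * where.bdev = bdev; where.sector = 0; where.count = 1;
+ * r = dm_io_sync_vm(1, &where, WRITE, buffer, &ebits);
+ *
+ * On failure each set bit in ebits identifies a region whose io failed.
+ */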
+
+EXPORT_SYMBOL(dm_io_get);
+EXPORT_SYMBOL(dm_io_put);
+EXPORT_SYMBOL(dm_io_sync);
+EXPORT_SYMBOL(dm_io_async);
+EXPORT_SYMBOL(dm_io_sync_bvec);
+EXPORT_SYMBOL(dm_io_async_bvec);
+EXPORT_SYMBOL(dm_io_sync_vm);
+EXPORT_SYMBOL(dm_io_async_vm);
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the LGPL.
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/vmalloc.h>
+
+#include "dm-log.h"
+#include "dm-io.h"
+
+static LIST_HEAD(_log_types);
+static spinlock_t _lock = SPIN_LOCK_UNLOCKED;
+
+int dm_register_dirty_log_type(struct dirty_log_type *type)
+{
+ if (!try_module_get(type->module))
+ return -EINVAL;
+
+ spin_lock(&_lock);
+ type->use_count = 0;
+ list_add(&type->list, &_log_types);
+ spin_unlock(&_lock);
+
+ return 0;
+}
+
+int dm_unregister_dirty_log_type(struct dirty_log_type *type)
+{
+ spin_lock(&_lock);
+
+ if (type->use_count)
+ DMWARN("Attempt to unregister a log type that is still in use");
+ else {
+ list_del(&type->list);
+ module_put(type->module);
+ }
+
+ spin_unlock(&_lock);
+
+ return 0;
+}
+
+static struct dirty_log_type *get_type(const char *type_name)
+{
+ struct dirty_log_type *type;
+
+ spin_lock(&_lock);
+ list_for_each_entry (type, &_log_types, list)
+ if (!strcmp(type_name, type->name)) {
+ type->use_count++;
+ spin_unlock(&_lock);
+ return type;
+ }
+
+ spin_unlock(&_lock);
+ return NULL;
+}
+
+static void put_type(struct dirty_log_type *type)
+{
+ spin_lock(&_lock);
+ type->use_count--;
+ spin_unlock(&_lock);
+}
+
+struct dirty_log *dm_create_dirty_log(const char *type_name, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ struct dirty_log_type *type;
+ struct dirty_log *log;
+
+ log = kmalloc(sizeof(*log), GFP_KERNEL);
+ if (!log)
+ return NULL;
+
+ type = get_type(type_name);
+ if (!type) {
+ kfree(log);
+ return NULL;
+ }
+
+ log->type = type;
+ if (type->ctr(log, ti, argc, argv)) {
+ kfree(log);
+ put_type(type);
+ return NULL;
+ }
+
+ return log;
+}
+
+void dm_destroy_dirty_log(struct dirty_log *log)
+{
+ log->type->dtr(log);
+ put_type(log->type);
+ kfree(log);
+}
+
+/*-----------------------------------------------------------------
+ * Persistent and core logs share a lot of their implementation.
+ * FIXME: need a reload method to be called from a resume
+ *---------------------------------------------------------------*/
+/*
+ * Magic for persistent mirrors: "MiRr"
+ */
+#define MIRROR_MAGIC 0x4D695272
+
+/*
+ * The on-disk version of the metadata.
+ */
+#define MIRROR_DISK_VERSION 1
+#define LOG_OFFSET 2
+
+struct log_header {
+ uint32_t magic;
+
+ /*
+ * Simple, incrementing version. no backward
+ * compatibility.
+ */
+ uint32_t version;
+ sector_t nr_regions;
+};
+
+struct log_c {
+ struct dm_target *ti;
+ int touched;
+ sector_t region_size;
+ unsigned int region_count;
+ region_t sync_count;
+
+ unsigned bitset_uint32_count;
+ uint32_t *clean_bits;
+ uint32_t *sync_bits;
+ uint32_t *recovering_bits; /* FIXME: this seems excessive */
+
+ int sync_search;
+
+ /* Resync flag */
+ enum sync {
+ DEFAULTSYNC, /* Synchronize if necessary */
+ NOSYNC, /* Devices known to be already in sync */
+ FORCESYNC, /* Force a sync to happen */
+ } sync;
+
+ /*
+ * Disk log fields
+ */
+ struct dm_dev *log_dev;
+ struct log_header header;
+
+ struct io_region header_location;
+ struct log_header *disk_header;
+
+ struct io_region bits_location;
+ uint32_t *disk_bits;
+};
+
+/*
+ * The touched member needs to be updated every time we access
+ * one of the bitsets.
+ */
+static inline int log_test_bit(uint32_t *bs, unsigned bit)
+{
+ return test_bit(bit, (unsigned long *) bs) ? 1 : 0;
+}
+
+static inline void log_set_bit(struct log_c *l,
+ uint32_t *bs, unsigned bit)
+{
+ set_bit(bit, (unsigned long *) bs);
+ l->touched = 1;
+}
+
+static inline void log_clear_bit(struct log_c *l,
+ uint32_t *bs, unsigned bit)
+{
+ clear_bit(bit, (unsigned long *) bs);
+ l->touched = 1;
+}
+
+/*----------------------------------------------------------------
+ * Header IO
+ *--------------------------------------------------------------*/
+static void header_to_disk(struct log_header *core, struct log_header *disk)
+{
+ disk->magic = cpu_to_le32(core->magic);
+ disk->version = cpu_to_le32(core->version);
+ disk->nr_regions = cpu_to_le64(core->nr_regions);
+}
+
+static void header_from_disk(struct log_header *core, struct log_header *disk)
+{
+ core->magic = le32_to_cpu(disk->magic);
+ core->version = le32_to_cpu(disk->version);
+ core->nr_regions = le64_to_cpu(disk->nr_regions);
+}
+
+static int read_header(struct log_c *log)
+{
+ int r;
+ unsigned long ebits;
+
+ r = dm_io_sync_vm(1, &log->header_location, READ,
+ log->disk_header, &ebits);
+ if (r)
+ return r;
+
+ header_from_disk(&log->header, log->disk_header);
+
+ /* New log required? */
+ if (log->sync != DEFAULTSYNC || log->header.magic != MIRROR_MAGIC) {
+ log->header.magic = MIRROR_MAGIC;
+ log->header.version = MIRROR_DISK_VERSION;
+ log->header.nr_regions = 0;
+ }
+
+ if (log->header.version != MIRROR_DISK_VERSION) {
+ DMWARN("incompatible disk log version");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static inline int write_header(struct log_c *log)
+{
+ unsigned long ebits;
+
+ header_to_disk(&log->header, log->disk_header);
+ return dm_io_sync_vm(1, &log->header_location, WRITE,
+ log->disk_header, &ebits);
+}
+
+/*----------------------------------------------------------------
+ * Bits IO
+ *--------------------------------------------------------------*/
+static inline void bits_to_core(uint32_t *core, uint32_t *disk, unsigned count)
+{
+ unsigned i;
+
+ for (i = 0; i < count; i++)
+ core[i] = le32_to_cpu(disk[i]);
+}
+
+static inline void bits_to_disk(uint32_t *core, uint32_t *disk, unsigned count)
+{
+ unsigned i;
+
+ /* copy across the clean/dirty bitset */
+ for (i = 0; i < count; i++)
+ disk[i] = cpu_to_le32(core[i]);
+}
+
+static int read_bits(struct log_c *log)
+{
+ int r;
+ unsigned long ebits;
+
+ r = dm_io_sync_vm(1, &log->bits_location, READ,
+ log->disk_bits, &ebits);
+ if (r)
+ return r;
+
+ bits_to_core(log->clean_bits, log->disk_bits,
+ log->bitset_uint32_count);
+ return 0;
+}
+
+static int write_bits(struct log_c *log)
+{
+ unsigned long ebits;
+ bits_to_disk(log->clean_bits, log->disk_bits,
+ log->bitset_uint32_count);
+ return dm_io_sync_vm(1, &log->bits_location, WRITE,
+ log->disk_bits, &ebits);
+}
+
+/*----------------------------------------------------------------
+ * core log constructor/destructor
+ *
+ * argv contains region_size followed optionally by [no]sync
+ *--------------------------------------------------------------*/
+#define BYTE_SHIFT 3
+static int core_ctr(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ enum sync sync = DEFAULTSYNC;
+
+ struct log_c *lc;
+ sector_t region_size;
+ unsigned int region_count;
+ size_t bitset_size;
+
+ if (argc < 1 || argc > 2) {
+ DMWARN("wrong number of arguments to mirror log");
+ return -EINVAL;
+ }
+
+ if (argc > 1) {
+ if (!strcmp(argv[1], "sync"))
+ sync = FORCESYNC;
+ else if (!strcmp(argv[1], "nosync"))
+ sync = NOSYNC;
+ else {
+ DMWARN("unrecognised sync argument to mirror log: %s",
+ argv[1]);
+ return -EINVAL;
+ }
+ }
+
+ if (sscanf(argv[0], SECTOR_FORMAT, &region_size) != 1) {
+ DMWARN("invalid region size string");
+ return -EINVAL;
+ }
+
+ region_count = dm_div_up(ti->len, region_size);
+
+ lc = kmalloc(sizeof(*lc), GFP_KERNEL);
+ if (!lc) {
+ DMWARN("couldn't allocate core log");
+ return -ENOMEM;
+ }
+
+ lc->ti = ti;
+ lc->touched = 0;
+ lc->region_size = region_size;
+ lc->region_count = region_count;
+ lc->sync = sync;
+
+ /*
+ * Work out how many words we need to hold the bitset.
+ */
+ bitset_size = dm_round_up(region_count,
+ sizeof(*lc->clean_bits) << BYTE_SHIFT);
+ bitset_size >>= BYTE_SHIFT;
+
+ lc->bitset_uint32_count = bitset_size / 4;
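+ /*
+ * e.g. (illustrative) region_count = 1000: rounded up to 1024
+ * bits (a multiple of the 32-bit word size) this gives
+ * bitset_size = 128 bytes and bitset_uint32_count = 32.
+ */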
+ lc->clean_bits = vmalloc(bitset_size);
+ if (!lc->clean_bits) {
+ DMWARN("couldn't allocate clean bitset");
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->clean_bits, -1, bitset_size);
+
+ lc->sync_bits = vmalloc(bitset_size);
+ if (!lc->sync_bits) {
+ DMWARN("couldn't allocate sync bitset");
+ vfree(lc->clean_bits);
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->sync_bits, (sync == NOSYNC) ? -1 : 0, bitset_size);
+ lc->sync_count = (sync == NOSYNC) ? region_count : 0;
+
+ lc->recovering_bits = vmalloc(bitset_size);
+ if (!lc->recovering_bits) {
+ DMWARN("couldn't allocate recovering bitset");
+ vfree(lc->sync_bits);
+ vfree(lc->clean_bits);
+ kfree(lc);
+ return -ENOMEM;
+ }
+ memset(lc->recovering_bits, 0, bitset_size);
+ lc->sync_search = 0;
+ log->context = lc;
+ return 0;
+}
+
+static void core_dtr(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ vfree(lc->clean_bits);
+ vfree(lc->sync_bits);
+ vfree(lc->recovering_bits);
+ kfree(lc);
+}
+
+/*----------------------------------------------------------------
+ * disk log constructor/destructor
+ *
+ * argv contains log_device region_size followed optionally by [no]sync
+ *--------------------------------------------------------------*/
+static int disk_ctr(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv)
+{
+ int r;
+ size_t size;
+ struct log_c *lc;
+ struct dm_dev *dev;
+
+ if (argc < 2 || argc > 3) {
+ DMWARN("wrong number of arguments to disk mirror log");
+ return -EINVAL;
+ }
+
+ r = dm_get_device(ti, argv[0], 0, 0 /* FIXME */,
+ FMODE_READ | FMODE_WRITE, &dev);
+ if (r)
+ return r;
+
+ r = core_ctr(log, ti, argc - 1, argv + 1);
+ if (r) {
+ dm_put_device(ti, dev);
+ return r;
+ }
+
+ lc = (struct log_c *) log->context;
+ lc->log_dev = dev;
+
+ /* setup the disk header fields */
+ lc->header_location.bdev = lc->log_dev->bdev;
+ lc->header_location.sector = 0;
+ lc->header_location.count = 1;
+
+ /*
+ * We can't read less than this amount, even though we'll
+ * not be using most of this space.
+ */
+ lc->disk_header = vmalloc(1 << SECTOR_SHIFT);
+ if (!lc->disk_header)
+ goto bad;
+
+ /* setup the disk bitset fields */
+ lc->bits_location.bdev = lc->log_dev->bdev;
+ lc->bits_location.sector = LOG_OFFSET;
+
+ size = dm_round_up(lc->bitset_uint32_count * sizeof(uint32_t),
+ 1 << SECTOR_SHIFT);
+ lc->bits_location.count = size >> SECTOR_SHIFT;
+ lc->disk_bits = vmalloc(size);
+ if (!lc->disk_bits) {
+ vfree(lc->disk_header);
+ goto bad;
+ }
+ return 0;
+
+ bad:
+ dm_put_device(ti, lc->log_dev);
+ core_dtr(log);
+ return -ENOMEM;
+}
+
+static void disk_dtr(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ dm_put_device(lc->ti, lc->log_dev);
+ vfree(lc->disk_header);
+ vfree(lc->disk_bits);
+ core_dtr(log);
+}
+
+static int count_bits32(uint32_t *addr, unsigned size)
+{
+ int count = 0, i;
+
+ for (i = 0; i < size; i++) {
+ count += hweight32(*(addr+i));
+ }
+ return count;
+}
+
+static int disk_resume(struct dirty_log *log)
+{
+ int r;
+ unsigned i;
+ struct log_c *lc = (struct log_c *) log->context;
+ size_t size = lc->bitset_uint32_count * sizeof(uint32_t);
+
+ /* read the disk header */
+ r = read_header(lc);
+ if (r)
+ return r;
+
+ /* read the bits */
+ r = read_bits(lc);
+ if (r)
+ return r;
+
+ /* set or clear any new bits */
+ if (lc->sync == NOSYNC)
+ for (i = lc->header.nr_regions; i < lc->region_count; i++)
+ /* FIXME: amazingly inefficient */
+ log_set_bit(lc, lc->clean_bits, i);
+ else
+ for (i = lc->header.nr_regions; i < lc->region_count; i++)
+ /* FIXME: amazingly inefficient */
+ log_clear_bit(lc, lc->clean_bits, i);
+
+ /* copy clean across to sync */
+ memcpy(lc->sync_bits, lc->clean_bits, size);
+ lc->sync_count = count_bits32(lc->clean_bits, lc->bitset_uint32_count);
+
+ /* write the bits */
+ r = write_bits(lc);
+ if (r)
+ return r;
+
+ /* set the correct number of regions in the header */
+ lc->header.nr_regions = lc->region_count;
+
+ /* write the new header */
+ return write_header(lc);
+}
+
+static sector_t core_get_region_size(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return lc->region_size;
+}
+
+static int core_is_clean(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return log_test_bit(lc->clean_bits, region);
+}
+
+static int core_in_sync(struct dirty_log *log, region_t region, int block)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ return log_test_bit(lc->sync_bits, region);
+}
+
+static int core_flush(struct dirty_log *log)
+{
+ /* no op */
+ return 0;
+}
+
+static int disk_flush(struct dirty_log *log)
+{
+ int r;
+ struct log_c *lc = (struct log_c *) log->context;
+
+ /* only write if the log has changed */
+ if (!lc->touched)
+ return 0;
+
+ r = write_bits(lc);
+ if (!r)
+ lc->touched = 0;
+
+ return r;
+}
+
+static void core_mark_region(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ log_clear_bit(lc, lc->clean_bits, region);
+}
+
+static void core_clear_region(struct dirty_log *log, region_t region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+ log_set_bit(lc, lc->clean_bits, region);
+}
+
+static int core_get_resync_work(struct dirty_log *log, region_t *region)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ if (lc->sync_search >= lc->region_count)
+ return 0;
+
+ do {
+ *region = find_next_zero_bit((unsigned long *) lc->sync_bits,
+ lc->region_count,
+ lc->sync_search);
+ lc->sync_search = *region + 1;
+
+ if (*region == lc->region_count)
+ return 0;
+
+ } while (log_test_bit(lc->recovering_bits, *region));
+
+ log_set_bit(lc, lc->recovering_bits, *region);
+ return 1;
+}
+
+static void core_complete_resync_work(struct dirty_log *log, region_t region,
+ int success)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ log_clear_bit(lc, lc->recovering_bits, region);
+ if (success) {
+ log_set_bit(lc, lc->sync_bits, region);
+ lc->sync_count++;
+ }
+}
+
+static region_t core_get_sync_count(struct dirty_log *log)
+{
+ struct log_c *lc = (struct log_c *) log->context;
+
+ return lc->sync_count;
+}
+
+#define DMEMIT_SYNC \
+ if (lc->sync != DEFAULTSYNC) \
+ DMEMIT("%ssync ", lc->sync == NOSYNC ? "no" : "")
+
+static int core_status(struct dirty_log *log, status_type_t status,
+ char *result, unsigned int maxlen)
+{
+ int sz = 0;
+ struct log_c *lc = log->context;
+
+ switch(status) {
+ case STATUSTYPE_INFO:
+ break;
+
+ case STATUSTYPE_TABLE:
+ DMEMIT("%s %u " SECTOR_FORMAT " ", log->type->name,
+ lc->sync == DEFAULTSYNC ? 1 : 2, lc->region_size);
+ DMEMIT_SYNC;
+ }
+
+ return sz;
+}
+
+static int disk_status(struct dirty_log *log, status_type_t status,
+ char *result, unsigned int maxlen)
+{
+ int sz = 0;
+ char buffer[16];
+ struct log_c *lc = log->context;
+
+ switch(status) {
+ case STATUSTYPE_INFO:
+ break;
+
+ case STATUSTYPE_TABLE:
+ format_dev_t(buffer, lc->log_dev->bdev->bd_dev);
+ DMEMIT("%s %u %s " SECTOR_FORMAT " ", log->type->name,
+ lc->sync == DEFAULTSYNC ? 2 : 3, buffer,
+ lc->region_size);
+ DMEMIT_SYNC;
+ }
+
+ return sz;
+}
+
+static struct dirty_log_type _core_type = {
+ .name = "core",
+ .module = THIS_MODULE,
+ .ctr = core_ctr,
+ .dtr = core_dtr,
+ .get_region_size = core_get_region_size,
+ .is_clean = core_is_clean,
+ .in_sync = core_in_sync,
+ .flush = core_flush,
+ .mark_region = core_mark_region,
+ .clear_region = core_clear_region,
+ .get_resync_work = core_get_resync_work,
+ .complete_resync_work = core_complete_resync_work,
+ .get_sync_count = core_get_sync_count,
+ .status = core_status,
+};
+
+static struct dirty_log_type _disk_type = {
+ .name = "disk",
+ .module = THIS_MODULE,
+ .ctr = disk_ctr,
+ .dtr = disk_dtr,
+ .suspend = disk_flush,
+ .resume = disk_resume,
+ .get_region_size = core_get_region_size,
+ .is_clean = core_is_clean,
+ .in_sync = core_in_sync,
+ .flush = disk_flush,
+ .mark_region = core_mark_region,
+ .clear_region = core_clear_region,
+ .get_resync_work = core_get_resync_work,
+ .complete_resync_work = core_complete_resync_work,
+ .get_sync_count = core_get_sync_count,
+ .status = disk_status,
+};
+
+int __init dm_dirty_log_init(void)
+{
+ int r;
+
+ r = dm_register_dirty_log_type(&_core_type);
+ if (r)
+ DMWARN("couldn't register core log");
+
+ r = dm_register_dirty_log_type(&_disk_type);
+ if (r) {
+ DMWARN("couldn't register disk type");
+ dm_unregister_dirty_log_type(&_core_type);
+ }
+
+ return r;
+}
+
+void dm_dirty_log_exit(void)
+{
+ dm_unregister_dirty_log_type(&_disk_type);
+ dm_unregister_dirty_log_type(&_core_type);
+}
+
+EXPORT_SYMBOL(dm_register_dirty_log_type);
+EXPORT_SYMBOL(dm_unregister_dirty_log_type);
+EXPORT_SYMBOL(dm_create_dirty_log);
+EXPORT_SYMBOL(dm_destroy_dirty_log);
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software
+ *
+ * This file is released under the LGPL.
+ */
+
+#ifndef DM_DIRTY_LOG
+#define DM_DIRTY_LOG
+
+#include "dm.h"
+
+typedef sector_t region_t;
+
+struct dirty_log_type;
+
+struct dirty_log {
+ struct dirty_log_type *type;
+ void *context;
+};
+
+struct dirty_log_type {
+ struct list_head list;
+ const char *name;
+ struct module *module;
+ unsigned int use_count;
+
+ int (*ctr)(struct dirty_log *log, struct dm_target *ti,
+ unsigned int argc, char **argv);
+ void (*dtr)(struct dirty_log *log);
+
+ /*
+ * There are times when we don't want the log to touch
+ * the disk.
+ */
+ int (*suspend)(struct dirty_log *log);
+ int (*resume)(struct dirty_log *log);
+
+ /*
+ * Retrieves the smallest size of region that the log can
+ * deal with.
+ */
+ sector_t (*get_region_size)(struct dirty_log *log);
+
+ /*
+ * A predicate to say whether a region is clean or not.
+ * May block.
+ */
+ int (*is_clean)(struct dirty_log *log, region_t region);
+
+ /*
+ * Returns: 0, 1, -EWOULDBLOCK, < 0
+ *
+ * A predicate function to check whether the given region
+ * is in sync.
+ *
+ * If -EWOULDBLOCK is returned the state of the region is
+ * unknown, typically this will result in a read being
+ * passed to a daemon to deal with, since a daemon is
+ * allowed to block.
+ */
+ int (*in_sync)(struct dirty_log *log, region_t region, int can_block);
+
+ /*
+ * Flush the current log state (eg, to disk). This
+ * function may block.
+ */
+ int (*flush)(struct dirty_log *log);
+
+ /*
+ * Mark an area as clean or dirty. These functions may
+ * block, though for performance reasons blocking should
+ * be extremely rare (eg, allocating another chunk of
+ * memory for some reason).
+ */
+ void (*mark_region)(struct dirty_log *log, region_t region);
+ void (*clear_region)(struct dirty_log *log, region_t region);
+
+ /*
+ * Returns: <0 (error), 0 (no region), 1 (region)
+ *
+ * The mirrord will need to perform recovery on regions of
+ * the mirror that are in the NOSYNC state. This
+ * function asks the log to tell the caller about the
+ * next region that this machine should recover.
+ *
+ * Do not confuse this function with 'in_sync()', one
+ * tells you if an area is synchronised, the other
+ * assigns recovery work.
+ */
+ int (*get_resync_work)(struct dirty_log *log, region_t *region);
+
+ /*
+ * This notifies the log that the resync of an area has
+ * been completed. The log should then mark this region
+ * as CLEAN.
+ */
+ void (*complete_resync_work)(struct dirty_log *log,
+ region_t region, int success);
+
+ /*
+ * Returns the number of regions that are in sync.
+ */
+ region_t (*get_sync_count)(struct dirty_log *log);
+
+ /*
+ * Support function for mirror status requests.
+ */
+ int (*status)(struct dirty_log *log, status_type_t status_type,
+ char *result, unsigned int maxlen);
+};
+
+int dm_register_dirty_log_type(struct dirty_log_type *type);
+int dm_unregister_dirty_log_type(struct dirty_log_type *type);
+
+
+/*
+ * Make sure you use these two functions, rather than calling
+ * type->constructor/destructor() directly.
+ */
+struct dirty_log *dm_create_dirty_log(const char *type_name, struct dm_target *ti,
+ unsigned int argc, char **argv);
+void dm_destroy_dirty_log(struct dirty_log *log);
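+
+/*
+ * Example (illustrative): create a core log with a 1024 sector region
+ * size that skips the initial resync, then destroy it:
+ *
+ * char *argv[] = { "1024", "nosync" };
+ * struct dirty_log *log = dm_create_dirty_log("core", ti, 2, argv);
+ * ...
+ * dm_destroy_dirty_log(log);
+ */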
+
+/*
+ * init/exit functions.
+ */
+int dm_dirty_log_init(void);
+void dm_dirty_log_exit(void);
+
+#endif
--- /dev/null
+/*
+ * Copyright (C) 2003 Sistina Software Limited.
+ *
+ * This file is released under the GPL.
+ */
+
+#include "dm.h"
+#include "dm-bio-list.h"
+#include "dm-io.h"
+#include "dm-log.h"
+#include "kcopyd.h"
+
+#include <linux/ctype.h>
+#include <linux/init.h>
+#include <linux/mempool.h>
+#include <linux/module.h>
+#include <linux/pagemap.h>
+#include <linux/slab.h>
+#include <linux/time.h>
+#include <linux/vmalloc.h>
+#include <linux/workqueue.h>
+
+static struct workqueue_struct *_kmirrord_wq;
+static struct work_struct _kmirrord_work;
+
+static inline void wake(void)
+{
+ queue_work(_kmirrord_wq, &_kmirrord_work);
+}
+
+/*-----------------------------------------------------------------
+ * Region hash
+ *
+ * The mirror splits itself up into discrete regions. Each
+ * region can be in one of three states: clean, dirty,
+ * nosync. There is no need to put clean regions in the hash.
+ *
+ * In addition to being present in the hash table a region _may_
+ * be present on one of three lists.
+ *
+ * clean_regions: Regions on this list have no io pending to
+ * them, they are in sync, we are no longer interested in them,
+ * they are dull. rh_update_states() will remove them from the
+ * hash table.
+ *
+ * quiesced_regions: These regions have been spun down, ready
+ * for recovery. rh_recovery_start() will remove regions from
+ * this list and hand them to kmirrord, which will schedule the
+ * recovery io with kcopyd.
+ *
+ * recovered_regions: Regions that kcopyd has successfully
+ * recovered. rh_update_states() will now schedule any delayed
+ * io, up the recovery_count, and remove the region from the
+ * hash.
+ *
+ * There are 2 locks:
+ * A rw spin lock 'hash_lock' protects just the hash table,
+ * this is never held in write mode from interrupt context,
+ * which I believe means that we only have to disable irqs when
+ * doing a write lock.
+ *
+ * An ordinary spin lock 'region_lock' that protects the three
+ * lists in the region_hash, with the 'state', 'list' and
+ * 'bhs_delayed' fields of the regions. This is used from irq
+ * context, so all other uses will have to suspend local irqs.
+ *---------------------------------------------------------------*/
+struct mirror_set;
+struct region_hash {
+ struct mirror_set *ms;
+ sector_t region_size;
+ unsigned region_shift;
+
+ /* holds persistent region state */
+ struct dirty_log *log;
+
+ /* hash table */
+ rwlock_t hash_lock;
+ mempool_t *region_pool;
+ unsigned int mask;
+ unsigned int nr_buckets;
+ struct list_head *buckets;
+
+ spinlock_t region_lock;
+ struct semaphore recovery_count;
+ struct list_head clean_regions;
+ struct list_head quiesced_regions;
+ struct list_head recovered_regions;
+};
+
+enum {
+ RH_CLEAN,
+ RH_DIRTY,
+ RH_NOSYNC,
+ RH_RECOVERING
+};
+
+struct region {
+ struct region_hash *rh; /* FIXME: can we get rid of this ? */
+ region_t key;
+ int state;
+
+ struct list_head hash_list;
+ struct list_head list;
+
+ atomic_t pending;
+ struct bio_list delayed_bios;
+};
+
+/*
+ * Conversion fns
+ */
+static inline region_t bio_to_region(struct region_hash *rh, struct bio *bio)
+{
+ return bio->bi_sector >> rh->region_shift;
+}
+
+static inline sector_t region_to_sector(struct region_hash *rh, region_t region)
+{
+ return region << rh->region_shift;
+}
+
+/* FIXME move this */
+static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw);
+
+static void *region_alloc(int gfp_mask, void *pool_data)
+{
+ return kmalloc(sizeof(struct region), gfp_mask);
+}
+
+static void region_free(void *element, void *pool_data)
+{
+ kfree(element);
+}
+
+#define MIN_REGIONS 64
+#define MAX_RECOVERY 1
+static int rh_init(struct region_hash *rh, struct mirror_set *ms,
+ struct dirty_log *log, sector_t region_size,
+ region_t nr_regions)
+{
+ unsigned int nr_buckets, max_buckets;
+ size_t i;
+
+ /*
+ * Calculate a suitable number of buckets for our hash
+ * table.
+ */
+ max_buckets = nr_regions >> 6;
+ for (nr_buckets = 128u; nr_buckets < max_buckets; nr_buckets <<= 1)
+ ;
+ nr_buckets >>= 1;
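+ /*
+ * e.g. (illustrative) nr_regions = 1 << 16: max_buckets = 1024,
+ * the loop stops at 1024 and the final shift leaves 512 buckets;
+ * roughly one bucket per 128 regions, always a power of two and
+ * never fewer than 64.
+ */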
+
+ rh->ms = ms;
+ rh->log = log;
+ rh->region_size = region_size;
+ rh->region_shift = ffs(region_size) - 1;
+ rwlock_init(&rh->hash_lock);
+ rh->mask = nr_buckets - 1;
+ rh->nr_buckets = nr_buckets;
+
+ rh->buckets = vmalloc(nr_buckets * sizeof(*rh->buckets));
+ if (!rh->buckets) {
+ DMERR("unable to allocate region hash memory");
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < nr_buckets; i++)
+ INIT_LIST_HEAD(rh->buckets + i);
+
+ spin_lock_init(&rh->region_lock);
+ sema_init(&rh->recovery_count, 0);
+ INIT_LIST_HEAD(&rh->clean_regions);
+ INIT_LIST_HEAD(&rh->quiesced_regions);
+ INIT_LIST_HEAD(&rh->recovered_regions);
+
+ rh->region_pool = mempool_create(MIN_REGIONS, region_alloc,
+ region_free, NULL);
+ if (!rh->region_pool) {
+ vfree(rh->buckets);
+ rh->buckets = NULL;
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static void rh_exit(struct region_hash *rh)
+{
+ unsigned int h;
+ struct region *reg, *nreg;
+
+ BUG_ON(!list_empty(&rh->quiesced_regions));
+ for (h = 0; h < rh->nr_buckets; h++) {
+ list_for_each_entry_safe(reg, nreg, rh->buckets + h, hash_list) {
+ BUG_ON(atomic_read(&reg->pending));
+ mempool_free(reg, rh->region_pool);
+ }
+ }
+
+ if (rh->log)
+ dm_destroy_dirty_log(rh->log);
+ if (rh->region_pool)
+ mempool_destroy(rh->region_pool);
+ vfree(rh->buckets);
+}
+
+#define RH_HASH_MULT 2654435387U
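+/*
+ * Multiplicative hash: a large odd constant scrambles the region number,
+ * the >> 12 below discards low-order bits of the product, and the mask
+ * then selects one of the power-of-two buckets.
+ */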
+
+static inline unsigned int rh_hash(struct region_hash *rh, region_t region)
+{
+ return (unsigned int) ((region * RH_HASH_MULT) >> 12) & rh->mask;
+}
+
+static struct region *__rh_lookup(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ list_for_each_entry (reg, rh->buckets + rh_hash(rh, region), hash_list)
+ if (reg->key == region)
+ return reg;
+
+ return NULL;
+}
+
+static void __rh_insert(struct region_hash *rh, struct region *reg)
+{
+ unsigned int h = rh_hash(rh, reg->key);
+ list_add(&reg->hash_list, rh->buckets + h);
+}
+
+static struct region *__rh_alloc(struct region_hash *rh, region_t region)
+{
+ struct region *reg, *nreg;
+
+ read_unlock(&rh->hash_lock);
+ nreg = mempool_alloc(rh->region_pool, GFP_NOIO);
+ nreg->state = rh->log->type->in_sync(rh->log, region, 1) ?
+ RH_CLEAN : RH_NOSYNC;
+ nreg->rh = rh;
+ nreg->key = region;
+
+ INIT_LIST_HEAD(&nreg->list);
+
+ atomic_set(&nreg->pending, 0);
+ bio_list_init(&nreg->delayed_bios);
+ write_lock_irq(&rh->hash_lock);
+
+ reg = __rh_lookup(rh, region);
+ if (reg)
+ /* we lost the race */
+ mempool_free(nreg, rh->region_pool);
+
+ else {
+ __rh_insert(rh, nreg);
+ if (nreg->state == RH_CLEAN) {
+ spin_lock_irq(&rh->region_lock);
+ list_add(&nreg->list, &rh->clean_regions);
+ spin_unlock_irq(&rh->region_lock);
+ }
+ reg = nreg;
+ }
+ write_unlock_irq(&rh->hash_lock);
+ read_lock(&rh->hash_lock);
+
+ return reg;
+}
+
+static inline struct region *__rh_find(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ reg = __rh_lookup(rh, region);
+ if (!reg)
+ reg = __rh_alloc(rh, region);
+
+ return reg;
+}
+
+static int rh_state(struct region_hash *rh, region_t region, int may_block)
+{
+ int r;
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_lookup(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ if (reg)
+ return reg->state;
+
+ /*
+ * The region wasn't in the hash, so we fall back to the
+ * dirty log.
+ */
+ r = rh->log->type->in_sync(rh->log, region, may_block);
+
+ /*
+ * Any error from the dirty log (eg. -EWOULDBLOCK) gets
+ * taken as a RH_NOSYNC
+ */
+ return r == 1 ? RH_CLEAN : RH_NOSYNC;
+}
+
+static inline int rh_in_sync(struct region_hash *rh,
+ region_t region, int may_block)
+{
+ int state = rh_state(rh, region, may_block);
+ return state == RH_CLEAN || state == RH_DIRTY;
+}
+
+static void dispatch_bios(struct mirror_set *ms, struct bio_list *bio_list)
+{
+ struct bio *bio;
+
+ while ((bio = bio_list_pop(bio_list))) {
+ queue_bio(ms, bio, WRITE);
+ }
+}
+
+static void rh_update_states(struct region_hash *rh)
+{
+ struct region *reg, *next;
+
+ LIST_HEAD(clean);
+ LIST_HEAD(recovered);
+
+ /*
+ * Quickly grab the lists.
+ */
+ write_lock_irq(&rh->hash_lock);
+ spin_lock(&rh->region_lock);
+ if (!list_empty(&rh->clean_regions)) {
+ list_splice(&rh->clean_regions, &clean);
+ INIT_LIST_HEAD(&rh->clean_regions);
+
+ list_for_each_entry (reg, &clean, list) {
+ rh->log->type->clear_region(rh->log, reg->key);
+ list_del(&reg->hash_list);
+ }
+ }
+
+ if (!list_empty(&rh->recovered_regions)) {
+ list_splice(&rh->recovered_regions, &recovered);
+ INIT_LIST_HEAD(&rh->recovered_regions);
+
+ list_for_each_entry (reg, &recovered, list)
+ list_del(&reg->hash_list);
+ }
+ spin_unlock(&rh->region_lock);
+ write_unlock_irq(&rh->hash_lock);
+
+ /*
+ * All the regions on the recovered and clean lists have
+ * now been pulled out of the system, so no need to do
+ * any more locking.
+ */
+ list_for_each_entry_safe (reg, next, &recovered, list) {
+ rh->log->type->clear_region(rh->log, reg->key);
+ rh->log->type->complete_resync_work(rh->log, reg->key, 1);
+ dispatch_bios(rh->ms, &reg->delayed_bios);
+ up(&rh->recovery_count);
+ mempool_free(reg, rh->region_pool);
+ }
+
+ if (!list_empty(&recovered))
+ rh->log->type->flush(rh->log);
+
+ list_for_each_entry_safe (reg, next, &clean, list)
+ mempool_free(reg, rh->region_pool);
+}
+
+static void rh_inc(struct region_hash *rh, region_t region)
+{
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, region);
+ if (reg->state == RH_CLEAN) {
+ rh->log->type->mark_region(rh->log, reg->key);
+
+ spin_lock_irq(&rh->region_lock);
+ reg->state = RH_DIRTY;
+ list_del_init(&reg->list); /* take off the clean list */
+ spin_unlock_irq(&rh->region_lock);
+ }
+
+ atomic_inc(&reg->pending);
+ read_unlock(&rh->hash_lock);
+}
+
+static void rh_inc_pending(struct region_hash *rh, struct bio_list *bios)
+{
+ struct bio *bio;
+
+ for (bio = bios->head; bio; bio = bio->bi_next)
+ rh_inc(rh, bio_to_region(rh, bio));
+}
+
+static void rh_dec(struct region_hash *rh, region_t region)
+{
+ unsigned long flags;
+ struct region *reg;
+ int should_wake = 0;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_lookup(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ if (atomic_dec_and_test(&reg->pending)) {
+ spin_lock_irqsave(&rh->region_lock, flags);
+ if (reg->state == RH_RECOVERING) {
+ list_add_tail(&reg->list, &rh->quiesced_regions);
+ } else {
+ reg->state = RH_CLEAN;
+ list_add(&reg->list, &rh->clean_regions);
+ }
+ spin_unlock_irqrestore(&rh->region_lock, flags);
+ should_wake = 1;
+ }
+
+ if (should_wake)
+ wake();
+}
+
+/*
+ * Starts quiescing a region in preparation for recovery.
+ */
+static int __rh_recovery_prepare(struct region_hash *rh)
+{
+ int r;
+ struct region *reg;
+ region_t region;
+
+ /*
+ * Ask the dirty log what's next.
+ */
+ r = rh->log->type->get_resync_work(rh->log, &region);
+ if (r <= 0)
+ return r;
+
+ /*
+ * Get this region, and start it quiescing by setting the
+ * recovering flag.
+ */
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, region);
+ read_unlock(&rh->hash_lock);
+
+ spin_lock_irq(&rh->region_lock);
+ reg->state = RH_RECOVERING;
+
+ /* Already quiesced ? */
+ if (atomic_read(&reg->pending))
+ list_del_init(&reg->list);
+
+ else {
+ list_del_init(&reg->list);
+ list_add(&reg->list, &rh->quiesced_regions);
+ }
+ spin_unlock_irq(&rh->region_lock);
+
+ return 1;
+}
+
+static void rh_recovery_prepare(struct region_hash *rh)
+{
+ while (!down_trylock(&rh->recovery_count))
+ if (__rh_recovery_prepare(rh) <= 0) {
+ up(&rh->recovery_count);
+ break;
+ }
+}
+
+/*
+ * Returns any quiesced regions.
+ */
+static struct region *rh_recovery_start(struct region_hash *rh)
+{
+ struct region *reg = NULL;
+
+ spin_lock_irq(&rh->region_lock);
+ if (!list_empty(&rh->quiesced_regions)) {
+ reg = list_entry(rh->quiesced_regions.next,
+ struct region, list);
+ list_del_init(&reg->list); /* remove from the quiesced list */
+ }
+ spin_unlock_irq(&rh->region_lock);
+
+ return reg;
+}
+
+/* FIXME: success ignored for now */
+static void rh_recovery_end(struct region *reg, int success)
+{
+ struct region_hash *rh = reg->rh;
+
+ spin_lock_irq(&rh->region_lock);
+ list_add(&reg->list, &reg->rh->recovered_regions);
+ spin_unlock_irq(&rh->region_lock);
+
+ wake();
+}
+
+static void rh_flush(struct region_hash *rh)
+{
+ rh->log->type->flush(rh->log);
+}
+
+static void rh_delay(struct region_hash *rh, struct bio *bio)
+{
+ struct region *reg;
+
+ read_lock(&rh->hash_lock);
+ reg = __rh_find(rh, bio_to_region(rh, bio));
+ bio_list_add(&reg->delayed_bios, bio);
+ read_unlock(&rh->hash_lock);
+}
+
+static void rh_stop_recovery(struct region_hash *rh)
+{
+ int i;
+
+ /* wait for any recovering regions */
+ for (i = 0; i < MAX_RECOVERY; i++)
+ down(&rh->recovery_count);
+}
+
+static void rh_start_recovery(struct region_hash *rh)
+{
+ int i;
+
+ for (i = 0; i < MAX_RECOVERY; i++)
+ up(&rh->recovery_count);
+
+ wake();
+}
+
+/*-----------------------------------------------------------------
+ * Mirror set structures.
+ *---------------------------------------------------------------*/
+struct mirror {
+ atomic_t error_count;
+ struct dm_dev *dev;
+ sector_t offset;
+};
+
+struct mirror_set {
+ struct dm_target *ti;
+ struct list_head list;
+ struct region_hash rh;
+ struct kcopyd_client *kcopyd_client;
+
+ spinlock_t lock; /* protects the next two lists */
+ struct bio_list reads;
+ struct bio_list writes;
+
+ /* recovery */
+ region_t nr_regions;
+ int in_sync;
+
+ unsigned int nr_mirrors;
+ struct mirror mirror[0];
+};
+
+/*
+ * Every mirror should look like this one.
+ */
+#define DEFAULT_MIRROR 0
+
+/*
+ * This is yucky. We squirrel the mirror_set struct away inside
+ * bi_next for write buffers. This is safe since the bio
+ * doesn't get submitted to the lower levels of the block layer.
+ */
+static struct mirror_set *bio_get_ms(struct bio *bio)
+{
+ return (struct mirror_set *) bio->bi_next;
+}
+
+static void bio_set_ms(struct bio *bio, struct mirror_set *ms)
+{
+ bio->bi_next = (struct bio *) ms;
+}
+
+/*-----------------------------------------------------------------
+ * Recovery.
+ *
+ * When a mirror is first activated we may find that some regions
+ * are in the no-sync state. We have to recover these by
+ * recopying from the default mirror to all the others.
+ *---------------------------------------------------------------*/
+static void recovery_complete(int read_err, unsigned int write_err,
+ void *context)
+{
+ struct region *reg = (struct region *) context;
+
+ /* FIXME: better error handling */
+ rh_recovery_end(reg, read_err || write_err);
+}
+
+static int recover(struct mirror_set *ms, struct region *reg)
+{
+ int r;
+ unsigned int i;
+ struct io_region from, to[KCOPYD_MAX_REGIONS], *dest;
+ struct mirror *m;
+ unsigned long flags = 0;
+
+ /* fill in the source */
+ m = ms->mirror + DEFAULT_MIRROR;
+ from.bdev = m->dev->bdev;
+ from.sector = m->offset + region_to_sector(reg->rh, reg->key);
+ if (reg->key == (ms->nr_regions - 1)) {
+ /*
+ * The final region may be smaller than
+ * region_size.
+ */
+ from.count = ms->ti->len & (reg->rh->region_size - 1);
+ if (!from.count)
+ from.count = reg->rh->region_size;
+ } else
+ from.count = reg->rh->region_size;
+
+ /* fill in the destinations */
+ for (i = 0, dest = to; i < ms->nr_mirrors; i++) {
+ if (i == DEFAULT_MIRROR)
+ continue;
+
+ m = ms->mirror + i;
+ dest->bdev = m->dev->bdev;
+ dest->sector = m->offset + region_to_sector(reg->rh, reg->key);
+ dest->count = from.count;
+ dest++;
+ }
+
+ /* hand to kcopyd */
+ set_bit(KCOPYD_IGNORE_ERROR, &flags);
+ r = kcopyd_copy(ms->kcopyd_client, &from, ms->nr_mirrors - 1, to, flags,
+ recovery_complete, reg);
+
+ return r;
+}
+
+static void do_recovery(struct mirror_set *ms)
+{
+ int r;
+ struct region *reg;
+ struct dirty_log *log = ms->rh.log;
+
+ /*
+ * Start quiescing some regions.
+ */
+ rh_recovery_prepare(&ms->rh);
+
+ /*
+ * Copy any already quiesced regions.
+ */
+ while ((reg = rh_recovery_start(&ms->rh))) {
+ r = recover(ms, reg);
+ if (r)
+ rh_recovery_end(reg, 0);
+ }
+
+ /*
+ * Update the in sync flag.
+ */
+ if (!ms->in_sync &&
+ (log->type->get_sync_count(log) == ms->nr_regions)) {
+ /* the sync is complete */
+ dm_table_event(ms->ti->table);
+ ms->in_sync = 1;
+ }
+}
+
+/*-----------------------------------------------------------------
+ * Reads
+ *---------------------------------------------------------------*/
+static struct mirror *choose_mirror(struct mirror_set *ms, sector_t sector)
+{
+ /* FIXME: add read balancing */
+ return ms->mirror + DEFAULT_MIRROR;
+}
+
+/*
+ * remap a buffer to a particular mirror.
+ */
+static void map_bio(struct mirror_set *ms, struct mirror *m, struct bio *bio)
+{
+ bio->bi_bdev = m->dev->bdev;
+ bio->bi_sector = m->offset + (bio->bi_sector - ms->ti->begin);
+}
+
+static void do_reads(struct mirror_set *ms, struct bio_list *reads)
+{
+ region_t region;
+ struct bio *bio;
+ struct mirror *m;
+
+ while ((bio = bio_list_pop(reads))) {
+ region = bio_to_region(&ms->rh, bio);
+
+ /*
+ * We can only read balance if the region is in sync.
+ */
+ if (rh_in_sync(&ms->rh, region, 0))
+ m = choose_mirror(ms, bio->bi_sector);
+ else
+ m = ms->mirror + DEFAULT_MIRROR;
+
+ map_bio(ms, m, bio);
+ generic_make_request(bio);
+ }
+}
+
+/*-----------------------------------------------------------------
+ * Writes.
+ *
+ * We do different things with the write io depending on the
+ * state of the region that it's in:
+ *
+ * SYNC: increment pending, use dm-io to write to *all* mirrors
+ * RECOVERING: delay the io until recovery completes
+ * NOSYNC: increment pending, just write to the default mirror
+ *---------------------------------------------------------------*/
+static void write_callback(unsigned long error, void *context)
+{
+ unsigned int i;
+ int uptodate = 1;
+ struct bio *bio = (struct bio *) context;
+ struct mirror_set *ms;
+
+ ms = bio_get_ms(bio);
+ bio_set_ms(bio, NULL);
+
+ /*
+ * NOTE: We don't decrement the pending count here,
+ * instead it is done by the target's endio function.
+ * This way we handle both writes to SYNC and NOSYNC
+ * regions with the same code.
+ */
+
+ if (error) {
+ /*
+ * only error the io if all mirrors failed.
+ * FIXME: bogus
+ */
+ uptodate = 0;
+ for (i = 0; i < ms->nr_mirrors; i++)
+ if (!test_bit(i, &error)) {
+ uptodate = 1;
+ break;
+ }
+ }
+ bio_endio(bio, bio->bi_size, uptodate ? 0 : -EIO);
+}
+
+static void do_write(struct mirror_set *ms, struct bio *bio)
+{
+ unsigned int i;
+ struct io_region io[KCOPYD_MAX_REGIONS+1];
+ struct mirror *m;
+
+ for (i = 0; i < ms->nr_mirrors; i++) {
+ m = ms->mirror + i;
+
+ io[i].bdev = m->dev->bdev;
+ io[i].sector = m->offset + (bio->bi_sector - ms->ti->begin);
+ io[i].count = bio->bi_size >> 9;
+ }
+
+ bio_set_ms(bio, ms);
+ dm_io_async_bvec(ms->nr_mirrors, io, WRITE,
+ bio->bi_io_vec + bio->bi_idx,
+ write_callback, bio);
+}
+
+static void do_writes(struct mirror_set *ms, struct bio_list *writes)
+{
+ int state;
+ struct bio *bio;
+ struct bio_list sync, nosync, recover, *this_list = NULL;
+
+ if (!writes->head)
+ return;
+
+ /*
+ * Classify each write.
+ */
+ bio_list_init(&sync);
+ bio_list_init(&nosync);
+ bio_list_init(&recover);
+
+ while ((bio = bio_list_pop(writes))) {
+ state = rh_state(&ms->rh, bio_to_region(&ms->rh, bio), 1);
+ switch (state) {
+ case RH_CLEAN:
+ case RH_DIRTY:
+ this_list = &sync;
+ break;
+
+ case RH_NOSYNC:
+ this_list = &nosync;
+ break;
+
+ case RH_RECOVERING:
+ this_list = &recover;
+ break;
+ }
+
+ bio_list_add(this_list, bio);
+ }
+
+ /*
+ * Increment the pending counts for any regions that will
+ * be written to (writes to recover regions are going to
+ * be delayed).
+ */
+ rh_inc_pending(&ms->rh, &sync);
+ rh_inc_pending(&ms->rh, &nosync);
+ rh_flush(&ms->rh);
+
+ /*
+ * Dispatch io.
+ */
+ while ((bio = bio_list_pop(&sync)))
+ do_write(ms, bio);
+
+ while ((bio = bio_list_pop(&recover)))
+ rh_delay(&ms->rh, bio);
+
+ while ((bio = bio_list_pop(&nosync))) {
+ map_bio(ms, ms->mirror + DEFAULT_MIRROR, bio);
+ generic_make_request(bio);
+ }
+}
+
+/*-----------------------------------------------------------------
+ * kmirrord
+ *---------------------------------------------------------------*/
+static LIST_HEAD(_mirror_sets);
+static DECLARE_RWSEM(_mirror_sets_lock);
+
+static void do_mirror(struct mirror_set *ms)
+{
+ struct bio_list reads, writes;
+
+ spin_lock(&ms->lock);
+ reads = ms->reads;
+ writes = ms->writes;
+ bio_list_init(&ms->reads);
+ bio_list_init(&ms->writes);
+ spin_unlock(&ms->lock);
+
+ rh_update_states(&ms->rh);
+ do_recovery(ms);
+ do_reads(ms, &reads);
+ do_writes(ms, &writes);
+}
+
+static void do_work(void *ignored)
+{
+ struct mirror_set *ms;
+
+ down_read(&_mirror_sets_lock);
+ list_for_each_entry (ms, &_mirror_sets, list)
+ do_mirror(ms);
+ up_read(&_mirror_sets_lock);
+}
+
+/*-----------------------------------------------------------------
+ * Target functions
+ *---------------------------------------------------------------*/
+static struct mirror_set *alloc_context(unsigned int nr_mirrors,
+ sector_t region_size,
+ struct dm_target *ti,
+ struct dirty_log *dl)
+{
+ size_t len;
+ struct mirror_set *ms = NULL;
+
+ if (array_too_big(sizeof(*ms), sizeof(ms->mirror[0]), nr_mirrors))
+ return NULL;
+
+ len = sizeof(*ms) + (sizeof(ms->mirror[0]) * nr_mirrors);
+
+ ms = kmalloc(len, GFP_KERNEL);
+ if (!ms) {
+ ti->error = "dm-mirror: Cannot allocate mirror context";
+ return NULL;
+ }
+
+ memset(ms, 0, len);
+ spin_lock_init(&ms->lock);
+
+ ms->ti = ti;
+ ms->nr_mirrors = nr_mirrors;
+ ms->nr_regions = dm_div_up(ti->len, region_size);
+ ms->in_sync = 0;
+
+ if (rh_init(&ms->rh, ms, dl, region_size, ms->nr_regions)) {
+ ti->error = "dm-mirror: Error creating dirty region hash";
+ kfree(ms);
+ return NULL;
+ }
+
+ return ms;
+}
+
+static void free_context(struct mirror_set *ms, struct dm_target *ti,
+ unsigned int m)
+{
+ while (m--)
+ dm_put_device(ti, ms->mirror[m].dev);
+
+ rh_exit(&ms->rh);
+ kfree(ms);
+}
+
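+/*
+ * A region size is valid if it is a multiple of the page size
+ * (expressed in sectors), a power of two, and no larger than the
+ * target itself.
+ */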
+static inline int _check_region_size(struct dm_target *ti, sector_t size)
+{
+ return !(size % (PAGE_SIZE >> 9) || (size & (size - 1)) ||
+ size > ti->len);
+}
+
+static int get_mirror(struct mirror_set *ms, struct dm_target *ti,
+ unsigned int mirror, char **argv)
+{
+ sector_t offset;
+
+ if (sscanf(argv[1], SECTOR_FORMAT, &offset) != 1) {
+ ti->error = "dm-mirror: Invalid offset";
+ return -EINVAL;
+ }
+
+ if (dm_get_device(ti, argv[0], offset, ti->len,
+ dm_table_get_mode(ti->table),
+ &ms->mirror[mirror].dev)) {
+ ti->error = "dm-mirror: Device lookup failure";
+ return -ENXIO;
+ }
+
+ ms->mirror[mirror].offset = offset;
+
+ return 0;
+}
+
+static int add_mirror_set(struct mirror_set *ms)
+{
+ down_write(&_mirror_sets_lock);
+ list_add_tail(&ms->list, &_mirror_sets);
+ up_write(&_mirror_sets_lock);
+ wake();
+
+ return 0;
+}
+
+static void del_mirror_set(struct mirror_set *ms)
+{
+ down_write(&_mirror_sets_lock);
+ list_del(&ms->list);
+ up_write(&_mirror_sets_lock);
+}
+
+/*
+ * Create dirty log: log_type #log_params <log_params>
+ */
+static struct dirty_log *create_dirty_log(struct dm_target *ti,
+ unsigned int argc, char **argv,
+ unsigned int *args_used)
+{
+ unsigned int param_count;
+ struct dirty_log *dl;
+
+ if (argc < 2) {
+ ti->error = "dm-mirror: Insufficient mirror log arguments";
+ return NULL;
+ }
+
+	if (sscanf(argv[1], "%u", &param_count) != 1) {
+ ti->error = "dm-mirror: Invalid mirror log argument count";
+ return NULL;
+ }
+
+ *args_used = 2 + param_count;
+
+ if (argc < *args_used) {
+ ti->error = "dm-mirror: Insufficient mirror log arguments";
+ return NULL;
+ }
+
+ dl = dm_create_dirty_log(argv[0], ti, param_count, argv + 2);
+ if (!dl) {
+ ti->error = "dm-mirror: Error creating mirror dirty log";
+ return NULL;
+ }
+
+ if (!_check_region_size(ti, dl->type->get_region_size(dl))) {
+ ti->error = "dm-mirror: Invalid region size";
+ dm_destroy_dirty_log(dl);
+ return NULL;
+ }
+
+ return dl;
+}
+
+/*
+ * Construct a mirror mapping:
+ *
+ * log_type #log_params <log_params>
+ * #mirrors [mirror_path offset]{2,}
+ *
+ * log_type is "core" or "disk"
+ * #log_params is between 1 and 3
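+ *
+ * For example, a two-way mirror using the core log with 1024-sector
+ * regions (device names here are illustrative):
+ *
+ *	core 1 1024 2 /dev/sda1 0 /dev/sdb1 0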
+ */
+#define DM_IO_PAGES 64
+static int mirror_ctr(struct dm_target *ti, unsigned int argc, char **argv)
+{
+ int r;
+ unsigned int nr_mirrors, m, args_used;
+ struct mirror_set *ms;
+ struct dirty_log *dl;
+
+ dl = create_dirty_log(ti, argc, argv, &args_used);
+ if (!dl)
+ return -EINVAL;
+
+ argv += args_used;
+ argc -= args_used;
+
+ if (!argc || sscanf(argv[0], "%u", &nr_mirrors) != 1 ||
+ nr_mirrors < 2 || nr_mirrors > KCOPYD_MAX_REGIONS + 1) {
+ ti->error = "dm-mirror: Invalid number of mirrors";
+ dm_destroy_dirty_log(dl);
+ return -EINVAL;
+ }
+
+ argv++, argc--;
+
+ if (argc != nr_mirrors * 2) {
+ ti->error = "dm-mirror: Wrong number of mirror arguments";
+ dm_destroy_dirty_log(dl);
+ return -EINVAL;
+ }
+
+ ms = alloc_context(nr_mirrors, dl->type->get_region_size(dl), ti, dl);
+ if (!ms) {
+ dm_destroy_dirty_log(dl);
+ return -ENOMEM;
+ }
+
+ /* Get the mirror parameter sets */
+ for (m = 0; m < nr_mirrors; m++) {
+ r = get_mirror(ms, ti, m, argv);
+ if (r) {
+ free_context(ms, ti, m);
+ return r;
+ }
+ argv += 2;
+ argc -= 2;
+ }
+
+ ti->private = ms;
+
+ r = kcopyd_client_create(DM_IO_PAGES, &ms->kcopyd_client);
+ if (r) {
+ free_context(ms, ti, ms->nr_mirrors);
+ return r;
+ }
+
+ add_mirror_set(ms);
+ return 0;
+}
+
+static void mirror_dtr(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+
+ del_mirror_set(ms);
+ kcopyd_client_destroy(ms->kcopyd_client);
+ free_context(ms, ti, ms->nr_mirrors);
+}
+
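+/*
+ * Add a bio to the appropriate list for kmirrord, waking the daemon
+ * only if that list was empty beforehand.
+ */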
+static void queue_bio(struct mirror_set *ms, struct bio *bio, int rw)
+{
+ int should_wake = 0;
+ struct bio_list *bl;
+
+ bl = (rw == WRITE) ? &ms->writes : &ms->reads;
+ spin_lock(&ms->lock);
+ should_wake = !(bl->head);
+ bio_list_add(bl, bio);
+ spin_unlock(&ms->lock);
+
+ if (should_wake)
+ wake();
+}
+
+/*
+ * Mirror mapping function
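+ *
+ * Writes are always queued for the kmirrord daemon.  Reads from
+ * in-sync regions are remapped to a mirror directly; reads from
+ * out-of-sync regions are queued as well, except for read-ahead,
+ * which is simply failed.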
+ */
+static int mirror_map(struct dm_target *ti, struct bio *bio,
+ union map_info *map_context)
+{
+ int r, rw = bio_rw(bio);
+ struct mirror *m;
+ struct mirror_set *ms = ti->private;
+
+ map_context->ll = bio->bi_sector >> ms->rh.region_shift;
+
+ if (rw == WRITE) {
+ queue_bio(ms, bio, rw);
+ return 0;
+ }
+
+ r = ms->rh.log->type->in_sync(ms->rh.log,
+ bio_to_region(&ms->rh, bio), 0);
+ if (r < 0 && r != -EWOULDBLOCK)
+ return r;
+
+ if (r == -EWOULDBLOCK) /* FIXME: ugly */
+ r = 0;
+
+ /*
+ * We don't want to fast track a recovery just for a read
+ * ahead. So we just let it silently fail.
+ * FIXME: get rid of this.
+ */
+ if (!r && rw == READA)
+ return -EIO;
+
+ if (!r) {
+ /* Pass this io over to the daemon */
+ queue_bio(ms, bio, rw);
+ return 0;
+ }
+
+ m = choose_mirror(ms, bio->bi_sector);
+ if (!m)
+ return -EIO;
+
+ map_bio(ms, m, bio);
+ return 1;
+}
+
+static int mirror_end_io(struct dm_target *ti, struct bio *bio,
+ int error, union map_info *map_context)
+{
+ int rw = bio_rw(bio);
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ region_t region = map_context->ll;
+
+ /*
+ * We need to dec pending if this was a write.
+ */
+ if (rw == WRITE)
+ rh_dec(&ms->rh, region);
+
+ return 0;
+}
+
+static void mirror_suspend(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ struct dirty_log *log = ms->rh.log;
+ rh_stop_recovery(&ms->rh);
+ if (log->type->suspend && log->type->suspend(log))
+ /* FIXME: need better error handling */
+ DMWARN("log suspend failed");
+}
+
+static void mirror_resume(struct dm_target *ti)
+{
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+ struct dirty_log *log = ms->rh.log;
+ if (log->type->resume && log->type->resume(log))
+ /* FIXME: need better error handling */
+ DMWARN("log resume failed");
+ rh_start_recovery(&ms->rh);
+}
+
+static int mirror_status(struct dm_target *ti, status_type_t type,
+ char *result, unsigned int maxlen)
+{
+ char buffer[32];
+ unsigned int m, sz;
+ struct mirror_set *ms = (struct mirror_set *) ti->private;
+
+ sz = ms->rh.log->type->status(ms->rh.log, type, result, maxlen);
+
+ switch (type) {
+ case STATUSTYPE_INFO:
+ DMEMIT("%d ", ms->nr_mirrors);
+ for (m = 0; m < ms->nr_mirrors; m++) {
+ format_dev_t(buffer, ms->mirror[m].dev->bdev->bd_dev);
+ DMEMIT("%s ", buffer);
+ }
+
+ DMEMIT(SECTOR_FORMAT "/" SECTOR_FORMAT,
+ ms->rh.log->type->get_sync_count(ms->rh.log),
+ ms->nr_regions);
+ break;
+
+ case STATUSTYPE_TABLE:
+ DMEMIT("%d ", ms->nr_mirrors);
+ for (m = 0; m < ms->nr_mirrors; m++) {
+ format_dev_t(buffer, ms->mirror[m].dev->bdev->bd_dev);
+ DMEMIT("%s " SECTOR_FORMAT " ",
+ buffer, ms->mirror[m].offset);
+ }
+ }
+
+ return 0;
+}
+
+static struct target_type mirror_target = {
+ .name = "mirror",
+ .version = {1, 0, 1},
+ .module = THIS_MODULE,
+ .ctr = mirror_ctr,
+ .dtr = mirror_dtr,
+ .map = mirror_map,
+ .end_io = mirror_end_io,
+ .suspend = mirror_suspend,
+ .resume = mirror_resume,
+ .status = mirror_status,
+};
+
+static int __init dm_mirror_init(void)
+{
+ int r;
+
+ r = dm_dirty_log_init();
+ if (r)
+ return r;
+
+ _kmirrord_wq = create_workqueue("kmirrord");
+ if (!_kmirrord_wq) {
+ DMERR("couldn't start kmirrord");
+ dm_dirty_log_exit();
+		return -ENOMEM;
+ }
+ INIT_WORK(&_kmirrord_work, do_work, NULL);
+
+ r = dm_register_target(&mirror_target);
+ if (r < 0) {
+ DMERR("%s: Failed to register mirror target",
+ mirror_target.name);
+ dm_dirty_log_exit();
+ destroy_workqueue(_kmirrord_wq);
+ }
+
+ return r;
+}
+
+static void __exit dm_mirror_exit(void)
+{
+ int r;
+
+ r = dm_unregister_target(&mirror_target);
+ if (r < 0)
+ DMERR("%s: unregister failed %d", mirror_target.name, r);
+
+ destroy_workqueue(_kmirrord_wq);
+ dm_dirty_log_exit();
+}
+
+/* Module hooks */
+module_init(dm_mirror_init);
+module_exit(dm_mirror_exit);
+
+MODULE_DESCRIPTION(DM_NAME " mirror target");
+MODULE_AUTHOR("Joe Thornber");
+MODULE_LICENSE("GPL");
--- /dev/null
+/* Shared Code for OmniVision Camera Chip Drivers
+ *
+ * Copyright (c) 2004 Mark McClelland <mark@alpha.dyndns.org>
+ * http://alpha.dyndns.org/ov511/
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version. NO WARRANTY OF ANY KIND is expressed or implied.
+ */
+
+#define DEBUG
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include "ovcamchip_priv.h"
+
+#define DRIVER_VERSION "v2.27 for Linux 2.6"
+#define DRIVER_AUTHOR "Mark McClelland <mark@alpha.dyndns.org>"
+#define DRIVER_DESC "OV camera chip I2C driver"
+
+#define PINFO(fmt, args...) printk(KERN_INFO "ovcamchip: " fmt "\n" , ## args)
+#define PERROR(fmt, args...) printk(KERN_ERR "ovcamchip: " fmt "\n" , ## args)
+
+#ifdef DEBUG
+int ovcamchip_debug = 0;
+static int debug;
+module_param(debug, int, 0);
+MODULE_PARM_DESC(debug,
+ "Debug level: 0=none, 1=inits, 2=warning, 3=config, 4=functions, 5=all");
+#endif
+
+/* By default, let bridge driver tell us if chip is monochrome. mono=0
+ * will ignore that and always treat chips as color. mono=1 will force
+ * monochrome mode for all chips. */
+static int mono = -1;
+module_param(mono, int, 0);
+MODULE_PARM_DESC(mono,
+ "1=chips are monochrome (OVx1xx), 0=force color, -1=autodetect (default)");
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+
+/* Registers common to all chips, that are needed for detection */
+#define GENERIC_REG_ID_HIGH 0x1C /* manufacturer ID MSB */
+#define GENERIC_REG_ID_LOW 0x1D /* manufacturer ID LSB */
+#define GENERIC_REG_COM_I 0x29 /* misc ID bits */
+
+extern struct ovcamchip_ops ov6x20_ops;
+extern struct ovcamchip_ops ov6x30_ops;
+extern struct ovcamchip_ops ov7x10_ops;
+extern struct ovcamchip_ops ov7x20_ops;
+extern struct ovcamchip_ops ov76be_ops;
+
+static char *chip_names[NUM_CC_TYPES] = {
+ [CC_UNKNOWN] = "Unknown chip",
+ [CC_OV76BE] = "OV76BE",
+ [CC_OV7610] = "OV7610",
+ [CC_OV7620] = "OV7620",
+ [CC_OV7620AE] = "OV7620AE",
+ [CC_OV6620] = "OV6620",
+ [CC_OV6630] = "OV6630",
+ [CC_OV6630AE] = "OV6630AE",
+ [CC_OV6630AF] = "OV6630AF",
+};
+
+/* Forward declarations */
+static struct i2c_driver driver;
+static struct i2c_client client_template;
+
+/* ----------------------------------------------------------------------- */
+
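+/* Write out a table of register/value pairs; the table is terminated
+ * by an entry with reg == 0xff. */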
+int ov_write_regvals(struct i2c_client *c, struct ovcamchip_regvals *rvals)
+{
+ int rc;
+
+ while (rvals->reg != 0xff) {
+ rc = ov_write(c, rvals->reg, rvals->val);
+ if (rc < 0)
+ return rc;
+ rvals++;
+ }
+
+ return 0;
+}
+
+/* Writes bits at positions specified by mask to an I2C reg. Bits that are in
+ * the same position as 1's in "mask" are cleared and set to "value". Bits
+ * that are in the same position as 0's in "mask" are preserved, regardless
+ * of their respective state in "value".
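+ *
+ * For example, ov_write_mask(c, reg, 0x80, 0xC0) sets bit 7, clears
+ * bit 6, and leaves bits 5..0 of the register unchanged.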
+ */
+int ov_write_mask(struct i2c_client *c,
+ unsigned char reg,
+ unsigned char value,
+ unsigned char mask)
+{
+ int rc;
+ unsigned char oldval, newval;
+
+ if (mask == 0xff) {
+ newval = value;
+ } else {
+ rc = ov_read(c, reg, &oldval);
+ if (rc < 0)
+ return rc;
+
+ oldval &= (~mask); /* Clear the masked bits */
+ value &= mask; /* Enforce mask on value */
+ newval = oldval | value; /* Set the desired bits */
+ }
+
+ return ov_write(c, reg, newval);
+}
+
+/* ----------------------------------------------------------------------- */
+
+/* Reset the chip and ensure that I2C is synchronized. Returns <0 if failure.
+ */
+static int init_camchip(struct i2c_client *c)
+{
+ int i, success;
+ unsigned char high, low;
+
+ /* Reset the chip */
+ ov_write(c, 0x12, 0x80);
+
+ /* Wait for it to initialize */
+ msleep(150);
+
+ for (i = 0, success = 0; i < I2C_DETECT_RETRIES && !success; i++) {
+ if (ov_read(c, GENERIC_REG_ID_HIGH, &high) >= 0) {
+ if (ov_read(c, GENERIC_REG_ID_LOW, &low) >= 0) {
+ if (high == 0x7F && low == 0xA2) {
+ success = 1;
+ continue;
+ }
+ }
+ }
+
+ /* Reset the chip */
+ ov_write(c, 0x12, 0x80);
+
+ /* Wait for it to initialize */
+ msleep(150);
+
+ /* Dummy read to sync I2C */
+ ov_read(c, 0x00, &low);
+ }
+
+ if (!success)
+ return -EIO;
+
+ PDEBUG(1, "I2C synced in %d attempt(s)", i);
+
+ return 0;
+}
+
+/* This detects the OV7610, OV7620, or OV76BE chip. */
+static int ov7xx0_detect(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+ unsigned char val;
+
+ PDEBUG(4, "");
+
+ /* Detect chip (sub)type */
+ rc = ov_read(c, GENERIC_REG_COM_I, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov7xx0 type");
+ return rc;
+ }
+
+ if ((val & 3) == 3) {
+ PINFO("Camera chip is an OV7610");
+ ov->subtype = CC_OV7610;
+ } else if ((val & 3) == 1) {
+ rc = ov_read(c, 0x15, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov7xx0 type");
+ return rc;
+ }
+
+ if (val & 1) {
+ PINFO("Camera chip is an OV7620AE");
+ /* OV7620 is a close enough match for now. There are
+ * some definite differences though, so this should be
+ * fixed */
+ ov->subtype = CC_OV7620;
+ } else {
+ PINFO("Camera chip is an OV76BE");
+ ov->subtype = CC_OV76BE;
+ }
+ } else if ((val & 3) == 0) {
+ PINFO("Camera chip is an OV7620");
+ ov->subtype = CC_OV7620;
+ } else {
+ PERROR("Unknown camera chip version: %d", val & 3);
+ return -ENOSYS;
+ }
+
+ if (ov->subtype == CC_OV76BE)
+ ov->sops = &ov76be_ops;
+ else if (ov->subtype == CC_OV7620)
+ ov->sops = &ov7x20_ops;
+ else
+ ov->sops = &ov7x10_ops;
+
+ return 0;
+}
+
+/* This detects the OV6620, OV6630, OV6630AE, or OV6630AF chip. */
+static int ov6xx0_detect(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+ unsigned char val;
+
+ PDEBUG(4, "");
+
+ /* Detect chip (sub)type */
+ rc = ov_read(c, GENERIC_REG_COM_I, &val);
+ if (rc < 0) {
+ PERROR("Error detecting ov6xx0 type");
+		return rc;
+ }
+
+ if ((val & 3) == 0) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630");
+ } else if ((val & 3) == 1) {
+ ov->subtype = CC_OV6620;
+ PINFO("Camera chip is an OV6620");
+ } else if ((val & 3) == 2) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630AE");
+ } else if ((val & 3) == 3) {
+ ov->subtype = CC_OV6630;
+ PINFO("Camera chip is an OV6630AF");
+ }
+
+ if (ov->subtype == CC_OV6620)
+ ov->sops = &ov6x20_ops;
+ else
+ ov->sops = &ov6x30_ops;
+
+ return 0;
+}
+
+static int ovcamchip_detect(struct i2c_client *c)
+{
+ /* Ideally we would just try a single register write and see if it NAKs.
+ * That isn't possible since the OV518 can't report I2C transaction
+ * failures. So, we have to try to initialize the chip (i.e. reset it
+ * and check the ID registers) to detect its presence. */
+
+ /* Test for 7xx0 */
+	PDEBUG(3, "Testing for OV7xx0");
+ c->addr = OV7xx0_SID;
+ if (init_camchip(c) < 0) {
+ /* Test for 6xx0 */
+		PDEBUG(3, "Testing for OV6xx0");
+ c->addr = OV6xx0_SID;
+ if (init_camchip(c) < 0) {
+ return -ENODEV;
+ } else {
+ if (ov6xx0_detect(c) < 0) {
+ PERROR("Failed to init OV6xx0");
+ return -EIO;
+ }
+ }
+ } else {
+ if (ov7xx0_detect(c) < 0) {
+ PERROR("Failed to init OV7xx0");
+ return -EIO;
+ }
+ }
+
+ return 0;
+}
+
+/* ----------------------------------------------------------------------- */
+
+static int ovcamchip_attach(struct i2c_adapter *adap)
+{
+ int rc = 0;
+ struct ovcamchip *ov;
+ struct i2c_client *c;
+
+ /* I2C is not a PnP bus, so we can never be certain that we're talking
+ * to the right chip. To prevent damage to EEPROMS and such, only
+ * attach to adapters that are known to contain OV camera chips. */
+
+ switch (adap->id) {
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV511):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OV518):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_OVFX2):
+ case (I2C_ALGO_SMBUS | I2C_HW_SMBUS_W9968CF):
+ PDEBUG(1, "Adapter ID 0x%06x accepted", adap->id);
+ break;
+ default:
+ PDEBUG(1, "Adapter ID 0x%06x rejected", adap->id);
+ return -ENODEV;
+ }
+
+ c = kmalloc(sizeof *c, GFP_KERNEL);
+ if (!c) {
+ rc = -ENOMEM;
+ goto no_client;
+ }
+ memcpy(c, &client_template, sizeof *c);
+ c->adapter = adap;
+ strcpy(i2c_clientname(c), "OV????");
+
+ ov = kmalloc(sizeof *ov, GFP_KERNEL);
+ if (!ov) {
+ rc = -ENOMEM;
+ goto no_ov;
+ }
+ memset(ov, 0, sizeof *ov);
+ i2c_set_clientdata(c, ov);
+
+ rc = ovcamchip_detect(c);
+ if (rc < 0)
+ goto error;
+
+ strcpy(i2c_clientname(c), chip_names[ov->subtype]);
+
+ PDEBUG(1, "Camera chip detection complete");
+
+ i2c_attach_client(c);
+
+ return rc;
+error:
+ kfree(ov);
+no_ov:
+ kfree(c);
+no_client:
+ PDEBUG(1, "returning %d", rc);
+ return rc;
+}
+
+static int ovcamchip_detach(struct i2c_client *c)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+ int rc;
+
+ rc = ov->sops->free(c);
+ if (rc < 0)
+ return rc;
+
+ i2c_detach_client(c);
+
+ kfree(ov);
+ kfree(c);
+ return 0;
+}
+
+static int ovcamchip_command(struct i2c_client *c, unsigned int cmd, void *arg)
+{
+ struct ovcamchip *ov = i2c_get_clientdata(c);
+
+ if (!ov->initialized &&
+ cmd != OVCAMCHIP_CMD_Q_SUBTYPE &&
+ cmd != OVCAMCHIP_CMD_INITIALIZE) {
+ dev_err(&c->dev, "ERROR: Camera chip not initialized yet!\n");
+ return -EPERM;
+ }
+
+ switch (cmd) {
+ case OVCAMCHIP_CMD_Q_SUBTYPE:
+ {
+ *(int *)arg = ov->subtype;
+ return 0;
+ }
+ case OVCAMCHIP_CMD_INITIALIZE:
+ {
+ int rc;
+
+ if (mono == -1)
+ ov->mono = *(int *)arg;
+ else
+ ov->mono = mono;
+
+ if (ov->mono) {
+ if (ov->subtype != CC_OV7620)
+ dev_warn(&c->dev, "Warning: Monochrome not "
+ "implemented for this chip\n");
+ else
+ dev_info(&c->dev, "Initializing chip as "
+ "monochrome\n");
+ }
+
+ rc = ov->sops->init(c);
+ if (rc < 0)
+ return rc;
+
+ ov->initialized = 1;
+ return 0;
+ }
+ default:
+ return ov->sops->command(c, cmd, arg);
+ }
+}
+
+/* ----------------------------------------------------------------------- */
+
+static struct i2c_driver driver = {
+ .owner = THIS_MODULE,
+ .name = "ovcamchip",
+ .id = I2C_DRIVERID_OVCAMCHIP,
+ .class = I2C_CLASS_CAM_DIGITAL,
+ .flags = I2C_DF_NOTIFY,
+ .attach_adapter = ovcamchip_attach,
+ .detach_client = ovcamchip_detach,
+ .command = ovcamchip_command,
+};
+
+static struct i2c_client client_template = {
+ I2C_DEVNAME("(unset)"),
+ .id = -1,
+ .driver = &driver,
+};
+
+static int __init ovcamchip_init(void)
+{
+#ifdef DEBUG
+ ovcamchip_debug = debug;
+#endif
+
+ PINFO(DRIVER_VERSION " : " DRIVER_DESC);
+ return i2c_add_driver(&driver);
+}
+
+static void __exit ovcamchip_exit(void)
+{
+ i2c_del_driver(&driver);
+}
+
+module_init(ovcamchip_init);
+module_exit(ovcamchip_exit);
--- /dev/null
+/*
+ * Common Flash Interface support:
+ * Generic utility functions not dependent on command set
+ *
+ * Copyright (C) 2002 Red Hat
+ * Copyright (C) 2003 STMicroelectronics Limited
+ *
+ * This code is covered by the GPL.
+ *
+ * $Id: cfi_util.c,v 1.5 2004/08/12 06:40:23 eric Exp $
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <asm/io.h>
+#include <asm/byteorder.h>
+
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/cfi.h>
+#include <linux/mtd/compatmac.h>
+
+struct cfi_extquery *
+cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* name)
+{
+ struct cfi_private *cfi = map->fldrv_priv;
+	__u32 base = 0; /* cfi->chips[0].start */
+ int ofs_factor = cfi->interleave * cfi->device_type;
+ int i;
+ struct cfi_extquery *extp = NULL;
+
+	printk(KERN_INFO " %s Extended Query Table at 0x%4.4X\n", name, adr);
+ if (!adr)
+ goto out;
+
+ /* Switch it into Query Mode */
+ cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
+
+ extp = kmalloc(size, GFP_KERNEL);
+ if (!extp) {
+ printk(KERN_ERR "Failed to allocate memory\n");
+ goto out;
+ }
+
+ /* Read in the Extended Query Table */
+ for (i=0; i<size; i++) {
+ ((unsigned char *)extp)[i] =
+ cfi_read_query(map, base+((adr+i)*ofs_factor));
+ }
+
+ if (extp->MajorVersion != '1' ||
+ (extp->MinorVersion < '0' || extp->MinorVersion > '3')) {
+ printk(KERN_WARNING " Unknown %s Extended Query "
+ "version %c.%c.\n", name, extp->MajorVersion,
+ extp->MinorVersion);
+ kfree(extp);
+ extp = NULL;
+ goto out;
+ }
+
+out:
+ /* Make sure it's in read mode */
+ cfi_send_gen_cmd(0xf0, 0, base, map, cfi, cfi->device_type, NULL);
+ cfi_send_gen_cmd(0xff, 0, base, map, cfi, cfi->device_type, NULL);
+
+ return extp;
+}
+
+EXPORT_SYMBOL(cfi_read_pri);
+
+void cfi_fixup(struct mtd_info *mtd, struct cfi_fixup *fixups)
+{
+ struct map_info *map = mtd->priv;
+ struct cfi_private *cfi = map->fldrv_priv;
+ struct cfi_fixup *f;
+
+ for (f=fixups; f->fixup; f++) {
+ if (((f->mfr == CFI_MFR_ANY) || (f->mfr == cfi->mfr)) &&
+ ((f->id == CFI_ID_ANY) || (f->id == cfi->id))) {
+ f->fixup(mtd, f->param);
+ }
+ }
+}
+
+EXPORT_SYMBOL(cfi_fixup);
+
+int cfi_varsize_frob(struct mtd_info *mtd, varsize_frob_t frob,
+ loff_t ofs, size_t len, void *thunk)
+{
+ struct map_info *map = mtd->priv;
+ struct cfi_private *cfi = map->fldrv_priv;
+ unsigned long adr;
+ int chipnum, ret = 0;
+ int i, first;
+ struct mtd_erase_region_info *regions = mtd->eraseregions;
+
+ if (ofs > mtd->size)
+ return -EINVAL;
+
+ if ((len + ofs) > mtd->size)
+ return -EINVAL;
+
+ /* Check that both start and end of the requested erase are
+ * aligned with the erasesize at the appropriate addresses.
+ */
+
+ i = 0;
+
+	/* Skip all erase regions which end before the start of
+	   the requested erase. Actually, to save on the calculations,
+	   we skip to the first erase region which starts after the
+	   start of the requested erase, and then go back one.
+	   */
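+	/* For example, with a 64KiB-block region followed by an
+	   8KiB-block region, an erase starting inside the second region
+	   only has to be aligned to the 8KiB erase size in effect there.
+	   */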
+
+ while (i < mtd->numeraseregions && ofs >= regions[i].offset)
+ i++;
+ i--;
+
+ /* OK, now i is pointing at the erase region in which this
+ erase request starts. Check the start of the requested
+ erase range is aligned with the erase size which is in
+ effect here.
+ */
+
+ if (ofs & (regions[i].erasesize-1))
+ return -EINVAL;
+
+ /* Remember the erase region we start on */
+ first = i;
+
+ /* Next, check that the end of the requested erase is aligned
+ * with the erase region at that address.
+ */
+
+ while (i<mtd->numeraseregions && (ofs + len) >= regions[i].offset)
+ i++;
+
+ /* As before, drop back one to point at the region in which
+ the address actually falls
+ */
+ i--;
+
+ if ((ofs + len) & (regions[i].erasesize-1))
+ return -EINVAL;
+
+ chipnum = ofs >> cfi->chipshift;
+ adr = ofs - (chipnum << cfi->chipshift);
+
+ i=first;
+
+ while(len) {
+ unsigned long chipmask;
+ int size = regions[i].erasesize;
+
+ ret = (*frob)(map, &cfi->chips[chipnum], adr, size, thunk);
+
+ if (ret)
+ return ret;
+
+ adr += size;
+ len -= size;
+
+ chipmask = (1 << cfi->chipshift) - 1;
+ if ((adr & chipmask) == ((regions[i].offset + size * regions[i].numblocks) & chipmask))
+ i++;
+
+ if (adr >> cfi->chipshift) {
+ adr = 0;
+ chipnum++;
+
+ if (chipnum >= cfi->numchips)
+ break;
+ }
+ }
+
+ return 0;
+}
+
+EXPORT_SYMBOL(cfi_varsize_frob);
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/**
+ *
+ * $Id: phram.c,v 1.3 2004/11/16 18:29:01 dwmw2 Exp $
+ *
+ * Copyright (c) Jochen Schaeuble <psionic@psionic.de>
+ * 07/2003 rewritten by Joern Engel <joern@wh.fh-wedel.de>
+ *
+ * DISCLAIMER: This driver makes use of Rusty's excellent module code,
+ * so it will not work for 2.4 without changes and it won't work for 2.4
+ * as a module without major changes. Oh well!
+ *
+ * Usage:
+ *
+ * one command line parameter per device, each in the form:
+ * phram=<name>,<start>,<len>
+ * <name> may be up to 63 characters.
+ * <start> and <len> can be octal, decimal or hexadecimal. If followed
+ * by "k", "M" or "G", the numbers will be interpreted as kilo, mega or
+ * gigabytes.
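+ *
+ * Example: phram=rootfs,0x10000000,512k
+ * creates a 512KiB MTD device named "rootfs" backed by physical
+ * memory at address 0x10000000 (name and address are illustrative).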
+ *
+ */
+
+#include <asm/io.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/mtd/mtd.h>
+
+#define ERROR(fmt, args...) printk(KERN_ERR "phram: " fmt , ## args)
+
+struct phram_mtd_list {
+ struct list_head list;
+ struct mtd_info *mtdinfo;
+};
+
+static LIST_HEAD(phram_list);
+
+
+
+static int phram_erase(struct mtd_info *mtd, struct erase_info *instr)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (instr->addr + instr->len > mtd->size)
+ return -EINVAL;
+
+ memset(start + instr->addr, 0xff, instr->len);
+
+ /* This'll catch a few races. Free the thing before returning :)
+ * I don't feel at all ashamed. This kind of thing is possible anyway
+ * with flash, but unlikely.
+ */
+
+ instr->state = MTD_ERASE_DONE;
+
+ mtd_erase_callback(instr);
+
+ return 0;
+}
+
+static int phram_point(struct mtd_info *mtd, loff_t from, size_t len,
+ size_t *retlen, u_char **mtdbuf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (from + len > mtd->size)
+ return -EINVAL;
+
+ *mtdbuf = start + from;
+ *retlen = len;
+ return 0;
+}
+
+static void phram_unpoint(struct mtd_info *mtd, u_char *addr, loff_t from, size_t len)
+{
+}
+
+static int phram_read(struct mtd_info *mtd, loff_t from, size_t len,
+ size_t *retlen, u_char *buf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (from + len > mtd->size)
+ return -EINVAL;
+
+ memcpy(buf, start + from, len);
+
+ *retlen = len;
+ return 0;
+}
+
+static int phram_write(struct mtd_info *mtd, loff_t to, size_t len,
+ size_t *retlen, const u_char *buf)
+{
+ u_char *start = (u_char *)mtd->priv;
+
+ if (to + len > mtd->size)
+ return -EINVAL;
+
+ memcpy(start + to, buf, len);
+
+ *retlen = len;
+ return 0;
+}
+
+
+
+static void unregister_devices(void)
+{
+ struct phram_mtd_list *this;
+
+ list_for_each_entry(this, &phram_list, list) {
+ del_mtd_device(this->mtdinfo);
+ iounmap(this->mtdinfo->priv);
+ kfree(this->mtdinfo);
+ kfree(this);
+ }
+}
+
+static int register_device(char *name, unsigned long start, unsigned long len)
+{
+ struct phram_mtd_list *new;
+ int ret = -ENOMEM;
+
+ new = kmalloc(sizeof(*new), GFP_KERNEL);
+ if (!new)
+ goto out0;
+
+ new->mtdinfo = kmalloc(sizeof(struct mtd_info), GFP_KERNEL);
+ if (!new->mtdinfo)
+ goto out1;
+
+ memset(new->mtdinfo, 0, sizeof(struct mtd_info));
+
+ ret = -EIO;
+ new->mtdinfo->priv = ioremap(start, len);
+ if (!new->mtdinfo->priv) {
+ ERROR("ioremap failed\n");
+ goto out2;
+ }
+
+
+ new->mtdinfo->name = name;
+ new->mtdinfo->size = len;
+ new->mtdinfo->flags = MTD_CAP_RAM | MTD_ERASEABLE | MTD_VOLATILE;
+ new->mtdinfo->erase = phram_erase;
+ new->mtdinfo->point = phram_point;
+ new->mtdinfo->unpoint = phram_unpoint;
+ new->mtdinfo->read = phram_read;
+ new->mtdinfo->write = phram_write;
+ new->mtdinfo->owner = THIS_MODULE;
+ new->mtdinfo->type = MTD_RAM;
+ new->mtdinfo->erasesize = 0x0;
+
+ ret = -EAGAIN;
+ if (add_mtd_device(new->mtdinfo)) {
+ ERROR("Failed to register new device\n");
+ goto out3;
+ }
+
+ list_add_tail(&new->list, &phram_list);
+ return 0;
+
+out3:
+ iounmap(new->mtdinfo->priv);
+out2:
+ kfree(new->mtdinfo);
+out1:
+ kfree(new);
+out0:
+ return ret;
+}
+
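+/*
+ * Like simple_strtoul(), but accepts a binary size suffix:
+ * "4k" -> 4096, "1M" -> 1048576, "2G" -> 2147483648.
+ */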
+static unsigned long ustrtoul(const char *cp, char **endp, unsigned int base)
+{
+	unsigned long result = simple_strtoul(cp, endp, base);
+
+	switch (**endp) {
+	case 'G':
+		result *= 1024;
+		/* fall through */
+	case 'M':
+		result *= 1024;
+		/* fall through */
+	case 'k':
+		result *= 1024;
+		(*endp)++;
+	}
+	return result;
+}
+
+static int parse_num32(uint32_t *num32, const char *token)
+{
+ char *endp;
+ unsigned long n;
+
+ n = ustrtoul(token, &endp, 0);
+ if (*endp)
+ return -EINVAL;
+
+ *num32 = n;
+ return 0;
+}
+
+static int parse_name(char **pname, const char *token)
+{
+ size_t len;
+ char *name;
+
+ len = strlen(token) + 1;
+ if (len > 64)
+ return -ENOSPC;
+
+ name = kmalloc(len, GFP_KERNEL);
+ if (!name)
+ return -ENOMEM;
+
+ strcpy(name, token);
+
+ *pname = name;
+ return 0;
+}
+
+#define parse_err(fmt, args...) do { \
+ ERROR(fmt , ## args); \
+ return 0; \
+} while (0)
+
+static int phram_setup(const char *val, struct kernel_param *kp)
+{
+ char buf[64+12+12], *str = buf;
+ char *token[3];
+ char *name;
+ uint32_t start;
+ uint32_t len;
+ int i, ret;
+
+	if (strnlen(val, sizeof(buf)) >= sizeof(buf))
+ parse_err("parameter too long\n");
+
+ strcpy(str, val);
+
+ for (i=0; i<3; i++)
+ token[i] = strsep(&str, ",");
+
+ if (str)
+ parse_err("too many arguments\n");
+
+ if (!token[2])
+ parse_err("not enough arguments\n");
+
+ ret = parse_name(&name, token[0]);
+ if (ret == -ENOMEM)
+ parse_err("out of memory\n");
+ if (ret == -ENOSPC)
+ parse_err("name too long\n");
+ if (ret)
+ return 0;
+
+ ret = parse_num32(&start, token[1]);
+ if (ret)
+ parse_err("illegal start address\n");
+
+ ret = parse_num32(&len, token[2]);
+ if (ret)
+ parse_err("illegal device length\n");
+
+ register_device(name, start, len);
+
+ return 0;
+}
+
+module_param_call(phram, phram_setup, NULL, NULL, 000);
+MODULE_PARM_DESC(phram, "Memory region to map. \"phram=<name>,<start>,<length>\"");
+
+/*
+ * Just for compatibility with slram, this is horrible and should go someday.
+ */
+static int __init slram_setup(const char *val, struct kernel_param *kp)
+{
+ char buf[256], *str = buf;
+
+ if (!val || !val[0])
+ parse_err("no arguments to \"slram=\"\n");
+
+	if (strnlen(val, sizeof(buf)) >= sizeof(buf))
+ parse_err("parameter too long\n");
+
+ strcpy(str, val);
+
+ while (str) {
+ char *token[3];
+ char *name;
+ uint32_t start;
+ uint32_t len;
+ int i, ret;
+
+ for (i=0; i<3; i++) {
+ token[i] = strsep(&str, ",");
+ if (token[i])
+ continue;
+ parse_err("wrong number of arguments to \"slram=\"\n");
+ }
+
+ /* name */
+ ret = parse_name(&name, token[0]);
+ if (ret == -ENOMEM)
+			parse_err("out of memory\n");
+		if (ret == -ENOSPC)
+			parse_err("name too long\n");
+ if (ret)
+ return 1;
+
+ /* start */
+ ret = parse_num32(&start, token[1]);
+ if (ret)
+ parse_err("illegal start address\n");
+
+ /* len */
+ if (token[2][0] == '+')
+ ret = parse_num32(&len, token[2] + 1);
+ else
+ ret = parse_num32(&len, token[2]);
+
+ if (ret)
+ parse_err("illegal device length\n");
+
+ if (token[2][0] != '+') {
+ if (len < start)
+ parse_err("end < start\n");
+ len -= start;
+ }
+
+ register_device(name, start, len);
+ }
+ return 1;
+}
+
+module_param_call(slram, slram_setup, NULL, NULL, 000);
+MODULE_PARM_DESC(slram, "List of memory regions to map. \"slram=<name>,<start>,<end or +length>\"");
+
+
+static int __init init_phram(void)
+{
+	printk(KERN_INFO "phram loaded\n");
+ return 0;
+}
+
+static void __exit cleanup_phram(void)
+{
+ unregister_devices();
+}
+
+module_init(init_phram);
+module_exit(cleanup_phram);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Jörn Engel <joern@wh.fh-wedel.de>");
+MODULE_DESCRIPTION("MTD driver for physical RAM");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Db1550 board
+ *
+ * $Id: db1550-flash.c,v 1.7 2004/11/04 13:24:14 gleixner Exp $
+ *
+ * (C) 2004 Embedded Edge, LLC, based on db1550-flash.c:
+ * (C) 2003, 2004 Pete Popov <ppopov@embeddedalley.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+
+
+static struct map_info db1550_map = {
+ .name = "Db1550 flash",
+};
+
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * Support only 64MB NOR Flash parts
+ */
+
+#if defined(CONFIG_MTD_DB1550_BOOT) && defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_BOTH_BANKS
+#elif defined(CONFIG_MTD_DB1550_BOOT) && !defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_BOOT_ONLY
+#elif !defined(CONFIG_MTD_DB1550_BOOT) && defined(CONFIG_MTD_DB1550_USER)
+#define DB1550_USER_ONLY
+#endif
+
+#ifdef DB1550_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x1FC00000 - 0x18000000),
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000 - 0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1550_BOOT_ONLY)
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = 0x03c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1550_USER_ONLY)
+static struct mtd_partition db1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x4000000 - 0x200000), /* reserve 2MB for raw kernel */
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_DB1550 define combo error /* should never happen */
+#endif
+
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+static struct mtd_info *mymtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+static int setup_flash_params(void)
+{
+#if defined(DB1550_BOTH_BANKS)
+ window_addr = 0x18000000;
+ window_size = 0x8000000;
+#elif defined(DB1550_BOOT_ONLY)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+#else /* USER ONLY */
+ window_addr = 0x18000000;
+ window_size = 0x4000000;
+#endif
+ return 0;
+}
+
+int __init db1550_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ /* Default flash bankwidth */
+ db1550_map.bankwidth = flash_bankwidth;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = db1550_partitions;
+ nb_parts = NB_OF(db1550_partitions);
+ db1550_map.size = window_size;
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "Db1550 flash: probing %d-bit flash bus\n",
+ db1550_map.bankwidth*8);
+ db1550_map.virt = ioremap(window_addr, window_size);
+ mymtd = do_map_probe("cfi_probe", &db1550_map);
+ if (!mymtd) return -ENXIO;
+ mymtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(mymtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit db1550_mtd_cleanup(void)
+{
+ if (mymtd) {
+ del_mtd_partitions(mymtd);
+ map_destroy(mymtd);
+ iounmap((void *) db1550_map.virt);
+ }
+}
+
+module_init(db1550_mtd_init);
+module_exit(db1550_mtd_cleanup);
+
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Db1550 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Db1xxx boards
+ *
+ * $Id: db1x00-flash.c,v 1.6 2004/11/04 13:24:14 gleixner Exp $
+ *
+ * (C) 2003 Pete Popov <ppopov@embeddedalley.com>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+/* MTD CONFIG OPTIONS */
+#if defined(CONFIG_MTD_DB1X00_BOOT) && defined(CONFIG_MTD_DB1X00_USER)
+#define DB1X00_BOTH_BANKS
+#elif defined(CONFIG_MTD_DB1X00_BOOT) && !defined(CONFIG_MTD_DB1X00_USER)
+#define DB1X00_BOOT_ONLY
+#elif !defined(CONFIG_MTD_DB1X00_BOOT) && defined(CONFIG_MTD_DB1X00_USER)
+#define DB1X00_USER_ONLY
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+static unsigned long flash_size;
+
+static unsigned short *bcsr = (unsigned short *)0xAE000000;
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * The Db1x boards support different flash densities. We setup
+ * the mtd_partition structures below for default of 64Mbit
+ * flash densities, and override the partitions sizes, if
+ * necessary, after we check the board status register.
+ */
+
+#ifdef DB1X00_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x1c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1X00_BOOT_ONLY)
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x00c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(DB1X00_USER_ONLY)
+static struct mtd_partition db1x00_partitions[] = {
+ {
+ .name = "User FS",
+ .size = 0x0e00000,
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_DB1X00 define combo error /* should never happen */
+#endif
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+#define NAME "Db1x00 Linux Flash"
+
+static struct map_info db1xxx_mtd_map = {
+ .name = NAME,
+};
+
+static struct mtd_partition *parsed_parts;
+static struct mtd_info *db1xxx_mtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+static int setup_flash_params(void)
+{
+ switch ((bcsr[2] >> 14) & 0x3) {
+ case 0: /* 64Mbit devices */
+ flash_size = 0x800000; /* 8MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ window_addr = 0x1E000000;
+ window_size = 0x2000000;
+#elif defined(DB1X00_BOOT_ONLY)
+ window_addr = 0x1F000000;
+ window_size = 0x1000000;
+#else /* USER ONLY */
+ window_addr = 0x1E000000;
+ window_size = 0x1000000;
+#endif
+ break;
+ case 1:
+ /* 128 Mbit devices */
+ flash_size = 0x1000000; /* 16MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+ /* USERFS from 0x1C00 0000 to 0x1FC0 0000 */
+ db1x00_partitions[0].size = 0x3C00000;
+#elif defined(DB1X00_BOOT_ONLY)
+ window_addr = 0x1E000000;
+ window_size = 0x2000000;
+ /* USERFS from 0x1E00 0000 to 0x1FC0 0000 */
+ db1x00_partitions[0].size = 0x1C00000;
+#else /* USER ONLY */
+ window_addr = 0x1C000000;
+ window_size = 0x2000000;
+ /* USERFS from 0x1C00 0000 to 0x1DE00000 */
+		db1x00_partitions[0].size = 0x1E00000;
+#endif
+ break;
+ case 2:
+ /* 256 Mbit devices */
+ flash_size = 0x4000000; /* 64MB per part */
+#if defined(DB1X00_BOTH_BANKS)
+ return 1;
+#elif defined(DB1X00_BOOT_ONLY)
+ /* Boot ROM flash bank only; no user bank */
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+ /* USERFS from 0x1C00 0000 to 0x1FC00000 */
+ db1x00_partitions[0].size = 0x3C00000;
+#else /* USER ONLY */
+ return 1;
+#endif
+ break;
+ default:
+ return 1;
+ }
+	db1xxx_mtd_map.size = window_size;
+	db1xxx_mtd_map.bankwidth = flash_bankwidth;
+	db1xxx_mtd_map.phys = window_addr;
+ return 0;
+}
+
+int __init db1x00_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = db1x00_partitions;
+ nb_parts = NB_OF(db1x00_partitions);
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "Db1xxx flash: probing %d-bit flash bus\n",
+ db1xxx_mtd_map.bankwidth*8);
+ db1xxx_mtd_map.virt = ioremap(window_addr, window_size);
+ db1xxx_mtd = do_map_probe("cfi_probe", &db1xxx_mtd_map);
+ if (!db1xxx_mtd) return -ENXIO;
+ db1xxx_mtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(db1xxx_mtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit db1x00_mtd_cleanup(void)
+{
+ if (db1xxx_mtd) {
+ del_mtd_partitions(db1xxx_mtd);
+ map_destroy(db1xxx_mtd);
+ if (parsed_parts)
+ kfree(parsed_parts);
+ }
+}
+
+module_init(db1x00_mtd_init);
+module_exit(db1x00_mtd_cleanup);
+
+MODULE_AUTHOR("Pete Popov");
+MODULE_DESCRIPTION("Db1x00 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+
+/*
+ * drivers/mtd/maps/svme182.c
+ *
+ * Flash map driver for the Dy4 SVME182 board
+ *
+ * $Id: dmv182.c,v 1.5 2004/11/04 13:24:14 gleixner Exp $
+ *
+ * Copyright 2003-2004, TimeSys Corporation
+ *
+ * Based on the SVME181 flash map, by Tom Nelson, Dot4, Inc. for TimeSys Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/errno.h>
+
+/*
+ * This driver currently handles only the 16MiB user flash bank 1 on the
+ * board. It does not provide access to bank 0 (contains the Dy4 FFW), bank 2
+ * (VxWorks boot), or the optional 48MiB expansion flash.
+ *
+ * scott.wood@timesys.com: On the newer boards with 128MiB flash, it
+ * now supports the first 96MiB (the boot flash bank containing FFW
+ * is excluded). The VxWorks loader is in partition 1.
+ */
+
+#define FLASH_BASE_ADDR 0xf0000000
+#define FLASH_BANK_SIZE (128*1024*1024)
+
+MODULE_AUTHOR("Scott Wood, TimeSys Corporation <scott.wood@timesys.com>");
+MODULE_DESCRIPTION("User-programmable flash device on the Dy4 SVME182 board");
+MODULE_LICENSE("GPL");
+
+static struct map_info svme182_map = {
+ .name = "Dy4 SVME182",
+ .bankwidth = 32,
+ .size = 128 * 1024 * 1024
+};
+
+#define BOOTIMAGE_PART_SIZE ((6*1024*1024)-RESERVED_PART_SIZE)
+
+// Allow 6MiB for the kernel
+#define NEW_BOOTIMAGE_PART_SIZE (6 * 1024 * 1024)
+// Allow 1MiB for the bootloader
+#define NEW_BOOTLOADER_PART_SIZE (1024 * 1024)
+// Use the remaining 9MiB at the end of flash for the RFS
+#define NEW_RFS_PART_SIZE (0x01000000 - NEW_BOOTLOADER_PART_SIZE - \
+ NEW_BOOTIMAGE_PART_SIZE)
+
+static struct mtd_partition svme182_partitions[] = {
+ // The Lower PABS is only 128KiB, but the partition code doesn't
+ // like partitions that don't end on the largest erase block
+ // size of the device, even if all of the erase blocks in the
+ // partition are small ones. The hardware should prevent
+ // writes to the actual PABS areas.
+ {
+		.name		= "Lower PABS and CPU 0 bootloader or kernel",
+		.size		= 6*1024*1024,
+		.offset		= 0,
+	},
+	{
+		.name		= "Root Filesystem",
+		.size		= 10*1024*1024,
+		.offset		= MTDPART_OFS_NXTBLK
+	},
+	{
+		.name		= "CPU1 Bootloader",
+		.size		= 1024*1024,
+		.offset		= MTDPART_OFS_NXTBLK,
+	},
+	{
+		.name		= "Extra",
+		.size		= 110*1024*1024,
+		.offset		= MTDPART_OFS_NXTBLK
+	},
+	{
+		.name		= "Foundation Firmware and Upper PABS",
+		.size		= 1024*1024,
+		.offset		= MTDPART_OFS_NXTBLK,
+		.mask_flags	= MTD_WRITEABLE // read-only
+ }
+};
+
+static struct mtd_info *this_mtd;
+
+static int __init init_svme182(void)
+{
+ struct mtd_partition *partitions;
+ int num_parts = sizeof(svme182_partitions) / sizeof(struct mtd_partition);
+
+ partitions = svme182_partitions;
+
+ svme182_map.virt = ioremap(FLASH_BASE_ADDR, svme182_map.size);
+
+ if (svme182_map.virt == 0) {
+ printk("Failed to ioremap FLASH memory area.\n");
+ return -EIO;
+ }
+
+ simple_map_init(&svme182_map);
+
+ this_mtd = do_map_probe("cfi_probe", &svme182_map);
+ if (!this_mtd)
+ {
+ iounmap((void *)svme182_map.virt);
+ return -ENXIO;
+ }
+
+ printk(KERN_NOTICE "SVME182 flash device: %dMiB at 0x%08x\n",
+ this_mtd->size >> 20, FLASH_BASE_ADDR);
+
+ this_mtd->owner = THIS_MODULE;
+ add_mtd_partitions(this_mtd, partitions, num_parts);
+
+ return 0;
+}
+
+static void __exit cleanup_svme182(void)
+{
+ if (this_mtd)
+ {
+ del_mtd_partitions(this_mtd);
+ map_destroy(this_mtd);
+ }
+
+ if (svme182_map.virt)
+ {
+ iounmap((void *)svme182_map.virt);
+ svme182_map.virt = 0;
+ }
+
+ return;
+}
+
+module_init(init_svme182);
+module_exit(cleanup_svme182);
--- /dev/null
+/*
+ * ichxrom.c
+ *
+ * Normal mappings of chips in physical memory
+ * $Id: ichxrom.c,v 1.15 2004/11/16 18:29:02 dwmw2 Exp $
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/cfi.h>
+#include <linux/mtd/flashchip.h>
+#include <linux/config.h>
+#include <linux/pci.h>
+#include <linux/pci_ids.h>
+#include <linux/list.h>
+
+#define xstr(s) str(s)
+#define str(s) #s
+#define MOD_NAME xstr(KBUILD_BASENAME)
+
+#define ADDRESS_NAME_LEN 18
+
+#define ROM_PROBE_STEP_SIZE (64*1024) /* 64KiB */
+
+#define BIOS_CNTL 0x4e
+#define FWH_DEC_EN1 0xE3
+#define FWH_DEC_EN2 0xF0
+#define FWH_SEL1 0xE8
+#define FWH_SEL2 0xEE
+
+struct ichxrom_window {
+ void __iomem* virt;
+ unsigned long phys;
+ unsigned long size;
+ struct list_head maps;
+ struct resource rsrc;
+ struct pci_dev *pdev;
+};
+
+struct ichxrom_map_info {
+ struct list_head list;
+ struct map_info map;
+ struct mtd_info *mtd;
+ struct resource rsrc;
+ char map_name[sizeof(MOD_NAME) + 2 + ADDRESS_NAME_LEN];
+};
+
+static struct ichxrom_window ichxrom_window = {
+ .maps = LIST_HEAD_INIT(ichxrom_window.maps),
+};
+
+static void ichxrom_cleanup(struct ichxrom_window *window)
+{
+ struct ichxrom_map_info *map, *scratch;
+ u16 word;
+
+ /* Disable writes through the rom window */
+ pci_read_config_word(window->pdev, BIOS_CNTL, &word);
+ pci_write_config_word(window->pdev, BIOS_CNTL, word & ~1);
+
+ /* Free all of the mtd devices */
+ list_for_each_entry_safe(map, scratch, &window->maps, list) {
+ if (map->rsrc.parent)
+ release_resource(&map->rsrc);
+ del_mtd_device(map->mtd);
+ map_destroy(map->mtd);
+ list_del(&map->list);
+ kfree(map);
+ }
+ if (window->rsrc.parent)
+ release_resource(&window->rsrc);
+ if (window->virt) {
+ iounmap(window->virt);
+ window->virt = NULL;
+ window->phys = 0;
+ window->size = 0;
+ window->pdev = NULL;
+ }
+}
+
+
+static int __devinit ichxrom_init_one (struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL };
+ struct ichxrom_window *window = &ichxrom_window;
+ struct ichxrom_map_info *map = NULL;
+ unsigned long map_top;
+ u8 byte;
+ u16 word;
+
+ /* For now I just handle the ichx and I assume there
+ * are not a lot of resources up at the top of the address
+ * space. It is possible to handle other devices in the
+ * top 16MB but it is very painful. Also since
+	 * you can only really attach a FWH to an ICHX there are
+	 * a number of simplifications you can make.
+	 *
+	 * Also you can page firmware hubs if an 8MB window isn't enough
+	 * but we don't currently handle that case either.
+ */
+ window->pdev = pdev;
+
+	/* Find a region contiguous to the end of the ROM window */
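+	/* FWH_DEC_EN1/2 indicate which firmware hub decode ranges below
+	 * 4GiB the ICH has enabled; walk from the widest enabled range
+	 * down to find where the contiguous window begins. */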
+ window->phys = 0;
+ pci_read_config_byte(pdev, FWH_DEC_EN1, &byte);
+ if (byte == 0xff) {
+ window->phys = 0xffc00000;
+ pci_read_config_byte(pdev, FWH_DEC_EN2, &byte);
+ if ((byte & 0x0f) == 0x0f) {
+ window->phys = 0xff400000;
+ }
+ else if ((byte & 0x0e) == 0x0e) {
+ window->phys = 0xff500000;
+ }
+ else if ((byte & 0x0c) == 0x0c) {
+ window->phys = 0xff600000;
+ }
+ else if ((byte & 0x08) == 0x08) {
+ window->phys = 0xff700000;
+ }
+ }
+ else if ((byte & 0xfe) == 0xfe) {
+ window->phys = 0xffc80000;
+ }
+ else if ((byte & 0xfc) == 0xfc) {
+ window->phys = 0xffd00000;
+ }
+ else if ((byte & 0xf8) == 0xf8) {
+ window->phys = 0xffd80000;
+ }
+ else if ((byte & 0xf0) == 0xf0) {
+ window->phys = 0xffe00000;
+ }
+ else if ((byte & 0xe0) == 0xe0) {
+ window->phys = 0xffe80000;
+ }
+ else if ((byte & 0xc0) == 0xc0) {
+ window->phys = 0xfff00000;
+ }
+ else if ((byte & 0x80) == 0x80) {
+ window->phys = 0xfff80000;
+ }
+
+ if (window->phys == 0) {
+ printk(KERN_ERR MOD_NAME ": Rom window is closed\n");
+ goto out;
+ }
+ window->phys -= 0x400000UL;
+ window->size = (0xffffffffUL - window->phys) + 1UL;
+
+ /* Enable writes through the rom window */
+ pci_read_config_word(pdev, BIOS_CNTL, &word);
+ if (!(word & 1) && (word & (1<<1))) {
+ /* The BIOS will generate an error if I enable
+ * this device, so don't even try.
+ */
+ printk(KERN_ERR MOD_NAME ": firmware access control, I can't enable writes\n");
+ goto out;
+ }
+ pci_write_config_word(pdev, BIOS_CNTL, word | 1);
+
+ /*
+ * Try to reserve the window mem region. If this fails then
+	 * it is likely due to the window being "reserved" by the BIOS.
+ */
+ window->rsrc.name = MOD_NAME;
+ window->rsrc.start = window->phys;
+ window->rsrc.end = window->phys + window->size - 1;
+ window->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ if (request_resource(&iomem_resource, &window->rsrc)) {
+ window->rsrc.parent = NULL;
+ printk(KERN_DEBUG MOD_NAME
+ ": %s(): Unable to register resource"
+ " 0x%.08lx-0x%.08lx - kernel bug?\n",
+ __func__,
+ window->rsrc.start, window->rsrc.end);
+ }
+
+ /* Map the firmware hub into my address space. */
+ window->virt = ioremap_nocache(window->phys, window->size);
+ if (!window->virt) {
+ printk(KERN_ERR MOD_NAME ": ioremap(%08lx, %08lx) failed\n",
+ window->phys, window->size);
+ goto out;
+ }
+
+ /* Get the first address to look for an rom chip at */
+ map_top = window->phys;
+ if ((window->phys & 0x3fffff) != 0) {
+ map_top = window->phys + 0x400000;
+ }
+#if 1
+ /* The probe sequence run over the firmware hub lock
+ * registers sets them to 0x7 (no access).
+ * Probe at most the last 4M of the address space.
+ */
+ if (map_top < 0xffc00000) {
+ map_top = 0xffc00000;
+ }
+#endif
+ /* Loop through and look for rom chips */
+ while((map_top - 1) < 0xffffffffUL) {
+ struct cfi_private *cfi;
+ unsigned long offset;
+ int i;
+
+ if (!map) {
+ map = kmalloc(sizeof(*map), GFP_KERNEL);
+ }
+ if (!map) {
+			printk(KERN_ERR MOD_NAME ": kmalloc failed\n");
+ goto out;
+ }
+ memset(map, 0, sizeof(*map));
+ INIT_LIST_HEAD(&map->list);
+ map->map.name = map->map_name;
+ map->map.phys = map_top;
+ offset = map_top - window->phys;
+ map->map.virt = (void __iomem *)
+ (((unsigned long)(window->virt)) + offset);
+ map->map.size = 0xffffffffUL - map_top + 1UL;
+ /* Set the name of the map to the address I am trying */
+ sprintf(map->map_name, "%s @%08lx",
+ MOD_NAME, map->map.phys);
+
+ /* Firmware hubs only use vpp when being programmed
+ * in a factory setting. So in-place programming
+ * needs to use a different method.
+ */
+ for(map->map.bankwidth = 32; map->map.bankwidth;
+ map->map.bankwidth >>= 1)
+ {
+ char **probe_type;
+ /* Skip bankwidths that are not supported */
+ if (!map_bankwidth_supported(map->map.bankwidth))
+ continue;
+
+ /* Setup the map methods */
+ simple_map_init(&map->map);
+
+ /* Try all of the probe methods */
+ probe_type = rom_probe_types;
+ for(; *probe_type; probe_type++) {
+ map->mtd = do_map_probe(*probe_type, &map->map);
+ if (map->mtd)
+ goto found;
+ }
+ }
+ map_top += ROM_PROBE_STEP_SIZE;
+ continue;
+ found:
+ /* Trim the size if we are larger than the map */
+ if (map->mtd->size > map->map.size) {
+ printk(KERN_WARNING MOD_NAME
+ " rom(%u) larger than window(%lu). fixing...\n",
+ map->mtd->size, map->map.size);
+ map->mtd->size = map->map.size;
+ }
+ if (window->rsrc.parent) {
+ /*
+ * Registering the MTD device in iomem may not be possible
+ * if there is a BIOS "reserved" and BUSY range. If this
+ * fails then continue anyway.
+ */
+ map->rsrc.name = map->map_name;
+ map->rsrc.start = map->map.phys;
+ map->rsrc.end = map->map.phys + map->mtd->size - 1;
+ map->rsrc.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ if (request_resource(&window->rsrc, &map->rsrc)) {
+ printk(KERN_ERR MOD_NAME
+ ": cannot reserve MTD resource\n");
+ map->rsrc.parent = NULL;
+ }
+ }
+
+ /* Make the whole region visible in the map */
+ map->map.virt = window->virt;
+ map->map.phys = window->phys;
+ cfi = map->map.fldrv_priv;
+ for(i = 0; i < cfi->numchips; i++) {
+ cfi->chips[i].start += offset;
+ }
+
+		/* Now that the mtd device is complete, claim and export it */
+ map->mtd->owner = THIS_MODULE;
+ if (add_mtd_device(map->mtd)) {
+ map_destroy(map->mtd);
+ map->mtd = NULL;
+ goto out;
+ }
+
+
+ /* Calculate the new value of map_top */
+ map_top += map->mtd->size;
+
+ /* File away the map structure */
+ list_add(&map->list, &window->maps);
+ map = NULL;
+ }
+
+ out:
+ /* Free any left over map structures */
+ if (map) {
+ kfree(map);
+ }
+ /* See if I have any map structures */
+ if (list_empty(&window->maps)) {
+ ichxrom_cleanup(window);
+ return -ENODEV;
+ }
+ return 0;
+}
+
+
+static void __devexit ichxrom_remove_one (struct pci_dev *pdev)
+{
+ struct ichxrom_window *window = &ichxrom_window;
+ ichxrom_cleanup(window);
+}
+
+static struct pci_device_id ichxrom_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801BA_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801CA_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801DB_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_82801EB_0,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_ESB_1,
+ PCI_ANY_ID, PCI_ANY_ID, },
+ { 0, },
+};
+
+MODULE_DEVICE_TABLE(pci, ichxrom_pci_tbl);
+
+#if 0
+static struct pci_driver ichxrom_driver = {
+ .name = MOD_NAME,
+ .id_table = ichxrom_pci_tbl,
+ .probe = ichxrom_init_one,
+ .remove = ichxrom_remove_one,
+};
+#endif
+
+static int __init init_ichxrom(void)
+{
+ struct pci_dev *pdev;
+ struct pci_device_id *id;
+
+ pdev = NULL;
+ for (id = ichxrom_pci_tbl; id->vendor; id++) {
+ pdev = pci_find_device(id->vendor, id->device, NULL);
+ if (pdev) {
+ break;
+ }
+ }
+ if (pdev) {
+ return ichxrom_init_one(pdev, &ichxrom_pci_tbl[0]);
+ }
+ return -ENXIO;
+#if 0
+ return pci_module_init(&ichxrom_driver);
+#endif
+}
+
+static void __exit cleanup_ichxrom(void)
+{
+ ichxrom_remove_one(ichxrom_window.pdev);
+}
+
+module_init(init_ichxrom);
+module_exit(cleanup_ichxrom);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Eric Biederman <ebiederman@lnxi.com>");
+MODULE_DESCRIPTION("MTD map driver for BIOS chips on the ICHX southbridge");
--- /dev/null
+/*
+ * $Id: ixp4xx.c,v 1.7 2004/11/04 13:24:15 gleixner Exp $
+ *
+ * drivers/mtd/maps/ixp4xx.c
+ *
+ * MTD Map file for IXP4XX based systems. Please do not make per-board
+ * changes in here. If your board needs special setup, do it in your
+ * platform level code in arch/arm/mach-ixp4xx/board-setup.c
+ *
+ * Original Author: Intel Corporation
+ * Maintainer: Deepak Saxena <dsaxena@mvista.com>
+ *
+ * Copyright (C) 2002 Intel Corporation
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/ioport.h>
+#include <linux/device.h>
+#include <asm/io.h>
+#include <asm/mach-types.h>
+#include <asm/mach/flash.h>
+
+#include <linux/reboot.h>
+
+#ifndef __ARMEB__
+#define BYTE0(h) ((h) & 0xFF)
+#define BYTE1(h) (((h) >> 8) & 0xFF)
+#else
+#define BYTE0(h) (((h) >> 8) & 0xFF)
+#define BYTE1(h) ((h) & 0xFF)
+#endif
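+/*
+ * Example: if two adjacent flash bytes are 0xAA then 0xBB, a 16-bit read
+ * returns 0xBBAA on a little-endian kernel and 0xAABB when __ARMEB__ is
+ * set; BYTE0()/BYTE1() pick the low-address byte first in both cases, so
+ * ixp4xx_copy_from() below produces the same byte stream either way.
+ */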
+
+static map_word ixp4xx_read16(struct map_info *map, unsigned long ofs)
+{
+ map_word val;
+ val.x[0] = *(__u16 *) (map->map_priv_1 + ofs);
+ return val;
+}
+
+/*
+ * The IXP4xx expansion bus only allows 16-bit wide accesses
+ * when attached to a 16-bit wide device (such as the 28F128J3A),
+ * so we can't just memcpy_fromio().
+ */
+static void ixp4xx_copy_from(struct map_info *map, void *to,
+ unsigned long from, ssize_t len)
+{
+ int i;
+ u8 *dest = (u8 *) to;
+ u16 *src = (u16 *) (map->map_priv_1 + from);
+ u16 data;
+
+ for (i = 0; i < (len / 2); i++) {
+ data = src[i];
+ dest[i * 2] = BYTE0(data);
+ dest[i * 2 + 1] = BYTE1(data);
+ }
+
+ if (len & 1)
+ dest[len - 1] = BYTE0(src[i]);
+}
+
+/*
+ * Unaligned writes are ignored; this makes the 8-bit probe fail,
+ * so probing falls through to the 16-bit probe (which succeeds).
+ */
+static void ixp4xx_probe_write16(struct map_info *map, map_word d, unsigned long adr)
+{
+ if (!(adr & 1))
+ *(__u16 *) (map->map_priv_1 + adr) = d.x[0];
+}
+
+/*
+ * Fast write16 function without the probing check above
+ */
+static void ixp4xx_write16(struct map_info *map, map_word d, unsigned long adr)
+{
+ *(__u16 *) (map->map_priv_1 + adr) = d.x[0];
+}
+
+struct ixp4xx_flash_info {
+ struct mtd_info *mtd;
+ struct map_info map;
+ struct mtd_partition *partitions;
+ struct resource *res;
+};
+
+static const char *probes[] = { "RedBoot", "cmdlinepart", NULL };
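+/*
+ * With "cmdlinepart" in the list above, partitions can also be passed
+ * on the kernel command line. An illustrative (hypothetical) example,
+ * where the mtd-id must match this map's name (the platform bus id):
+ *
+ *   mtdparts=IXP4XX-Flash.0:256k(RedBoot),-(filesystem)
+ */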
+
+static int ixp4xx_flash_remove(struct device *_dev)
+{
+ struct platform_device *dev = to_platform_device(_dev);
+ struct flash_platform_data *plat = dev->dev.platform_data;
+ struct ixp4xx_flash_info *info = dev_get_drvdata(&dev->dev);
+ map_word d;
+
+ dev_set_drvdata(&dev->dev, NULL);
+
+ if(!info)
+ return 0;
+
+ /*
+ * This is required for a soft reboot to work: 0xff is the "read
+ * array" command on Intel-command-set chips, so writing it puts the
+ * flash back into read mode and the boot loader can fetch code from
+ * it again after the reboot.
+ */
+ d.x[0] = 0xff;
+ ixp4xx_write16(&info->map, d, 0x55 * 0x2);
+
+ if (info->mtd) {
+ del_mtd_partitions(info->mtd);
+ map_destroy(info->mtd);
+ }
+ if (info->map.map_priv_1)
+ iounmap((void *) info->map.map_priv_1);
+
+ if (info->partitions)
+ kfree(info->partitions);
+
+ if (info->res) {
+ release_resource(info->res);
+ kfree(info->res);
+ }
+
+ if (plat->exit)
+ plat->exit();
+
+ /* Disable flash write */
+ *IXP4XX_EXP_CS0 &= ~IXP4XX_FLASH_WRITABLE;
+
+ return 0;
+}
+
+static int ixp4xx_flash_probe(struct device *_dev)
+{
+ struct platform_device *dev = to_platform_device(_dev);
+ struct flash_platform_data *plat = dev->dev.platform_data;
+ struct ixp4xx_flash_info *info;
+ int err = -1;
+
+ if (!plat)
+ return -ENODEV;
+
+ if (plat->init) {
+ err = plat->init();
+ if (err)
+ return err;
+ }
+
+ info = kmalloc(sizeof(struct ixp4xx_flash_info), GFP_KERNEL);
+ if(!info) {
+ err = -ENOMEM;
+ goto Error;
+ }
+ memzero(info, sizeof(struct ixp4xx_flash_info));
+
+ dev_set_drvdata(&dev->dev, info);
+
+ /*
+ * Enable flash write
+ * TODO: Move this out to board specific code
+ */
+ *IXP4XX_EXP_CS0 |= IXP4XX_FLASH_WRITABLE;
+
+ /*
+ * Tell the MTD layer we're not 1:1 mapped so that it does
+ * not attempt to do a direct access on us.
+ */
+ info->map.phys = NO_XIP;
+ info->map.size = dev->resource->end - dev->resource->start + 1;
+
+ /*
+ * We only support 16-bit accesses for now. If and when
+ * any board use 8-bit access, we'll fixup the driver to
+ * handle that.
+ */
+ info->map.bankwidth = 2;
+ info->map.name = dev->dev.bus_id;
+ info->map.read = ixp4xx_read16;
+ info->map.write = ixp4xx_probe_write16;
+ info->map.copy_from = ixp4xx_copy_from;
+
+ info->res = request_mem_region(dev->resource->start,
+ dev->resource->end - dev->resource->start + 1,
+ "IXP4XXFlash");
+ if (!info->res) {
+ printk(KERN_ERR "IXP4XXFlash: Could not reserve memory region\n");
+ err = -ENOMEM;
+ goto Error;
+ }
+
+ info->map.map_priv_1 = ioremap(dev->resource->start,
+ dev->resource->end - dev->resource->start + 1);
+ if (!info->map.map_priv_1) {
+ printk(KERN_ERR "IXP4XXFlash: Failed to ioremap region\n");
+ err = -EIO;
+ goto Error;
+ }
+
+ info->mtd = do_map_probe(plat->map_name, &info->map);
+ if (!info->mtd) {
+ printk(KERN_ERR "IXP4XXFlash: map_probe failed\n");
+ err = -ENXIO;
+ goto Error;
+ }
+ info->mtd->owner = THIS_MODULE;
+
+ /* Use the fast version */
+ info->map.write = ixp4xx_write16;
+
+ err = parse_mtd_partitions(info->mtd, probes, &info->partitions, 0);
+ if (err > 0) {
+ err = add_mtd_partitions(info->mtd, info->partitions, err);
+ if (err)
+ printk(KERN_ERR "Could not register MTD partitions\n");
+ }
+
+ if (err)
+ goto Error;
+
+ return 0;
+
+Error:
+ ixp4xx_flash_remove(_dev);
+ return err;
+}
+
+static struct device_driver ixp4xx_flash_driver = {
+ .name = "IXP4XX-Flash",
+ .bus = &platform_bus_type,
+ .probe = ixp4xx_flash_probe,
+ .remove = ixp4xx_flash_remove,
+};
+
+static int __init ixp4xx_flash_init(void)
+{
+ return driver_register(&ixp4xx_flash_driver);
+}
+
+static void __exit ixp4xx_flash_exit(void)
+{
+ driver_unregister(&ixp4xx_flash_driver);
+}
+
+
+module_init(ixp4xx_flash_init);
+module_exit(ixp4xx_flash_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("MTD map driver for Intel IXP4xx systems")
+MODULE_AUTHOR("Deepak Saxena");
+
--- /dev/null
+/*
+ * Flash on MPC-1211
+ *
+ * $Id: mpc1211.c,v 1.4 2004/09/16 23:27:13 gleixner Exp $
+ *
+ * (C) 2002 Interface, Saito.K & Jeanne
+ *
+ * GPL'd
+ */
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <asm/io.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+#include <linux/config.h>
+
+static struct mtd_info *flash_mtd;
+static struct mtd_partition *parsed_parts;
+
+struct map_info mpc1211_flash_map = {
+ .name = "MPC-1211 FLASH",
+ .size = 0x80000,
+ .bankwidth = 1,
+};
+
+static struct mtd_partition mpc1211_partitions[] = {
+ {
+ .name = "IPL & ETH-BOOT",
+ .offset = 0x00000000,
+ .size = 0x10000,
+ },
+ {
+ .name = "Flash FS",
+ .offset = 0x00010000,
+ .size = MTDPART_SIZ_FULL,
+ }
+};
+
+static int __init init_mpc1211_maps(void)
+{
+ int nr_parts;
+
+ mpc1211_flash_map.phys = 0;
+ mpc1211_flash_map.virt = (void __iomem *)P2SEGADDR(0);
+
+ simple_map_init(&mpc1211_flash_map);
+
+ printk(KERN_NOTICE "Probing for flash chips at 0x00000000:\n");
+ flash_mtd = do_map_probe("jedec_probe", &mpc1211_flash_map);
+ if (!flash_mtd) {
+ printk(KERN_NOTICE "Flash chips not detected at 0x00000000.\n");
+ return -ENXIO;
+ }
+ printk(KERN_NOTICE "MPC-1211: Flash at 0x%08lx\n",
+        (unsigned long)mpc1211_flash_map.virt & 0x1fffffff);
+ flash_mtd->owner = THIS_MODULE;
+
+ parsed_parts = mpc1211_partitions;
+ nr_parts = ARRAY_SIZE(mpc1211_partitions);
+
+ add_mtd_partitions(flash_mtd, parsed_parts, nr_parts);
+ return 0;
+}
+
+static void __exit cleanup_mpc1211_maps(void)
+{
+ if (parsed_parts)
+ del_mtd_partitions(flash_mtd);
+ else
+ del_mtd_device(flash_mtd);
+ map_destroy(flash_mtd);
+}
+
+module_init(init_mpc1211_maps);
+module_exit(cleanup_mpc1211_maps);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Saito.K & Jeanne <ksaito@interface.co.jp>");
+MODULE_DESCRIPTION("MTD map driver for MPC-1211 boards. Interface");
--- /dev/null
+/*
+ * NOR Flash memory access on TI Toto board
+ *
+ * jzhang@ti.com (C) 2003 Texas Instruments.
+ *
+ * (C) 2002 MontaVista Software, Inc.
+ *
+ * $Id: omap-toto-flash.c,v 1.3 2004/09/16 23:27:13 gleixner Exp $
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/errno.h>
+#include <linux/init.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+
+
+#ifndef CONFIG_ARCH_OMAP
+#error This is for OMAP architecture only
+#endif
+
+// These definitions should be moved to a hardware header file.
+#define OMAP_TOTO_FLASH_BASE 0xd8000000
+#define OMAP_TOTO_FLASH_SIZE 0x80000
+
+static struct map_info omap_toto_map_flash = {
+ .name = "OMAP Toto flash",
+ .bankwidth = 2,
+ .virt = (void __iomem *)OMAP_TOTO_FLASH_BASE,
+};
+
+
+static struct mtd_partition toto_flash_partitions[] = {
+ {
+ .name = "BootLoader",
+ .size = 0x00040000, /* hopefully u-boot will stay within 128k + 128k */
+ .offset = 0,
+ .mask_flags = MTD_WRITEABLE, /* force read-only */
+ }, {
+ .name = "ReservedSpace",
+ .size = 0x00030000,
+ .offset = MTDPART_OFS_APPEND,
+ //mask_flags: MTD_WRITEABLE, /* force read-only */
+ }, {
+ .name = "EnvArea", /* bottom 64KiB for env vars */
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+
+static struct mtd_partition *parsed_parts;
+
+static struct mtd_info *flash_mtd;
+
+static int __init init_flash (void)
+{
+
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+ int parsed_nr_parts = 0;
+ const char *part_type;
+
+ /*
+ * Static partition definition selection
+ */
+ part_type = "static";
+
+ parts = toto_flash_partitions;
+ nb_parts = ARRAY_SIZE(toto_flash_partitions);
+ omap_toto_map_flash.size = OMAP_TOTO_FLASH_SIZE;
+ omap_toto_map_flash.phys = virt_to_phys((void *)OMAP_TOTO_FLASH_BASE);
+
+ simple_map_init(&omap_toto_map_flash);
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "OMAP toto flash: probing %d-bit flash bus\n",
+ omap_toto_map_flash.bankwidth*8);
+ flash_mtd = do_map_probe("jedec_probe", &omap_toto_map_flash);
+ if (!flash_mtd)
+ return -ENXIO;
+
+ if (parsed_nr_parts > 0) {
+ parts = parsed_parts;
+ nb_parts = parsed_nr_parts;
+ }
+
+ if (nb_parts == 0) {
+ printk(KERN_NOTICE "OMAP toto flash: no partition info available,"
+ "registering whole flash at once\n");
+ if (add_mtd_device(flash_mtd)){
+ return -ENXIO;
+ }
+ } else {
+ printk(KERN_NOTICE "Using %s partition definition\n",
+ part_type);
+ return add_mtd_partitions(flash_mtd, parts, nb_parts);
+ }
+ return 0;
+}
+
+int __init omap_toto_mtd_init(void)
+{
+ int status;
+
+ status = init_flash();
+ if (status) {
+ printk(KERN_ERR "OMAP Toto Flash: unable to init map for toto flash\n");
+ }
+ return status;
+}
+
+static void __exit omap_toto_mtd_cleanup(void)
+{
+ if (flash_mtd) {
+ del_mtd_partitions(flash_mtd);
+ map_destroy(flash_mtd);
+ if (parsed_parts)
+ kfree(parsed_parts);
+ }
+}
+
+module_init(omap_toto_mtd_init);
+module_exit(omap_toto_mtd_cleanup);
+
+MODULE_AUTHOR("Jian Zhang");
+MODULE_DESCRIPTION("OMAP Toto board map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * Flash memory access on Alchemy Pb1550 board
+ *
+ * $Id: pb1550-flash.c,v 1.6 2004/11/04 13:24:15 gleixner Exp $
+ *
+ * (C) 2004 Embedded Edge, LLC, based on pb1550-flash.c:
+ * (C) 2003 Pete Popov <ppopov@pacbell.net>
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/map.h>
+#include <linux/mtd/partitions.h>
+
+#include <asm/io.h>
+#include <asm/au1000.h>
+#include <asm/pb1550.h>
+
+#ifdef DEBUG_RW
+#define DBG(x...) printk(x)
+#else
+#define DBG(x...)
+#endif
+
+static unsigned long window_addr;
+static unsigned long window_size;
+
+
+static struct map_info pb1550_map = {
+ .name = "Pb1550 flash",
+};
+
+static unsigned char flash_bankwidth = 4;
+
+/*
+ * Support only 64MB NOR Flash parts
+ */
+
+#ifdef PB1550_BOTH_BANKS
+/* both banks will be used. Combine the first bank and the first
+ * part of the second bank together into a single jffs/jffs2
+ * partition.
+ */
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x1FC00000 - 0x18000000),
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000 - 0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(PB1550_BOOT_ONLY)
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1C00 0000 1FFF FFFF CE0 64MB Boot NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = 0x03c00000,
+ .offset = 0x0000000
+ },{
+ .name = "yamon",
+ .size = 0x0100000,
+ .offset = MTDPART_OFS_APPEND,
+ .mask_flags = MTD_WRITEABLE
+ },{
+ .name = "raw kernel",
+ .size = (0x300000-0x40000), /* last 256KB is yamon env */
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#elif defined(PB1550_USER_ONLY)
+static struct mtd_partition pb1550_partitions[] = {
+ /* assume boot[2:0]:swap is '0000' or '1000', which translates to:
+ * 1800 0000 1BFF FFFF CE0 64MB Param NOR Flash
+ */
+ {
+ .name = "User FS",
+ .size = (0x4000000 - 0x200000), /* reserve 2MB for raw kernel */
+ .offset = 0x0000000
+ },{
+ .name = "raw kernel",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+#else
+#error MTD_PB1550 define combo error /* should never happen */
+#endif
+
+#define NB_OF(x) (sizeof(x)/sizeof(x[0]))
+
+static struct mtd_info *mymtd;
+
+/*
+ * Probe the flash density and setup window address and size
+ * based on user CONFIG options. There are times when we don't
+ * want the MTD driver to be probing the boot or user flash,
+ * so having the option to enable only one bank is important.
+ */
+int setup_flash_params(void)
+{
+ u16 boot_swapboot;
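+ /*
+ * boot_swapboot packs the board's boot-mode straps into four bits:
+ * bits 3:1 come straight from MEM_STSTAT (kept in place by the
+ * 0x7<<1 mask) and bit 0 from the BCSR. E.g. MEM_STSTAT = 0x0C
+ * with the BCSR bit set gives (0x0C & 0x0E) | 1 = 0xD.
+ */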
+ boot_swapboot = (au_readl(MEM_STSTAT) & (0x7<<1)) |
+ ((bcsr->status >> 6) & 0x1);
+ printk("Pb1550 MTD: boot:swap %d\n", boot_swapboot);
+
+ switch (boot_swapboot) {
+ case 0: /* 512Mbit devices, both enabled */
+ case 1:
+ case 8:
+ case 9:
+#if defined(PB1550_BOTH_BANKS)
+ window_addr = 0x18000000;
+ window_size = 0x8000000;
+#elif defined(PB1550_BOOT_ONLY)
+ window_addr = 0x1C000000;
+ window_size = 0x4000000;
+#else /* USER ONLY */
+ window_addr = 0x1E000000;
+ window_size = 0x4000000;
+#endif
+ break;
+ case 0xC:
+ case 0xD:
+ case 0xE:
+ case 0xF:
+ /* 64 MB Boot NOR Flash is disabled */
+ /* and the start address is moved to 0x0C000000 */
+ window_addr = 0x0C000000;
+ window_size = 0x4000000;
+ break;
+ default:
+ printk("Pb1550 MTD: unsupported boot:swap setting\n");
+ return 1;
+ }
+ return 0;
+}
+
+int __init pb1550_mtd_init(void)
+{
+ struct mtd_partition *parts;
+ int nb_parts = 0;
+
+ /* Default flash bankwidth */
+ pb1550_map.bankwidth = flash_bankwidth;
+
+ if (setup_flash_params())
+ return -ENXIO;
+
+ /*
+ * Static partition definition selection
+ */
+ parts = pb1550_partitions;
+ nb_parts = NB_OF(pb1550_partitions);
+ pb1550_map.size = window_size;
+
+ /*
+ * Now let's probe for the actual flash. Do it here since
+ * specific machine settings might have been set above.
+ */
+ printk(KERN_NOTICE "Pb1550 flash: probing %d-bit flash bus\n",
+ pb1550_map.bankwidth*8);
+ pb1550_map.virt = ioremap(window_addr, window_size);
+ if (!pb1550_map.virt)
+ return -EIO;
+ mymtd = do_map_probe("cfi_probe", &pb1550_map);
+ if (!mymtd)
+ return -ENXIO;
+ mymtd->owner = THIS_MODULE;
+
+ add_mtd_partitions(mymtd, parts, nb_parts);
+ return 0;
+}
+
+static void __exit pb1550_mtd_cleanup(void)
+{
+ if (mymtd) {
+ del_mtd_partitions(mymtd);
+ map_destroy(mymtd);
+ }
+}
+
+module_init(pb1550_mtd_init);
+module_exit(pb1550_mtd_cleanup);
+
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Pb1550 mtd map driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * drivers/mtd/nand/au1550nd.c
+ *
+ * Copyright (C) 2004 Embedded Edge, LLC
+ *
+ * $Id: au1550nd.c,v 1.11 2004/11/04 12:53:10 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/version.h>
+#include <asm/io.h>
+
+/* fixme: this is ugly */
+#if LINUX_VERSION_CODE > KERNEL_VERSION(2, 6, 0)
+#include <asm/mach-au1x00/au1000.h>
+#ifdef CONFIG_MIPS_PB1550
+#include <asm/mach-pb1x00/pb1550.h>
+#endif
+#ifdef CONFIG_MIPS_DB1550
+#include <asm/mach-db1x00/db1x00.h>
+#endif
+#else
+#include <asm/au1000.h>
+#ifdef CONFIG_MIPS_PB1550
+#include <asm/pb1550.h>
+#endif
+#ifdef CONFIG_MIPS_DB1550
+#include <asm/db1x00.h>
+#endif
+#endif
+
+/*
+ * MTD structure for NAND controller
+ */
+static struct mtd_info *au1550_mtd = NULL;
+static void __iomem *p_nand;
+static int nand_width = 1; /* default x8*/
+
+#define NAND_CS 1
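+/*
+ * NAND_CS is the static-bus chip select the NAND part is wired to;
+ * the hwcontrol routine below asserts it by writing bit (4 + NAND_CS),
+ * i.e. bit 5, of MEM_STNDCTL.
+ */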
+
+/*
+ * Define partitions for flash device
+ */
+static const struct mtd_partition partition_info[] = {
+#ifdef CONFIG_MIPS_PB1550
+#define NUM_PARTITIONS 2
+ {
+ .name = "Pb1550 NAND FS 0",
+ .offset = 0,
+ .size = 8*1024*1024
+ },
+ {
+ .name = "Pb1550 NAND FS 1",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL
+ }
+#endif
+#ifdef CONFIG_MIPS_DB1550
+#define NUM_PARTITIONS 2
+ {
+ .name = "Db1550 NAND FS 0",
+ .offset = 0,
+ .size = 8*1024*1024
+ },
+ {
+ .name = "Db1550 NAND FS 1",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL
+ }
+#endif
+};
+
+
+/**
+ * au_read_byte - read one byte from the chip
+ * @mtd: MTD device structure
+ *
+ * read function for 8-bit bus width
+ */
+static u_char au_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ u_char ret = readb(this->IO_ADDR_R);
+ au_sync();
+ return ret;
+}
+
+/**
+ * au_write_byte - write one byte to the chip
+ * @mtd: MTD device structure
+ * @byte: pointer to data byte to write
+ *
+ * write function for 8-bit bus width
+ */
+static void au_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writeb(byte, this->IO_ADDR_W);
+ au_sync();
+}
+
+/**
+ * au_read_byte16 - read one byte endianness-aware from the chip
+ * @mtd: MTD device structure
+ *
+ * read function for 16-bit bus width with endianness conversion
+ */
+static u_char au_read_byte16(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ u_char ret = (u_char) cpu_to_le16(readw(this->IO_ADDR_R));
+ au_sync();
+ return ret;
+}
+
+/**
+ * au_write_byte16 - write one byte endianness-aware to the chip
+ * @mtd: MTD device structure
+ * @byte: pointer to data byte to write
+ *
+ * write function for 16-bit bus width with endianness conversion
+ */
+static void au_write_byte16(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(le16_to_cpu((u16) byte), this->IO_ADDR_W);
+ au_sync();
+}
+
+/**
+ * au_read_word - read one word from the chip
+ * @mtd: MTD device structure
+ *
+ * read function for 16-bit bus width without endianness conversion
+ */
+static u16 au_read_word(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ u16 ret = readw(this->IO_ADDR_R);
+ au_sync();
+ return ret;
+}
+
+/**
+ * au_write_word - write one word to the chip
+ * @mtd: MTD device structure
+ * @word: data word to write
+ *
+ * write function for 16-bit bus width without endianness conversion
+ */
+static void au_write_word(struct mtd_info *mtd, u16 word)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(word, this->IO_ADDR_W);
+ au_sync();
+}
+
+/**
+ * au_write_buf - write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * write function for 8-bit bus width
+ */
+static void au_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++) {
+ writeb(buf[i], this->IO_ADDR_W);
+ au_sync();
+ }
+}
+
+/**
+ * au_read_buf - read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * read function for 8-bit bus width
+ */
+static void au_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++) {
+ buf[i] = readb(this->IO_ADDR_R);
+ au_sync();
+ }
+}
+
+/**
+ * au_verify_buf - Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * verify function for 8-bit bus width
+ */
+static int au_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++) {
+ if (buf[i] != readb(this->IO_ADDR_R))
+ return -EFAULT;
+ au_sync();
+ }
+
+ return 0;
+}
+
+/**
+ * au_write_buf16 - write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * write function for 16-bit bus width
+ */
+static void au_write_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++) {
+ writew(p[i], this->IO_ADDR_W);
+ au_sync();
+ }
+
+}
+
+/**
+ * au_read_buf16 - read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * read function for 16-bit bus width
+ */
+static void au_read_buf16(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++) {
+ p[i] = readw(this->IO_ADDR_R);
+ au_sync();
+ }
+}
+
+/**
+ * au_verify_buf16 - Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * verify function for 16-bit bus width
+ */
+static int au_verify_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++) {
+ if (p[i] != readw(this->IO_ADDR_R))
+ return -EFAULT;
+ au_sync();
+ }
+ return 0;
+}
+
+
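+/*
+ * On the Au1550 the NAND CLE/ALE lines are not driven by GPIOs; the
+ * static bus controller decodes low address bits inside the NAND
+ * chip-select window instead. au1550_hwcontrol() therefore steers
+ * command, address and data cycles simply by pointing IO_ADDR_W at
+ * different offsets (MEM_STNAND_CMD / MEM_STNAND_ADDR /
+ * MEM_STNAND_DATA).
+ */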
+static void au1550_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ switch(cmd){
+
+ case NAND_CTL_SETCLE: this->IO_ADDR_W = p_nand + MEM_STNAND_CMD; break;
+ case NAND_CTL_CLRCLE: this->IO_ADDR_W = p_nand + MEM_STNAND_DATA; break;
+
+ case NAND_CTL_SETALE: this->IO_ADDR_W = p_nand + MEM_STNAND_ADDR; break;
+ case NAND_CTL_CLRALE:
+ this->IO_ADDR_W = p_nand + MEM_STNAND_DATA;
+ /* FIXME: Nobody knows why this is necessary,
+ * but it only works this way */
+ udelay(1);
+ break;
+
+ case NAND_CTL_SETNCE:
+ /* assert (force assert) chip enable */
+ au_writel((1<<(4+NAND_CS)), MEM_STNDCTL);
+ break;
+
+ case NAND_CTL_CLRNCE:
+ /* deassert chip enable */
+ au_writel(0, MEM_STNDCTL);
+ break;
+ }
+
+ this->IO_ADDR_R = this->IO_ADDR_W;
+
+ /* Drain the writebuffer */
+ au_sync();
+}
+
+int au1550_device_ready(struct mtd_info *mtd)
+{
+ int ret = (au_readl(MEM_STSTAT) & 0x1) ? 1 : 0;
+ au_sync();
+ return ret;
+}
+
+/*
+ * Main initialization routine
+ */
+int __init au1550_init (void)
+{
+ struct nand_chip *this;
+ u16 boot_swapboot = 0; /* default value */
+ int retval;
+
+ /* Allocate memory for MTD device structure and private data */
+ au1550_mtd = kmalloc (sizeof(struct mtd_info) +
+ sizeof (struct nand_chip), GFP_KERNEL);
+ if (!au1550_mtd) {
+ printk ("Unable to allocate NAND MTD dev structure.\n");
+ return -ENOMEM;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&au1550_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) au1550_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ au1550_mtd->priv = this;
+
+
+ /* MEM_STNDCTL: disable ints, disable nand boot */
+ au_writel(0, MEM_STNDCTL);
+
+#ifdef CONFIG_MIPS_PB1550
+ /* set gpio206 high */
+ au_writel(au_readl(GPIO2_DIR) & ~(1<<6), GPIO2_DIR);
+
+ boot_swapboot = (au_readl(MEM_STSTAT) & (0x7<<1)) |
+ ((bcsr->status >> 6) & 0x1);
+ switch (boot_swapboot) {
+ case 0:
+ case 2:
+ case 8:
+ case 0xC:
+ case 0xD:
+ /* x16 NAND Flash */
+ nand_width = 0;
+ break;
+ case 1:
+ case 9:
+ case 3:
+ case 0xE:
+ case 0xF:
+ /* x8 NAND Flash */
+ nand_width = 1;
+ break;
+ default:
+ printk("Pb1550 NAND: bad boot:swap\n");
+ retval = -EINVAL;
+ goto outmem;
+ }
+#endif
+
+ /* Configure RCE1 - should be done by YAMON */
+ au_writel(0x5 | (nand_width << 22), 0xB4001010); /* MEM_STCFG1 */
+ au_writel(NAND_TIMING, 0xB4001014); /* MEM_STTIME1 */
+ au_sync();
+
+ /* setup and enable chip select, MEM_STADDR1 */
+ /* we really need to decode offsets only up till 0x20 */
+ au_writel((1<<28) | (NAND_PHYS_ADDR>>4) |
+ (((NAND_PHYS_ADDR + 0x1000)-1) & (0x3fff<<18)>>18),
+ MEM_STADDR1);
+ au_sync();
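+ /*
+ * FIXME: C precedence groups the mask term above as
+ * ((NAND_PHYS_ADDR + 0x1000) - 1) & ((0x3fff << 18) >> 18),
+ * which reduces to "& 0x3fff"; if the intent was to extract
+ * address bits 31:18, the parentheses look wrong.
+ */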
+
+ p_nand = ioremap(NAND_PHYS_ADDR, 0x1000);
+
+ /* Set address of hardware control function */
+ this->hwcontrol = au1550_hwcontrol;
+ this->dev_ready = au1550_device_ready;
+ /* 30 us command delay time */
+ this->chip_delay = 30;
+ this->eccmode = NAND_ECC_SOFT;
+
+ this->options = NAND_NO_AUTOINCR;
+
+ if (!nand_width)
+ this->options |= NAND_BUSWIDTH_16;
+
+ this->read_byte = (!nand_width) ? au_read_byte16 : au_read_byte;
+ this->write_byte = (!nand_width) ? au_write_byte16 : au_write_byte;
+ this->write_word = au_write_word;
+ this->read_word = au_read_word;
+ this->write_buf = (!nand_width) ? au_write_buf16 : au_write_buf;
+ this->read_buf = (!nand_width) ? au_read_buf16 : au_read_buf;
+ this->verify_buf = (!nand_width) ? au_verify_buf16 : au_verify_buf;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (au1550_mtd, 1)) {
+ retval = -ENXIO;
+ goto outio;
+ }
+
+ /* Register the partitions */
+ add_mtd_partitions(au1550_mtd, partition_info, NUM_PARTITIONS);
+
+ return 0;
+
+ outio:
+ iounmap ((void *)p_nand);
+
+ outmem:
+ kfree (au1550_mtd);
+ return retval;
+}
+
+module_init(au1550_init);
+
+/*
+ * Clean up routine
+ */
+#ifdef MODULE
+static void __exit au1550_cleanup (void)
+{
+ struct nand_chip *this = (struct nand_chip *) &au1550_mtd[1];
+
+ /* Release resources, unregister device */
+ nand_release (au1550_mtd);
+
+ /* Free the MTD device structure */
+ kfree (au1550_mtd);
+
+ /* Unmap */
+ iounmap ((void *)p_nand);
+}
+module_exit(au1550_cleanup);
+#endif
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Embedded Edge, LLC");
+MODULE_DESCRIPTION("Board-specific glue layer for NAND flash on Pb1550 board");
--- /dev/null
+/*
+ * drivers/mtd/nand/diskonchip.c
+ *
+ * (C) 2003 Red Hat, Inc.
+ * (C) 2004 Dan Brown <dan_brown@ieee.org>
+ * (C) 2004 Kalev Lember <kalev@smartlink.ee>
+ *
+ * Author: David Woodhouse <dwmw2@infradead.org>
+ * Additional Diskonchip 2000 and Millennium support by Dan Brown <dan_brown@ieee.org>
+ * Diskonchip Millennium Plus support by Kalev Lember <kalev@smartlink.ee>
+ *
+ * Error correction code lifted from the old docecc code
+ * Author: Fabrice Bellard (fabrice.bellard@netgem.com)
+ * Copyright (C) 2000 Netgem S.A.
+ * converted to the generic Reed-Solomon library by Thomas Gleixner <tglx@linutronix.de>
+ *
+ * Interface to generic NAND code for M-Systems DiskOnChip devices
+ *
+ * $Id: diskonchip.c,v 1.42 2004/11/16 18:29:03 dwmw2 Exp $
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/rslib.h>
+#include <asm/io.h>
+
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/doc2000.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/mtd/partitions.h>
+#include <linux/mtd/inftl.h>
+
+/* Where to look for the devices? */
+#ifndef CONFIG_MTD_DISKONCHIP_PROBE_ADDRESS
+#define CONFIG_MTD_DISKONCHIP_PROBE_ADDRESS 0
+#endif
+
+static unsigned long __initdata doc_locations[] = {
+#if defined (__alpha__) || defined(__i386__) || defined(__x86_64__)
+#ifdef CONFIG_MTD_DISKONCHIP_PROBE_HIGH
+ 0xfffc8000, 0xfffca000, 0xfffcc000, 0xfffce000,
+ 0xfffd0000, 0xfffd2000, 0xfffd4000, 0xfffd6000,
+ 0xfffd8000, 0xfffda000, 0xfffdc000, 0xfffde000,
+ 0xfffe0000, 0xfffe2000, 0xfffe4000, 0xfffe6000,
+ 0xfffe8000, 0xfffea000, 0xfffec000, 0xfffee000,
+#else /* CONFIG_MTD_DOCPROBE_HIGH */
+ 0xc8000, 0xca000, 0xcc000, 0xce000,
+ 0xd0000, 0xd2000, 0xd4000, 0xd6000,
+ 0xd8000, 0xda000, 0xdc000, 0xde000,
+ 0xe0000, 0xe2000, 0xe4000, 0xe6000,
+ 0xe8000, 0xea000, 0xec000, 0xee000,
+#endif /* CONFIG_MTD_DOCPROBE_HIGH */
+#elif defined(__PPC__)
+ 0xe4000000,
+#elif defined(CONFIG_MOMENCO_OCELOT)
+ 0x2f000000,
+ 0xff000000,
+#elif defined(CONFIG_MOMENCO_OCELOT_G) || defined (CONFIG_MOMENCO_OCELOT_C)
+ 0xff000000,
+#else
+#warning Unknown architecture for DiskOnChip. No default probe locations defined
+#endif
+ 0xffffffff };
+
+static struct mtd_info *doclist = NULL;
+
+struct doc_priv {
+ void __iomem *virtadr;
+ unsigned long physadr;
+ u_char ChipID;
+ u_char CDSNControl;
+ int chips_per_floor; /* The number of chips detected on each floor */
+ int curfloor;
+ int curchip;
+ int mh0_page;
+ int mh1_page;
+ struct mtd_info *nextdoc;
+};
+
+/* Max number of eraseblocks to scan (from start of device) for the (I)NFTL
+ MediaHeader. The spec says to just keep going, I think, but that's just
+ silly. */
+#define MAX_MEDIAHEADER_SCAN 8
+
+/* This is the syndrome computed by the HW ecc generator upon reading an empty
+ page, one with all 0xff for data and stored ecc code. */
+static u_char empty_read_syndrome[6] = { 0x26, 0xff, 0x6d, 0x47, 0x73, 0x7a };
+/* This is the ecc value computed by the HW ecc generator upon writing an empty
+ page, one with all 0xff for data. */
+static u_char empty_write_ecc[6] = { 0x4b, 0x00, 0xe2, 0x0e, 0x93, 0xf7 };
+
+#define INFTL_BBT_RESERVED_BLOCKS 4
+
+#define DoC_is_MillenniumPlus(doc) ((doc)->ChipID == DOC_ChipID_DocMilPlus16 || (doc)->ChipID == DOC_ChipID_DocMilPlus32)
+#define DoC_is_Millennium(doc) ((doc)->ChipID == DOC_ChipID_DocMil)
+#define DoC_is_2000(doc) ((doc)->ChipID == DOC_ChipID_Doc2k)
+
+static void doc200x_hwcontrol(struct mtd_info *mtd, int cmd);
+static void doc200x_select_chip(struct mtd_info *mtd, int chip);
+
+static int debug=0;
+module_param(debug, int, 0);
+
+static int try_dword=1;
+module_param(try_dword, int, 0);
+
+static int no_ecc_failures=0;
+module_param(no_ecc_failures, int, 0);
+
+#ifdef CONFIG_MTD_PARTITIONS
+static int no_autopart=0;
+module_param(no_autopart, int, 0);
+#endif
+
+#ifdef MTD_NAND_DISKONCHIP_BBTWRITE
+static int inftl_bbt_write=1;
+#else
+static int inftl_bbt_write=0;
+#endif
+module_param(inftl_bbt_write, int, 0);
+
+static unsigned long doc_config_location = CONFIG_MTD_DISKONCHIP_PROBE_ADDRESS;
+module_param(doc_config_location, ulong, 0);
+MODULE_PARM_DESC(doc_config_location, "Physical memory address at which to probe for DiskOnChip");
+
+
+/* Sector size for HW ECC */
+#define SECTOR_SIZE 512
+/* The sector bytes are packed into NB_DATA 10 bit words */
+#define NB_DATA (((SECTOR_SIZE + 1) * 8 + 6) / 10)
+/* Number of roots */
+#define NROOTS 4
+/* First consective root */
+#define FCR 510
+/* Number of symbols */
+#define NN 1023
+
+/* the Reed Solomon control structure */
+static struct rs_control *rs_decoder;
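+/*
+ * Sanity check on the parameters above: NB_DATA = ((512 + 1) * 8 + 6) / 10
+ * = 411, i.e. the 512 data bytes plus the parity byte, padded by 6 bits,
+ * fit exactly into 411 ten-bit symbols. rs_decoder is expected to be
+ * created during module init, presumably via
+ * init_rs(10, 0x409, FCR, 1, NROOTS) as done later in this file.
+ */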
+
+/*
+ * The HW decoder in the DoC ASICs provides an error syndrome,
+ * which we must convert to a standard syndrome usable by the generic
+ * Reed-Solomon library code.
+ *
+ * Fabrice Bellard figured this out in the old docecc code. I added
+ * some comments, improved a minor bit and converted it to make use
+ * of the generic Reed-Solomon library. tglx
+ */
+static int doc_ecc_decode (struct rs_control *rs, uint8_t *data, uint8_t *ecc)
+{
+ int i, j, nerr, errpos[8];
+ uint8_t parity;
+ uint16_t ds[4], s[5], tmp, errval[8], syn[4];
+
+ /* Convert the ecc bytes into words */
+ ds[0] = ((ecc[4] & 0xff) >> 0) | ((ecc[5] & 0x03) << 8);
+ ds[1] = ((ecc[5] & 0xfc) >> 2) | ((ecc[2] & 0x0f) << 6);
+ ds[2] = ((ecc[2] & 0xf0) >> 4) | ((ecc[3] & 0x3f) << 4);
+ ds[3] = ((ecc[3] & 0xc0) >> 6) | ((ecc[0] & 0xff) << 2);
+ parity = ecc[1];
+
+ /* Initialize the syndrome buffer */
+ for (i = 0; i < NROOTS; i++)
+ s[i] = ds[0];
+ /*
+ * Evaluate
+ * s[i] = ds[3]x^3 + ds[2]x^2 + ds[1]x^1 + ds[0]
+ * where x = alpha^(FCR + i)
+ */
+ for(j = 1; j < NROOTS; j++) {
+ if(ds[j] == 0)
+ continue;
+ tmp = rs->index_of[ds[j]];
+ for(i = 0; i < NROOTS; i++)
+ s[i] ^= rs->alpha_to[rs_modnn(rs, tmp + (FCR + i) * j)];
+ }
+
+ /* Calc syn[i] = s[i] / alpha^(v + i) */
+ for (i = 0; i < NROOTS; i++) {
+ syn[i] = 0;
+ if (s[i])
+ syn[i] = rs_modnn(rs, rs->index_of[s[i]] + (NN - FCR - i));
+ }
+ /* Call the decoder library */
+ nerr = decode_rs16(rs, NULL, NULL, 1019, syn, 0, errpos, 0, errval);
+
+ /* Uncorrectable errors? */
+ if (nerr < 0)
+ return nerr;
+
+ /*
+ * Correct the errors. The bitpositions are a bit of magic,
+ * but they are given by the design of the de/encoder circuit
+ * in the DoC ASIC's.
+ */
+ for(i = 0;i < nerr; i++) {
+ int index, bitpos, pos = 1015 - errpos[i];
+ uint8_t val;
+ if (pos >= NB_DATA && pos < 1019)
+ continue;
+ if (pos < NB_DATA) {
+ /* extract bit position (MSB first) */
+ pos = 10 * (NB_DATA - 1 - pos) - 6;
+ /* now correct the following 10 bits. At most two bytes
+ can be modified since pos is even */
+ index = (pos >> 3) ^ 1;
+ bitpos = pos & 7;
+ if ((index >= 0 && index < SECTOR_SIZE) ||
+ index == (SECTOR_SIZE + 1)) {
+ val = (uint8_t) (errval[i] >> (2 + bitpos));
+ parity ^= val;
+ if (index < SECTOR_SIZE)
+ data[index] ^= val;
+ }
+ index = ((pos >> 3) + 1) ^ 1;
+ bitpos = (bitpos + 10) & 7;
+ if (bitpos == 0)
+ bitpos = 8;
+ if ((index >= 0 && index < SECTOR_SIZE) ||
+ index == (SECTOR_SIZE + 1)) {
+ val = (uint8_t)(errval[i] << (8 - bitpos));
+ parity ^= val;
+ if (index < SECTOR_SIZE)
+ data[index] ^= val;
+ }
+ }
+ }
+ /* If the parity is wrong, no rescue possible */
+ return parity ? -1 : nerr;
+}
+
+static void DoC_Delay(struct doc_priv *doc, unsigned short cycles)
+{
+ volatile char dummy;
+ int i;
+
+ for (i = 0; i < cycles; i++) {
+ if (DoC_is_Millennium(doc))
+ dummy = ReadDOC(doc->virtadr, NOP);
+ else if (DoC_is_MillenniumPlus(doc))
+ dummy = ReadDOC(doc->virtadr, Mplus_NOP);
+ else
+ dummy = ReadDOC(doc->virtadr, DOCStatus);
+ }
+
+}
+
+#define CDSN_CTRL_FR_B_MASK (CDSN_CTRL_FR_B0 | CDSN_CTRL_FR_B1)
+
+/* DOC_WaitReady: Wait for RDY line to be asserted by the flash chip */
+static int _DoC_WaitReady(struct doc_priv *doc)
+{
+ void __iomem *docptr = doc->virtadr;
+ unsigned long timeo = jiffies + (HZ * 10);
+
+ if(debug) printk("_DoC_WaitReady...\n");
+ /* Out-of-line routine to wait for chip response */
+ if (DoC_is_MillenniumPlus(doc)) {
+ while ((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK) {
+ if (time_after(jiffies, timeo)) {
+ printk("_DoC_WaitReady timed out.\n");
+ return -EIO;
+ }
+ udelay(1);
+ cond_resched();
+ }
+ } else {
+ while (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
+ if (time_after(jiffies, timeo)) {
+ printk("_DoC_WaitReady timed out.\n");
+ return -EIO;
+ }
+ udelay(1);
+ cond_resched();
+ }
+ }
+
+ return 0;
+}
+
+static inline int DoC_WaitReady(struct doc_priv *doc)
+{
+ void __iomem *docptr = doc->virtadr;
+ int ret = 0;
+
+ if (DoC_is_MillenniumPlus(doc)) {
+ DoC_Delay(doc, 4);
+
+ if ((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK)
+ /* Call the out-of-line routine to wait */
+ ret = _DoC_WaitReady(doc);
+ } else {
+ DoC_Delay(doc, 4);
+
+ if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B))
+ /* Call the out-of-line routine to wait */
+ ret = _DoC_WaitReady(doc);
+ DoC_Delay(doc, 2);
+ }
+
+ if(debug) printk("DoC_WaitReady OK\n");
+ return ret;
+}
+
+static void doc2000_write_byte(struct mtd_info *mtd, u_char datum)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ if(debug)printk("write_byte %02x\n", datum);
+ WriteDOC(datum, docptr, CDSNSlowIO);
+ WriteDOC(datum, docptr, 2k_CDSN_IO);
+}
+
+static u_char doc2000_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ u_char ret;
+
+ ReadDOC(docptr, CDSNSlowIO);
+ DoC_Delay(doc, 2);
+ ret = ReadDOC(docptr, 2k_CDSN_IO);
+ if (debug) printk("read_byte returns %02x\n", ret);
+ return ret;
+}
+
+static void doc2000_writebuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+ if (debug)printk("writebuf of %d bytes: ", len);
+ for (i=0; i < len; i++) {
+ WriteDOC_(buf[i], docptr, DoC_2k_CDSN_IO + i);
+ if (debug && i < 16)
+ printk("%02x ", buf[i]);
+ }
+ if (debug) printk("\n");
+}
+
+static void doc2000_readbuf(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ if (debug)printk("readbuf of %d bytes: ", len);
+
+ for (i=0; i < len; i++) {
+ buf[i] = ReadDOC(docptr, 2k_CDSN_IO + i);
+ }
+}
+
+static void doc2000_readbuf_dword(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ if (debug) printk("readbuf_dword of %d bytes: ", len);
+
+ if (unlikely((((unsigned long)buf)|len) & 3)) {
+ for (i=0; i < len; i++) {
+ *(uint8_t *)(&buf[i]) = ReadDOC(docptr, 2k_CDSN_IO + i);
+ }
+ } else {
+ for (i=0; i < len; i+=4) {
+ *(uint32_t*)(&buf[i]) = readl(docptr + DoC_2k_CDSN_IO + i);
+ }
+ }
+}
+
+static int doc2000_verifybuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ for (i=0; i < len; i++)
+ if (buf[i] != ReadDOC(docptr, 2k_CDSN_IO))
+ return -EFAULT;
+ return 0;
+}
+
+static uint16_t __init doc200x_ident_chip(struct mtd_info *mtd, int nr)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ uint16_t ret;
+
+ doc200x_select_chip(mtd, nr);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_READID);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRCLE);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETALE);
+ this->write_byte(mtd, 0);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRALE);
+
+ ret = this->read_byte(mtd) << 8;
+ ret |= this->read_byte(mtd);
+
+ if (doc->ChipID == DOC_ChipID_Doc2k && try_dword && !nr) {
+ /* First chip probe. See if we get same results by 32-bit access */
+ union {
+ uint32_t dword;
+ uint8_t byte[4];
+ } ident;
+ void __iomem *docptr = doc->virtadr;
+
+ doc200x_hwcontrol(mtd, NAND_CTL_SETCLE);
+ doc2000_write_byte(mtd, NAND_CMD_READID);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRCLE);
+ doc200x_hwcontrol(mtd, NAND_CTL_SETALE);
+ doc2000_write_byte(mtd, 0);
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRALE);
+
+ ident.dword = readl(docptr + DoC_2k_CDSN_IO);
+ if (((ident.byte[0] << 8) | ident.byte[1]) == ret) {
+ printk(KERN_INFO "DiskOnChip 2000 responds to DWORD access\n");
+ this->read_buf = &doc2000_readbuf_dword;
+ }
+ }
+
+ return ret;
+}
+
+static void __init doc2000_count_chips(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ uint16_t mfrid;
+ int i;
+
+ /* Max 4 chips per floor on DiskOnChip 2000 */
+ doc->chips_per_floor = 4;
+
+ /* Find out what the first chip is */
+ mfrid = doc200x_ident_chip(mtd, 0);
+
+ /* Find how many chips in each floor. */
+ for (i = 1; i < 4; i++) {
+ if (doc200x_ident_chip(mtd, i) != mfrid)
+ break;
+ }
+ doc->chips_per_floor = i;
+ printk(KERN_DEBUG "Detected %d chips per floor.\n", i);
+}
+
+static int doc200x_wait(struct mtd_info *mtd, struct nand_chip *this, int state)
+{
+ struct doc_priv *doc = (void *)this->priv;
+
+ int status;
+
+ DoC_WaitReady(doc);
+ this->cmdfunc(mtd, NAND_CMD_STATUS, -1, -1);
+ DoC_WaitReady(doc);
+ status = (int)this->read_byte(mtd);
+
+ return status;
+}
+
+static void doc2001_write_byte(struct mtd_info *mtd, u_char datum)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ WriteDOC(datum, docptr, CDSNSlowIO);
+ WriteDOC(datum, docptr, Mil_CDSN_IO);
+ WriteDOC(datum, docptr, WritePipeTerm);
+}
+
+static u_char doc2001_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ //ReadDOC(docptr, CDSNSlowIO);
+ /* 11.4.5 -- delay twice to allow extended length cycle */
+ DoC_Delay(doc, 2);
+ ReadDOC(docptr, ReadPipeInit);
+ //return ReadDOC(docptr, Mil_CDSN_IO);
+ return ReadDOC(docptr, LastDataRead);
+}
+
+static void doc2001_writebuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ for (i=0; i < len; i++)
+ WriteDOC_(buf[i], docptr, DoC_Mil_CDSN_IO + i);
+ /* Terminate write pipeline */
+ WriteDOC(0x00, docptr, WritePipeTerm);
+}
+
+static void doc2001_readbuf(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ /* Start read pipeline */
+ ReadDOC(docptr, ReadPipeInit);
+
+ for (i=0; i < len-1; i++)
+ buf[i] = ReadDOC(docptr, Mil_CDSN_IO + (i & 0xff));
+
+ /* Terminate read pipeline */
+ buf[i] = ReadDOC(docptr, LastDataRead);
+}
+
+static int doc2001_verifybuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ /* Start read pipeline */
+ ReadDOC(docptr, ReadPipeInit);
+
+ for (i=0; i < len-1; i++)
+ if (buf[i] != ReadDOC(docptr, Mil_CDSN_IO)) {
+ ReadDOC(docptr, LastDataRead);
+ return -EFAULT;
+ }
+ if (buf[i] != ReadDOC(docptr, LastDataRead))
+ return -EFAULT;
+ return 0;
+}
+
+static u_char doc2001plus_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ u_char ret;
+
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+ ret = ReadDOC(docptr, Mplus_LastDataRead);
+ if (debug) printk("read_byte returns %02x\n", ret);
+ return ret;
+}
+
+static void doc2001plus_writebuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ if (debug)printk("writebuf of %d bytes: ", len);
+ for (i=0; i < len; i++) {
+ WriteDOC_(buf[i], docptr, DoC_Mil_CDSN_IO + i);
+ if (debug && i < 16)
+ printk("%02x ", buf[i]);
+ }
+ if (debug) printk("\n");
+}
+
+static void doc2001plus_readbuf(struct mtd_info *mtd,
+ u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ if (debug)printk("readbuf of %d bytes: ", len);
+
+ /* Start read pipeline */
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+
+ for (i=0; i < len-2; i++) {
+ buf[i] = ReadDOC(docptr, Mil_CDSN_IO);
+ if (debug && i < 16)
+ printk("%02x ", buf[i]);
+ }
+
+ /* Terminate read pipeline */
+ buf[len-2] = ReadDOC(docptr, Mplus_LastDataRead);
+ if (debug && i < 16)
+ printk("%02x ", buf[len-2]);
+ buf[len-1] = ReadDOC(docptr, Mplus_LastDataRead);
+ if (debug && i < 16)
+ printk("%02x ", buf[len-1]);
+ if (debug) printk("\n");
+}
+
+static int doc2001plus_verifybuf(struct mtd_info *mtd,
+ const u_char *buf, int len)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+
+ if (debug)printk("verifybuf of %d bytes: ", len);
+
+ /* Start read pipeline */
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+ ReadDOC(docptr, Mplus_ReadPipeInit);
+
+ for (i=0; i < len-2; i++)
+ if (buf[i] != ReadDOC(docptr, Mil_CDSN_IO)) {
+ ReadDOC(docptr, Mplus_LastDataRead);
+ ReadDOC(docptr, Mplus_LastDataRead);
+ return -EFAULT;
+ }
+ if (buf[len-2] != ReadDOC(docptr, Mplus_LastDataRead))
+ return -EFAULT;
+ if (buf[len-1] != ReadDOC(docptr, Mplus_LastDataRead))
+ return -EFAULT;
+ return 0;
+}
+
+static void doc2001plus_select_chip(struct mtd_info *mtd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int floor = 0;
+
+ if(debug)printk("select chip (%d)\n", chip);
+
+ if (chip == -1) {
+ /* Disable flash internally */
+ WriteDOC(0, docptr, Mplus_FlashSelect);
+ return;
+ }
+
+ floor = chip / doc->chips_per_floor;
+ chip -= (floor * doc->chips_per_floor);
+
+ /* Assert ChipEnable and deassert WriteProtect */
+ WriteDOC((DOC_FLASH_CE), docptr, Mplus_FlashSelect);
+ this->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
+
+ doc->curchip = chip;
+ doc->curfloor = floor;
+}
+
+static void doc200x_select_chip(struct mtd_info *mtd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int floor = 0;
+
+ if(debug)printk("select chip (%d)\n", chip);
+
+ if (chip == -1)
+ return;
+
+ floor = chip / doc->chips_per_floor;
+ chip -= (floor * doc->chips_per_floor);
+
+ /* 11.4.4 -- deassert CE before changing chip */
+ doc200x_hwcontrol(mtd, NAND_CTL_CLRNCE);
+
+ WriteDOC(floor, docptr, FloorSelect);
+ WriteDOC(chip, docptr, CDSNDeviceSelect);
+
+ doc200x_hwcontrol(mtd, NAND_CTL_SETNCE);
+
+ doc->curchip = chip;
+ doc->curfloor = floor;
+}
+
+static void doc200x_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ switch(cmd) {
+ case NAND_CTL_SETNCE:
+ doc->CDSNControl |= CDSN_CTRL_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ doc->CDSNControl &= ~CDSN_CTRL_CE;
+ break;
+ case NAND_CTL_SETCLE:
+ doc->CDSNControl |= CDSN_CTRL_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ doc->CDSNControl &= ~CDSN_CTRL_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ doc->CDSNControl |= CDSN_CTRL_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ doc->CDSNControl &= ~CDSN_CTRL_ALE;
+ break;
+ case NAND_CTL_SETWP:
+ doc->CDSNControl |= CDSN_CTRL_WP;
+ break;
+ case NAND_CTL_CLRWP:
+ doc->CDSNControl &= ~CDSN_CTRL_WP;
+ break;
+ }
+ if (debug)printk("hwcontrol(%d): %02x\n", cmd, doc->CDSNControl);
+ WriteDOC(doc->CDSNControl, docptr, CDSNControl);
+ /* 11.4.3 -- 4 NOPs after CSDNControl write */
+ DoC_Delay(doc, 4);
+}
+
+static void doc2001plus_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ /*
+ * Must terminate write pipeline before sending any commands
+ * to the device.
+ */
+ if (command == NAND_CMD_PAGEPROG) {
+ WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
+ WriteDOC(0x00, docptr, Mplus_WritePipeTerm);
+ }
+
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ WriteDOC(readcmd, docptr, Mplus_FlashCmd);
+ }
+ WriteDOC(command, docptr, Mplus_FlashCmd);
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+
+ if (column != -1 || page_addr != -1) {
+ /* Serially input address */
+ if (column != -1) {
+ /* Adjust columns for 16 bit buswidth */
+ if (this->options & NAND_BUSWIDTH_16)
+ column >>= 1;
+ WriteDOC(column, docptr, Mplus_FlashAddress);
+ }
+ if (page_addr != -1) {
+ WriteDOC((unsigned char) (page_addr & 0xff), docptr, Mplus_FlashAddress);
+ WriteDOC((unsigned char) ((page_addr >> 8) & 0xff), docptr, Mplus_FlashAddress);
+ /* One more address cycle for higher density devices */
+ if (this->chipsize & 0x0c000000) {
+ WriteDOC((unsigned char) ((page_addr >> 16) & 0x0f), docptr, Mplus_FlashAddress);
+ printk("high density\n");
+ }
+ }
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+ /* deassert ALE */
+ if (command == NAND_CMD_READ0 || command == NAND_CMD_READ1 || command == NAND_CMD_READOOB || command == NAND_CMD_READID)
+ WriteDOC(0, docptr, Mplus_FlashControl);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ WriteDOC(NAND_CMD_STATUS, docptr, Mplus_FlashCmd);
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+ WriteDOC(0, docptr, Mplus_WritePipeTerm);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
+
+static int doc200x_dev_ready(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ if (DoC_is_MillenniumPlus(doc)) {
+ /* 11.4.2 -- must NOP four times before checking FR/B# */
+ DoC_Delay(doc, 4);
+ if ((ReadDOC(docptr, Mplus_FlashControl) & CDSN_CTRL_FR_B_MASK) != CDSN_CTRL_FR_B_MASK) {
+ if(debug)
+ printk("not ready\n");
+ return 0;
+ }
+ if (debug)printk("was ready\n");
+ return 1;
+ } else {
+ /* 11.4.2 -- must NOP four times before checking FR/B# */
+ DoC_Delay(doc, 4);
+ if (!(ReadDOC(docptr, CDSNControl) & CDSN_CTRL_FR_B)) {
+ if(debug)
+ printk("not ready\n");
+ return 0;
+ }
+ /* 11.4.2 -- Must NOP twice if it's ready */
+ DoC_Delay(doc, 2);
+ if (debug)printk("was ready\n");
+ return 1;
+ }
+}
+
+static int doc200x_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip)
+{
+ /* This is our last resort if we couldn't find or create a BBT. Just
+ pretend all blocks are good. */
+ return 0;
+}
+
+static void doc200x_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ /* Prime the ECC engine */
+ switch(mode) {
+ case NAND_ECC_READ:
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN, docptr, ECCConf);
+ break;
+ case NAND_ECC_WRITE:
+ WriteDOC(DOC_ECC_RESET, docptr, ECCConf);
+ WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, ECCConf);
+ break;
+ }
+}
+
+static void doc2001plus_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+
+ /* Prime the ECC engine */
+ switch(mode) {
+ case NAND_ECC_READ:
+ WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
+ WriteDOC(DOC_ECC_EN, docptr, Mplus_ECCConf);
+ break;
+ case NAND_ECC_WRITE:
+ WriteDOC(DOC_ECC_RESET, docptr, Mplus_ECCConf);
+ WriteDOC(DOC_ECC_EN | DOC_ECC_RW, docptr, Mplus_ECCConf);
+ break;
+ }
+}
+
+/* This code is only called on write */
+static int doc200x_calculate_ecc(struct mtd_info *mtd, const u_char *dat,
+ unsigned char *ecc_code)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ int i;
+ int emptymatch = 1;
+
+ /* flush the pipeline */
+ if (DoC_is_2000(doc)) {
+ WriteDOC(doc->CDSNControl & ~CDSN_CTRL_FLASH_IO, docptr, CDSNControl);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(0, docptr, 2k_CDSN_IO);
+ WriteDOC(doc->CDSNControl, docptr, CDSNControl);
+ } else if (DoC_is_MillenniumPlus(doc)) {
+ WriteDOC(0, docptr, Mplus_NOP);
+ WriteDOC(0, docptr, Mplus_NOP);
+ WriteDOC(0, docptr, Mplus_NOP);
+ } else {
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ WriteDOC(0, docptr, NOP);
+ }
+
+ for (i = 0; i < 6; i++) {
+ if (DoC_is_MillenniumPlus(doc))
+ ecc_code[i] = ReadDOC_(docptr, DoC_Mplus_ECCSyndrome0 + i);
+ else
+ ecc_code[i] = ReadDOC_(docptr, DoC_ECCSyndrome0 + i);
+ if (ecc_code[i] != empty_write_ecc[i])
+ emptymatch = 0;
+ }
+ if (DoC_is_MillenniumPlus(doc))
+ WriteDOC(DOC_ECC_DIS, docptr, Mplus_ECCConf);
+ else
+ WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
+#if 0
+ /* If emptymatch=1, we might have an all-0xff data buffer. Check. */
+ if (emptymatch) {
+ /* Note: this somewhat expensive test should not be triggered
+ often. It could be optimized away by examining the data in
+ the writebuf routine, and remembering the result. */
+ for (i = 0; i < 512; i++) {
+ if (dat[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, we do have an all-0xff data buffer.
+ Return all-0xff ecc value instead of the computed one, so
+ it'll look just like a freshly-erased page. */
+ if (emptymatch) memset(ecc_code, 0xff, 6);
+#endif
+ return 0;
+}
+
+static int doc200x_correct_data(struct mtd_info *mtd, u_char *dat, u_char *read_ecc, u_char *calc_ecc)
+{
+ int i, ret = 0;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ void __iomem *docptr = doc->virtadr;
+ volatile u_char dummy;
+ int emptymatch = 1;
+
+ /* flush the pipeline */
+ if (DoC_is_2000(doc)) {
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ dummy = ReadDOC(docptr, 2k_ECCStatus);
+ } else if (DoC_is_MillenniumPlus(doc)) {
+ dummy = ReadDOC(docptr, Mplus_ECCConf);
+ dummy = ReadDOC(docptr, Mplus_ECCConf);
+ dummy = ReadDOC(docptr, Mplus_ECCConf);
+ } else {
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
+ dummy = ReadDOC(docptr, ECCConf);
+ }
+
+ /* Error occurred? */
+ if (dummy & 0x80) {
+ for (i = 0; i < 6; i++) {
+ if (DoC_is_MillenniumPlus(doc))
+ calc_ecc[i] = ReadDOC_(docptr, DoC_Mplus_ECCSyndrome0 + i);
+ else
+ calc_ecc[i] = ReadDOC_(docptr, DoC_ECCSyndrome0 + i);
+ if (calc_ecc[i] != empty_read_syndrome[i])
+ emptymatch = 0;
+ }
+ /* If emptymatch=1, the read syndrome is consistent with an
+ all-0xff data and stored ecc block. Check the stored ecc. */
+ if (emptymatch) {
+ for (i = 0; i < 6; i++) {
+ if (read_ecc[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, check the data block. */
+ if (emptymatch) {
+ /* Note: this somewhat expensive test should not be triggered
+ often. It could be optimized away by examining the data in
+ the readbuf routine, and remembering the result. */
+ for (i = 0; i < 512; i++) {
+ if (dat[i] == 0xff) continue;
+ emptymatch = 0;
+ break;
+ }
+ }
+ /* If emptymatch still =1, this is almost certainly a freshly-
+ erased block, in which case the ECC will not come out right.
+ We'll suppress the error and tell the caller everything's
+ OK. Because it is. */
+ if (!emptymatch) ret = doc_ecc_decode (rs_decoder, dat, calc_ecc);
+ if (ret > 0)
+ printk(KERN_ERR "doc200x_correct_data corrected %d errors\n", ret);
+ }
+ if (DoC_is_MillenniumPlus(doc))
+ WriteDOC(DOC_ECC_DIS, docptr, Mplus_ECCConf);
+ else
+ WriteDOC(DOC_ECC_DIS, docptr, ECCConf);
+ if (no_ecc_failures && (ret == -1)) {
+ printk(KERN_ERR "suppressing ECC failure\n");
+ ret = 0;
+ }
+ return ret;
+}
+
+
+static struct nand_oobinfo doc200x_oobinfo = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 6,
+ .eccpos = {0, 1, 2, 3, 4, 5},
+ .oobfree = { {8, 8} }
+};
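+/*
+ * In other words, of the 16 OOB bytes that accompany each 512-byte page,
+ * bytes 0-5 hold the six ECC bytes and bytes 8-15 are handed to the
+ * upper layers as free space; bytes 6-7 appear in neither list,
+ * presumably reserved for the (I)NFTL format's own bookkeeping.
+ */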
+
+/* Find the (I)NFTL Media Header, and optionally also the mirror media header.
+ On successful return, buf will contain a copy of the media header for
+ further processing. id is the string to scan for, and will presumably be
+ either "ANAND" or "BNAND". If findmirror=1, also look for the mirror media
+ header. The page #s of the found media headers are placed in mh0_page and
+ mh1_page in the DOC private structure. */
+static int __init find_media_headers(struct mtd_info *mtd, u_char *buf,
+ const char *id, int findmirror)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ unsigned offs, end = (MAX_MEDIAHEADER_SCAN << this->phys_erase_shift);
+ int ret;
+ size_t retlen;
+
+ end = min(end, mtd->size); // paranoia
+ for (offs = 0; offs < end; offs += mtd->erasesize) {
+ ret = mtd->read(mtd, offs, mtd->oobblock, &retlen, buf);
+ if (retlen != mtd->oobblock) continue;
+ if (ret) {
+ printk(KERN_WARNING "ECC error scanning DOC at 0x%x\n",
+ offs);
+ }
+ if (memcmp(buf, id, 6)) continue;
+ printk(KERN_INFO "Found DiskOnChip %s Media Header at 0x%x\n", id, offs);
+ if (doc->mh0_page == -1) {
+ doc->mh0_page = offs >> this->page_shift;
+ if (!findmirror) return 1;
+ continue;
+ }
+ doc->mh1_page = offs >> this->page_shift;
+ return 2;
+ }
+ if (doc->mh0_page == -1) {
+ printk(KERN_WARNING "DiskOnChip %s Media Header not found.\n", id);
+ return 0;
+ }
+ /* Only one mediaheader was found. We want buf to contain a
+ mediaheader on return, so we'll have to re-read the one we found. */
+ offs = doc->mh0_page << this->page_shift;
+ ret = mtd->read(mtd, offs, mtd->oobblock, &retlen, buf);
+ if (retlen != mtd->oobblock) {
+ /* Insanity. Give up. */
+ printk(KERN_ERR "Read DiskOnChip Media Header once, but can't reread it???\n");
+ return 0;
+ }
+ return 1;
+}
+
+static inline int __init nftl_partscan(struct mtd_info *mtd,
+ struct mtd_partition *parts)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ int ret = 0;
+ u_char *buf;
+ struct NFTLMediaHeader *mh;
+ const unsigned psize = 1 << this->page_shift;
+ unsigned blocks, maxblocks;
+ int offs, numheaders;
+
+ buf = kmalloc(mtd->oobblock, GFP_KERNEL);
+ if (!buf) {
+ printk(KERN_ERR "DiskOnChip mediaheader kmalloc failed!\n");
+ return 0;
+ }
+ if (!(numheaders=find_media_headers(mtd, buf, "ANAND", 1))) goto out;
+ mh = (struct NFTLMediaHeader *) buf;
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " DataOrgID = %s\n"
+ " NumEraseUnits = %d\n"
+ " FirstPhysicalEUN = %d\n"
+ " FormattedSize = %d\n"
+ " UnitSizeFactor = %d\n",
+ mh->DataOrgID, mh->NumEraseUnits,
+ mh->FirstPhysicalEUN, mh->FormattedSize,
+ mh->UnitSizeFactor);
+//#endif
+
+ blocks = mtd->size >> this->phys_erase_shift;
+ maxblocks = min(32768U, mtd->erasesize - psize);
+
+ if (mh->UnitSizeFactor == 0x00) {
+ /* Auto-determine UnitSizeFactor. The constraints are:
+ - There can be at most 32768 virtual blocks.
+ - There can be at most (virtual block size - page size)
+ virtual blocks (because MediaHeader+BBT must fit in 1).
+ */
+ mh->UnitSizeFactor = 0xff;
+ while (blocks > maxblocks) {
+ blocks >>= 1;
+ maxblocks = min(32768U, (maxblocks << 1) + psize);
+ mh->UnitSizeFactor--;
+ }
+ printk(KERN_WARNING "UnitSizeFactor=0x00 detected. Correct value is assumed to be 0x%02x.\n", mh->UnitSizeFactor);
+ }
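+
+ /* Worked example of the loop above (hypothetical geometry): a
+ 256MiB array with 8KiB physical erase blocks and 512-byte pages
+ starts with blocks = 32768, maxblocks = 7680. Two iterations
+ give blocks = 8192 and maxblocks = 32256, so UnitSizeFactor
+ becomes 0xfd and the virtual erase size ends up 4 * 8KiB = 32KiB. */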
+
+ /* NOTE: The lines below modify internal variables of the NAND and MTD
+ layers; variables which have already been configured by nand_scan.
+ Unfortunately, we didn't know before this point what these values
+ should be. Thus, this code is somewhat dependent on the exact
+ implementation of the NAND layer. */
+ if (mh->UnitSizeFactor != 0xff) {
+ this->bbt_erase_shift += (0xff - mh->UnitSizeFactor);
+ mtd->erasesize <<= (0xff - mh->UnitSizeFactor);
+ printk(KERN_INFO "Setting virtual erase size to %d\n", mtd->erasesize);
+ blocks = mtd->size >> this->bbt_erase_shift;
+ maxblocks = min(32768U, mtd->erasesize - psize);
+ }
+
+ if (blocks > maxblocks) {
+ printk(KERN_ERR "UnitSizeFactor of 0x%02x is inconsistent with device size. Aborting.\n", mh->UnitSizeFactor);
+ goto out;
+ }
+
+ /* Skip past the media headers. */
+ offs = max(doc->mh0_page, doc->mh1_page);
+ offs <<= this->page_shift;
+ offs += mtd->erasesize;
+
+ //parts[0].name = " DiskOnChip Boot / Media Header partition";
+ //parts[0].offset = 0;
+ //parts[0].size = offs;
+
+ parts[0].name = " DiskOnChip BDTL partition";
+ parts[0].offset = offs;
+ parts[0].size = (mh->NumEraseUnits - numheaders) << this->bbt_erase_shift;
+
+ offs += parts[0].size;
+ if (offs < mtd->size) {
+ parts[1].name = " DiskOnChip Remainder partition";
+ parts[1].offset = offs;
+ parts[1].size = mtd->size - offs;
+ ret = 2;
+ goto out;
+ }
+ ret = 1;
+out:
+ kfree(buf);
+ return ret;
+}
+
+/* This is a stripped-down copy of the code in inftlmount.c */
+static inline int __init inftl_partscan(struct mtd_info *mtd,
+ struct mtd_partition *parts)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ int ret = 0;
+ u_char *buf;
+ struct INFTLMediaHeader *mh;
+ struct INFTLPartition *ip;
+ int numparts = 0;
+ int blocks;
+ int vshift, lastvunit = 0;
+ int i;
+ int end = mtd->size;
+
+ if (inftl_bbt_write)
+ end -= (INFTL_BBT_RESERVED_BLOCKS << this->phys_erase_shift);
+
+ buf = kmalloc(mtd->oobblock, GFP_KERNEL);
+ if (!buf) {
+ printk(KERN_ERR "DiskOnChip mediaheader kmalloc failed!\n");
+ return 0;
+ }
+
+ if (!find_media_headers(mtd, buf, "BNAND", 0)) goto out;
+ doc->mh1_page = doc->mh0_page + (4096 >> this->page_shift);
+ mh = (struct INFTLMediaHeader *) buf;
+
+ mh->NoOfBootImageBlocks = le32_to_cpu(mh->NoOfBootImageBlocks);
+ mh->NoOfBinaryPartitions = le32_to_cpu(mh->NoOfBinaryPartitions);
+ mh->NoOfBDTLPartitions = le32_to_cpu(mh->NoOfBDTLPartitions);
+ mh->BlockMultiplierBits = le32_to_cpu(mh->BlockMultiplierBits);
+ mh->FormatFlags = le32_to_cpu(mh->FormatFlags);
+ mh->PercentUsed = le32_to_cpu(mh->PercentUsed);
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " bootRecordID = %s\n"
+ " NoOfBootImageBlocks = %d\n"
+ " NoOfBinaryPartitions = %d\n"
+ " NoOfBDTLPartitions = %d\n"
+ " BlockMultiplerBits = %d\n"
+ " FormatFlgs = %d\n"
+ " OsakVersion = %d.%d.%d.%d\n"
+ " PercentUsed = %d\n",
+ mh->bootRecordID, mh->NoOfBootImageBlocks,
+ mh->NoOfBinaryPartitions,
+ mh->NoOfBDTLPartitions,
+ mh->BlockMultiplierBits, mh->FormatFlags,
+ ((unsigned char *) &mh->OsakVersion)[0] & 0xf,
+ ((unsigned char *) &mh->OsakVersion)[1] & 0xf,
+ ((unsigned char *) &mh->OsakVersion)[2] & 0xf,
+ ((unsigned char *) &mh->OsakVersion)[3] & 0xf,
+ mh->PercentUsed);
+//#endif
+
+ vshift = this->phys_erase_shift + mh->BlockMultiplierBits;
+
+ blocks = mtd->size >> vshift;
+ if (blocks > 32768) {
+ printk(KERN_ERR "BlockMultiplierBits=%d is inconsistent with device size. Aborting.\n", mh->BlockMultiplierBits);
+ goto out;
+ }
+
+ blocks = doc->chips_per_floor << (this->chip_shift - this->phys_erase_shift);
+ if (inftl_bbt_write && (blocks > mtd->erasesize)) {
+ printk(KERN_ERR "Writeable BBTs spanning more than one erase block are not yet supported. FIX ME!\n");
+ goto out;
+ }
+
+ /* Scan the partitions */
+ for (i = 0; (i < 4); i++) {
+ ip = &(mh->Partitions[i]);
+ ip->virtualUnits = le32_to_cpu(ip->virtualUnits);
+ ip->firstUnit = le32_to_cpu(ip->firstUnit);
+ ip->lastUnit = le32_to_cpu(ip->lastUnit);
+ ip->flags = le32_to_cpu(ip->flags);
+ ip->spareUnits = le32_to_cpu(ip->spareUnits);
+ ip->Reserved0 = le32_to_cpu(ip->Reserved0);
+
+//#ifdef CONFIG_MTD_DEBUG_VERBOSE
+// if (CONFIG_MTD_DEBUG_VERBOSE >= 2)
+ printk(KERN_INFO " PARTITION[%d] ->\n"
+ " virtualUnits = %d\n"
+ " firstUnit = %d\n"
+ " lastUnit = %d\n"
+ " flags = 0x%x\n"
+ " spareUnits = %d\n",
+ i, ip->virtualUnits, ip->firstUnit,
+ ip->lastUnit, ip->flags,
+ ip->spareUnits);
+//#endif
+
+/*
+ if ((i == 0) && (ip->firstUnit > 0)) {
+ parts[0].name = " DiskOnChip IPL / Media Header partition";
+ parts[0].offset = 0;
+ parts[0].size = mtd->erasesize * ip->firstUnit;
+ numparts = 1;
+ }
+*/
+
+ if (ip->flags & INFTL_BINARY)
+ parts[numparts].name = " DiskOnChip BDK partition";
+ else
+ parts[numparts].name = " DiskOnChip BDTL partition";
+ parts[numparts].offset = ip->firstUnit << vshift;
+ parts[numparts].size = (1 + ip->lastUnit - ip->firstUnit) << vshift;
+ numparts++;
+ if (ip->lastUnit > lastvunit) lastvunit = ip->lastUnit;
+ if (ip->flags & INFTL_LAST) break;
+ }
+ lastvunit++;
+ if ((lastvunit << vshift) < end) {
+ parts[numparts].name = " DiskOnChip Remainder partition";
+ parts[numparts].offset = lastvunit << vshift;
+ parts[numparts].size = end - parts[numparts].offset;
+ numparts++;
+ }
+ ret = numparts;
+out:
+ kfree(buf);
+ return ret;
+}
+
+static int __init nftl_scan_bbt(struct mtd_info *mtd)
+{
+ int ret, numparts;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ struct mtd_partition parts[2];
+
+ memset((char *) parts, 0, sizeof(parts));
+ /* On NFTL, we have to find the media headers before we can read the
+ BBTs, since they're stored in the media header eraseblocks. */
+ numparts = nftl_partscan(mtd, parts);
+ if (!numparts) return -EIO;
+ this->bbt_td->options = NAND_BBT_ABSPAGE | NAND_BBT_8BIT |
+ NAND_BBT_SAVECONTENT | NAND_BBT_WRITE |
+ NAND_BBT_VERSION;
+ this->bbt_td->veroffs = 7;
+ this->bbt_td->pages[0] = doc->mh0_page + 1;
+ if (doc->mh1_page != -1) {
+ this->bbt_md->options = NAND_BBT_ABSPAGE | NAND_BBT_8BIT |
+ NAND_BBT_SAVECONTENT | NAND_BBT_WRITE |
+ NAND_BBT_VERSION;
+ this->bbt_md->veroffs = 7;
+ this->bbt_md->pages[0] = doc->mh1_page + 1;
+ } else {
+ this->bbt_md = NULL;
+ }
+
+ /* It's safe to set bd=NULL below because NAND_BBT_CREATE is not set.
+ At least as nand_bbt.c is currently written. */
+ if ((ret = nand_scan_bbt(mtd, NULL)))
+ return ret;
+ add_mtd_device(mtd);
+#ifdef CONFIG_MTD_PARTITIONS
+ if (!no_autopart)
+ add_mtd_partitions(mtd, parts, numparts);
+#endif
+ return 0;
+}
+
+static int __init inftl_scan_bbt(struct mtd_info *mtd)
+{
+ int ret, numparts;
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+ struct mtd_partition parts[5];
+
+ if (this->numchips > doc->chips_per_floor) {
+ printk(KERN_ERR "Multi-floor INFTL devices not yet supported.\n");
+ return -EIO;
+ }
+
+ if (DoC_is_MillenniumPlus(doc)) {
+ this->bbt_td->options = NAND_BBT_2BIT | NAND_BBT_ABSPAGE;
+ if (inftl_bbt_write)
+ this->bbt_td->options |= NAND_BBT_WRITE;
+ this->bbt_td->pages[0] = 2;
+ this->bbt_md = NULL;
+ } else {
+ this->bbt_td->options = NAND_BBT_LASTBLOCK | NAND_BBT_8BIT |
+ NAND_BBT_VERSION;
+ if (inftl_bbt_write)
+ this->bbt_td->options |= NAND_BBT_WRITE;
+ this->bbt_td->offs = 8;
+ this->bbt_td->len = 8;
+ this->bbt_td->veroffs = 7;
+ this->bbt_td->maxblocks = INFTL_BBT_RESERVED_BLOCKS;
+ this->bbt_td->reserved_block_code = 0x01;
+ this->bbt_td->pattern = "MSYS_BBT";
+
+ this->bbt_md->options = NAND_BBT_LASTBLOCK | NAND_BBT_8BIT |
+ NAND_BBT_VERSION;
+ if (inftl_bbt_write)
+ this->bbt_md->options |= NAND_BBT_WRITE;
+ this->bbt_md->offs = 8;
+ this->bbt_md->len = 8;
+ this->bbt_md->veroffs = 7;
+ this->bbt_md->maxblocks = INFTL_BBT_RESERVED_BLOCKS;
+ this->bbt_md->reserved_block_code = 0x01;
+ this->bbt_md->pattern = "TBB_SYSM";
+ }
+
+ /* It's safe to set bd=NULL below because NAND_BBT_CREATE is not set.
+ At least as nand_bbt.c is currently written. */
+ if ((ret = nand_scan_bbt(mtd, NULL)))
+ return ret;
+ memset((char *) parts, 0, sizeof(parts));
+ numparts = inftl_partscan(mtd, parts);
+ /* At least for now, require the INFTL Media Header. We could probably
+ do without it for non-INFTL use, since all it gives us is
+ autopartitioning, but I want to give it more thought. */
+ if (!numparts) return -EIO;
+ add_mtd_device(mtd);
+#ifdef CONFIG_MTD_PARTITIONS
+ if (!no_autopart)
+ add_mtd_partitions(mtd, parts, numparts);
+#endif
+ return 0;
+}
+
+static inline int __init doc2000_init(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+
+ this->write_byte = doc2000_write_byte;
+ this->read_byte = doc2000_read_byte;
+ this->write_buf = doc2000_writebuf;
+ this->read_buf = doc2000_readbuf;
+ this->verify_buf = doc2000_verifybuf;
+ this->scan_bbt = nftl_scan_bbt;
+
+ doc->CDSNControl = CDSN_CTRL_FLASH_IO | CDSN_CTRL_ECC_IO;
+ doc2000_count_chips(mtd);
+ mtd->name = "DiskOnChip 2000 (NFTL Model)";
+ return (4 * doc->chips_per_floor);
+}
+
+static inline int __init doc2001_init(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+
+ this->write_byte = doc2001_write_byte;
+ this->read_byte = doc2001_read_byte;
+ this->write_buf = doc2001_writebuf;
+ this->read_buf = doc2001_readbuf;
+ this->verify_buf = doc2001_verifybuf;
+
+ ReadDOC(doc->virtadr, ChipID);
+ ReadDOC(doc->virtadr, ChipID);
+ ReadDOC(doc->virtadr, ChipID);
+ if (ReadDOC(doc->virtadr, ChipID) != DOC_ChipID_DocMil) {
+ /* It's not a Millennium; it's one of the newer
+ DiskOnChip 2000 units with a similar ASIC.
+ Treat it like a Millennium, except that it
+ can have multiple chips. */
+ doc2000_count_chips(mtd);
+ mtd->name = "DiskOnChip 2000 (INFTL Model)";
+ this->scan_bbt = inftl_scan_bbt;
+ return (4 * doc->chips_per_floor);
+ } else {
+ /* Bog-standard Millennium */
+ doc->chips_per_floor = 1;
+ mtd->name = "DiskOnChip Millennium";
+ this->scan_bbt = nftl_scan_bbt;
+ return 1;
+ }
+}
+
+static inline int __init doc2001plus_init(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ struct doc_priv *doc = (void *)this->priv;
+
+ this->write_byte = NULL;
+ this->read_byte = doc2001plus_read_byte;
+ this->write_buf = doc2001plus_writebuf;
+ this->read_buf = doc2001plus_readbuf;
+ this->verify_buf = doc2001plus_verifybuf;
+ this->scan_bbt = inftl_scan_bbt;
+ this->hwcontrol = NULL;
+ this->select_chip = doc2001plus_select_chip;
+ this->cmdfunc = doc2001plus_command;
+ this->enable_hwecc = doc2001plus_enable_hwecc;
+
+ doc->chips_per_floor = 1;
+ mtd->name = "DiskOnChip Millennium Plus";
+
+ return 1;
+}
+
+static inline int __init doc_probe(unsigned long physadr)
+{
+ unsigned char ChipID;
+ struct mtd_info *mtd;
+ struct nand_chip *nand;
+ struct doc_priv *doc;
+ void __iomem *virtadr;
+ unsigned char save_control;
+ unsigned char tmp, tmpb, tmpc;
+ int reg, len, numchips;
+ int ret = 0;
+
+ virtadr = ioremap(physadr, DOC_IOREMAP_LEN);
+ if (!virtadr) {
+ printk(KERN_ERR "Diskonchip ioremap failed: 0x%x bytes at 0x%lx\n", DOC_IOREMAP_LEN, physadr);
+ return -EIO;
+ }
+
+ /* It's not possible to cleanly detect the DiskOnChip - the
+ * bootup procedure will put the device into reset mode, and
+ * it's not possible to talk to it without actually writing
+ * to the DOCControl register. So we store the current contents
+ * of the DOCControl register's location, in case we later decide
+ * that it's not a DiskOnChip, and want to put it back how we
+ * found it.
+ */
+ save_control = ReadDOC(virtadr, DOCControl);
+
+ /* Reset the DiskOnChip ASIC */
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_RESET,
+ virtadr, DOCControl);
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_RESET,
+ virtadr, DOCControl);
+
+ /* Enable the DiskOnChip ASIC */
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_NORMAL,
+ virtadr, DOCControl);
+ WriteDOC(DOC_MODE_CLR_ERR | DOC_MODE_MDWREN | DOC_MODE_NORMAL,
+ virtadr, DOCControl);
+
+ ChipID = ReadDOC(virtadr, ChipID);
+
+ switch(ChipID) {
+ case DOC_ChipID_Doc2k:
+ reg = DoC_2k_ECCStatus;
+ break;
+ case DOC_ChipID_DocMil:
+ reg = DoC_ECCConf;
+ break;
+ case DOC_ChipID_DocMilPlus16:
+ case DOC_ChipID_DocMilPlus32:
+ case 0:
+ /* Possible Millennium Plus, need to do more checks */
+ /* Possibly release from power down mode */
+ for (tmp = 0; (tmp < 4); tmp++)
+ ReadDOC(virtadr, Mplus_Power);
+
+ /* Reset the Millennium Plus ASIC */
+ tmp = DOC_MODE_RESET | DOC_MODE_MDWREN | DOC_MODE_RST_LAT |
+ DOC_MODE_BDECT;
+ WriteDOC(tmp, virtadr, Mplus_DOCControl);
+ WriteDOC(~tmp, virtadr, Mplus_CtrlConfirm);
+
+ mdelay(1);
+ /* Enable the Millennium Plus ASIC */
+ tmp = DOC_MODE_NORMAL | DOC_MODE_MDWREN | DOC_MODE_RST_LAT |
+ DOC_MODE_BDECT;
+ WriteDOC(tmp, virtadr, Mplus_DOCControl);
+ WriteDOC(~tmp, virtadr, Mplus_CtrlConfirm);
+ mdelay(1);
+
+ ChipID = ReadDOC(virtadr, ChipID);
+
+ switch (ChipID) {
+ case DOC_ChipID_DocMilPlus16:
+ reg = DoC_Mplus_Toggle;
+ break;
+ case DOC_ChipID_DocMilPlus32:
+ printk(KERN_ERR "DiskOnChip Millennium Plus 32MB is not supported, ignoring.\n");
+ default:
+ ret = -ENODEV;
+ goto notfound;
+ }
+ break;
+
+ default:
+ ret = -ENODEV;
+ goto notfound;
+ }
+ /* Check the TOGGLE bit in the ECC register */
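+ /* The toggle bit flips on each successive read: two consecutive
+ reads must differ, and a third read must match the first again,
+ otherwise there is no DiskOnChip at this address. */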
+ tmp = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ tmpb = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ tmpc = ReadDOC_(virtadr, reg) & DOC_TOGGLE_BIT;
+ if ((tmp == tmpb) || (tmp != tmpc)) {
+ printk(KERN_WARNING "Possible DiskOnChip at 0x%lx failed TOGGLE test, dropping.\n", physadr);
+ ret = -ENODEV;
+ goto notfound;
+ }
+
+ for (mtd = doclist; mtd; mtd = doc->nextdoc) {
+ unsigned char oldval;
+ unsigned char newval;
+ nand = mtd->priv;
+ doc = (void *)nand->priv;
+ /* Use the alias resolution register to determine if this is
+ in fact the same DOC aliased to a new address. If writes
+ to one chip's alias resolution register change the value on
+ the other chip, they're the same chip. */
+ if (ChipID == DOC_ChipID_DocMilPlus16) {
+ oldval = ReadDOC(doc->virtadr, Mplus_AliasResolution);
+ newval = ReadDOC(virtadr, Mplus_AliasResolution);
+ } else {
+ oldval = ReadDOC(doc->virtadr, AliasResolution);
+ newval = ReadDOC(virtadr, AliasResolution);
+ }
+ if (oldval != newval)
+ continue;
+ if (ChipID == DOC_ChipID_DocMilPlus16) {
+ WriteDOC(~newval, virtadr, Mplus_AliasResolution);
+ oldval = ReadDOC(doc->virtadr, Mplus_AliasResolution);
+ WriteDOC(newval, virtadr, Mplus_AliasResolution); // restore it
+ } else {
+ WriteDOC(~newval, virtadr, AliasResolution);
+ oldval = ReadDOC(doc->virtadr, AliasResolution);
+ WriteDOC(newval, virtadr, AliasResolution); // restore it
+ }
+ newval = ~newval;
+ if (oldval == newval) {
+ printk(KERN_DEBUG "Found alias of DOC at 0x%lx to 0x%lx\n", doc->physadr, physadr);
+ goto notfound;
+ }
+ }
+
+ printk(KERN_NOTICE "DiskOnChip found at 0x%lx\n", physadr);
+
+ len = sizeof(struct mtd_info) +
+ sizeof(struct nand_chip) +
+ sizeof(struct doc_priv) +
+ (2 * sizeof(struct nand_bbt_descr));
+ mtd = kmalloc(len, GFP_KERNEL);
+ if (!mtd) {
+ printk(KERN_ERR "DiskOnChip kmalloc (%d bytes) failed!\n", len);
+ ret = -ENOMEM;
+ goto fail;
+ }
+ memset(mtd, 0, len);
+
+ nand = (struct nand_chip *) (mtd + 1);
+ doc = (struct doc_priv *) (nand + 1);
+ nand->bbt_td = (struct nand_bbt_descr *) (doc + 1);
+ nand->bbt_md = nand->bbt_td + 1;
+
+ mtd->priv = (void *) nand;
+ mtd->owner = THIS_MODULE;
+
+ nand->priv = (void *) doc;
+ nand->select_chip = doc200x_select_chip;
+ nand->hwcontrol = doc200x_hwcontrol;
+ nand->dev_ready = doc200x_dev_ready;
+ nand->waitfunc = doc200x_wait;
+ nand->block_bad = doc200x_block_bad;
+ nand->enable_hwecc = doc200x_enable_hwecc;
+ nand->calculate_ecc = doc200x_calculate_ecc;
+ nand->correct_data = doc200x_correct_data;
+
+ nand->autooob = &doc200x_oobinfo;
+ nand->eccmode = NAND_ECC_HW6_512;
+ nand->options = NAND_USE_FLASH_BBT | NAND_HWECC_SYNDROME;
+
+ doc->physadr = physadr;
+ doc->virtadr = virtadr;
+ doc->ChipID = ChipID;
+ doc->curfloor = -1;
+ doc->curchip = -1;
+ doc->mh0_page = -1;
+ doc->mh1_page = -1;
+ doc->nextdoc = doclist;
+
+ if (ChipID == DOC_ChipID_Doc2k)
+ numchips = doc2000_init(mtd);
+ else if (ChipID == DOC_ChipID_DocMilPlus16)
+ numchips = doc2001plus_init(mtd);
+ else
+ numchips = doc2001_init(mtd);
+
+ if ((ret = nand_scan(mtd, numchips))) {
+ /* DBB note: I believe nand_release is necessary here, as
+ buffers may have been allocated in nand_base. Check with
+ Thomas. FIX ME! */
+ /* nand_release will call del_mtd_device, but we haven't yet
+ added it. This is handled without incident by
+ del_mtd_device, as far as I can tell. */
+ nand_release(mtd);
+ kfree(mtd);
+ goto fail;
+ }
+
+ /* Success! */
+ doclist = mtd;
+ return 0;
+
+notfound:
+ /* Put back the contents of the DOCControl register, in case it's not
+ actually a DiskOnChip. */
+ WriteDOC(save_control, virtadr, DOCControl);
+fail:
+ iounmap((void *)virtadr);
+ return ret;
+}
+
+static void release_nanddoc(void)
+{
+ struct mtd_info *mtd, *nextmtd;
+ struct nand_chip *nand;
+ struct doc_priv *doc;
+
+ for (mtd = doclist; mtd; mtd = nextmtd) {
+ nand = mtd->priv;
+ doc = (void *)nand->priv;
+
+ nextmtd = doc->nextdoc;
+ nand_release(mtd);
+ iounmap((void *)doc->virtadr);
+ kfree(mtd);
+ }
+}
+
+static int __init init_nanddoc(void)
+{
+ int i, ret = 0;
+
+ /* We could create the decoder on demand, if memory is a concern.
+ * This way we have it handy, if an error happens
+ *
+ * Symbolsize is 10 (bits)
+ * Primitive polynomial is x^10+x^3+1
+ * first consecutive root is 510
+ * primitive element to generate roots = 1
+ * generator polynomial degree = 4
+ */
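+ /* 0x409 encodes the polynomial above: bits 10, 3 and 0 are set */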
+ rs_decoder = init_rs(10, 0x409, FCR, 1, NROOTS);
+ if (!rs_decoder) {
+ printk (KERN_ERR "DiskOnChip: Could not create a RS decoder\n");
+ return -ENOMEM;
+ }
+
+ if (doc_config_location) {
+ printk(KERN_INFO "Using configured DiskOnChip probe address 0x%lx\n", doc_config_location);
+ ret = doc_probe(doc_config_location);
+ if (ret < 0)
+ goto outerr;
+ } else {
+ for (i=0; (doc_locations[i] != 0xffffffff); i++) {
+ doc_probe(doc_locations[i]);
+ }
+ }
+ /* No banner message any more. Print a message if no DiskOnChip
+ found, so the user knows we at least tried. */
+ if (!doclist) {
+ printk(KERN_INFO "No valid DiskOnChip devices found\n");
+ ret = -ENODEV;
+ goto outerr;
+ }
+ return 0;
+outerr:
+ free_rs(rs_decoder);
+ return ret;
+}
+
+static void __exit cleanup_nanddoc(void)
+{
+ /* Cleanup the nand/DoC resources */
+ release_nanddoc();
+
+ /* Free the reed solomon resources */
+ if (rs_decoder) {
+ free_rs(rs_decoder);
+ }
+}
+
+module_init(init_nanddoc);
+module_exit(cleanup_nanddoc);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("David Woodhouse <dwmw2@infradead.org>");
+MODULE_DESCRIPTION("M-Systems DiskOnChip 2000, Millennium and Millennium Plus device driver\n");
--- /dev/null
+/*
+ * drivers/mtd/nand.c
+ *
+ * Overview:
+ * This is the generic MTD driver for NAND flash devices. It should be
+ * capable of working with almost all NAND chips currently available.
+ * Basic support for AG-AND chips is provided.
+ *
+ * Additional technical information is available on
+ * http://www.linux-mtd.infradead.org/tech/nand.html
+ *
+ * Copyright (C) 2000 Steven J. Hill (sjhill@realitydiluted.com)
+ * 2002 Thomas Gleixner (tglx@linutronix.de)
+ *
+ * 02-08-2004 tglx: support for strange chips, which cannot auto increment
+ * pages on read / read_oob
+ *
+ * 03-17-2004 tglx: Check ready before auto increment check. Simon Bayes
+ * pointed this out, as he marked an auto increment capable chip
+ * as NOAUTOINCR in the board driver.
+ * Make reads over block boundaries work too
+ *
+ * 04-14-2004 tglx: first working version for 2k page size chips
+ *
+ * 05-19-2004 tglx: Basic support for Renesas AG-AND chips
+ *
+ * 09-24-2004 tglx: add support for hardware controllers (e.g. ECC) shared
+ * among multiple independent devices. Suggestions and initial patch
+ * from Ben Dooks <ben-mtd@fluff.org>
+ *
+ * Credits:
+ * David Woodhouse for adding multichip support
+ *
+ * Aleph One Ltd. and Toby Churchill Ltd. for supporting the
+ * rework for 2K page size chips
+ *
+ * TODO:
+ * Enable cached programming for 2k page size chips
+ * Check, if mtd->ecctype should be set to MTD_ECC_HW
+ * if we have HW ecc support.
+ * The AG-AND chips have nice features for speed improvement,
+ * which are not supported yet. Read / program 4 pages in one go.
+ *
+ * $Id: nand_base.c,v 1.121 2004/10/06 19:53:11 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/delay.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_MTD_PARTITIONS
+#include <linux/mtd/partitions.h>
+#endif
+
+/* Define default oob placement schemes for large and small page devices */
+static struct nand_oobinfo nand_oob_8 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 3,
+ .eccpos = {0, 1, 2},
+ .oobfree = { {3, 2}, {6, 2} }
+};
+
+static struct nand_oobinfo nand_oob_16 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 6,
+ .eccpos = {0, 1, 2, 3, 6, 7},
+ .oobfree = { {8, 8} }
+};
+
+static struct nand_oobinfo nand_oob_64 = {
+ .useecc = MTD_NANDECC_AUTOPLACE,
+ .eccbytes = 24,
+ .eccpos = {
+ 40, 41, 42, 43, 44, 45, 46, 47,
+ 48, 49, 50, 51, 52, 53, 54, 55,
+ 56, 57, 58, 59, 60, 61, 62, 63},
+ .oobfree = { {2, 38} }
+};
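+
+/*
+ * For reference, derived from the tables above: on 64-byte OOB
+ * devices the 24 ECC bytes occupy the tail of the spare area
+ * (bytes 40-63), bytes 2-39 are free, and bytes 0-1 are typically
+ * reserved for the bad block marker.
+ */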
+
+/* This is used for padding purposes in nand_write_oob */
+static u_char ffchars[] = {
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff,
+};
+
+/*
+ * NAND low-level MTD interface functions
+ */
+static void nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len);
+static void nand_read_buf(struct mtd_info *mtd, u_char *buf, int len);
+static int nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len);
+
+static int nand_read (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf);
+static int nand_read_ecc (struct mtd_info *mtd, loff_t from, size_t len,
+ size_t * retlen, u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel);
+static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf);
+static int nand_write (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf);
+static int nand_write_ecc (struct mtd_info *mtd, loff_t to, size_t len,
+ size_t * retlen, const u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel);
+static int nand_write_oob (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char *buf);
+static int nand_writev (struct mtd_info *mtd, const struct kvec *vecs,
+ unsigned long count, loff_t to, size_t * retlen);
+static int nand_writev_ecc (struct mtd_info *mtd, const struct kvec *vecs,
+ unsigned long count, loff_t to, size_t * retlen, u_char *eccbuf, struct nand_oobinfo *oobsel);
+static int nand_erase (struct mtd_info *mtd, struct erase_info *instr);
+static void nand_sync (struct mtd_info *mtd);
+
+/* Some internal functions */
+static int nand_write_page (struct mtd_info *mtd, struct nand_chip *this, int page, u_char *oob_buf,
+ struct nand_oobinfo *oobsel, int mode);
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+static int nand_verify_pages (struct mtd_info *mtd, struct nand_chip *this, int page, int numpages,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int chipnr, int oobmode);
+#else
+#define nand_verify_pages(...) (0)
+#endif
+
+static void nand_get_device (struct nand_chip *this, struct mtd_info *mtd, int new_state);
+
+/**
+ * nand_release_device - [GENERIC] release chip
+ * @mtd: MTD device structure
+ *
+ * Deselect, release chip lock and wake up anyone waiting on the device
+ */
+static void nand_release_device (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* De-select the NAND device */
+ this->select_chip(mtd, -1);
+ /* Do we have a hardware controller ? */
+ if (this->controller) {
+ spin_lock(&this->controller->lock);
+ this->controller->active = NULL;
+ spin_unlock(&this->controller->lock);
+ }
+ /* Release the chip */
+ spin_lock (&this->chip_lock);
+ this->state = FL_READY;
+ wake_up (&this->wq);
+ spin_unlock (&this->chip_lock);
+}
+
+/**
+ * nand_read_byte - [DEFAULT] read one byte from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 8bit buswidth
+ */
+static u_char nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return readb(this->IO_ADDR_R);
+}
+
+/**
+ * nand_write_byte - [DEFAULT] write one byte to the chip
+ * @mtd: MTD device structure
+ * @byte: pointer to data byte to write
+ *
+ * Default write function for 8bit buswidth
+ */
+static void nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writeb(byte, this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_byte16 - [DEFAULT] read one byte endianness aware from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 16bit buswidth with
+ * endianness conversion
+ */
+static u_char nand_read_byte16(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return (u_char) cpu_to_le16(readw(this->IO_ADDR_R));
+}
+
+/**
+ * nand_write_byte16 - [DEFAULT] write one byte endianness aware to the chip
+ * @mtd: MTD device structure
+ * @byte: pointer to data byte to write
+ *
+ * Default write function for 16bit buswidth with
+ * endianness conversion
+ */
+static void nand_write_byte16(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(le16_to_cpu((u16) byte), this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_word - [DEFAULT] read one word from the chip
+ * @mtd: MTD device structure
+ *
+ * Default read function for 16bit buswidth without
+ * endianness conversion
+ */
+static u16 nand_read_word(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return readw(this->IO_ADDR_R);
+}
+
+/**
+ * nand_write_word - [DEFAULT] write one word to the chip
+ * @mtd: MTD device structure
+ * @word: data word to write
+ *
+ * Default write function for 16bit buswidth without
+ * endianness conversion
+ */
+static void nand_write_word(struct mtd_info *mtd, u16 word)
+{
+ struct nand_chip *this = mtd->priv;
+ writew(word, this->IO_ADDR_W);
+}
+
+/**
+ * nand_select_chip - [DEFAULT] control CE line
+ * @mtd: MTD device structure
+ * @chip: chipnumber to select, -1 for deselect
+ *
+ * Default select function for 1 chip devices.
+ */
+static void nand_select_chip(struct mtd_info *mtd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ switch(chip) {
+ case -1:
+ this->hwcontrol(mtd, NAND_CTL_CLRNCE);
+ break;
+ case 0:
+ this->hwcontrol(mtd, NAND_CTL_SETNCE);
+ break;
+
+ default:
+ BUG();
+ }
+}
+
+/**
+ * nand_write_buf - [DEFAULT] write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * Default write function for 8bit buswidth
+ */
+static void nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ writeb(buf[i], this->IO_ADDR_W);
+}
+
+/**
+ * nand_read_buf - [DEFAULT] read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * Default read function for 8bit buswidth
+ */
+static void nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = readb(this->IO_ADDR_R);
+}
+
+/**
+ * nand_verify_buf - [DEFAULT] Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * Default verify function for 8bit buswidth
+ */
+static int nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != readb(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/**
+ * nand_write_buf16 - [DEFAULT] write buffer to chip
+ * @mtd: MTD device structure
+ * @buf: data buffer
+ * @len: number of bytes to write
+ *
+ * Default write function for 16bit buswidth
+ */
+static void nand_write_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ writew(p[i], this->IO_ADDR_W);
+
+}
+
+/**
+ * nand_read_buf16 - [DEFAULT] read chip data into buffer
+ * @mtd: MTD device structure
+ * @buf: buffer to store data
+ * @len: number of bytes to read
+ *
+ * Default read function for 16bit buswidth
+ */
+static void nand_read_buf16(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ p[i] = readw(this->IO_ADDR_R);
+}
+
+/**
+ * nand_verify_buf16 - [DEFAULT] Verify chip data against buffer
+ * @mtd: MTD device structure
+ * @buf: buffer containing the data to compare
+ * @len: number of bytes to compare
+ *
+ * Default verify function for 16bit buswidth
+ */
+static int nand_verify_buf16(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+ u16 *p = (u16 *) buf;
+ len >>= 1;
+
+ for (i=0; i<len; i++)
+ if (p[i] != readw(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/**
+ * nand_block_bad - [DEFAULT] Read bad block marker from the chip
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ * @getchip: 0, if the chip is already selected
+ *
+ * Check, if the block is bad.
+ */
+static int nand_block_bad(struct mtd_info *mtd, loff_t ofs, int getchip)
+{
+ int page, chipnr, res = 0;
+ struct nand_chip *this = mtd->priv;
+ u16 bad;
+
+ if (getchip) {
+ page = (int)(ofs >> this->page_shift);
+ chipnr = (int)(ofs >> this->chip_shift);
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_READING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+ } else
+ page = (int) ofs;
+
+ if (this->options & NAND_BUSWIDTH_16) {
+ this->cmdfunc (mtd, NAND_CMD_READOOB, this->badblockpos & 0xFE, page & this->pagemask);
+ bad = cpu_to_le16(this->read_word(mtd));
+ if (this->badblockpos & 0x1)
+ bad >>= 1;
+ if ((bad & 0xFF) != 0xff)
+ res = 1;
+ } else {
+ this->cmdfunc (mtd, NAND_CMD_READOOB, this->badblockpos, page & this->pagemask);
+ if (this->read_byte(mtd) != 0xff)
+ res = 1;
+ }
+
+ if (getchip) {
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+ }
+
+ return res;
+}
+
+/**
+ * nand_default_block_markbad - [DEFAULT] mark a block bad
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ *
+ * This is the default implementation, which can be overridden by
+ * a hardware specific driver.
+ */
+static int nand_default_block_markbad(struct mtd_info *mtd, loff_t ofs)
+{
+ struct nand_chip *this = mtd->priv;
+ u_char buf[2] = {0, 0};
+ size_t retlen;
+ int block;
+
+ /* Get block number */
+ block = ((int) ofs) >> this->bbt_erase_shift;
+ this->bbt[block >> 2] |= 0x01 << ((block & 0x03) << 1);
+
+ /* Do we have a flash based bad block table ? */
+ if (this->options & NAND_USE_FLASH_BBT)
+ return nand_update_bbt (mtd, ofs);
+
+ /* We write two bytes, so we don't have to mess with 16 bit access */
+ ofs += mtd->oobsize + (this->badblockpos & ~0x01);
+ return nand_write_oob (mtd, ofs , 2, &retlen, buf);
+}
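+
+/*
+ * Illustration of the in-memory BBT encoding used above (two bits per
+ * block, four blocks per byte): marking block 5 bad, for example,
+ * sets bit 2 of this->bbt[1], since 5 >> 2 == 1 and (5 & 3) << 1 == 2.
+ */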
+
+/**
+ * nand_check_wp - [GENERIC] check if the chip is write protected
+ * @mtd: MTD device structure
+ * Check if the device is write protected
+ *
+ * The function expects that the device is already selected
+ */
+static int nand_check_wp (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Check the WP bit */
+ this->cmdfunc (mtd, NAND_CMD_STATUS, -1, -1);
+ return (this->read_byte(mtd) & 0x80) ? 0 : 1;
+}
+
+/**
+ * nand_block_checkbad - [GENERIC] Check if a block is marked bad
+ * @mtd: MTD device structure
+ * @ofs: offset from device start
+ * @getchip: 0, if the chip is already selected
+ * @allowbbt: 1, if it's allowed to access the bbt area
+ *
+ * Check if the block is bad, either by reading the bad block table or
+ * by calling the scan function.
+ */
+static int nand_block_checkbad (struct mtd_info *mtd, loff_t ofs, int getchip, int allowbbt)
+{
+ struct nand_chip *this = mtd->priv;
+
+ if (!this->bbt)
+ return this->block_bad(mtd, ofs, getchip);
+
+ /* Return info from the table */
+ return nand_isbad_bbt (mtd, ofs, allowbbt);
+}
+
+/**
+ * nand_command - [DEFAULT] Send command to NAND device
+ * @mtd: MTD device structure
+ * @command: the command to be sent
+ * @column: the column address for this command, -1 if none
+ * @page_addr: the page address for this command, -1 if none
+ *
+ * Send command to NAND device. This function is used for small page
+ * devices (256/512 Bytes per page)
+ */
+static void nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1) {
+ /* Adjust columns for 16 bit buswidth */
+ if (this->options & NAND_BUSWIDTH_16)
+ column >>= 1;
+ this->write_byte(mtd, column);
+ }
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for devices > 32MiB */
+ if (this->chipsize > (32 << 20))
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in need no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
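+
+/*
+ * Example bus sequence (hypothetical values, for illustration): a
+ * READ0 of page 0x1234, column 0, on a small-page chip larger than
+ * 32MiB issues
+ *
+ *	CLE: 0x00                    (NAND_CMD_READ0)
+ *	ALE: 0x00, 0x34, 0x12, 0x00  (column, then three page bytes)
+ *
+ * followed by the tWB delay and the ready/busy wait above.
+ */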
+
+/**
+ * nand_command_lp - [DEFAULT] Send command to NAND large page device
+ * @mtd: MTD device structure
+ * @command: the command to be sent
+ * @column: the column address for this command, -1 if none
+ * @page_addr: the page address for this command, -1 if none
+ *
+ * Send command to NAND device. This is the version for the new large page devices.
+ * We don't have the separate regions that we have in the small page devices.
+ * We must emulate NAND_CMD_READOOB to keep the code compatible.
+ *
+ */
+static void nand_command_lp (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Emulate NAND_CMD_READOOB */
+ if (command == NAND_CMD_READOOB) {
+ column += mtd->oobblock;
+ command = NAND_CMD_READ0;
+ }
+
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /* Write out the command to the device. */
+ this->write_byte(mtd, command);
+ /* End command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1) {
+ /* Adjust columns for 16 bit buswidth */
+ if (this->options & NAND_BUSWIDTH_16)
+ column >>= 1;
+ this->write_byte(mtd, column & 0xff);
+ this->write_byte(mtd, column >> 8);
+ }
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for devices > 128MiB */
+ if (this->chipsize > (128 << 20))
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0xff));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in need no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_CACHEDPROG:
+ case NAND_CMD_PAGEPROG:
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_SEQIN:
+ case NAND_CMD_STATUS:
+ return;
+
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ udelay(this->chip_delay);
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ case NAND_CMD_READ0:
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /* Write out the start read command */
+ this->write_byte(mtd, NAND_CMD_READSTART);
+ /* End command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ /* Fall through into ready check */
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
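+
+/*
+ * Illustration of the READOOB emulation above (assuming a 2KiB page,
+ * i.e. mtd->oobblock == 2048): a request for OOB column 8 becomes a
+ * READ0 at column 2056, since large page devices have no separate
+ * OOB region and thus no dedicated READOOB command.
+ */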
+
+/**
+ * nand_get_device - [GENERIC] Get chip for selected access
+ * @this: the nand chip descriptor
+ * @mtd: MTD device structure
+ * @new_state: the state which is requested
+ *
+ * Get the device and lock it for exclusive access
+ */
+static void nand_get_device (struct nand_chip *this, struct mtd_info *mtd, int new_state)
+{
+ struct nand_chip *active = this;
+
+ DECLARE_WAITQUEUE (wait, current);
+
+ /*
+ * Grab the lock and see if the device is available
+ */
+retry:
+ /* Hardware controller shared among independend devices */
+ if (this->controller) {
+ spin_lock (&this->controller->lock);
+ if (this->controller->active)
+ active = this->controller->active;
+ else
+ this->controller->active = this;
+ spin_unlock (&this->controller->lock);
+ }
+
+ if (active == this) {
+ spin_lock (&this->chip_lock);
+ if (this->state == FL_READY) {
+ this->state = new_state;
+ spin_unlock (&this->chip_lock);
+ return;
+ }
+ }
+ set_current_state (TASK_UNINTERRUPTIBLE);
+ add_wait_queue (&active->wq, &wait);
+ spin_unlock (&active->chip_lock);
+ schedule ();
+ remove_wait_queue (&active->wq, &wait);
+ goto retry;
+}
+
+/**
+ * nand_wait - [DEFAULT] wait until the command is done
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @state: state to select the max. timeout value
+ *
+ * Wait for command done. This applies to erase and program only
+ * Erase can take up to 400ms and program up to 20ms according to
+ * general NAND and SmartMedia specs
+ *
+ */
+static int nand_wait(struct mtd_info *mtd, struct nand_chip *this, int state)
+{
+
+ unsigned long timeo = jiffies;
+ int status;
+
+ if (state == FL_ERASING)
+ timeo += (HZ * 400) / 1000;
+ else
+ timeo += (HZ * 20) / 1000;
+
+ /* Apply this short delay always to ensure that we do wait tWB in
+ * any case on any machine. */
+ ndelay (100);
+
+ if ((state == FL_ERASING) && (this->options & NAND_IS_AND))
+ this->cmdfunc (mtd, NAND_CMD_STATUS_MULTI, -1, -1);
+ else
+ this->cmdfunc (mtd, NAND_CMD_STATUS, -1, -1);
+
+ while (time_before(jiffies, timeo)) {
+ /* Check, if we were interrupted */
+ if (this->state != state)
+ return 0;
+
+ if (this->dev_ready) {
+ if (this->dev_ready(mtd))
+ break;
+ } else {
+ if (this->read_byte(mtd) & NAND_STATUS_READY)
+ break;
+ }
+ yield ();
+ }
+ status = (int) this->read_byte(mtd);
+ return status;
+}
+
+/**
+ * nand_write_page - [GENERIC] write one page
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @page: startpage inside the chip, must be called with (page & this->pagemask)
+ * @oob_buf: out of band data buffer
+ * @oobsel: out of band selection structure
+ * @cached: 1 = enable cached programming if supported by chip
+ *
+ * This function is used for write and writev!
+ * It will always program a full page of data.
+ * If you call it with a non page aligned buffer, you're lost :)
+ *
+ * Cached programming is not supported yet.
+ */
+static int nand_write_page (struct mtd_info *mtd, struct nand_chip *this, int page,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int cached)
+{
+ int i, status;
+ u_char ecc_code[8];
+ int eccmode = oobsel->useecc ? this->eccmode : NAND_ECC_NONE;
+ int *oob_config = oobsel->eccpos;
+ int datidx = 0, eccidx = 0, eccsteps = this->eccsteps;
+ int eccbytes = 0;
+
+ /* FIXME: Enable cached programming */
+ cached = 0;
+
+ /* Send command to begin auto page programming */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, 0x00, page);
+
+ /* Write out complete page of data, take care of eccmode */
+ switch (eccmode) {
+ /* No ecc, write all */
+ case NAND_ECC_NONE:
+ printk (KERN_WARNING "Writing data without ECC to NAND-FLASH is not recommended\n");
+ this->write_buf(mtd, this->data_poi, mtd->oobblock);
+ break;
+
+ /* Software ecc 3/256, write all */
+ case NAND_ECC_SOFT:
+ for (; eccsteps; eccsteps--) {
+ this->calculate_ecc(mtd, &this->data_poi[datidx], ecc_code);
+ for (i = 0; i < 3; i++, eccidx++)
+ oob_buf[oob_config[eccidx]] = ecc_code[i];
+ datidx += this->eccsize;
+ }
+ this->write_buf(mtd, this->data_poi, mtd->oobblock);
+ break;
+
+ /* Hardware ecc 8 byte / 512 byte data */
+ case NAND_ECC_HW8_512:
+ eccbytes += 2;
+ /* fall through: accumulates 2 + 3 + 3 = 8 ecc bytes */
+ /* Hardware ecc 6 byte / 512 byte data */
+ case NAND_ECC_HW6_512:
+ eccbytes += 3;
+ /* fall through: accumulates 3 + 3 = 6 ecc bytes */
+ /* Hardware ecc 3 byte / 256 byte data */
+ /* Hardware ecc 3 byte / 512 byte data */
+ case NAND_ECC_HW3_256:
+ case NAND_ECC_HW3_512:
+ eccbytes += 3;
+ for (; eccsteps; eccsteps--) {
+ /* enable hardware ecc logic for write */
+ this->enable_hwecc(mtd, NAND_ECC_WRITE);
+ this->write_buf(mtd, &this->data_poi[datidx], this->eccsize);
+ this->calculate_ecc(mtd, &this->data_poi[datidx], ecc_code);
+ for (i = 0; i < eccbytes; i++, eccidx++)
+ oob_buf[oob_config[eccidx]] = ecc_code[i];
+ /* If the hardware ecc provides syndromes then
+ * the ecc code must be written immediately after
+ * the data bytes (words) */
+ if (this->options & NAND_HWECC_SYNDROME)
+ this->write_buf(mtd, ecc_code, eccbytes);
+
+ datidx += this->eccsize;
+ }
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ /* Write out OOB data */
+ if (this->options & NAND_HWECC_SYNDROME)
+ this->write_buf(mtd, &oob_buf[oobsel->eccbytes], mtd->oobsize - oobsel->eccbytes);
+ else
+ this->write_buf(mtd, oob_buf, mtd->oobsize);
+
+ /* Send command to actually program the data */
+ this->cmdfunc (mtd, cached ? NAND_CMD_CACHEDPROG : NAND_CMD_PAGEPROG, -1, -1);
+
+ if (!cached) {
+ /* call wait ready function */
+ status = this->waitfunc (mtd, this, FL_WRITING);
+ /* See if device thinks it succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write, page 0x%08x, ", __FUNCTION__, page);
+ return -EIO;
+ }
+ } else {
+ /* FIXME: Implement cached programming ! */
+ /* wait until cache is ready*/
+ // status = this->waitfunc (mtd, this, FL_CACHEDRPG);
+ }
+ return 0;
+}
+
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+/**
+ * nand_verify_pages - [GENERIC] verify the chip contents after a write
+ * @mtd: MTD device structure
+ * @this: NAND chip structure
+ * @page: startpage inside the chip, must be called with (page & this->pagemask)
+ * @numpages: number of pages to verify
+ * @oob_buf: out of band data buffer
+ * @oobsel: out of band selection structure
+ * @chipnr: number of the current chip
+ * @oobmode: 1 = full buffer verify, 0 = ecc only
+ *
+ * The NAND device assumes that it is always writing to a cleanly erased page.
+ * Hence, it performs its internal write verification only on bits that
+ * transitioned from 1 to 0. The device does NOT verify the whole page on a
+ * byte by byte basis. It is possible that the page was not completely erased
+ * or the page is becoming unusable due to wear. The read with ECC would catch
+ * the error later when the ECC page check fails, but we would rather catch
+ * it early in the page write stage. Better to write no data than invalid data.
+ */
+static int nand_verify_pages (struct mtd_info *mtd, struct nand_chip *this, int page, int numpages,
+ u_char *oob_buf, struct nand_oobinfo *oobsel, int chipnr, int oobmode)
+{
+ int i, j, datidx = 0, oobofs = 0, res = -EIO;
+ int eccsteps = this->eccsteps;
+ int hweccbytes;
+ u_char oobdata[64];
+
+ hweccbytes = (this->options & NAND_HWECC_SYNDROME) ? (oobsel->eccbytes / eccsteps) : 0;
+
+ /* Send command to read back the first page */
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0, page);
+
+ for(;;) {
+ for (j = 0; j < eccsteps; j++) {
+ /* Loop through and verify the data */
+ if (this->verify_buf(mtd, &this->data_poi[datidx], mtd->eccsize)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ datidx += mtd->eccsize;
+ /* Do we have a hw generator layout? */
+ if (!hweccbytes)
+ continue;
+ if (this->verify_buf(mtd, &this->oob_buf[oobofs], hweccbytes)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ oobofs += hweccbytes;
+ }
+
+ /* check, if we must compare all data or if we just have to
+ * compare the ecc bytes
+ */
+ if (oobmode) {
+ if (this->verify_buf(mtd, &oob_buf[oobofs], mtd->oobsize - hweccbytes * eccsteps)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "%s: " "Failed write verify, page 0x%08x ", __FUNCTION__, page);
+ goto out;
+ }
+ } else {
+ /* Read always, else autoincrement fails */
+ this->read_buf(mtd, oobdata, mtd->oobsize - hweccbytes * eccsteps);
+
+ if (oobsel->useecc != MTD_NANDECC_OFF && !hweccbytes) {
+ int ecccnt = oobsel->eccbytes;
+
+ for (i = 0; i < ecccnt; i++) {
+ int idx = oobsel->eccpos[i];
+ if (oobdata[idx] != oob_buf[oobofs + idx] ) {
+ DEBUG (MTD_DEBUG_LEVEL0,
+ "%s: Failed ECC write "
+ "verify, page 0x%08x, " "%6i bytes were succesful\n", __FUNCTION__, page, i);
+ goto out;
+ }
+ }
+ }
+ }
+ oobofs += mtd->oobsize - hweccbytes * eccsteps;
+ page++;
+ numpages--;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ * Do this also before returning, so the chip is
+ * ready for the next command.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* All done, return happy */
+ if (!numpages)
+ return 0;
+
+
+ /* Check, if the chip supports auto page increment */
+ if (!NAND_CANAUTOINCR(this))
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0x00, page);
+ }
+ /*
+ * Terminate the read command. We come here in case of an error,
+ * so we must issue a reset command.
+ */
+out:
+ this->cmdfunc (mtd, NAND_CMD_RESET, -1, -1);
+ return res;
+}
+#endif
+
+/**
+ * nand_read - [MTD Interface] MTD compatibility function for nand_read_ecc
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ *
+ * This function simply calls nand_read_ecc with oob buffer and oobsel = NULL
+ */
+static int nand_read (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf)
+{
+ return nand_read_ecc (mtd, from, len, retlen, buf, NULL, NULL);
+}
+
+
+/**
+ * nand_read_ecc - [MTD Interface] Read data with ECC
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ * @oob_buf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND read with ECC
+ */
+static int nand_read_ecc (struct mtd_info *mtd, loff_t from, size_t len,
+ size_t * retlen, u_char * buf, u_char * oob_buf, struct nand_oobinfo *oobsel)
+{
+ int i, j, col, realpage, page, end, ecc, chipnr, sndcmd = 1;
+ int read = 0, oob = 0, ecc_status = 0, ecc_failed = 0;
+ struct nand_chip *this = mtd->priv;
+ u_char *data_poi, *oob_data = oob_buf;
+ u_char ecc_calc[32];
+ u_char ecc_code[32];
+ int eccmode, eccsteps;
+ int *oob_config, datidx;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+ int eccbytes = 3;
+ int compareecc = 1;
+ int oobreadlen;
+
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_read_ecc: from = 0x%08x, len = %i\n", (unsigned int) from, (int) len);
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: Attempt read beyond end of device\n");
+ *retlen = 0;
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd ,FL_READING);
+
+ /* if oobsel is NULL, use the (possibly userspace supplied) default from the MTD structure */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE)
+ oobsel = this->autooob;
+
+ eccmode = oobsel->useecc ? this->eccmode : NAND_ECC_NONE;
+ oob_config = oobsel->eccpos;
+
+ /* Select the NAND device */
+ chipnr = (int)(from >> this->chip_shift);
+ this->select_chip(mtd, chipnr);
+
+ /* First we calculate the starting page */
+ realpage = (int) (from >> this->page_shift);
+ page = realpage & this->pagemask;
+
+ /* Get raw starting column */
+ col = from & (mtd->oobblock - 1);
+
+ end = mtd->oobblock;
+ ecc = this->eccsize;
+ switch (eccmode) {
+ case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
+ eccbytes = 6;
+ break;
+ case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
+ eccbytes = 8;
+ break;
+ case NAND_ECC_NONE:
+ compareecc = 0;
+ break;
+ }
+
+ if (this->options & NAND_HWECC_SYNDROME)
+ compareecc = 0;
+
+ oobreadlen = mtd->oobsize;
+ if (this->options & NAND_HWECC_SYNDROME)
+ oobreadlen -= oobsel->eccbytes;
+
+ /* Loop until all data read */
+ while (read < len) {
+
+ int aligned = (!col && (len - read) >= end);
+ /*
+ * If the read is not page aligned, we have to read into the data buffer
+ * due to ecc, else we read into the return buffer directly
+ */
+ if (aligned)
+ data_poi = &buf[read];
+ else
+ data_poi = this->data_buf;
+
+ /* Check, if we have this page in the buffer
+ *
+ * FIXME: Make it work when we must provide oob data too,
+ * check the usage of data_buf oob field
+ */
+ if (realpage == this->pagebuf && !oob_buf) {
+ /* aligned read ? */
+ if (aligned)
+ memcpy (data_poi, this->data_buf, end);
+ goto readdata;
+ }
+
+ /* Check, if we must send the read command */
+ if (sndcmd) {
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0x00, page);
+ sndcmd = 0;
+ }
+
+ /* get oob area, if we have no oob buffer from fs-driver */
+ if (!oob_buf || oobsel->useecc == MTD_NANDECC_AUTOPLACE)
+ oob_data = &this->data_buf[end];
+
+ eccsteps = this->eccsteps;
+
+ switch (eccmode) {
+ case NAND_ECC_NONE: { /* No ECC, Read in a page */
+ static unsigned long lastwhinge = 0;
+ if ((lastwhinge / HZ) != (jiffies / HZ)) {
+ printk (KERN_WARNING "Reading data from NAND FLASH without ECC is not recommended\n");
+ lastwhinge = jiffies;
+ }
+ this->read_buf(mtd, data_poi, end);
+ break;
+ }
+
+ case NAND_ECC_SOFT: /* Software ECC 3/256: Read in a page + oob data */
+ this->read_buf(mtd, data_poi, end);
+ for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=3, datidx += ecc)
+ this->calculate_ecc(mtd, &data_poi[datidx], &ecc_calc[i]);
+ break;
+
+ case NAND_ECC_HW3_256: /* Hardware ECC 3 byte /256 byte data */
+ case NAND_ECC_HW3_512: /* Hardware ECC 3 byte /512 byte data */
+ case NAND_ECC_HW6_512: /* Hardware ECC 6 byte / 512 byte data */
+ case NAND_ECC_HW8_512: /* Hardware ECC 8 byte / 512 byte data */
+ for (i = 0, datidx = 0; eccsteps; eccsteps--, i+=eccbytes, datidx += ecc) {
+ this->enable_hwecc(mtd, NAND_ECC_READ);
+ this->read_buf(mtd, &data_poi[datidx], ecc);
+
+ /* HW ecc with syndrome calculation must read the
+ * syndrome from flash immediately after the data */
+ if (!compareecc) {
+ /* Some hw ecc generators need to know when the
+ * syndrome is read from flash */
+ this->enable_hwecc(mtd, NAND_ECC_READSYN);
+ this->read_buf(mtd, &oob_data[i], eccbytes);
+ /* We calc error correction directly, it checks the hw
+ * generator for an error, reads back the syndrome and
+ * does the error correction on the fly */
+ if (this->correct_data(mtd, &data_poi[datidx], &oob_data[i], &ecc_code[i]) == -1) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: "
+ "Failed ECC read, page 0x%08x on chip %d\n", page, chipnr);
+ ecc_failed++;
+ }
+ } else {
+ this->calculate_ecc(mtd, &data_poi[datidx], &ecc_calc[i]);
+ }
+ }
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ /* read oobdata */
+ this->read_buf(mtd, &oob_data[mtd->oobsize - oobreadlen], oobreadlen);
+
+ /* Skip ECC check, if not requested (ECC_NONE or HW_ECC with syndromes) */
+ if (!compareecc)
+ goto readoob;
+
+ /* Pick the ECC bytes out of the oob data */
+ for (j = 0; j < oobsel->eccbytes; j++)
+ ecc_code[j] = oob_data[oob_config[j]];
+
+		/* correct data, if necessary */
+ for (i = 0, j = 0, datidx = 0; i < this->eccsteps; i++, datidx += ecc) {
+ ecc_status = this->correct_data(mtd, &data_poi[datidx], &ecc_code[j], &ecc_calc[j]);
+
+ /* Get next chunk of ecc bytes */
+ j += eccbytes;
+
+ /* Check, if we have a fs supplied oob-buffer,
+ * This is the legacy mode. Used by YAFFS1
+ * Should go away some day
+ */
+ if (oob_buf && oobsel->useecc == MTD_NANDECC_PLACE) {
+ int *p = (int *)(&oob_data[mtd->oobsize]);
+ p[i] = ecc_status;
+ }
+
+ if (ecc_status == -1) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_ecc: " "Failed ECC read, page 0x%08x\n", page);
+ ecc_failed++;
+ }
+ }
+
+ readoob:
+ /* check, if we have a fs supplied oob-buffer */
+ if (oob_buf) {
+ /* without autoplace. Legacy mode used by YAFFS1 */
+ switch(oobsel->useecc) {
+ case MTD_NANDECC_AUTOPLACE:
+ /* Walk through the autoplace chunks */
+ for (i = 0, j = 0; j < mtd->oobavail; i++) {
+ int from = oobsel->oobfree[i][0];
+ int num = oobsel->oobfree[i][1];
+					memcpy(&oob_buf[oob + j], &oob_data[from], num);
+					j += num;
+ }
+ oob += mtd->oobavail;
+ break;
+ case MTD_NANDECC_PLACE:
+ /* YAFFS1 legacy mode */
+ oob_data += this->eccsteps * sizeof (int);
+ default:
+ oob_data += mtd->oobsize;
+ }
+ }
+ readdata:
+ /* Partial page read, transfer data into fs buffer */
+ if (!aligned) {
+ for (j = col; j < end && read < len; j++)
+ buf[read++] = data_poi[j];
+ this->pagebuf = realpage;
+ } else
+ read += mtd->oobblock;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ if (read == len)
+ break;
+
+ /* For subsequent reads align to page boundary. */
+ col = 0;
+ /* Increment page address */
+ realpage++;
+
+ page = realpage & this->pagemask;
+ /* Check, if we cross a chip boundary */
+ if (!page) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ /* Check, if the chip supports auto page increment
+ * or if we have hit a block boundary.
+ */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck))
+ sndcmd = 1;
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ /*
+ * Return success, if no ECC failures, else -EBADMSG
+ * fs driver will take care of that, because
+ * retlen == desired len and result == -EBADMSG
+ */
+ *retlen = read;
+ return ecc_failed ? -EBADMSG : 0;
+}
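+
+/*
+ * Worked example for the address arithmetic in nand_read_ecc, assuming
+ * a chip with 512 byte pages (page_shift == 9) and 32MB per chip
+ * (chip_shift == 25):
+ *
+ *	from     = 0x00802345
+ *	realpage = from >> 9  = 0x4011
+ *	page     = realpage & pagemask
+ *	col      = from & (512 - 1) = 0x145
+ *	chipnr   = from >> 25 = 0
+ *
+ * As col != 0 the first page is read into data_buf and only the bytes
+ * from col onwards are copied to the caller; all following pages are
+ * aligned and are read into the caller's buffer directly.
+ */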
+
+/**
+ * nand_read_oob - [MTD Interface] NAND read out-of-band
+ * @mtd: MTD device structure
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @retlen: pointer to variable to store the number of read bytes
+ * @buf: the databuffer to put data
+ *
+ * NAND read out-of-band data from the spare area
+ */
+static int nand_read_oob (struct mtd_info *mtd, loff_t from, size_t len, size_t * retlen, u_char * buf)
+{
+ int i, col, page, chipnr;
+ struct nand_chip *this = mtd->priv;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_read_oob: from = 0x%08x, len = %i\n", (unsigned int) from, (int) len);
+
+ /* Shift to get page */
+ page = (int)(from >> this->page_shift);
+ chipnr = (int)(from >> this->chip_shift);
+
+ /* Mask to get column */
+ col = from & (mtd->oobsize - 1);
+
+ /* Initialize return length value */
+ *retlen = 0;
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_oob: Attempt read beyond end of device\n");
+ *retlen = 0;
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd , FL_READING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Send the read command */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, col, page & this->pagemask);
+ /*
+ * Read the data, if we read more than one page
+ * oob data, let the device transfer the data !
+ */
+ i = 0;
+ while (i < len) {
+ int thislen = mtd->oobsize - col;
+		thislen = min_t(int, thislen, len - i);
+ this->read_buf(mtd, &buf[i], thislen);
+ i += thislen;
+
+ /* Apply delay or wait for ready/busy pin
+ * Do this before the AUTOINCR check, so no problems
+ * arise if a chip which does auto increment
+ * is marked as NOAUTOINCR by the board driver.
+ */
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* Read more ? */
+ if (i < len) {
+ page++;
+ col = 0;
+
+ /* Check, if we cross a chip boundary */
+ if (!(page & this->pagemask)) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+
+ /* Check, if the chip supports auto page increment
+ * or if we have hit a block boundary.
+ */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck)) {
+ /* For subsequent page reads set offset to 0 */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, 0x0, page & this->pagemask);
+ }
+ }
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ /* Return happy */
+ *retlen = len;
+ return 0;
+}
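+
+/*
+ * Example of the chunked oob transfer in nand_read_oob, assuming
+ * oobsize == 16: a read of len == 38 starting at col == 10 is served
+ * in three pieces,
+ *
+ *	1st chunk: thislen = 16 - 10 = 6  -> buf[0..5]
+ *	2nd chunk: thislen = 16           -> buf[6..21],  next page
+ *	3rd chunk: thislen = 16           -> buf[22..37], next page
+ *
+ * where the min_t() clamp limits each chunk to the bytes still
+ * requested.
+ */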
+
+/**
+ * nand_read_raw - [GENERIC] Read raw data including oob into buffer
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @from: offset to read from
+ * @len: number of bytes to read
+ * @ooblen: number of oob data bytes to read
+ *
+ * Read raw data including oob into buffer
+ */
+int nand_read_raw (struct mtd_info *mtd, uint8_t *buf, loff_t from, size_t len, size_t ooblen)
+{
+ struct nand_chip *this = mtd->priv;
+ int page = (int) (from >> this->page_shift);
+ int chip = (int) (from >> this->chip_shift);
+ int sndcmd = 1;
+ int cnt = 0;
+ int pagesize = mtd->oobblock + mtd->oobsize;
+ int blockcheck = (1 << (this->phys_erase_shift - this->page_shift)) - 1;
+
+ /* Do not allow reads past end of device */
+ if ((from + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_read_raw: Attempt read beyond end of device\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd , FL_READING);
+
+ this->select_chip (mtd, chip);
+
+ /* Add requested oob length */
+ len += ooblen;
+
+ while (len) {
+ if (sndcmd)
+ this->cmdfunc (mtd, NAND_CMD_READ0, 0, page & this->pagemask);
+ sndcmd = 0;
+
+ this->read_buf (mtd, &buf[cnt], pagesize);
+
+ len -= pagesize;
+ cnt += pagesize;
+ page++;
+
+ if (!this->dev_ready)
+ udelay (this->chip_delay);
+ else
+ while (!this->dev_ready(mtd));
+
+ /* Check, if the chip supports auto page increment */
+ if (!NAND_CANAUTOINCR(this) || !(page & blockcheck))
+ sndcmd = 1;
+ }
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+ return 0;
+}
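+
+/*
+ * nand_read_raw transfers whole raw pages, i.e. data plus spare area
+ * back to back. For example, assuming 512 byte pages and a 16 byte
+ * oob area (pagesize == 528), a call with len == 1024 and ooblen == 32
+ * moves two raw pages into buf:
+ *
+ *	buf[0..511]     data of page n     buf[512..527]   oob of page n
+ *	buf[528..1039]  data of page n+1   buf[1040..1055] oob of page n+1
+ */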
+
+
+/**
+ * nand_prepare_oobbuf - [GENERIC] Prepare the out of band buffer
+ * @mtd: MTD device structure
+ * @fsbuf: buffer given by fs driver
+ * @oobsel:	out of band selection structure
+ * @autoplace: 1 = place given buffer into the oob bytes
+ * @numpages: number of pages to prepare
+ *
+ * Return:
+ * 1. Filesystem buffer available and autoplacement is off,
+ * return filesystem buffer
+ * 2. No filesystem buffer available or autoplace is off, return
+ *    internal buffer
+ * 3. Filesystem buffer is given and autoplace selected:
+ *    put data from fs buffer into internal buffer and
+ *    return internal buffer
+ *
+ * Note: The internal buffer is filled with 0xff. This must
+ * be done only once, when no autoplacement happens
+ * Autoplacement sets the buffer dirty flag, which
+ * forces the 0xff fill before using the buffer again.
+ *
+*/
+static u_char * nand_prepare_oobbuf (struct mtd_info *mtd, u_char *fsbuf, struct nand_oobinfo *oobsel,
+ int autoplace, int numpages)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, len, ofs;
+
+ /* Zero copy fs supplied buffer */
+ if (fsbuf && !autoplace)
+ return fsbuf;
+
+ /* Check, if the buffer must be filled with ff again */
+ if (this->oobdirty) {
+ memset (this->oob_buf, 0xff,
+ mtd->oobsize << (this->phys_erase_shift - this->page_shift));
+ this->oobdirty = 0;
+ }
+
+ /* If we have no autoplacement or no fs buffer use the internal one */
+ if (!autoplace || !fsbuf)
+ return this->oob_buf;
+
+ /* Walk through the pages and place the data */
+ this->oobdirty = 1;
+ ofs = 0;
+ while (numpages--) {
+ for (i = 0, len = 0; len < mtd->oobavail; i++) {
+ int to = ofs + oobsel->oobfree[i][0];
+ int num = oobsel->oobfree[i][1];
+ memcpy (&this->oob_buf[to], fsbuf, num);
+ len += num;
+ fsbuf += num;
+ }
+ ofs += mtd->oobavail;
+ }
+ return this->oob_buf;
+}
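+
+/*
+ * Autoplacement example: assuming a 16 byte spare area whose free
+ * bytes are described, as in the common nand_oob_16 layout, by
+ *
+ *	oobfree = { {8, 8} }	(8 usable bytes at offset 8)
+ *
+ * the loop above copies fsbuf[0..7] into oob_buf[8..15] for the first
+ * page and leaves the ecc positions at 0xff, so the buffer can be
+ * written out as a complete spare area.
+ */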
+
+#define NOTALIGNED(x)	(((x) & (mtd->oobblock - 1)) != 0)
+
+/**
+ * nand_write - [MTD Interface] compatibility function for nand_write_ecc
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ *
+ * This function simply calls nand_write_ecc with eccbuf = NULL and oobsel = NULL
+ *
+*/
+static int nand_write (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf)
+{
+ return (nand_write_ecc (mtd, to, len, retlen, buf, NULL, NULL));
+}
+
+/**
+ * nand_write_ecc - [MTD Interface] NAND write with ECC
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ * @eccbuf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND write with ECC
+ */
+static int nand_write_ecc (struct mtd_info *mtd, loff_t to, size_t len,
+ size_t * retlen, const u_char * buf, u_char * eccbuf, struct nand_oobinfo *oobsel)
+{
+ int startpage, page, ret = -EIO, oob = 0, written = 0, chipnr;
+ int autoplace = 0, numpages, totalpages;
+ struct nand_chip *this = mtd->priv;
+ u_char *oobbuf, *bufstart;
+ int ppblock = (1 << (this->phys_erase_shift - this->page_shift));
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_write_ecc: to = 0x%08x, len = %i\n", (unsigned int) to, (int) len);
+
+ /* Initialize retlen, in case of early exit */
+ *retlen = 0;
+
+ /* Do not allow write past end of device */
+ if ((to + len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: Attempt to write past end of page\n");
+ return -EINVAL;
+ }
+
+ /* reject writes, which are not page aligned */
+ if (NOTALIGNED (to) || NOTALIGNED(len)) {
+ printk (KERN_NOTICE "nand_write_ecc: Attempt to write not page aligned data\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_WRITING);
+
+ /* Calculate chipnr */
+ chipnr = (int)(to >> this->chip_shift);
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* if oobsel is NULL, use chip defaults */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE) {
+ oobsel = this->autooob;
+ autoplace = 1;
+ }
+
+ /* Setup variables and oob buffer */
+ totalpages = len >> this->page_shift;
+ page = (int) (to >> this->page_shift);
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page <= this->pagebuf && this->pagebuf < (page + totalpages))
+ this->pagebuf = -1;
+
+ /* Set it relative to chip */
+ page &= this->pagemask;
+ startpage = page;
+ /* Calc number of pages we can write in one go */
+ numpages = min (ppblock - (startpage & (ppblock - 1)), totalpages);
+ oobbuf = nand_prepare_oobbuf (mtd, eccbuf, oobsel, autoplace, numpages);
+ bufstart = (u_char *)buf;
+
+ /* Loop until all data is written */
+ while (written < len) {
+
+ this->data_poi = (u_char*) &buf[written];
+ /* Write one page. If this is the last page to write
+ * or the last page in this block, then use the
+ * real pageprogram command, else select cached programming
+ * if supported by the chip.
+ */
+ ret = nand_write_page (mtd, this, page, &oobbuf[oob], oobsel, (--numpages > 0));
+ if (ret) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: write_page failed %d\n", ret);
+ goto out;
+ }
+ /* Next oob page */
+ oob += mtd->oobsize;
+ /* Update written bytes count */
+ written += mtd->oobblock;
+ if (written == len)
+ goto cmp;
+
+ /* Increment page address */
+ page++;
+
+ /* Have we hit a block boundary ? Then we have to verify and
+ * if verify is ok, we have to setup the oob buffer for
+ * the next pages.
+ */
+ if (!(page & (ppblock - 1))){
+ int ofs;
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage,
+ page - startpage,
+ oobbuf, oobsel, chipnr, (eccbuf != NULL));
+ if (ret) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: verify_pages failed %d\n", ret);
+ goto out;
+ }
+ *retlen = written;
+
+ ofs = autoplace ? mtd->oobavail : mtd->oobsize;
+ if (eccbuf)
+ eccbuf += (page - startpage) * ofs;
+ totalpages -= page - startpage;
+ numpages = min (totalpages, ppblock);
+ page &= this->pagemask;
+ startpage = page;
+ oobbuf = nand_prepare_oobbuf (mtd, eccbuf, oobsel,
+ autoplace, numpages);
+ /* Check, if we cross a chip boundary */
+ if (!page) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ }
+ /* Verify the remaining pages */
+cmp:
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage, totalpages,
+ oobbuf, oobsel, chipnr, (eccbuf != NULL));
+ if (!ret)
+ *retlen = written;
+ else
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_ecc: verify_pages failed %d\n", ret);
+
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ return ret;
+}
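+
+/*
+ * Example of the per block chunking in nand_write_ecc, assuming
+ * 512 byte pages and 16K erase blocks (ppblock == 32): a 24 page write
+ * starting at page 20 of a block is split at the block boundary,
+ *
+ *	numpages = min(32 - (20 & 31), 24) = 12  -> pages 20..31, verify
+ *	numpages = min(12, 32)             = 12  -> pages 32..43, verify
+ *
+ * so a failing block is reported before the next block is programmed.
+ */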
+
+
+/**
+ * nand_write_oob - [MTD Interface] NAND write out-of-band
+ * @mtd: MTD device structure
+ * @to: offset to write to
+ * @len: number of bytes to write
+ * @retlen: pointer to variable to store the number of written bytes
+ * @buf: the data to write
+ *
+ * NAND write out-of-band
+ */
+static int nand_write_oob (struct mtd_info *mtd, loff_t to, size_t len, size_t * retlen, const u_char * buf)
+{
+ int column, page, status, ret = -EIO, chipnr;
+ struct nand_chip *this = mtd->priv;
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_write_oob: to = 0x%08x, len = %i\n", (unsigned int) to, (int) len);
+
+ /* Shift to get page */
+ page = (int) (to >> this->page_shift);
+ chipnr = (int) (to >> this->chip_shift);
+
+ /* Mask to get column */
+ column = to & (mtd->oobsize - 1);
+
+ /* Initialize return length value */
+ *retlen = 0;
+
+ /* Do not allow write past end of page */
+ if ((column + len) > mtd->oobsize) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: Attempt to write past end of page\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_WRITING);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Reset the chip. Some chips (like the Toshiba TC5832DC found
+ in one of my DiskOnChip 2000 test units) will clear the whole
+ data page too if we don't do this. I have no clue why, but
+ I seem to have 'fixed' it in the doc2000 driver in
+ August 1999. dwmw2. */
+ this->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page == this->pagebuf)
+ this->pagebuf = -1;
+
+ if (NAND_MUST_PAD(this)) {
+ /* Write out desired data */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, mtd->oobblock, page & this->pagemask);
+ /* prepad 0xff for partial programming */
+ this->write_buf(mtd, ffchars, column);
+ /* write data */
+ this->write_buf(mtd, buf, len);
+ /* postpad 0xff for partial programming */
+ this->write_buf(mtd, ffchars, mtd->oobsize - (len+column));
+ } else {
+ /* Write out desired data */
+ this->cmdfunc (mtd, NAND_CMD_SEQIN, mtd->oobblock + column, page & this->pagemask);
+ /* write data */
+ this->write_buf(mtd, buf, len);
+ }
+ /* Send command to program the OOB data */
+ this->cmdfunc (mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+ status = this->waitfunc (mtd, this, FL_WRITING);
+
+ /* See if device thinks it succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: " "Failed write, page 0x%08x\n", page);
+ ret = -EIO;
+ goto out;
+ }
+ /* Return happy */
+ *retlen = len;
+
+#ifdef CONFIG_MTD_NAND_VERIFY_WRITE
+ /* Send command to read back the data */
+ this->cmdfunc (mtd, NAND_CMD_READOOB, column, page & this->pagemask);
+
+ if (this->verify_buf(mtd, buf, len)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_write_oob: " "Failed write verify, page 0x%08x\n", page);
+ ret = -EIO;
+ goto out;
+ }
+#endif
+ ret = 0;
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ return ret;
+}
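+
+/*
+ * Spare area layout for the NAND_MUST_PAD case above, e.g. a 6 byte
+ * write at column 4 of a 16 byte oob:
+ *
+ *	column  0..3	0xff pad     (ffchars, column bytes)
+ *	column  4..9	caller data  (len bytes)
+ *	column 10..15	0xff pad     (oobsize - (len + column) bytes)
+ *
+ * Padding with 0xff leaves the untouched oob bytes unprogrammed, as
+ * writing 0xff does not change erased flash cells.
+ */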
+
+
+/**
+ * nand_writev - [MTD Interface] compatibility function for nand_writev_ecc
+ * @mtd: MTD device structure
+ * @vecs: the iovectors to write
+ * @count: number of vectors
+ * @to: offset to write to
+ * @retlen: pointer to variable to store the number of written bytes
+ *
+ * NAND write with kvec. This just calls the ecc function
+ */
+static int nand_writev (struct mtd_info *mtd, const struct kvec *vecs, unsigned long count,
+ loff_t to, size_t * retlen)
+{
+ return (nand_writev_ecc (mtd, vecs, count, to, retlen, NULL, NULL));
+}
+
+/**
+ * nand_writev_ecc - [MTD Interface] write with iovec with ecc
+ * @mtd: MTD device structure
+ * @vecs: the iovectors to write
+ * @count: number of vectors
+ * @to: offset to write to
+ * @retlen: pointer to variable to store the number of written bytes
+ * @eccbuf: filesystem supplied oob data buffer
+ * @oobsel: oob selection structure
+ *
+ * NAND write with iovec with ecc
+ */
+static int nand_writev_ecc (struct mtd_info *mtd, const struct kvec *vecs, unsigned long count,
+ loff_t to, size_t * retlen, u_char *eccbuf, struct nand_oobinfo *oobsel)
+{
+ int i, page, len, total_len, ret = -EIO, written = 0, chipnr;
+ int oob, numpages, autoplace = 0, startpage;
+ struct nand_chip *this = mtd->priv;
+ int ppblock = (1 << (this->phys_erase_shift - this->page_shift));
+ u_char *oobbuf, *bufstart;
+
+ /* Preset written len for early exit */
+ *retlen = 0;
+
+ /* Calculate total length of data */
+ total_len = 0;
+ for (i = 0; i < count; i++)
+ total_len += (int) vecs[i].iov_len;
+
+ DEBUG (MTD_DEBUG_LEVEL3,
+ "nand_writev: to = 0x%08x, len = %i, count = %ld\n", (unsigned int) to, (unsigned int) total_len, count);
+
+	/* Do not allow write past end of device */
+ if ((to + total_len) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_writev: Attempted write past end of device\n");
+ return -EINVAL;
+ }
+
+ /* reject writes, which are not page aligned */
+ if (NOTALIGNED (to) || NOTALIGNED(total_len)) {
+ printk (KERN_NOTICE "nand_write_ecc: Attempt to write not page aligned data\n");
+ return -EINVAL;
+ }
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_WRITING);
+
+ /* Get the current chip-nr */
+ chipnr = (int) (to >> this->chip_shift);
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+ /* Check, if it is write protected */
+ if (nand_check_wp(mtd))
+ goto out;
+
+ /* if oobsel is NULL, use chip defaults */
+ if (oobsel == NULL)
+ oobsel = &mtd->oobinfo;
+
+ /* Autoplace of oob data ? Use the default placement scheme */
+ if (oobsel->useecc == MTD_NANDECC_AUTOPLACE) {
+ oobsel = this->autooob;
+ autoplace = 1;
+ }
+
+ /* Setup start page */
+ page = (int) (to >> this->page_shift);
+ /* Invalidate the page cache, if we write to the cached page */
+ if (page <= this->pagebuf && this->pagebuf < ((to + total_len) >> this->page_shift))
+ this->pagebuf = -1;
+
+ startpage = page & this->pagemask;
+
+ /* Loop until all kvec' data has been written */
+ len = 0;
+ while (count) {
+ /* If the given tuple is >= pagesize then
+ * write it out from the iov
+ */
+ if ((vecs->iov_len - len) >= mtd->oobblock) {
+ /* Calc number of pages we can write
+ * out of this iov in one go */
+ numpages = (vecs->iov_len - len) >> this->page_shift;
+ /* Do not cross block boundaries */
+ numpages = min (ppblock - (startpage & (ppblock - 1)), numpages);
+ oobbuf = nand_prepare_oobbuf (mtd, NULL, oobsel, autoplace, numpages);
+ bufstart = (u_char *)vecs->iov_base;
+ bufstart += len;
+ this->data_poi = bufstart;
+ oob = 0;
+ for (i = 1; i <= numpages; i++) {
+ /* Write one page. If this is the last page to write
+ * then use the real pageprogram command, else select
+ * cached programming if supported by the chip.
+ */
+ ret = nand_write_page (mtd, this, page & this->pagemask,
+ &oobbuf[oob], oobsel, i != numpages);
+ if (ret)
+ goto out;
+ this->data_poi += mtd->oobblock;
+ len += mtd->oobblock;
+ oob += mtd->oobsize;
+ page++;
+ }
+ /* Check, if we have to switch to the next tuple */
+ if (len >= (int) vecs->iov_len) {
+ vecs++;
+ len = 0;
+ count--;
+ }
+ } else {
+ /* We must use the internal buffer, read data out of each
+ * tuple until we have a full page to write
+ */
+ int cnt = 0;
+ while (cnt < mtd->oobblock) {
+ if (vecs->iov_base != NULL && vecs->iov_len)
+ this->data_buf[cnt++] = ((u_char *) vecs->iov_base)[len++];
+ /* Check, if we have to switch to the next tuple */
+ if (len >= (int) vecs->iov_len) {
+ vecs++;
+ len = 0;
+ count--;
+ }
+ }
+ this->pagebuf = page;
+ this->data_poi = this->data_buf;
+ bufstart = this->data_poi;
+ numpages = 1;
+ oobbuf = nand_prepare_oobbuf (mtd, NULL, oobsel, autoplace, numpages);
+ ret = nand_write_page (mtd, this, page & this->pagemask,
+ oobbuf, oobsel, 0);
+ if (ret)
+ goto out;
+ page++;
+ }
+
+ this->data_poi = bufstart;
+ ret = nand_verify_pages (mtd, this, startpage, numpages, oobbuf, oobsel, chipnr, 0);
+ if (ret)
+ goto out;
+
+ written += mtd->oobblock * numpages;
+ /* All done ? */
+ if (!count)
+ break;
+
+ startpage = page & this->pagemask;
+ /* Check, if we cross a chip boundary */
+ if (!startpage) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ ret = 0;
+out:
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ *retlen = written;
+ return ret;
+}
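+
+/*
+ * Gather example for the two paths in nand_writev_ecc, assuming
+ * oobblock == 512 and two iovecs of 1280 and 768 bytes (2048 bytes
+ * total, page aligned):
+ *
+ *	pages 0-1: written straight from iovec 0 (fast path)
+ *	page  2:   the 256 bytes left in iovec 0 and the first 256 bytes
+ *	           of iovec 1 are gathered byte-wise into data_buf
+ *	page  3:   the remaining 512 bytes of iovec 1, fast path again
+ *
+ * Each chunk is verified before the next one is assembled.
+ */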
+
+/**
+ * single_erase_cmd - [GENERIC] NAND standard block erase command function
+ * @mtd: MTD device structure
+ * @page: the page address of the block which will be erased
+ *
+ * Standard erase command for NAND chips
+ */
+static void single_erase_cmd (struct mtd_info *mtd, int page)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Send commands to erase a block */
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page);
+ this->cmdfunc (mtd, NAND_CMD_ERASE2, -1, -1);
+}
+
+/**
+ * multi_erase_cmd - [GENERIC] AND specific block erase command function
+ * @mtd: MTD device structure
+ * @page: the page address of the block which will be erased
+ *
+ * AND multi block erase command function
+ * Erase 4 consecutive blocks
+ */
+static void multi_erase_cmd (struct mtd_info *mtd, int page)
+{
+ struct nand_chip *this = mtd->priv;
+ /* Send commands to erase a block */
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page++);
+ this->cmdfunc (mtd, NAND_CMD_ERASE1, -1, page);
+ this->cmdfunc (mtd, NAND_CMD_ERASE2, -1, -1);
+}
+
+/**
+ * nand_erase - [MTD Interface] erase block(s)
+ * @mtd: MTD device structure
+ * @instr: erase instruction
+ *
+ * Erase one or more blocks
+ */
+static int nand_erase (struct mtd_info *mtd, struct erase_info *instr)
+{
+ return nand_erase_nand (mtd, instr, 0);
+}
+
+/**
+ * nand_erase_nand - [NAND Interface] erase block(s)
+ * @mtd: MTD device structure
+ * @instr: erase instruction
+ * @allowbbt: allow erasing the bbt area
+ *
+ * Erase one or more blocks
+ */
+int nand_erase_nand (struct mtd_info *mtd, struct erase_info *instr, int allowbbt)
+{
+ int page, len, status, pages_per_block, ret, chipnr;
+ struct nand_chip *this = mtd->priv;
+
+ DEBUG (MTD_DEBUG_LEVEL3,
+ "nand_erase: start = 0x%08x, len = %i\n", (unsigned int) instr->addr, (unsigned int) instr->len);
+
+ /* Start address must align on block boundary */
+ if (instr->addr & ((1 << this->phys_erase_shift) - 1)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Unaligned address\n");
+ return -EINVAL;
+ }
+
+ /* Length must align on block boundary */
+ if (instr->len & ((1 << this->phys_erase_shift) - 1)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Length not block aligned\n");
+ return -EINVAL;
+ }
+
+ /* Do not allow erase past end of device */
+ if ((instr->len + instr->addr) > mtd->size) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Erase past end of device\n");
+ return -EINVAL;
+ }
+
+ instr->fail_addr = 0xffffffff;
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_ERASING);
+
+ /* Shift to get first page */
+ page = (int) (instr->addr >> this->page_shift);
+ chipnr = (int) (instr->addr >> this->chip_shift);
+
+ /* Calculate pages in each block */
+ pages_per_block = 1 << (this->phys_erase_shift - this->page_shift);
+
+ /* Select the NAND device */
+ this->select_chip(mtd, chipnr);
+
+	/* Check, if it is write protected */
+ if (nand_check_wp(mtd)) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: Device is write protected!!!\n");
+ instr->state = MTD_ERASE_FAILED;
+ goto erase_exit;
+ }
+
+ /* Loop through the pages */
+ len = instr->len;
+
+ instr->state = MTD_ERASING;
+
+ while (len) {
+ /* Check if we have a bad block, we do not erase bad blocks ! */
+ if (nand_block_checkbad(mtd, ((loff_t) page) << this->page_shift, 0, allowbbt)) {
+ printk (KERN_WARNING "nand_erase: attempt to erase a bad block at page 0x%08x\n", page);
+ instr->state = MTD_ERASE_FAILED;
+ goto erase_exit;
+ }
+
+ /* Invalidate the page cache, if we erase the block which contains
+ the current cached page */
+ if (page <= this->pagebuf && this->pagebuf < (page + pages_per_block))
+ this->pagebuf = -1;
+
+ this->erase_cmd (mtd, page & this->pagemask);
+
+ status = this->waitfunc (mtd, this, FL_ERASING);
+
+ /* See if block erase succeeded */
+ if (status & 0x01) {
+ DEBUG (MTD_DEBUG_LEVEL0, "nand_erase: " "Failed erase, page 0x%08x\n", page);
+ instr->state = MTD_ERASE_FAILED;
+ instr->fail_addr = (page << this->page_shift);
+ goto erase_exit;
+ }
+
+ /* Increment page address and decrement length */
+ len -= (1 << this->phys_erase_shift);
+ page += pages_per_block;
+
+ /* Check, if we cross a chip boundary */
+ if (len && !(page & this->pagemask)) {
+ chipnr++;
+ this->select_chip(mtd, -1);
+ this->select_chip(mtd, chipnr);
+ }
+ }
+ instr->state = MTD_ERASE_DONE;
+
+erase_exit:
+
+ ret = instr->state == MTD_ERASE_DONE ? 0 : -EIO;
+ /* Do call back function */
+ if (!ret)
+ mtd_erase_callback(instr);
+
+ /* Deselect and wake up anyone waiting on the device */
+ nand_release_device(mtd);
+
+ /* Return more or less happy */
+ return ret;
+}
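+
+/*
+ * Alignment example for the checks in nand_erase_nand, assuming 16K
+ * erase blocks (phys_erase_shift == 14, block mask 0x3fff):
+ *
+ *	addr = 0xc000, len = 0x8000  -> accepted (erases blocks 3 and 4)
+ *	addr = 0xc200, len = 0x8000  -> -EINVAL (addr & 0x3fff != 0)
+ *	addr = 0xc000, len = 0x7000  -> -EINVAL (len  & 0x3fff != 0)
+ */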
+
+/**
+ * nand_sync - [MTD Interface] sync
+ * @mtd: MTD device structure
+ *
+ * Sync is actually a wait for chip ready function
+ */
+static void nand_sync (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ DEBUG (MTD_DEBUG_LEVEL3, "nand_sync: called\n");
+
+ /* Grab the lock and see if the device is available */
+ nand_get_device (this, mtd, FL_SYNCING);
+ /* Release it and go back */
+ nand_release_device (mtd);
+}
+
+
+/**
+ * nand_block_isbad - [MTD Interface] Check whether the block at the given offset is bad
+ * @mtd: MTD device structure
+ * @ofs: offset relative to mtd start
+ */
+static int nand_block_isbad (struct mtd_info *mtd, loff_t ofs)
+{
+ /* Check for invalid offset */
+	if (ofs >= mtd->size)
+ return -EINVAL;
+
+ return nand_block_checkbad (mtd, ofs, 1, 0);
+}
+
+/**
+ * nand_block_markbad - [MTD Interface] Mark the block at the given offset as bad
+ * @mtd: MTD device structure
+ * @ofs: offset relative to mtd start
+ */
+static int nand_block_markbad (struct mtd_info *mtd, loff_t ofs)
+{
+ struct nand_chip *this = mtd->priv;
+ int ret;
+
+ if ((ret = nand_block_isbad(mtd, ofs))) {
+ /* If it was bad already, return success and do nothing. */
+ if (ret > 0)
+ return 0;
+ return ret;
+ }
+
+ return this->block_markbad(mtd, ofs);
+}
+
+/**
+ * nand_scan - [NAND Interface] Scan for the NAND device
+ * @mtd: MTD device structure
+ * @maxchips: Number of chips to scan for
+ *
+ * This fills out all the not initialized function pointers
+ * with the defaults.
+ * The flash ID is read and the mtd/chip structures are
+ * filled with the appropriate values. Buffers are allocated if
+ * they are not provided by the board driver
+ *
+ */
+int nand_scan (struct mtd_info *mtd, int maxchips)
+{
+ int i, j, nand_maf_id, nand_dev_id, busw;
+ struct nand_chip *this = mtd->priv;
+
+ /* Get buswidth to select the correct functions*/
+ busw = this->options & NAND_BUSWIDTH_16;
+
+ /* check for proper chip_delay setup, set 20us if not */
+ if (!this->chip_delay)
+ this->chip_delay = 20;
+
+	/* check, if a user supplied command function is given */
+ if (this->cmdfunc == NULL)
+ this->cmdfunc = nand_command;
+
+	/* check, if a user supplied wait function is given */
+ if (this->waitfunc == NULL)
+ this->waitfunc = nand_wait;
+
+ if (!this->select_chip)
+ this->select_chip = nand_select_chip;
+ if (!this->write_byte)
+ this->write_byte = busw ? nand_write_byte16 : nand_write_byte;
+ if (!this->read_byte)
+ this->read_byte = busw ? nand_read_byte16 : nand_read_byte;
+ if (!this->write_word)
+ this->write_word = nand_write_word;
+ if (!this->read_word)
+ this->read_word = nand_read_word;
+ if (!this->block_bad)
+ this->block_bad = nand_block_bad;
+ if (!this->block_markbad)
+ this->block_markbad = nand_default_block_markbad;
+ if (!this->write_buf)
+ this->write_buf = busw ? nand_write_buf16 : nand_write_buf;
+ if (!this->read_buf)
+ this->read_buf = busw ? nand_read_buf16 : nand_read_buf;
+ if (!this->verify_buf)
+ this->verify_buf = busw ? nand_verify_buf16 : nand_verify_buf;
+ if (!this->scan_bbt)
+ this->scan_bbt = nand_default_bbt;
+
+ /* Select the device */
+ this->select_chip(mtd, 0);
+
+ /* Send the command for reading device ID */
+ this->cmdfunc (mtd, NAND_CMD_READID, 0x00, -1);
+
+ /* Read manufacturer and device IDs */
+ nand_maf_id = this->read_byte(mtd);
+ nand_dev_id = this->read_byte(mtd);
+
+ /* Print and store flash device information */
+ for (i = 0; nand_flash_ids[i].name != NULL; i++) {
+
+ if (nand_dev_id != nand_flash_ids[i].id)
+ continue;
+
+ if (!mtd->name) mtd->name = nand_flash_ids[i].name;
+ this->chipsize = nand_flash_ids[i].chipsize << 20;
+
+ /* New devices have all the information in additional id bytes */
+ if (!nand_flash_ids[i].pagesize) {
+ int extid;
+			/* The 3rd id byte contains no relevant data at the moment */
+ extid = this->read_byte(mtd);
+ /* The 4th id byte is the important one */
+ extid = this->read_byte(mtd);
+ /* Calc pagesize */
+ mtd->oobblock = 1024 << (extid & 0x3);
+ extid >>= 2;
+ /* Calc oobsize */
+ mtd->oobsize = (8 << (extid & 0x03)) * (mtd->oobblock / 512);
+ extid >>= 2;
+ /* Calc blocksize. Blocksize is multiples of 64KiB */
+ mtd->erasesize = (64 * 1024) << (extid & 0x03);
+ extid >>= 2;
+ /* Get buswidth information */
+ busw = (extid & 0x01) ? NAND_BUSWIDTH_16 : 0;
+
+ } else {
+ /* Old devices have this data hardcoded in the
+ * device id table */
+ mtd->erasesize = nand_flash_ids[i].erasesize;
+ mtd->oobblock = nand_flash_ids[i].pagesize;
+ mtd->oobsize = mtd->oobblock / 32;
+ busw = nand_flash_ids[i].options & NAND_BUSWIDTH_16;
+ }
+
+ /* Check, if buswidth is correct. Hardware drivers should set
+ * this correct ! */
+ if (busw != (this->options & NAND_BUSWIDTH_16)) {
+ printk (KERN_INFO "NAND device: Manufacturer ID:"
+ " 0x%02x, Chip ID: 0x%02x (%s %s)\n", nand_maf_id, nand_dev_id,
+ nand_manuf_ids[i].name , mtd->name);
+ printk (KERN_WARNING
+ "NAND bus width %d instead %d bit\n",
+ (this->options & NAND_BUSWIDTH_16) ? 16 : 8,
+ busw ? 16 : 8);
+ this->select_chip(mtd, -1);
+ return 1;
+ }
+
+ /* Calculate the address shift from the page size */
+ this->page_shift = ffs(mtd->oobblock) - 1;
+ this->bbt_erase_shift = this->phys_erase_shift = ffs(mtd->erasesize) - 1;
+ this->chip_shift = ffs(this->chipsize) - 1;
+
+ /* Set the bad block position */
+ this->badblockpos = mtd->oobblock > 512 ?
+ NAND_LARGE_BADBLOCK_POS : NAND_SMALL_BADBLOCK_POS;
+
+ /* Get chip options, preserve non chip based options */
+ this->options &= ~NAND_CHIPOPTIONS_MSK;
+ this->options |= nand_flash_ids[i].options & NAND_CHIPOPTIONS_MSK;
+		/* Set this as a default. Board drivers can override it, if necessary */
+ this->options |= NAND_NO_AUTOINCR;
+		/* Check if this is not a Samsung device. Do not clear the options
+		 * for chips which do not have an extended id.
+ */
+ if (nand_maf_id != NAND_MFR_SAMSUNG && !nand_flash_ids[i].pagesize)
+ this->options &= ~NAND_SAMSUNG_LP_OPTIONS;
+
+ /* Check for AND chips with 4 page planes */
+ if (this->options & NAND_4PAGE_ARRAY)
+ this->erase_cmd = multi_erase_cmd;
+ else
+ this->erase_cmd = single_erase_cmd;
+
+ /* Do not replace user supplied command function ! */
+ if (mtd->oobblock > 512 && this->cmdfunc == nand_command)
+ this->cmdfunc = nand_command_lp;
+
+ /* Try to identify manufacturer */
+ for (j = 0; nand_manuf_ids[j].id != 0x0; j++) {
+ if (nand_manuf_ids[j].id == nand_maf_id)
+ break;
+ }
+ printk (KERN_INFO "NAND device: Manufacturer ID:"
+ " 0x%02x, Chip ID: 0x%02x (%s %s)\n", nand_maf_id, nand_dev_id,
+ nand_manuf_ids[j].name , nand_flash_ids[i].name);
+ break;
+ }
+
+ if (!nand_flash_ids[i].name) {
+ printk (KERN_WARNING "No NAND device found!!!\n");
+ this->select_chip(mtd, -1);
+ return 1;
+ }
+
+ for (i=1; i < maxchips; i++) {
+ this->select_chip(mtd, i);
+
+ /* Send the command for reading device ID */
+ this->cmdfunc (mtd, NAND_CMD_READID, 0x00, -1);
+
+ /* Read manufacturer and device IDs */
+ if (nand_maf_id != this->read_byte(mtd) ||
+ nand_dev_id != this->read_byte(mtd))
+ break;
+ }
+ if (i > 1)
+ printk(KERN_INFO "%d NAND chips detected\n", i);
+
+	/* Allocate buffers, if necessary */
+ if (!this->oob_buf) {
+ size_t len;
+ len = mtd->oobsize << (this->phys_erase_shift - this->page_shift);
+ this->oob_buf = kmalloc (len, GFP_KERNEL);
+ if (!this->oob_buf) {
+ printk (KERN_ERR "nand_scan(): Cannot allocate oob_buf\n");
+ return -ENOMEM;
+ }
+ this->options |= NAND_OOBBUF_ALLOC;
+ }
+
+ if (!this->data_buf) {
+ size_t len;
+ len = mtd->oobblock + mtd->oobsize;
+ this->data_buf = kmalloc (len, GFP_KERNEL);
+ if (!this->data_buf) {
+ if (this->options & NAND_OOBBUF_ALLOC)
+ kfree (this->oob_buf);
+ printk (KERN_ERR "nand_scan(): Cannot allocate data_buf\n");
+ return -ENOMEM;
+ }
+ this->options |= NAND_DATABUF_ALLOC;
+ }
+
+ /* Store the number of chips and calc total size for mtd */
+ this->numchips = i;
+ mtd->size = i * this->chipsize;
+ /* Convert chipsize to number of pages per chip -1. */
+ this->pagemask = (this->chipsize >> this->page_shift) - 1;
+ /* Preset the internal oob buffer */
+ memset(this->oob_buf, 0xff, mtd->oobsize << (this->phys_erase_shift - this->page_shift));
+
+ /* If no default placement scheme is given, select an
+ * appropriate one */
+ if (!this->autooob) {
+ /* Select the appropriate default oob placement scheme for
+ * placement agnostic filesystems */
+ switch (mtd->oobsize) {
+ case 8:
+ this->autooob = &nand_oob_8;
+ break;
+ case 16:
+ this->autooob = &nand_oob_16;
+ break;
+ case 64:
+ this->autooob = &nand_oob_64;
+ break;
+ default:
+ printk (KERN_WARNING "No oob scheme defined for oobsize %d\n",
+ mtd->oobsize);
+ BUG();
+ }
+ }
+
+	/* The number of bytes available for the filesystem to place fs dependent
+	 * oob data */
+ if (this->options & NAND_BUSWIDTH_16) {
+ mtd->oobavail = mtd->oobsize - (this->autooob->eccbytes + 2);
+ if (this->autooob->eccbytes & 0x01)
+ mtd->oobavail--;
+ } else
+ mtd->oobavail = mtd->oobsize - (this->autooob->eccbytes + 1);
+
+ /*
+ * check ECC mode, default to software
+ * if 3byte/512byte hardware ECC is selected and we have 256 byte pagesize
+ * fallback to software ECC
+ */
+ this->eccsize = 256; /* set default eccsize */
+
+ switch (this->eccmode) {
+
+ case NAND_ECC_HW3_512:
+ case NAND_ECC_HW6_512:
+ case NAND_ECC_HW8_512:
+ if (mtd->oobblock == 256) {
+ printk (KERN_WARNING "512 byte HW ECC not possible on 256 Byte pagesize, fallback to SW ECC \n");
+ this->eccmode = NAND_ECC_SOFT;
+ this->calculate_ecc = nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ break;
+ } else
+ this->eccsize = 512; /* set eccsize to 512 and fall through for function check */
+
+ case NAND_ECC_HW3_256:
+ if (this->calculate_ecc && this->correct_data && this->enable_hwecc)
+ break;
+ printk (KERN_WARNING "No ECC functions supplied, Hardware ECC not possible\n");
+ BUG();
+
+ case NAND_ECC_NONE:
+ printk (KERN_WARNING "NAND_ECC_NONE selected by board driver. This is not recommended !!\n");
+ this->eccmode = NAND_ECC_NONE;
+ break;
+
+ case NAND_ECC_SOFT:
+ this->calculate_ecc = nand_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ break;
+
+ default:
+ printk (KERN_WARNING "Invalid NAND_ECC_MODE %d\n", this->eccmode);
+ BUG();
+ }
+
+ mtd->eccsize = this->eccsize;
+
+ /* Set the number of read / write steps for one page to ensure ECC generation */
+ switch (this->eccmode) {
+ case NAND_ECC_HW3_512:
+ case NAND_ECC_HW6_512:
+ case NAND_ECC_HW8_512:
+ this->eccsteps = mtd->oobblock / 512;
+ break;
+ case NAND_ECC_HW3_256:
+ case NAND_ECC_SOFT:
+ this->eccsteps = mtd->oobblock / 256;
+ break;
+
+ case NAND_ECC_NONE:
+ this->eccsteps = 1;
+ break;
+ }
+
+ /* Initialize state, waitqueue and spinlock */
+ this->state = FL_READY;
+ init_waitqueue_head (&this->wq);
+ spin_lock_init (&this->chip_lock);
+
+ /* De-select the device */
+ this->select_chip(mtd, -1);
+
+ /* Invalidate the pagebuffer reference */
+ this->pagebuf = -1;
+
+ /* Fill in remaining MTD driver data */
+ mtd->type = MTD_NANDFLASH;
+ mtd->flags = MTD_CAP_NANDFLASH | MTD_ECC;
+ mtd->ecctype = MTD_ECC_SW;
+ mtd->erase = nand_erase;
+ mtd->point = NULL;
+ mtd->unpoint = NULL;
+ mtd->read = nand_read;
+ mtd->write = nand_write;
+ mtd->read_ecc = nand_read_ecc;
+ mtd->write_ecc = nand_write_ecc;
+ mtd->read_oob = nand_read_oob;
+ mtd->write_oob = nand_write_oob;
+ mtd->readv = NULL;
+ mtd->writev = nand_writev;
+ mtd->writev_ecc = nand_writev_ecc;
+ mtd->sync = nand_sync;
+ mtd->lock = NULL;
+ mtd->unlock = NULL;
+ mtd->suspend = NULL;
+ mtd->resume = NULL;
+ mtd->block_isbad = nand_block_isbad;
+ mtd->block_markbad = nand_block_markbad;
+
+ /* and make the autooob the default one */
+ memcpy(&mtd->oobinfo, this->autooob, sizeof(mtd->oobinfo));
+
+ mtd->owner = THIS_MODULE;
+
+ /* Build bad block table */
+ return this->scan_bbt (mtd);
+}
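+
+/*
+ * Extended id decode example for nand_scan: devices with pagesize == 0
+ * in nand_flash_ids report their geometry in the 4th id byte. For the
+ * common value extid == 0x15 the steps above yield
+ *
+ *	oobblock  = 1024 << (0x15 & 0x03)               = 2048 bytes
+ *	oobsize   = (8 << (0x05 & 0x03)) * (2048 / 512) = 64 bytes
+ *	erasesize = (64 * 1024) << (0x01 & 0x03)        = 128 KiB
+ *	buswidth  = (0x00 & 0x01)                       = 8 bit
+ *
+ * (extid is shifted right by two before each subsequent field is read)
+ */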
+
+/**
+ * nand_release - [NAND Interface] Free resources held by the NAND device
+ * @mtd: MTD device structure
+*/
+void nand_release (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+#ifdef CONFIG_MTD_PARTITIONS
+ /* Deregister partitions */
+ del_mtd_partitions (mtd);
+#endif
+ /* Deregister the device */
+ del_mtd_device (mtd);
+
+ /* Free bad block table memory, if allocated */
+ if (this->bbt)
+ kfree (this->bbt);
+ /* Buffer allocated by nand_scan ? */
+ if (this->options & NAND_OOBBUF_ALLOC)
+ kfree (this->oob_buf);
+ /* Buffer allocated by nand_scan ? */
+ if (this->options & NAND_DATABUF_ALLOC)
+ kfree (this->data_buf);
+}
+
+EXPORT_SYMBOL (nand_scan);
+EXPORT_SYMBOL (nand_release);
+
+MODULE_LICENSE ("GPL");
+MODULE_AUTHOR ("Steven J. Hill <sjhill@realitydiluted.com>, Thomas Gleixner <tglx@linutronix.de>");
+MODULE_DESCRIPTION ("Generic NAND flash driver code");
--- /dev/null
+/*
+ * drivers/mtd/nand_bbt.c
+ *
+ * Overview:
+ * Bad block table support for the NAND driver
+ *
+ * Copyright (C) 2004 Thomas Gleixner (tglx@linutronix.de)
+ *
+ * $Id: nand_bbt.c,v 1.26 2004/10/05 13:50:20 gleixner Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Description:
+ *
+ * When nand_scan_bbt is called, it tries to find the bad block table
+ * depending on the options in the bbt descriptor(s). If a bbt is found
+ * then the contents are read and the memory based bbt is created. If a
+ * mirrored bbt is selected then the mirror is searched too and the
+ * versions are compared. If the mirror has a greater version number
+ * then the mirror bbt is used to build the memory based bbt.
+ * If the tables are not versioned, then we "or" the bad block information.
+ * If one of the bbts is out of date or does not exist, it is (re)created.
+ * If no bbt exists at all then the device is scanned for factory marked
+ * good / bad blocks and the bad block tables are created.
+ *
+ * For manufacturer created bbts like the one found on M-SYS DOC devices
+ * the bbt is searched and read but never created
+ *
+ * The autogenerated bad block table is located in the last good blocks
+ * of the device. The table is mirrored, so it can be updated if necessary.
+ * The table is marked in the oob area with an ident pattern and a version
+ * number which indicates which of both tables is more up to date.
+ *
+ * The table uses 2 bits per block
+ * 11b: block is good
+ * 00b: block is factory marked bad
+ * 01b, 10b: block is marked bad due to wear
+ *
+ * The memory bad block table uses the following scheme:
+ * 00b: block is good
+ * 01b: block is marked bad due to wear
+ * 10b: block is reserved (to protect the bbt area)
+ * 11b: block is factory marked bad
+ *
+ * Multichip devices like DOC store the bad block info per floor.
+ *
+ * The following assumptions are made:
+ * - bbts start at a page boundary, if autolocated on a block boundary
+ * - the space necessary for a bbt in FLASH does not exceed a block boundary
+ *
+ */
+
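+/*
+ * Encoding example: one table byte covers four blocks, least
+ * significant bit pair first. In the flash based table the byte 0xE7
+ * (11 10 01 11b) reads as
+ *
+ *	block n+0: 11b	good
+ *	block n+1: 01b	marked bad due to wear
+ *	block n+2: 10b	marked bad due to wear
+ *	block n+3: 11b	good
+ *
+ * whereas in the memory based table the same bit pairs would mean
+ * factory bad, worn out, reserved and factory bad respectively.
+ */
+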
+#include <linux/slab.h>
+#include <linux/types.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/compatmac.h>
+#include <linux/bitops.h>
+#include <linux/delay.h>
+
+
+/**
+ * check_pattern - [GENERIC] check if a pattern is in the buffer
+ * @buf: the buffer to search
+ * @len: the length of buffer to search
+ * @paglen: the pagelength
+ * @td: search pattern descriptor
+ *
+ * Check for a pattern at the given place. Used to search bad block
+ * tables and good / bad block identifiers.
+ * If the SCAN_EMPTY option is set, check that all bytes except the
+ * pattern area contain 0xff.
+ *
+*/
+static int check_pattern (uint8_t *buf, int len, int paglen, struct nand_bbt_descr *td)
+{
+ int i, end;
+ uint8_t *p = buf;
+
+ end = paglen + td->offs;
+ if (td->options & NAND_BBT_SCANEMPTY) {
+ for (i = 0; i < end; i++) {
+ if (p[i] != 0xff)
+ return -1;
+ }
+ }
+ p += end;
+
+ /* Compare the pattern */
+ for (i = 0; i < td->len; i++) {
+ if (p[i] != td->pattern[i])
+ return -1;
+ }
+
+ p += td->len;
+ end += td->len;
+ if (td->options & NAND_BBT_SCANEMPTY) {
+ for (i = end; i < len; i++) {
+ if (*p++ != 0xff)
+ return -1;
+ }
+ }
+ return 0;
+}
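+
+/*
+ * Buffer layout expected by check_pattern for a raw page, e.g. a
+ * 512 byte page with a 16 byte spare area and a pattern at offs 8:
+ *
+ *	buf[0..511]        page data (must be 0xff with SCANEMPTY)
+ *	buf[512..519]      oob bytes before the pattern (likewise)
+ *	buf[520..520+n-1]  the n byte ident pattern (td->len == n)
+ *	buf[..527]         remaining oob bytes (likewise)
+ *
+ * paglen is 512 for such full raw pages and 0 if only oob is checked.
+ */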
+
+/**
+ * read_bbt - [GENERIC] Read the bad block table starting from page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @page: the starting page
+ * @num: the number of bbt descriptors to read
+ * @bits: number of bits per block
+ * @offs: offset in the memory table
+ * @reserved_block_code: Pattern to identify reserved blocks
+ *
+ * Read the bad block table starting from page.
+ *
+ */
+static int read_bbt (struct mtd_info *mtd, uint8_t *buf, int page, int num,
+ int bits, int offs, int reserved_block_code)
+{
+ int res, i, j, act = 0;
+ struct nand_chip *this = mtd->priv;
+ size_t retlen, len, totlen;
+ loff_t from;
+ uint8_t msk = (uint8_t) ((1 << bits) - 1);
+
+ totlen = (num * bits) >> 3;
+ from = ((loff_t)page) << this->page_shift;
+
+ while (totlen) {
+ len = min (totlen, (size_t) (1 << this->bbt_erase_shift));
+ res = mtd->read_ecc (mtd, from, len, &retlen, buf, NULL, this->autooob);
+ if (res < 0) {
+ if (retlen != len) {
+ printk (KERN_INFO "nand_bbt: Error reading bad block table\n");
+ return res;
+ }
+ printk (KERN_WARNING "nand_bbt: ECC error while reading bad block table\n");
+ }
+
+ /* Analyse data */
+ for (i = 0; i < len; i++) {
+ uint8_t dat = buf[i];
+ for (j = 0; j < 8; j += bits, act += 2) {
+ uint8_t tmp = (dat >> j) & msk;
+ if (tmp == msk)
+ continue;
+ if (reserved_block_code &&
+ (tmp == reserved_block_code)) {
+ printk (KERN_DEBUG "nand_read_bbt: Reserved block at 0x%08x\n",
+ ((offs << 2) + (act >> 1)) << this->bbt_erase_shift);
+ this->bbt[offs + (act >> 3)] |= 0x2 << (act & 0x06);
+ continue;
+ }
+				/* Leave it for now, once it has matured we can
+				 * move this message to MTD_DEBUG_LEVEL0 */
+ printk (KERN_DEBUG "nand_read_bbt: Bad block at 0x%08x\n",
+ ((offs << 2) + (act >> 1)) << this->bbt_erase_shift);
+ /* Factory marked bad or worn out ? */
+ if (tmp == 0)
+ this->bbt[offs + (act >> 3)] |= 0x3 << (act & 0x06);
+ else
+ this->bbt[offs + (act >> 3)] |= 0x1 << (act & 0x06);
+ }
+ }
+ totlen -= len;
+ from += len;
+ }
+ return 0;
+}
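+
+/*
+ * Unpacking example for the loop above with bits == 2 (msk == 0x03)
+ * and no reserved_block_code set: a table byte dat == 0xE7 yields,
+ * from bit 0 upwards,
+ *
+ *	j = 0: tmp = 11b -> good block, skip
+ *	j = 2: tmp = 01b -> worn out, memory bbt gets 0x1
+ *	j = 4: tmp = 10b -> worn out, memory bbt gets 0x1
+ *	j = 6: tmp = 11b -> good block, skip
+ *
+ * act advances by two per block, so (act >> 3) selects the memory bbt
+ * byte and (act & 0x06) the bit pair within it.
+ */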
+
+/**
+ * read_abs_bbt - [GENERIC] Read the bad block table starting at a given page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @chip: read the table for a specific chip, -1 read all chips.
+ * Applies only if NAND_BBT_PERCHIP option is set
+ *
+ * Read the bad block table for all chips starting at a given page
+ * We assume that the bbt bits are in consecutive order.
+*/
+static int read_abs_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ int res = 0, i;
+ int bits;
+
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+ if (td->options & NAND_BBT_PERCHIP) {
+ int offs = 0;
+ for (i = 0; i < this->numchips; i++) {
+ if (chip == -1 || chip == i)
+ res = read_bbt (mtd, buf, td->pages[i], this->chipsize >> this->bbt_erase_shift, bits, offs, td->reserved_block_code);
+ if (res)
+ return res;
+ offs += this->chipsize >> (this->bbt_erase_shift + 2);
+ }
+ } else {
+ res = read_bbt (mtd, buf, td->pages[0], mtd->size >> this->bbt_erase_shift, bits, 0, td->reserved_block_code);
+ if (res)
+ return res;
+ }
+ return 0;
+}
+
+/**
+ * read_abs_bbts - [GENERIC] Read the bad block table(s) for all chips starting at a given page
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ *
+ * Read the bad block table(s) for all chips starting at a given page
+ * We assume that the bbt bits are in consecutive order.
+ *
+*/
+static int read_abs_bbts (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td,
+ struct nand_bbt_descr *md)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Read the primary version, if available */
+ if (td->options & NAND_BBT_VERSION) {
+ nand_read_raw (mtd, buf, td->pages[0] << this->page_shift, mtd->oobblock, mtd->oobsize);
+ td->version[0] = buf[mtd->oobblock + td->veroffs];
+ printk (KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", td->pages[0], td->version[0]);
+ }
+
+ /* Read the mirror version, if available */
+ if (md && (md->options & NAND_BBT_VERSION)) {
+ nand_read_raw (mtd, buf, md->pages[0] << this->page_shift, mtd->oobblock, mtd->oobsize);
+ md->version[0] = buf[mtd->oobblock + md->veroffs];
+ printk (KERN_DEBUG "Bad block table at page %d, version 0x%02X\n", md->pages[0], md->version[0]);
+ }
+
+ return 1;
+}
+
+/**
+ * create_bbt - [GENERIC] Create a bad block table by scanning the device
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @bd: descriptor for the good/bad block search pattern
+ * @chip: create the table for a specific chip, -1 read all chips.
+ * Applies only if NAND_BBT_PERCHIP option is set
+ *
+ * Create a bad block table by scanning the device
+ * for the given good / bad block identification pattern
+ */
+static void create_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd, int chip)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, j, numblocks, len, scanlen;
+ int startblock;
+ loff_t from;
+ size_t readlen, ooblen;
+
+ printk (KERN_INFO "Scanning device for bad blocks\n");
+
+ if (bd->options & NAND_BBT_SCANALLPAGES)
+ len = 1 << (this->bbt_erase_shift - this->page_shift);
+ else {
+ if (bd->options & NAND_BBT_SCAN2NDPAGE)
+ len = 2;
+ else
+ len = 1;
+ }
+ scanlen = mtd->oobblock + mtd->oobsize;
+ readlen = len * mtd->oobblock;
+ ooblen = len * mtd->oobsize;
+
+ if (chip == -1) {
+		/* Note that numblocks is 2 * (real numblocks) here; see the
+		 * i += 2 below, as it makes shifting and masking less painful */
+ numblocks = mtd->size >> (this->bbt_erase_shift - 1);
+ startblock = 0;
+ from = 0;
+ } else {
+ if (chip >= this->numchips) {
+ printk (KERN_WARNING "create_bbt(): chipnr (%d) > available chips (%d)\n",
+ chip + 1, this->numchips);
+ return;
+ }
+ numblocks = this->chipsize >> (this->bbt_erase_shift - 1);
+ startblock = chip * numblocks;
+ numblocks += startblock;
+ from = startblock << (this->bbt_erase_shift - 1);
+ }
+
+ for (i = startblock; i < numblocks;) {
+ nand_read_raw (mtd, buf, from, readlen, ooblen);
+ for (j = 0; j < len; j++) {
+ if (check_pattern (&buf[j * scanlen], scanlen, mtd->oobblock, bd)) {
+ this->bbt[i >> 3] |= 0x03 << (i & 0x6);
+ printk (KERN_WARNING "Bad eraseblock %d at 0x%08x\n",
+ i >> 1, (unsigned int) from);
+ break;
+ }
+ }
+ i += 2;
+ from += (1 << this->bbt_erase_shift);
+ }
+}
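+
+/*
+ * Example for the doubled block count in create_bbt: on a 32MB chip
+ * with 16K erase blocks there are 2048 real blocks, so numblocks is
+ * 4096 and i advances in steps of two. A bad pattern match on real
+ * block 5 (i == 10) executes
+ *
+ *	this->bbt[10 >> 3] |= 0x03 << (10 & 0x6);
+ *
+ * i.e. bbt[1] |= 0x03 << 2, marking block 5 bad without any separate
+ * block-to-bit index arithmetic.
+ */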
+
+/**
+ * search_bbt - [GENERIC] scan the device for a specific bad block table
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ *
+ * Read the bad block table by searching for a given ident pattern.
+ * Search is performed either from the beginning up or from the end of
+ * the device downwards. The search always starts at the start of a
+ * block.
+ * If the option NAND_BBT_PERCHIP is given, each chip is searched
+ * for a bbt, which contains the bad block information of this chip.
+ * This is necessary to provide support for certain DOC devices.
+ *
+ * The bbt ident pattern resides in the oob area of the first page
+ * in a block.
+ */
+static int search_bbt (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *td)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, chips;
+ int bits, startblock, block, dir;
+ int scanlen = mtd->oobblock + mtd->oobsize;
+ int bbtblocks;
+
+ /* Search direction top -> down ? */
+ if (td->options & NAND_BBT_LASTBLOCK) {
+ startblock = (mtd->size >> this->bbt_erase_shift) -1;
+ dir = -1;
+ } else {
+ startblock = 0;
+ dir = 1;
+ }
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chips = this->numchips;
+ bbtblocks = this->chipsize >> this->bbt_erase_shift;
+ startblock &= bbtblocks - 1;
+ } else {
+ chips = 1;
+ bbtblocks = mtd->size >> this->bbt_erase_shift;
+ }
+
+ /* Number of bits for each erase block in the bbt */
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+
+ for (i = 0; i < chips; i++) {
+ /* Reset version information */
+ td->version[i] = 0;
+ td->pages[i] = -1;
+ /* Scan the maximum number of blocks */
+ for (block = 0; block < td->maxblocks; block++) {
+ int actblock = startblock + dir * block;
+ /* Read first page */
+ nand_read_raw (mtd, buf, actblock << this->bbt_erase_shift, mtd->oobblock, mtd->oobsize);
+ if (!check_pattern(buf, scanlen, mtd->oobblock, td)) {
+ td->pages[i] = actblock << (this->bbt_erase_shift - this->page_shift);
+ if (td->options & NAND_BBT_VERSION) {
+ td->version[i] = buf[mtd->oobblock + td->veroffs];
+ }
+ break;
+ }
+ }
+ startblock += this->chipsize >> this->bbt_erase_shift;
+ }
+ /* Check, if we found a bbt for each requested chip */
+ for (i = 0; i < chips; i++) {
+ if (td->pages[i] == -1)
+ printk (KERN_WARNING "Bad block table not found for chip %d\n", i);
+ else
+ printk (KERN_DEBUG "Bad block table found at page %d, version 0x%02X\n", td->pages[i], td->version[i]);
+ }
+ return 0;
+}
+
+/**
+ * search_read_bbts - [GENERIC] scan the device for bad block table(s)
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ *
+ * Search and read the bad block table(s)
+*/
+static int search_read_bbts (struct mtd_info *mtd, uint8_t *buf,
+ struct nand_bbt_descr *td, struct nand_bbt_descr *md)
+{
+ /* Search the primary table */
+ search_bbt (mtd, buf, td);
+
+ /* Search the mirror table */
+ if (md)
+ search_bbt (mtd, buf, md);
+
+ /* Force result check */
+ return 1;
+}
+
+
+/**
+ * write_bbt - [GENERIC] (Re)write the bad block table
+ *
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @td: descriptor for the bad block table
+ * @md: descriptor for the bad block table mirror
+ * @chipsel: selector for a specific chip, -1 for all
+ *
+ * (Re)write the bad block table
+ *
+*/
+static int write_bbt (struct mtd_info *mtd, uint8_t *buf,
+ struct nand_bbt_descr *td, struct nand_bbt_descr *md, int chipsel)
+{
+ struct nand_chip *this = mtd->priv;
+ struct nand_oobinfo oobinfo;
+ struct erase_info einfo;
+ int i, j, res, chip = 0;
+ int bits, startblock, dir, page, offs, numblocks, sft, sftmsk;
+ int nrchips, bbtoffs, pageoffs;
+ uint8_t msk[4];
+ uint8_t rcode = td->reserved_block_code;
+ size_t retlen, len = 0;
+ loff_t to;
+
+ if (!rcode)
+ rcode = 0xff;
+ /* Write bad block table per chip rather than per device ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ numblocks = (int) (this->chipsize >> this->bbt_erase_shift);
+ /* Full device write or specific chip ? */
+ if (chipsel == -1) {
+ nrchips = this->numchips;
+ } else {
+ nrchips = chipsel + 1;
+ chip = chipsel;
+ }
+ } else {
+ numblocks = (int) (mtd->size >> this->bbt_erase_shift);
+ nrchips = 1;
+ }
+
+ /* Loop through the chips */
+ for (; chip < nrchips; chip++) {
+
+		/* If there was already a version of the table, reuse its page.
+		 * This applies for absolute placement too, as we have the
+		 * page nr. in td->pages.
+		 */
+ if (td->pages[chip] != -1) {
+ page = td->pages[chip];
+ goto write;
+ }
+
+ /* Automatic placement of the bad block table */
+ /* Search direction top -> down ? */
+ if (td->options & NAND_BBT_LASTBLOCK) {
+ startblock = numblocks * (chip + 1) - 1;
+ dir = -1;
+ } else {
+ startblock = chip * numblocks;
+ dir = 1;
+ }
+
+ for (i = 0; i < td->maxblocks; i++) {
+ int block = startblock + dir * i;
+ /* Check, if the block is bad */
+ switch ((this->bbt[block >> 2] >> (2 * (block & 0x03))) & 0x03) {
+ case 0x01:
+ case 0x03:
+ continue;
+ }
+ page = block << (this->bbt_erase_shift - this->page_shift);
+ /* Check, if the block is used by the mirror table */
+ if (!md || md->pages[chip] != page)
+ goto write;
+ }
+ printk (KERN_ERR "No space left to write bad block table\n");
+ return -ENOSPC;
+write:
+
+ /* Set up shift count and masks for the flash table */
+ bits = td->options & NAND_BBT_NRBITS_MSK;
+ switch (bits) {
+ case 1: sft = 3; sftmsk = 0x07; msk[0] = 0x00; msk[1] = 0x01; msk[2] = ~rcode; msk[3] = 0x01; break;
+ case 2: sft = 2; sftmsk = 0x06; msk[0] = 0x00; msk[1] = 0x01; msk[2] = ~rcode; msk[3] = 0x03; break;
+ case 4: sft = 1; sftmsk = 0x04; msk[0] = 0x00; msk[1] = 0x0C; msk[2] = ~rcode; msk[3] = 0x0f; break;
+ case 8: sft = 0; sftmsk = 0x00; msk[0] = 0x00; msk[1] = 0x0F; msk[2] = ~rcode; msk[3] = 0xff; break;
+ default: return -EINVAL;
+ }
+
+ bbtoffs = chip * (numblocks >> 2);
+
+ to = ((loff_t) page) << this->page_shift;
+
+ memcpy (&oobinfo, this->autooob, sizeof(oobinfo));
+ oobinfo.useecc = MTD_NANDECC_PLACEONLY;
+
+ /* Must we save the block contents ? */
+ if (td->options & NAND_BBT_SAVECONTENT) {
+ /* Make it block aligned */
+ to &= ~((loff_t) ((1 << this->bbt_erase_shift) - 1));
+ len = 1 << this->bbt_erase_shift;
+ res = mtd->read_ecc (mtd, to, len, &retlen, buf, &buf[len], &oobinfo);
+ if (res < 0) {
+ if (retlen != len) {
+ printk (KERN_INFO "nand_bbt: Error reading block for writing the bad block table\n");
+ return res;
+ }
+ printk (KERN_WARNING "nand_bbt: ECC error while reading block for writing bad block table\n");
+ }
+ /* Calc the byte offset in the buffer */
+ pageoffs = page - (int)(to >> this->page_shift);
+ offs = pageoffs << this->page_shift;
+ /* Preset the bbt area with 0xff */
+ memset (&buf[offs], 0xff, (size_t)(numblocks >> sft));
+ /* Preset the bbt's oob area with 0xff */
+ memset (&buf[len + pageoffs * mtd->oobsize], 0xff,
+ ((len >> this->page_shift) - pageoffs) * mtd->oobsize);
+ if (td->options & NAND_BBT_VERSION) {
+ buf[len + (pageoffs * mtd->oobsize) + td->veroffs] = td->version[chip];
+ }
+ } else {
+ /* Calc length */
+ len = (size_t) (numblocks >> sft);
+ /* Make it page aligned ! */
+ len = (len + (mtd->oobblock-1)) & ~(mtd->oobblock-1);
+ /* Preset the buffer with 0xff */
+ memset (buf, 0xff, len + (len >> this->page_shift) * mtd->oobsize);
+ offs = 0;
+ /* Pattern is located in oob area of first page */
+ memcpy (&buf[len + td->offs], td->pattern, td->len);
+ if (td->options & NAND_BBT_VERSION) {
+ buf[len + td->veroffs] = td->version[chip];
+ }
+ }
+
+ /* walk through the memory table */
+ for (i = 0; i < numblocks; ) {
+ uint8_t dat;
+ dat = this->bbt[bbtoffs + (i >> 2)];
+ for (j = 0; j < 4; j++ , i++) {
+ int sftcnt = (i << (3 - sft)) & sftmsk;
+ /* Do not store the reserved bbt blocks ! */
+ buf[offs + (i >> sft)] &= ~(msk[dat & 0x03] << sftcnt);
+ dat >>= 2;
+ }
+ }
+
+ memset (&einfo, 0, sizeof (einfo));
+ einfo.mtd = mtd;
+ einfo.addr = (unsigned long) to;
+ einfo.len = 1 << this->bbt_erase_shift;
+ res = nand_erase_nand (mtd, &einfo, 1);
+ if (res < 0) {
+ printk (KERN_WARNING "nand_bbt: Error during block erase: %d\n", res);
+ return res;
+ }
+
+ res = mtd->write_ecc (mtd, to, len, &retlen, buf, &buf[len], &oobinfo);
+ if (res < 0) {
+ printk (KERN_WARNING "nand_bbt: Error while writing bad block table %d\n", res);
+ return res;
+ }
+ printk (KERN_DEBUG "Bad block table written to 0x%08x, version 0x%02X\n",
+ (unsigned int) to, td->version[chip]);
+
+ /* Mark it as used */
+ td->pages[chip] = page;
+ }
+ return 0;
+}
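
For illustration, here is a minimal stand-alone sketch of the packing loop above for the NAND_BBT_2BIT case (sft = 2, sftmsk = 0x06); the table contents and the rcode = 0xff default are assumptions chosen for the example, not values taken from a running driver:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* in-memory bbt: 2 bits per block, 0x00 = good, 0x03 = factory bad */
        uint8_t bbt[1]   = { 0xC0 };  /* blocks 0-2 good, block 3 bad */
        uint8_t flash[1] = { 0xff };  /* flash table preset to 0xff */
        /* msk[] as write_bbt sets it up for rcode = 0xff; reserved
         * blocks are then not stored, since ~rcode == 0x00 */
        uint8_t msk[4] = { 0x00, 0x01, 0x00, 0x03 };
        int i, j, sft = 2, sftmsk = 0x06;

        for (i = 0; i < 4; ) {
            uint8_t dat = bbt[i >> 2];
            for (j = 0; j < 4; j++, i++) {
                int sftcnt = (i << (3 - sft)) & sftmsk;
                flash[i >> sft] &= ~(msk[dat & 0x03] << sftcnt);
                dat >>= 2;
            }
        }
        printf("flash byte: 0x%02x\n", flash[0]);  /* prints 0x3f */
        return 0;
    }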
+
+/**
+ * nand_memory_bbt - [GENERIC] create a memory based bad block table
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function creates a memory based bbt by scanning the device
+ * for manufacturer / software marked good / bad blocks
+*/
+static int nand_memory_bbt (struct mtd_info *mtd, struct nand_bbt_descr *bd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Ensure that we only scan for the pattern and nothing else */
+ bd->options = 0;
+ create_bbt (mtd, this->data_buf, bd, -1);
+ return 0;
+}
+
+/**
+ * check_create - [GENERIC] create and write bbt(s) if necessary
+ * @mtd: MTD device structure
+ * @buf: temporary buffer
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function checks the results of the previous call to read_bbt
+ * and creates / updates the bbt(s) if necessary.
+ * Creation is necessary if no bbt was found for the chip/device.
+ * Update is necessary if one of the tables is missing or the
+ * version nr. of one table is less than the other.
+*/
+static int check_create (struct mtd_info *mtd, uint8_t *buf, struct nand_bbt_descr *bd)
+{
+ int i, chips, writeops, chipsel, res;
+ struct nand_chip *this = mtd->priv;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+ struct nand_bbt_descr *rd, *rd2;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP)
+ chips = this->numchips;
+ else
+ chips = 1;
+
+ for (i = 0; i < chips; i++) {
+ writeops = 0;
+ rd = NULL;
+ rd2 = NULL;
+ /* Per chip or per device ? */
+ chipsel = (td->options & NAND_BBT_PERCHIP) ? i : -1;
+ /* Mirrored table available ? */
+ if (md) {
+ if (td->pages[i] == -1 && md->pages[i] == -1) {
+ writeops = 0x03;
+ goto create;
+ }
+
+ if (td->pages[i] == -1) {
+ rd = md;
+ td->version[i] = md->version[i];
+ writeops = 1;
+ goto writecheck;
+ }
+
+ if (md->pages[i] == -1) {
+ rd = td;
+ md->version[i] = td->version[i];
+ writeops = 2;
+ goto writecheck;
+ }
+
+ if (td->version[i] == md->version[i]) {
+ rd = td;
+ if (!(td->options & NAND_BBT_VERSION))
+ rd2 = md;
+ goto writecheck;
+ }
+
+ if (((int8_t) (td->version[i] - md->version[i])) > 0) {
+ rd = td;
+ md->version[i] = td->version[i];
+ writeops = 2;
+ } else {
+ rd = md;
+ td->version[i] = md->version[i];
+ writeops = 1;
+ }
+
+ goto writecheck;
+
+ } else {
+ if (td->pages[i] == -1) {
+ writeops = 0x01;
+ goto create;
+ }
+ rd = td;
+ goto writecheck;
+ }
+create:
+ /* Create the bad block table by scanning the device ? */
+ if (!(td->options & NAND_BBT_CREATE))
+ continue;
+
+ /* Create the table in memory by scanning the chip(s) */
+ create_bbt (mtd, buf, bd, chipsel);
+
+ td->version[i] = 1;
+ if (md)
+ md->version[i] = 1;
+writecheck:
+ /* read back first ? */
+ if (rd)
+ read_abs_bbt (mtd, buf, rd, chipsel);
+ /* If they weren't versioned, read both. */
+ if (rd2)
+ read_abs_bbt (mtd, buf, rd2, chipsel);
+
+ /* Write the bad block table to the device ? */
+ if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, td, md, chipsel);
+ if (res < 0)
+ return res;
+ }
+
+ /* Write the mirror bad block table to the device ? */
+ if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, md, td, chipsel);
+ if (res < 0)
+ return res;
+ }
+ }
+ return 0;
+}
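
The version arithmetic above relies on a signed 8-bit difference, so the comparison still picks the newer table after the version counter wraps from 0xff back to 0x00. A small stand-alone sketch of just that test:

    #include <stdio.h>
    #include <stdint.h>

    static int newer(uint8_t a, uint8_t b)
    {
        return ((int8_t)(a - b)) > 0;  /* same test as check_create */
    }

    int main(void)
    {
        printf("%d\n", newer(5, 4));    /* 1: 5 is newer than 4 */
        printf("%d\n", newer(0, 255));  /* 1: 0 is newer right after wraparound */
        printf("%d\n", newer(4, 5));    /* 0 */
        return 0;
    }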
+
+/**
+ * mark_bbt_region - [GENERIC] mark the bad block table regions
+ * @mtd: MTD device structure
+ * @td: bad block table descriptor
+ *
+ * The bad block table regions are marked as "bad" to prevent
+ * accidental erasures / writes. The regions are identified by
+ * the mark 0x02.
+*/
+static void mark_bbt_region (struct mtd_info *mtd, struct nand_bbt_descr *td)
+{
+ struct nand_chip *this = mtd->priv;
+ int i, j, chips, block, nrblocks, update;
+ uint8_t oldval, newval;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chips = this->numchips;
+ nrblocks = (int)(this->chipsize >> this->bbt_erase_shift);
+ } else {
+ chips = 1;
+ nrblocks = (int)(mtd->size >> this->bbt_erase_shift);
+ }
+
+ for (i = 0; i < chips; i++) {
+ if ((td->options & NAND_BBT_ABSPAGE) ||
+ !(td->options & NAND_BBT_WRITE)) {
+ if (td->pages[i] == -1) continue;
+ block = td->pages[i] >> (this->bbt_erase_shift - this->page_shift);
+ block <<= 1;
+ oldval = this->bbt[(block >> 3)];
+ newval = oldval | (0x2 << (block & 0x06));
+ this->bbt[(block >> 3)] = newval;
+ if ((oldval != newval) && td->reserved_block_code)
+ nand_update_bbt(mtd, block << (this->bbt_erase_shift - 1));
+ continue;
+ }
+ update = 0;
+ if (td->options & NAND_BBT_LASTBLOCK)
+ block = ((i + 1) * nrblocks) - td->maxblocks;
+ else
+ block = i * nrblocks;
+ block <<= 1;
+ for (j = 0; j < td->maxblocks; j++) {
+ oldval = this->bbt[(block >> 3)];
+ newval = oldval | (0x2 << (block & 0x06));
+ this->bbt[(block >> 3)] = newval;
+ if (oldval != newval) update = 1;
+ block += 2;
+ }
+ /* If we want reserved blocks to be recorded to flash, and some
+ new ones have been marked, then we need to update the stored
+ bbts. This should only happen once. */
+ if (update && td->reserved_block_code)
+ nand_update_bbt(mtd, (block - 2) << (this->bbt_erase_shift - 1));
+ }
+}
+
+/**
+ * nand_scan_bbt - [NAND Interface] scan, find, read and maybe create bad block table(s)
+ * @mtd: MTD device structure
+ * @bd: descriptor for the good/bad block search pattern
+ *
+ * The function checks whether a bad block table is already
+ * available. If not, it scans the device for manufacturer
+ * marked good / bad blocks and writes the bad block table(s) to
+ * the selected place.
+ *
+ * The bad block table memory is allocated here. It must be freed
+ * by calling the nand_free_bbt function.
+ *
+*/
+int nand_scan_bbt (struct mtd_info *mtd, struct nand_bbt_descr *bd)
+{
+ struct nand_chip *this = mtd->priv;
+ int len, res = 0;
+ uint8_t *buf;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+
+ len = mtd->size >> (this->bbt_erase_shift + 2);
+ /* Allocate memory (2bit per block) */
+ this->bbt = kmalloc (len, GFP_KERNEL);
+ if (!this->bbt) {
+ printk (KERN_ERR "nand_scan_bbt: Out of memory\n");
+ return -ENOMEM;
+ }
+ /* Clear the memory bad block table */
+ memset (this->bbt, 0x00, len);
+
+ /* If no primary table descriptor is given, scan the device
+ * to build a memory based bad block table
+ */
+ if (!td)
+ return nand_memory_bbt(mtd, bd);
+
+ /* Allocate a temporary buffer for one eraseblock incl. oob */
+ len = (1 << this->bbt_erase_shift);
+ len += (len >> this->page_shift) * mtd->oobsize;
+ buf = kmalloc (len, GFP_KERNEL);
+ if (!buf) {
+ printk (KERN_ERR "nand_bbt: Out of memory\n");
+ kfree (this->bbt);
+ this->bbt = NULL;
+ return -ENOMEM;
+ }
+
+ /* Is the bbt at a given page ? */
+ if (td->options & NAND_BBT_ABSPAGE) {
+ res = read_abs_bbts (mtd, buf, td, md);
+ } else {
+ /* Search the bad block table using a pattern in oob */
+ res = search_read_bbts (mtd, buf, td, md);
+ }
+
+ if (res)
+ res = check_create (mtd, buf, bd);
+
+ /* Prevent the bbt regions from erasing / writing */
+ mark_bbt_region (mtd, td);
+ if (md)
+ mark_bbt_region (mtd, md);
+
+ kfree (buf);
+ return res;
+}
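
The temporary buffer allocated above must hold one erase block plus the oob bytes of every page in it. A quick stand-alone check of that arithmetic, assuming a typical small-page geometry (16KiB blocks, 512-byte pages, 16 oob bytes per page; these numbers are illustrative):

    #include <stdio.h>

    int main(void)
    {
        int bbt_erase_shift = 14, page_shift = 9, oobsize = 16;
        int len = 1 << bbt_erase_shift;          /* 16384 data bytes */
        len += (len >> page_shift) * oobsize;    /* + 32 pages * 16 oob bytes */
        printf("buffer: %d bytes\n", len);       /* prints 16896 */
        return 0;
    }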
+
+
+/**
+ * nand_update_bbt - [NAND Interface] update bad block table(s)
+ * @mtd: MTD device structure
+ * @offs: the offset of the newly marked block
+ *
+ * The function updates the bad block table(s)
+*/
+int nand_update_bbt (struct mtd_info *mtd, loff_t offs)
+{
+ struct nand_chip *this = mtd->priv;
+ int len, res = 0, writeops = 0;
+ int chip, chipsel;
+ uint8_t *buf;
+ struct nand_bbt_descr *td = this->bbt_td;
+ struct nand_bbt_descr *md = this->bbt_md;
+
+ if (!this->bbt || !td)
+ return -EINVAL;
+
+ /* Allocate a temporary buffer for one eraseblock incl. oob */
+ len = (1 << this->bbt_erase_shift);
+ len += (len >> this->page_shift) * mtd->oobsize;
+ buf = kmalloc (len, GFP_KERNEL);
+ if (!buf) {
+ printk (KERN_ERR "nand_update_bbt: Out of memory\n");
+ return -ENOMEM;
+ }
+
+ writeops = md != NULL ? 0x03 : 0x01;
+
+ /* Do we have a bbt per chip ? */
+ if (td->options & NAND_BBT_PERCHIP) {
+ chip = (int) (offs >> this->chip_shift);
+ chipsel = chip;
+ } else {
+ chip = 0;
+ chipsel = -1;
+ }
+
+ td->version[chip]++;
+ if (md)
+ md->version[chip]++;
+
+ /* Write the bad block table to the device ? */
+ if ((writeops & 0x01) && (td->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, td, md, chipsel);
+ if (res < 0)
+ goto out;
+ }
+ /* Write the mirror bad block table to the device ? */
+ if ((writeops & 0x02) && md && (md->options & NAND_BBT_WRITE)) {
+ res = write_bbt (mtd, buf, md, td, chipsel);
+ }
+
+out:
+ kfree (buf);
+ return res;
+}
+
+/* Define some generic bad / good block scan patterns which are used
+ * while scanning a device for factory marked good / bad blocks.
+ *
+ * The memory based patterns just check the bad block marker in the
+ * oob area of the first page of each block.
+ */
+static uint8_t scan_ff_pattern[] = { 0xff, 0xff };
+
+static struct nand_bbt_descr smallpage_memorybased = {
+ .options = 0,
+ .offs = 5,
+ .len = 1,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr largepage_memorybased = {
+ .options = 0,
+ .offs = 0,
+ .len = 2,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr smallpage_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 5,
+ .len = 1,
+ .pattern = scan_ff_pattern
+};
+
+static struct nand_bbt_descr largepage_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 0,
+ .len = 2,
+ .pattern = scan_ff_pattern
+};
+
+static uint8_t scan_agand_pattern[] = { 0x1C, 0x71, 0xC7, 0x1C, 0x71, 0xC7 };
+
+static struct nand_bbt_descr agand_flashbased = {
+ .options = NAND_BBT_SCANEMPTY | NAND_BBT_SCANALLPAGES,
+ .offs = 0x20,
+ .len = 6,
+ .pattern = scan_agand_pattern
+};
+
+/* Generic flash bbt descriptors
+*/
+static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' };
+static uint8_t mirror_pattern[] = {'1', 't', 'b', 'B' };
+
+static struct nand_bbt_descr bbt_main_descr = {
+ .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
+ | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
+ .offs = 8,
+ .len = 4,
+ .veroffs = 12,
+ .maxblocks = 4,
+ .pattern = bbt_pattern
+};
+
+static struct nand_bbt_descr bbt_mirror_descr = {
+ .options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
+ | NAND_BBT_2BIT | NAND_BBT_VERSION | NAND_BBT_PERCHIP,
+ .offs = 8,
+ .len = 4,
+ .veroffs = 12,
+ .maxblocks = 4,
+ .pattern = mirror_pattern
+};
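
A board driver that wants its flash based table placed in the first blocks of the device rather than the last ones could supply its own descriptor pair before nand_scan_bbt() runs. The sketch below is hypothetical: the pattern bytes are invented, and it assumes the declarations from <linux/mtd/nand.h>:

    /* hypothetical board-specific descriptor, assuming <linux/mtd/nand.h> */
    static uint8_t my_bbt_pattern[] = { 'M', 'b', 't', '0' };

    static struct nand_bbt_descr my_main_descr = {
        /* no NAND_BBT_LASTBLOCK, so the search runs from block 0 upward */
        .options = NAND_BBT_CREATE | NAND_BBT_WRITE
                   | NAND_BBT_2BIT | NAND_BBT_VERSION,
        .offs = 8,
        .len = 4,
        .veroffs = 12,
        .maxblocks = 4,
        .pattern = my_bbt_pattern
    };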
+
+/**
+ * nand_default_bbt - [NAND Interface] Select a default bad block table for the device
+ * @mtd: MTD device structure
+ *
+ * This function selects the default bad block table
+ * support for the device and calls the nand_scan_bbt function
+ *
+*/
+int nand_default_bbt (struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+
+ /* Default for AG-AND. We must use a flash based
+ * bad block table as the devices have factory marked
+ * _good_ blocks. Erasing those blocks leads to loss
+ * of the good / bad information, so we _must_ store
+ * this information in a good / bad table during
+ * startup
+ */
+ if (this->options & NAND_IS_AND) {
+ /* Use the default pattern descriptors */
+ if (!this->bbt_td) {
+ this->bbt_td = &bbt_main_descr;
+ this->bbt_md = &bbt_mirror_descr;
+ }
+ this->options |= NAND_USE_FLASH_BBT;
+ return nand_scan_bbt (mtd, &agand_flashbased);
+ }
+
+ /* Is a flash based bad block table requested ? */
+ if (this->options & NAND_USE_FLASH_BBT) {
+ /* Use the default pattern descriptors */
+ if (!this->bbt_td) {
+ this->bbt_td = &bbt_main_descr;
+ this->bbt_md = &bbt_mirror_descr;
+ }
+ if (mtd->oobblock > 512)
+ return nand_scan_bbt (mtd, &largepage_flashbased);
+ else
+ return nand_scan_bbt (mtd, &smallpage_flashbased);
+ } else {
+ this->bbt_td = NULL;
+ this->bbt_md = NULL;
+ if (mtd->oobblock > 512)
+ return nand_scan_bbt (mtd, &largepage_memorybased);
+ else
+ return nand_scan_bbt (mtd, &smallpage_memorybased);
+ }
+}
+
+/**
+ * nand_isbad_bbt - [NAND Interface] Check if a block is bad
+ * @mtd: MTD device structure
+ * @offs: offset in the device
+ * @allowbbt: allow access to bad block table region
+ *
+*/
+int nand_isbad_bbt (struct mtd_info *mtd, loff_t offs, int allowbbt)
+{
+ struct nand_chip *this = mtd->priv;
+ int block;
+ uint8_t res;
+
+ /* Get block number * 2 */
+ block = (int) (offs >> (this->bbt_erase_shift - 1));
+ res = (this->bbt[block >> 3] >> (block & 0x06)) & 0x03;
+
+ DEBUG (MTD_DEBUG_LEVEL2, "nand_isbad_bbt(): bbt info for offs 0x%08x: (block %d) 0x%02x\n",
+ (unsigned int)offs, block >> 1, res);
+
+ switch ((int)res) {
+ case 0x00: return 0;
+ case 0x01: return 1;
+ case 0x02: return allowbbt ? 0 : 1;
+ }
+ return 1;
+}
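
To see the lookup above in isolation: with 16KiB erase blocks (bbt_erase_shift = 14, an assumed geometry), offs >> 13 is the block number times two, which indexes straight into the 2-bit-per-block table. A stand-alone sketch:

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int bbt_erase_shift = 14;          /* 16KiB erase blocks (assumed) */
        uint8_t bbt[2] = { 0x04, 0x00 };   /* block 1 marked 0x01 = bad */
        long long offs = 1LL << 14;        /* an offset inside block 1 */

        int block = (int)(offs >> (bbt_erase_shift - 1));  /* block * 2 */
        int res = (bbt[block >> 3] >> (block & 0x06)) & 0x03;

        printf("block %d state 0x%02x\n", block >> 1, res);  /* 1, 0x01 */
        return 0;
    }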
+
+EXPORT_SYMBOL (nand_scan_bbt);
+EXPORT_SYMBOL (nand_default_bbt);
--- /dev/null
+/*
+ * drivers/mtd/nand/ppchameleonevb.c
+ *
+ * Copyright (C) 2003 DAVE Srl (info@wawnet.biz)
+ *
+ * Derived from drivers/mtd/nand/edb7312.c
+ *
+ *
+ * $Id: ppchameleonevb.c,v 1.6 2004/11/05 16:07:16 kalev Exp $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Overview:
+ * This is a device driver for the NAND flash devices found on the
+ * PPChameleon/PPChameleonEVB system.
+ * PPChameleon options (autodetected):
+ * - BA model: no NAND
+ * - ME model: 32MB (Samsung K9F5608U0B)
+ * - HI model: 128MB (Samsung K9F1G08UOM)
+ * PPChameleonEVB options:
+ * - 32MB (Samsung K9F5608U0B)
+ */
+
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <platforms/PPChameleonEVB.h>
+
+#undef USE_READY_BUSY_PIN
+#define USE_READY_BUSY_PIN
+/* see datasheets (tR) */
+#define NAND_BIG_DELAY_US 25
+#define NAND_SMALL_DELAY_US 10
+
+/* handy sizes */
+#define SZ_4M 0x00400000
+#define NAND_SMALL_SIZE 0x02000000
+#define NAND_MTD_NAME "ppchameleon-nand"
+#define NAND_EVB_MTD_NAME "ppchameleonevb-nand"
+
+/* GPIO pins used to drive NAND chip mounted on processor module */
+#define NAND_nCE_GPIO_PIN (0x80000000 >> 1)
+#define NAND_CLE_GPIO_PIN (0x80000000 >> 2)
+#define NAND_ALE_GPIO_PIN (0x80000000 >> 3)
+#define NAND_RB_GPIO_PIN (0x80000000 >> 4)
+/* GPIO pins used to drive NAND chip mounted on EVB */
+#define NAND_EVB_nCE_GPIO_PIN (0x80000000 >> 14)
+#define NAND_EVB_CLE_GPIO_PIN (0x80000000 >> 15)
+#define NAND_EVB_ALE_GPIO_PIN (0x80000000 >> 16)
+#define NAND_EVB_RB_GPIO_PIN (0x80000000 >> 31)
+
+/*
+ * MTD structure for PPChameleonEVB board
+ */
+static struct mtd_info *ppchameleon_mtd = NULL;
+static struct mtd_info *ppchameleonevb_mtd = NULL;
+
+/*
+ * Module stuff
+ */
+static unsigned long ppchameleon_fio_pbase = CFG_NAND0_PADDR;
+static unsigned long ppchameleonevb_fio_pbase = CFG_NAND1_PADDR;
+
+#ifdef MODULE
+module_param(ppchameleon_fio_pbase, ulong, 0);
+module_param(ppchameleonevb_fio_pbase, ulong, 0);
+#else
+__setup("ppchameleon_fio_pbase=",ppchameleon_fio_pbase);
+__setup("ppchameleonevb_fio_pbase=",ppchameleonevb_fio_pbase);
+#endif
+
+#ifdef CONFIG_MTD_PARTITIONS
+/*
+ * Define static partitions for flash devices
+ */
+static struct mtd_partition partition_info_hi[] = {
+ { .name = "PPChameleon HI Nand Flash",
+ .offset = 0,
+ .size = 128*1024*1024 }
+};
+
+static struct mtd_partition partition_info_me[] = {
+ { .name = "PPChameleon ME Nand Flash",
+ .offset = 0,
+ .size = 32*1024*1024 }
+};
+
+static struct mtd_partition partition_info_evb[] = {
+ { .name = "PPChameleonEVB Nand Flash",
+ .offset = 0,
+ .size = 32*1024*1024 }
+};
+
+#define NUM_PARTITIONS 1
+
+extern int parse_cmdline_partitions(struct mtd_info *master,
+ struct mtd_partition **pparts,
+ const char *mtd_id);
+#endif
+
+
+/*
+ * hardware specific access to control-lines
+ */
+static void ppchameleon_hwcontrol(struct mtd_info *mtdinfo, int cmd)
+{
+ switch(cmd) {
+
+ case NAND_CTL_SETCLE:
+ MACRO_NAND_CTL_SETCLE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRCLE:
+ MACRO_NAND_CTL_CLRCLE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_SETALE:
+ MACRO_NAND_CTL_SETALE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRALE:
+ MACRO_NAND_CTL_CLRALE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_SETNCE:
+ MACRO_NAND_ENABLE_CE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ case NAND_CTL_CLRNCE:
+ MACRO_NAND_DISABLE_CE((unsigned long)CFG_NAND0_PADDR);
+ break;
+ }
+}
+
+static void ppchameleonevb_hwcontrol(struct mtd_info *mtdinfo, int cmd)
+{
+ switch(cmd) {
+
+ case NAND_CTL_SETCLE:
+ MACRO_NAND_CTL_SETCLE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRCLE:
+ MACRO_NAND_CTL_CLRCLE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_SETALE:
+ MACRO_NAND_CTL_SETALE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRALE:
+ MACRO_NAND_CTL_CLRALE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_SETNCE:
+ MACRO_NAND_ENABLE_CE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ case NAND_CTL_CLRNCE:
+ MACRO_NAND_DISABLE_CE((unsigned long)CFG_NAND1_PADDR);
+ break;
+ }
+}
+
+#ifdef USE_READY_BUSY_PIN
+/*
+ * read device ready pin
+ */
+static int ppchameleon_device_ready(struct mtd_info *minfo)
+{
+ if (in_be32((volatile unsigned*)GPIO0_IR) & NAND_RB_GPIO_PIN)
+ return 1;
+ return 0;
+}
+
+static int ppchameleonevb_device_ready(struct mtd_info *minfo)
+{
+ if (in_be32((volatile unsigned*)GPIO0_IR) & NAND_EVB_RB_GPIO_PIN)
+ return 1;
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_MTD_PARTITIONS
+const char *part_probes[] = { "cmdlinepart", NULL };
+const char *part_probes_evb[] = { "cmdlinepart", NULL };
+#endif
+
+/*
+ * Main initialization routine
+ */
+static int __init ppchameleonevb_init (void)
+{
+ struct nand_chip *this;
+ const char *part_type = 0;
+ int mtd_parts_nb = 0;
+ struct mtd_partition *mtd_parts = 0;
+ void __iomem *ppchameleon_fio_base;
+ void __iomem *ppchameleonevb_fio_base;
+
+
+ /*********************************
+ * Processor module NAND (if any) *
+ *********************************/
+ /* Allocate memory for MTD device structure and private data */
+ ppchameleon_mtd = kmalloc(sizeof(struct mtd_info) +
+ sizeof(struct nand_chip), GFP_KERNEL);
+ if (!ppchameleon_mtd) {
+ printk("Unable to allocate PPChameleon NAND MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* map physical address */
+ ppchameleon_fio_base = ioremap(ppchameleon_fio_pbase, SZ_4M);
+ if(!ppchameleon_fio_base) {
+ printk("ioremap PPChameleon NAND flash failed\n");
+ kfree(ppchameleon_mtd);
+ return -EIO;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&ppchameleon_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) ppchameleon_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ ppchameleon_mtd->priv = this;
+
+ /* Initialize GPIOs */
+ /* Pin mapping for NAND chip */
+ /*
+ CE GPIO_01
+ CLE GPIO_02
+ ALE GPIO_03
+ R/B GPIO_04
+ */
+ /* output select */
+ out_be32((volatile unsigned*)GPIO0_OSRH, in_be32((volatile unsigned*)GPIO0_OSRH) & 0xC0FFFFFF);
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xC0FFFFFF);
+ /* enable output driver */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) | NAND_nCE_GPIO_PIN | NAND_CLE_GPIO_PIN | NAND_ALE_GPIO_PIN);
+#ifdef USE_READY_BUSY_PIN
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xFF3FFFFF);
+ /* high-impedance */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) & (~NAND_RB_GPIO_PIN));
+ /* input select */
+ out_be32((volatile unsigned*)GPIO0_ISR1H, (in_be32((volatile unsigned*)GPIO0_ISR1H) & 0xFF3FFFFF) | 0x00400000);
+#endif
+
+ /* insert callbacks */
+ this->IO_ADDR_R = ppchameleon_fio_base;
+ this->IO_ADDR_W = ppchameleon_fio_base;
+ this->hwcontrol = ppchameleon_hwcontrol;
+#ifdef USE_READY_BUSY_PIN
+ this->dev_ready = ppchameleon_device_ready;
+#endif
+ this->chip_delay = NAND_BIG_DELAY_US;
+ /* ECC mode */
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Scan to find existence of the device (it may not be mounted) */
+ if (nand_scan (ppchameleon_mtd, 1)) {
+ iounmap((void *)ppchameleon_fio_base);
+ kfree (ppchameleon_mtd);
+ ppchameleon_mtd = NULL; /* so the cleanup path can skip it */
+ goto nand_evb_init;
+ }
+
+#ifndef USE_READY_BUSY_PIN
+ /* Adjust delay if necessary */
+ if (ppchameleon_mtd->size == NAND_SMALL_SIZE)
+ this->chip_delay = NAND_SMALL_DELAY_US;
+#endif
+
+#ifdef CONFIG_MTD_PARTITIONS
+ ppchameleon_mtd->name = "ppchameleon-nand";
+ mtd_parts_nb = parse_mtd_partitions(ppchameleon_mtd, part_probes, &mtd_parts, 0);
+ if (mtd_parts_nb > 0)
+ part_type = "command line";
+ else
+ mtd_parts_nb = 0;
+#endif
+ if (mtd_parts_nb == 0)
+ {
+ if (ppchameleon_mtd->size == NAND_SMALL_SIZE)
+ mtd_parts = partition_info_me;
+ else
+ mtd_parts = partition_info_hi;
+ mtd_parts_nb = NUM_PARTITIONS;
+ part_type = "static";
+ }
+
+ /* Register the partitions */
+ printk(KERN_NOTICE "Using %s partition definition\n", part_type);
+ add_mtd_partitions(ppchameleon_mtd, mtd_parts, mtd_parts_nb);
+
+nand_evb_init:
+ /****************************
+ * EVB NAND (always present) *
+ ****************************/
+ /* Allocate memory for MTD device structure and private data */
+ ppchameleonevb_mtd = kmalloc(sizeof(struct mtd_info) +
+ sizeof(struct nand_chip), GFP_KERNEL);
+ if (!ppchameleonevb_mtd) {
+ printk("Unable to allocate PPChameleonEVB NAND MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* map physical address */
+ ppchameleonevb_fio_base = ioremap(ppchameleonevb_fio_pbase, SZ_4M);
+ if(!ppchameleonevb_fio_base) {
+ printk("ioremap PPChameleonEVB NAND flash failed\n");
+ kfree(ppchameleonevb_mtd);
+ return -EIO;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&ppchameleonevb_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) ppchameleonevb_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ ppchameleonevb_mtd->priv = this;
+
+ /* Initialize GPIOs */
+ /* Pin mapping for NAND chip */
+ /*
+ CE GPIO_14
+ CLE GPIO_15
+ ALE GPIO_16
+ R/B GPIO_31
+ */
+ /* output select */
+ out_be32((volatile unsigned*)GPIO0_OSRH, in_be32((volatile unsigned*)GPIO0_OSRH) & 0xFFFFFFF0);
+ out_be32((volatile unsigned*)GPIO0_OSRL, in_be32((volatile unsigned*)GPIO0_OSRL) & 0x3FFFFFFF);
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRH, in_be32((volatile unsigned*)GPIO0_TSRH) & 0xFFFFFFF0);
+ out_be32((volatile unsigned*)GPIO0_TSRL, in_be32((volatile unsigned*)GPIO0_TSRL) & 0x3FFFFFFF);
+ /* enable output driver */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) | NAND_EVB_nCE_GPIO_PIN |
+ NAND_EVB_CLE_GPIO_PIN | NAND_EVB_ALE_GPIO_PIN);
+#ifdef USE_READY_BUSY_PIN
+ /* three-state select */
+ out_be32((volatile unsigned*)GPIO0_TSRL, in_be32((volatile unsigned*)GPIO0_TSRL) & 0xFFFFFFFC);
+ /* high-impedance */
+ out_be32((volatile unsigned*)GPIO0_TCR, in_be32((volatile unsigned*)GPIO0_TCR) & (~NAND_EVB_RB_GPIO_PIN));
+ /* input select */
+ out_be32((volatile unsigned*)GPIO0_ISR1L, (in_be32((volatile unsigned*)GPIO0_ISR1L) & 0xFFFFFFFC) | 0x00000001);
+#endif
+
+ /* insert callbacks */
+ this->IO_ADDR_R = ppchameleonevb_fio_base;
+ this->IO_ADDR_W = ppchameleonevb_fio_base;
+ this->hwcontrol = ppchameleonevb_hwcontrol;
+#ifdef USE_READY_BUSY_PIN
+ this->dev_ready = ppchameleonevb_device_ready;
+#endif
+ this->chip_delay = NAND_SMALL_DELAY_US;
+
+ /* ECC mode */
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (ppchameleonevb_mtd, 1)) {
+ iounmap((void *)ppchameleonevb_fio_base);
+ kfree (ppchameleonevb_mtd);
+ return -ENXIO;
+ }
+
+#ifdef CONFIG_MTD_PARTITIONS
+ ppchameleonevb_mtd->name = NAND_EVB_MTD_NAME;
+ mtd_parts_nb = parse_mtd_partitions(ppchameleonevb_mtd, part_probes_evb, &mtd_parts, 0);
+ if (mtd_parts_nb > 0)
+ part_type = "command line";
+ else
+ mtd_parts_nb = 0;
+#endif
+ if (mtd_parts_nb == 0)
+ {
+ mtd_parts = partition_info_evb;
+ mtd_parts_nb = NUM_PARTITIONS;
+ part_type = "static";
+ }
+
+ /* Register the partitions */
+ printk(KERN_NOTICE "Using %s partition definition\n", part_type);
+ add_mtd_partitions(ppchameleonevb_mtd, mtd_parts, mtd_parts_nb);
+
+ /* Return happy */
+ return 0;
+}
+module_init(ppchameleonevb_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit ppchameleonevb_cleanup (void)
+{
+ struct nand_chip *this;
+
+ /* Release resources, unregister device(s) */
+ if (ppchameleon_mtd)
+ nand_release (ppchameleon_mtd);
+ nand_release (ppchameleonevb_mtd);
+
+ /* Release iomaps */
+ if (ppchameleon_mtd) {
+ this = (struct nand_chip *) &ppchameleon_mtd[1];
+ iounmap((void *) this->IO_ADDR_R);
+ }
+ this = (struct nand_chip *) &ppchameleonevb_mtd[1];
+ iounmap((void *) this->IO_ADDR_R);
+
+ /* Free the MTD device structure */
+ kfree (ppchameleon_mtd);
+ kfree (ppchameleonevb_mtd);
+}
+module_exit(ppchameleonevb_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("DAVE Srl <support-ppchameleon@dave-tech.it>");
+MODULE_DESCRIPTION("MTD map driver for DAVE Srl PPChameleonEVB board");
--- /dev/null
+/*
+ * drivers/mtd/nand/toto.c
+ *
+ * Copyright (c) 2003 Texas Instruments
+ *
+ * Derived from drivers/mtd/autcpu12.c
+ *
+ * Copyright (c) 2002 Thomas Gleixner <tgxl@linutronix.de>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device found on the
+ * TI fido board. It supports 32MiB and 64MiB cards
+ *
+ * $Id: toto.c,v 1.4 2004/10/05 13:50:20 gleixner Exp $
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <asm/arch/hardware.h>
+#include <asm/sizes.h>
+#include <asm/arch/toto.h>
+#include <asm/arch-omap1510/hardware.h>
+#include <asm/arch/gpio.h>
+
+/*
+ * MTD structure for TOTO board
+ */
+static struct mtd_info *toto_mtd = NULL;
+
+static unsigned long toto_io_base = OMAP_FLASH_1_BASE;
+
+#define CONFIG_NAND_WORKAROUND 1
+
+#define NAND_NCE 0x4000
+#define NAND_CLE 0x1000
+#define NAND_ALE 0x0002
+#define NAND_MASK (NAND_CLE | NAND_ALE | NAND_NCE)
+
+#define T_NAND_CTL_CLRALE(iob) gpiosetout(NAND_ALE, 0)
+#define T_NAND_CTL_SETALE(iob) gpiosetout(NAND_ALE, NAND_ALE)
+#ifdef CONFIG_NAND_WORKAROUND /* "some" dev boards busted, blue wired to rts2 :( */
+#define T_NAND_CTL_CLRCLE(iob) gpiosetout(NAND_CLE, 0); rts2setout(2, 2)
+#define T_NAND_CTL_SETCLE(iob) gpiosetout(NAND_CLE, NAND_CLE); rts2setout(2, 0)
+#else
+#define T_NAND_CTL_CLRCLE(iob) gpiosetout(NAND_CLE, 0)
+#define T_NAND_CTL_SETCLE(iob) gpiosetout(NAND_CLE, NAND_CLE)
+#endif
+#define T_NAND_CTL_SETNCE(iob) gpiosetout(NAND_NCE, 0)
+#define T_NAND_CTL_CLRNCE(iob) gpiosetout(NAND_NCE, NAND_NCE)
+
+/*
+ * Define partitions for flash devices
+ */
+
+static struct mtd_partition partition_info64M[] = {
+ { .name = "toto kernel partition 1",
+ .offset = 0,
+ .size = 2 * SZ_1M },
+ { .name = "toto file sys partition 2",
+ .offset = 2 * SZ_1M,
+ .size = 14 * SZ_1M },
+ { .name = "toto user partition 3",
+ .offset = 16 * SZ_1M,
+ .size = 16 * SZ_1M },
+ { .name = "toto devboard extra partition 4",
+ .offset = 32 * SZ_1M,
+ .size = 32 * SZ_1M },
+};
+
+static struct mtd_partition partition_info32M[] = {
+ { .name = "toto kernel partition 1",
+ .offset = 0,
+ .size = 2 * SZ_1M },
+ { .name = "toto file sys partition 2",
+ .offset = 2 * SZ_1M,
+ .size = 14 * SZ_1M },
+ { .name = "toto user partition 3",
+ .offset = 16 * SZ_1M,
+ .size = 16 * SZ_1M },
+};
+
+#define NUM_PARTITIONS32M 3
+#define NUM_PARTITIONS64M 4
+/*
+ * hardware specific access to control-lines
+*/
+
+static void toto_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+
+ udelay(1); /* hopefully enough time for the preceding write to complete */
+ switch(cmd){
+
+ case NAND_CTL_SETCLE: T_NAND_CTL_SETCLE(cmd); break;
+ case NAND_CTL_CLRCLE: T_NAND_CTL_CLRCLE(cmd); break;
+
+ case NAND_CTL_SETALE: T_NAND_CTL_SETALE(cmd); break;
+ case NAND_CTL_CLRALE: T_NAND_CTL_CLRALE(cmd); break;
+
+ case NAND_CTL_SETNCE: T_NAND_CTL_SETNCE(cmd); break;
+ case NAND_CTL_CLRNCE: T_NAND_CTL_CLRNCE(cmd); break;
+ }
+ udelay(1); /* allow time for the gpio state to take effect after the memory write */
+}
+
+/*
+ * Main initialization routine
+ */
+int __init toto_init (void)
+{
+ struct nand_chip *this;
+ int err = 0;
+
+ /* Allocate memory for MTD device structure and private data */
+ toto_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!toto_mtd) {
+ printk (KERN_WARNING "Unable to allocate toto NAND MTD device structure.\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&toto_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) toto_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ toto_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = toto_io_base;
+ this->IO_ADDR_W = toto_io_base;
+ this->hwcontrol = toto_hwcontrol;
+ this->dev_ready = NULL;
+ /* 30 us command delay time */
+ this->chip_delay = 30;
+ this->eccmode = NAND_ECC_SOFT;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (toto_mtd, 1)) {
+ err = -ENXIO;
+ goto out_mtd;
+ }
+
+ /* Register the partitions */
+ switch(toto_mtd->size){
+ case SZ_64M: add_mtd_partitions(toto_mtd, partition_info64M, NUM_PARTITIONS64M); break;
+ case SZ_32M: add_mtd_partitions(toto_mtd, partition_info32M, NUM_PARTITIONS32M); break;
+ default: {
+ printk (KERN_WARNING "Unsupported Nand device\n");
+ err = -ENXIO;
+ goto out_buf;
+ }
+ }
+
+ gpioreserve(NAND_MASK); /* claim our gpios */
+ archflashwp(0,0); /* open up flash for writing */
+
+ goto out;
+
+out_buf:
+ kfree (this->data_buf);
+out_mtd:
+ kfree (toto_mtd);
+out:
+ return err;
+}
+
+module_init(toto_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit toto_cleanup (void)
+{
+ /* Release resources, unregister device */
+ nand_release (toto_mtd);
+
+ /* Free the MTD device structure */
+ kfree (toto_mtd);
+
+ /* stop flash writes */
+ archflashwp(0,1);
+
+ /* release gpios to system */
+ gpiorelease(NAND_MASK);
+}
+module_exit(toto_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Richard Woodruff <r-woodruff2@ti.com>");
+MODULE_DESCRIPTION("Glue layer for NAND flash on toto board");
--- /dev/null
+/*
+ * drivers/mtd/tx4925ndfmc.c
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device found on the
+ * Toshiba RBTX4925 reference board, which is a SmartMedia card. It supports
+ * 16MiB, 32MiB and 64MiB cards.
+ *
+ * Author: MontaVista Software, Inc. source@mvista.com
+ *
+ * Derived from drivers/mtd/autcpu12.c
+ * Copyright (c) 2001 Thomas Gleixner (gleixner@autronix.de)
+ *
+ * $Id: tx4925ndfmc.c,v 1.5 2004/10/05 13:50:20 gleixner Exp $
+ *
+ * Copyright (C) 2001 Toshiba Corporation
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under
+ * the terms of the GNU General Public License version 2. This program
+ * is licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ *
+ */
+
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/partitions.h>
+#include <linux/delay.h>
+#include <asm/io.h>
+#include <asm/tx4925/tx4925_nand.h>
+
+extern struct nand_oobinfo jffs2_oobinfo;
+
+/*
+ * MTD structure for RBTX4925 board
+ */
+static struct mtd_info *tx4925ndfmc_mtd = NULL;
+
+/*
+ * Define partitions for flash devices
+ */
+
+static struct mtd_partition partition_info16k[] = {
+ { .name = "RBTX4925 flash partition 1",
+ .offset = 0,
+ .size = 8 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 8 * 0x00100000,
+ .size = 8 * 0x00100000 },
+};
+
+static struct mtd_partition partition_info32k[] = {
+ { .name = "RBTX4925 flash partition 1",
+ .offset = 0,
+ .size = 8 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 8 * 0x00100000,
+ .size = 24 * 0x00100000 },
+};
+
+static struct mtd_partition partition_info64k[] = {
+ { .name = "User FS",
+ .offset = 0,
+ .size = 16 * 0x00100000 },
+ { .name = "RBTX4925 flash partition 2",
+ .offset = 16 * 0x00100000,
+ .size = 48 * 0x00100000},
+};
+
+static struct mtd_partition partition_info128k[] = {
+ { .name = "Skip bad section",
+ .offset = 0,
+ .size = 16 * 0x00100000 },
+ { .name = "User FS",
+ .offset = 16 * 0x00100000,
+ .size = 112 * 0x00100000 },
+};
+#define NUM_PARTITIONS16K 2
+#define NUM_PARTITIONS32K 2
+#define NUM_PARTITIONS64K 2
+#define NUM_PARTITIONS128K 2
+
+/*
+ * hardware specific access to control-lines
+*/
+static void tx4925ndfmc_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+
+ switch(cmd){
+
+ case NAND_CTL_SETCLE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ALE;
+ break;
+ case NAND_CTL_SETNCE:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_CE;
+ break;
+ case NAND_CTL_SETWP:
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_WE;
+ break;
+ case NAND_CTL_CLRWP:
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_WE;
+ break;
+ }
+}
+
+/*
+* read device ready pin
+*/
+static int tx4925ndfmc_device_ready(struct mtd_info *mtd)
+{
+ int ready;
+ ready = (tx4925_ndfmcptr->sr & TX4925_NDSFR_BUSY) ? 0 : 1;
+ return ready;
+}
+void tx4925ndfmc_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ /* reset first */
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_ENAB;
+}
+static void tx4925ndfmc_disable_ecc(void)
+{
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+}
+static void tx4925ndfmc_enable_read_ecc(void)
+{
+ tx4925_ndfmcptr->mcr &= ~TX4925_NDFMCR_ECC_CNTL_MASK;
+ tx4925_ndfmcptr->mcr |= TX4925_NDFMCR_ECC_CNTL_READ;
+}
+void tx4925ndfmc_readecc(struct mtd_info *mtd, const u_char *dat, u_char *ecc_code)
+{
+ int i;
+ u_char *ecc = ecc_code;
+
+ tx4925ndfmc_enable_read_ecc();
+ for (i = 0; i < 6; i++, ecc++)
+ *ecc = tx4925_read_nfmc(&(tx4925_ndfmcptr->dtr));
+ tx4925ndfmc_disable_ecc();
+}
+void tx4925ndfmc_device_setup(void)
+{
+
+ *(unsigned char *)0xbb005000 &= ~0x08;
+
+ /* reset NDFMC */
+ tx4925_ndfmcptr->rstr |= TX4925_NDFRSTR_RST;
+ while (tx4925_ndfmcptr->rstr & TX4925_NDFRSTR_RST);
+
+ /* setup BusSeparete, Hold Time, Strobe Pulse Width */
+ tx4925_ndfmcptr->mcr = TX4925_BSPRT ? TX4925_NDFMCR_BSPRT : 0;
+ tx4925_ndfmcptr->spr = TX4925_HOLD << 4 | TX4925_SPW;
+}
+static u_char tx4925ndfmc_nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return tx4925_read_nfmc(this->IO_ADDR_R);
+}
+
+static void tx4925ndfmc_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ tx4925_write_nfmc(byte, this->IO_ADDR_W);
+}
+
+static void tx4925ndfmc_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ tx4925_write_nfmc(buf[i], this->IO_ADDR_W);
+}
+
+static void tx4925ndfmc_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = tx4925_read_nfmc(this->IO_ADDR_R);
+}
+
+static int tx4925ndfmc_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != tx4925_read_nfmc(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * Send command to NAND device
+ */
+static void tx4925ndfmc_nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1)
+ this->write_byte(mtd, column);
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (mtd->size & 0x0c000000)
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ /* Turn off WE */
+ this->hwcontrol (mtd, NAND_CTL_CLRWP);
+ return;
+
+ case NAND_CMD_SEQIN:
+ /* Turn on WE */
+ this->hwcontrol (mtd, NAND_CTL_SETWP);
+ return;
+
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
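
The address phase above shifts out one column byte and two or three row bytes depending on the device size. A stand-alone sketch of the bytes generated for an assumed 64MiB device and an arbitrary example page:

    #include <stdio.h>

    int main(void)
    {
        unsigned long size = 0x04000000;  /* 64MiB device (assumed) */
        int column = 0, page_addr = 0x12345;

        printf("column: 0x%02x\n", column & 0xff);
        printf("row 1:  0x%02x\n", page_addr & 0xff);
        printf("row 2:  0x%02x\n", (page_addr >> 8) & 0xff);
        if (size & 0x0c000000)  /* extra cycle for larger devices */
            printf("row 3:  0x%02x\n", (page_addr >> 16) & 0x0f);
        return 0;
    }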
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+extern int parse_cmdline_partitions(struct mtd_info *master, struct mtd_partition **pparts, char *);
+#endif
+
+/*
+ * Main initialization routine
+ */
+extern int nand_correct_data(struct mtd_info *mtd, u_char *dat, u_char *read_ecc, u_char *calc_ecc);
+int __init tx4925ndfmc_init (void)
+{
+ struct nand_chip *this;
+ int err = 0;
+
+ /* Allocate memory for MTD device structure and private data */
+ tx4925ndfmc_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!tx4925ndfmc_mtd) {
+ printk ("Unable to allocate RBTX4925 NAND MTD device structure.\n");
+ err = -ENOMEM;
+ goto out;
+ }
+
+ tx4925ndfmc_device_setup();
+
+ /* I/O is indirect via a register, so there is no need to ioremap an address */
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&tx4925ndfmc_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) tx4925ndfmc_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ tx4925ndfmc_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = (void __iomem *)&(tx4925_ndfmcptr->dtr);
+ this->IO_ADDR_W = (void __iomem *)&(tx4925_ndfmcptr->dtr);
+ this->hwcontrol = tx4925ndfmc_hwcontrol;
+ this->enable_hwecc = tx4925ndfmc_enable_hwecc;
+ this->calculate_ecc = tx4925ndfmc_readecc;
+ this->correct_data = nand_correct_data;
+ this->eccmode = NAND_ECC_HW6_512;
+ this->dev_ready = tx4925ndfmc_device_ready;
+ /* 20 us command delay time */
+ this->chip_delay = 20;
+ this->read_byte = tx4925ndfmc_nand_read_byte;
+ this->write_byte = tx4925ndfmc_nand_write_byte;
+ this->cmdfunc = tx4925ndfmc_nand_command;
+ this->write_buf = tx4925ndfmc_nand_write_buf;
+ this->read_buf = tx4925ndfmc_nand_read_buf;
+ this->verify_buf = tx4925ndfmc_nand_verify_buf;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (tx4925ndfmc_mtd, 1)) {
+ err = -ENXIO;
+ goto out_ior;
+ }
+
+ /* Register the partitions */
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+ {
+ int mtd_parts_nb = 0;
+ struct mtd_partition *mtd_parts = 0;
+ mtd_parts_nb = parse_cmdline_partitions(tx4925ndfmc_mtd, &mtd_parts, "tx4925ndfmc");
+ if (mtd_parts_nb > 0)
+ add_mtd_partitions(tx4925ndfmc_mtd, mtd_parts, mtd_parts_nb);
+ else
+ add_mtd_device(tx4925ndfmc_mtd);
+ }
+#else /* ifdef CONFIG_MTD_CMDLINE_PARTS */
+ switch(tx4925ndfmc_mtd->size){
+ case 0x01000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info16k, NUM_PARTITIONS16K); break;
+ case 0x02000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info32k, NUM_PARTITIONS32K); break;
+ case 0x04000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info64k, NUM_PARTITIONS64K); break;
+ case 0x08000000: add_mtd_partitions(tx4925ndfmc_mtd, partition_info128k, NUM_PARTITIONS128K); break;
+ default: {
+ printk ("Unsupported SmartMedia device\n");
+ err = -ENXIO;
+ goto out_ior;
+ }
+ }
+#endif /* ifdef CONFIG_MTD_CMDLINE_PARTS */
+ goto out;
+
+out_ior:
+out:
+ return err;
+}
+
+module_init(tx4925ndfmc_init);
+
+/*
+ * Clean up routine
+ */
+#ifdef MODULE
+static void __exit tx4925ndfmc_cleanup (void)
+{
+ /* Release resources, unregister device */
+ nand_release (tx4925ndfmc_mtd);
+
+ /* Free the MTD device structure */
+ kfree (tx4925ndfmc_mtd);
+}
+module_exit(tx4925ndfmc_cleanup);
+#endif
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alice Hennessy <ahennessy@mvista.com>");
+MODULE_DESCRIPTION("Glue layer for SmartMediaCard on Toshiba RBTX4925");
--- /dev/null
+/*
+ * drivers/mtd/nand/tx4938ndfmc.c
+ *
+ * Overview:
+ * This is a device driver for the NAND flash device connected to
+ * TX4938 internal NAND Memory Controller.
+ * The TX4938 NDFMC is almost the same as the TX4925 NDFMC, but its registers are 64 bits wide.
+ *
+ * Author: source@mvista.com
+ *
+ * Based on spia.c by Steven J. Hill
+ *
+ * $Id: tx4938ndfmc.c,v 1.4 2004/10/05 13:50:20 gleixner Exp $
+ *
+ * Copyright (C) 2000-2001 Toshiba Corporation
+ *
+ * 2003 (c) MontaVista Software, Inc. This file is licensed under the
+ * terms of the GNU General Public License version 2. This program is
+ * licensed "as is" without any warranty of any kind, whether express
+ * or implied.
+ */
+#include <linux/config.h>
+#include <linux/slab.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mtd/mtd.h>
+#include <linux/mtd/nand.h>
+#include <linux/mtd/nand_ecc.h>
+#include <linux/mtd/partitions.h>
+#include <asm/io.h>
+#include <asm/bootinfo.h>
+#include <linux/delay.h>
+#include <asm/tx4938/rbtx4938.h>
+
+extern struct nand_oobinfo jffs2_oobinfo;
+
+/*
+ * MTD structure for TX4938 NDFMC
+ */
+static struct mtd_info *tx4938ndfmc_mtd;
+
+#define flush_wb() ((void)tx4938_ndfmcptr->mcr)
+
+/*
+ * Define partitions for flash device
+ */
+
+#define NUM_PARTITIONS 3
+#define NUMBER_OF_CIS_BLOCKS 24
+#define SIZE_OF_BLOCK 0x00004000
+#define NUMBER_OF_BLOCK_PER_ZONE 1024
+#define SIZE_OF_ZONE (NUMBER_OF_BLOCK_PER_ZONE * SIZE_OF_BLOCK)
+#ifndef CONFIG_MTD_CMDLINE_PARTS
+/*
+ * You can use the following sample of MTD partitions
+ * on the NAND Flash Memory 32MB or more.
+ *
+ * The following figure shows the image of the sample partition on
+ * the 32MB NAND Flash Memory.
+ *
+ * Block No.
+ * 0 +-----------------------------+ ------
+ * | CIS | ^
+ * 24 +-----------------------------+ |
+ * | kernel image | | Zone 0
+ * | | |
+ * +-----------------------------+ |
+ * 1023 | unused area | v
+ * +-----------------------------+ ------
+ * 1024 | JFFS2 | ^
+ * | | |
+ * | | | Zone 1
+ * | | |
+ * | | |
+ * | | v
+ * 2047 +-----------------------------+ ------
+ *
+ */
+static struct mtd_partition partition_info[NUM_PARTITIONS] = {
+ {
+ .name = "RBTX4938 CIS Area",
+ .offset = 0,
+ .size = (NUMBER_OF_CIS_BLOCKS * SIZE_OF_BLOCK),
+ .mask_flags = MTD_WRITEABLE /* This partition is NOT writable */
+ },
+ {
+ .name = "RBTX4938 kernel image",
+ .offset = MTDPART_OFS_APPEND,
+ .size = 8 * 0x00100000, /* 8MB (Depends on size of kernel image) */
+ .mask_flags = MTD_WRITEABLE /* This partition is NOT writable */
+ },
+ {
+ .name = "Root FS (JFFS2)",
+ .offset = (0 + SIZE_OF_ZONE), /* start address of next zone */
+ .size = MTDPART_SIZ_FULL
+ },
+};
+#endif
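
The zone arithmetic behind the sample layout: 1024 blocks of 16KiB make one 16MiB zone, so the JFFS2 partition starts at block 1024 of the 32MB device. As a quick stand-alone check:

    #include <stdio.h>

    int main(void)
    {
        long block = 0x4000;        /* SIZE_OF_BLOCK, 16KiB */
        long zone = 1024 * block;   /* SIZE_OF_ZONE */
        printf("zone size: %ld MiB\n", zone >> 20);  /* prints 16 */
        return 0;
    }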
+
+static void tx4938ndfmc_hwcontrol(struct mtd_info *mtd, int cmd)
+{
+ switch (cmd) {
+ case NAND_CTL_SETCLE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_CLE;
+ break;
+ case NAND_CTL_CLRCLE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_CLE;
+ break;
+ case NAND_CTL_SETALE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_ALE;
+ break;
+ case NAND_CTL_CLRALE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_ALE;
+ break;
+ /* TX4938_NDFMCR_CE bit is 0:high 1:low */
+ case NAND_CTL_SETNCE:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_CE;
+ break;
+ case NAND_CTL_CLRNCE:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_CE;
+ break;
+ case NAND_CTL_SETWP:
+ tx4938_ndfmcptr->mcr |= TX4938_NDFMCR_WE;
+ break;
+ case NAND_CTL_CLRWP:
+ tx4938_ndfmcptr->mcr &= ~TX4938_NDFMCR_WE;
+ break;
+ }
+}
+static int tx4938ndfmc_dev_ready(struct mtd_info *mtd)
+{
+ flush_wb();
+ return !(tx4938_ndfmcptr->sr & TX4938_NDFSR_BUSY);
+}
+static void tx4938ndfmc_calculate_ecc(struct mtd_info *mtd, const u_char *dat, u_char *ecc_code)
+{
+ u32 mcr = tx4938_ndfmcptr->mcr;
+ mcr &= ~TX4938_NDFMCR_ECC_ALL;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_READ;
+ ecc_code[1] = tx4938_ndfmcptr->dtr;
+ ecc_code[0] = tx4938_ndfmcptr->dtr;
+ ecc_code[2] = tx4938_ndfmcptr->dtr;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+}
+static void tx4938ndfmc_enable_hwecc(struct mtd_info *mtd, int mode)
+{
+ u32 mcr = tx4938_ndfmcptr->mcr;
+ mcr &= ~TX4938_NDFMCR_ECC_ALL;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_RESET;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_OFF;
+ tx4938_ndfmcptr->mcr = mcr | TX4938_NDFMCR_ECC_ON;
+}
+
+static u_char tx4938ndfmc_nand_read_byte(struct mtd_info *mtd)
+{
+ struct nand_chip *this = mtd->priv;
+ return tx4938_read_nfmc(this->IO_ADDR_R);
+}
+
+static void tx4938ndfmc_nand_write_byte(struct mtd_info *mtd, u_char byte)
+{
+ struct nand_chip *this = mtd->priv;
+ tx4938_write_nfmc(byte, this->IO_ADDR_W);
+}
+
+static void tx4938ndfmc_nand_write_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ tx4938_write_nfmc(buf[i], this->IO_ADDR_W);
+}
+
+static void tx4938ndfmc_nand_read_buf(struct mtd_info *mtd, u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ buf[i] = tx4938_read_nfmc(this->IO_ADDR_R);
+}
+
+static int tx4938ndfmc_nand_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
+{
+ int i;
+ struct nand_chip *this = mtd->priv;
+
+ for (i=0; i<len; i++)
+ if (buf[i] != tx4938_read_nfmc(this->IO_ADDR_R))
+ return -EFAULT;
+
+ return 0;
+}
+
+/*
+ * Send command to NAND device
+ */
+static void tx4938ndfmc_nand_command (struct mtd_info *mtd, unsigned command, int column, int page_addr)
+{
+ register struct nand_chip *this = mtd->priv;
+
+ /* Begin command latch cycle */
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ /*
+ * Write out the command to the device.
+ */
+ if (command == NAND_CMD_SEQIN) {
+ int readcmd;
+
+ if (column >= mtd->oobblock) {
+ /* OOB area */
+ column -= mtd->oobblock;
+ readcmd = NAND_CMD_READOOB;
+ } else if (column < 256) {
+ /* First 256 bytes --> READ0 */
+ readcmd = NAND_CMD_READ0;
+ } else {
+ column -= 256;
+ readcmd = NAND_CMD_READ1;
+ }
+ this->write_byte(mtd, readcmd);
+ }
+ this->write_byte(mtd, command);
+
+ /* Set ALE and clear CLE to start address cycle */
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+
+ if (column != -1 || page_addr != -1) {
+ this->hwcontrol(mtd, NAND_CTL_SETALE);
+
+ /* Serially input address */
+ if (column != -1)
+ this->write_byte(mtd, column);
+ if (page_addr != -1) {
+ this->write_byte(mtd, (unsigned char) (page_addr & 0xff));
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 8) & 0xff));
+ /* One more address cycle for higher density devices */
+ if (mtd->size & 0x0c000000)
+ this->write_byte(mtd, (unsigned char) ((page_addr >> 16) & 0x0f));
+ }
+ /* Latch in address */
+ this->hwcontrol(mtd, NAND_CTL_CLRALE);
+ }
+
+ /*
+ * program and erase have their own busy handlers
+ * status and sequential in needs no delay
+ */
+ switch (command) {
+
+ case NAND_CMD_PAGEPROG:
+ /* Turn off WE */
+ this->hwcontrol (mtd, NAND_CTL_CLRWP);
+ return;
+
+ case NAND_CMD_SEQIN:
+ /* Turn on WE */
+ this->hwcontrol (mtd, NAND_CTL_SETWP);
+ return;
+
+ case NAND_CMD_ERASE1:
+ case NAND_CMD_ERASE2:
+ case NAND_CMD_STATUS:
+ return;
+
+ case NAND_CMD_RESET:
+ if (this->dev_ready)
+ break;
+ this->hwcontrol(mtd, NAND_CTL_SETCLE);
+ this->write_byte(mtd, NAND_CMD_STATUS);
+ this->hwcontrol(mtd, NAND_CTL_CLRCLE);
+ while ( !(this->read_byte(mtd) & 0x40));
+ return;
+
+ /* This applies to read commands */
+ default:
+ /*
+ * If we don't have access to the busy pin, we apply the given
+ * command delay
+ */
+ if (!this->dev_ready) {
+ udelay (this->chip_delay);
+ return;
+ }
+ }
+
+ /* wait until command is processed */
+ while (!this->dev_ready(mtd));
+}
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+extern int parse_cmdline_partitions(struct mtd_info *master, struct mtd_partition **pparts, char *);
+#endif
+/*
+ * Main initialization routine
+ */
+int __init tx4938ndfmc_init (void)
+{
+ struct nand_chip *this;
+ int bsprt = 0, hold = 0xf, spw = 0xf;
+ int protected = 0;
+
+ if ((*rbtx4938_piosel_ptr & 0x0c) != 0x08) {
+ printk("TX4938 NDFMC: disabled by IOC PIOSEL\n");
+ return -ENODEV;
+ }
+ bsprt = 1;
+ hold = 2;
+ spw = 9 - 1; /* 8 GBUSCLK = 80ns (@ GBUSCLK 100MHz) */
+
+ if ((tx4938_ccfgptr->pcfg &
+ (TX4938_PCFG_ATA_SEL|TX4938_PCFG_ISA_SEL|TX4938_PCFG_NDF_SEL))
+ != TX4938_PCFG_NDF_SEL) {
+ printk("TX4938 NDFMC: disabled by PCFG.\n");
+ return -ENODEV;
+ }
+
+ /* reset NDFMC */
+ tx4938_ndfmcptr->rstr |= TX4938_NDFRSTR_RST;
+ while (tx4938_ndfmcptr->rstr & TX4938_NDFRSTR_RST)
+ ;
+ /* setup BusSeparete, Hold Time, Strobe Pulse Width */
+ tx4938_ndfmcptr->mcr = bsprt ? TX4938_NDFMCR_BSPRT : 0;
+ tx4938_ndfmcptr->spr = hold << 4 | spw;
+
+ /* Allocate memory for MTD device structure and private data */
+ tx4938ndfmc_mtd = kmalloc (sizeof(struct mtd_info) + sizeof (struct nand_chip),
+ GFP_KERNEL);
+ if (!tx4938ndfmc_mtd) {
+ printk ("Unable to allocate TX4938 NDFMC MTD device structure.\n");
+ return -ENOMEM;
+ }
+
+ /* Get pointer to private data */
+ this = (struct nand_chip *) (&tx4938ndfmc_mtd[1]);
+
+ /* Initialize structures */
+ memset((char *) tx4938ndfmc_mtd, 0, sizeof(struct mtd_info));
+ memset((char *) this, 0, sizeof(struct nand_chip));
+
+ /* Link the private data with the MTD structure */
+ tx4938ndfmc_mtd->priv = this;
+
+ /* Set address of NAND IO lines */
+ this->IO_ADDR_R = (unsigned long)&tx4938_ndfmcptr->dtr;
+ this->IO_ADDR_W = (unsigned long)&tx4938_ndfmcptr->dtr;
+ this->hwcontrol = tx4938ndfmc_hwcontrol;
+ this->dev_ready = tx4938ndfmc_dev_ready;
+ this->calculate_ecc = tx4938ndfmc_calculate_ecc;
+ this->correct_data = nand_correct_data;
+ this->enable_hwecc = tx4938ndfmc_enable_hwecc;
+ this->eccmode = NAND_ECC_HW3_256;
+ this->chip_delay = 100;
+ this->read_byte = tx4938ndfmc_nand_read_byte;
+ this->write_byte = tx4938ndfmc_nand_write_byte;
+ this->cmdfunc = tx4938ndfmc_nand_command;
+ this->write_buf = tx4938ndfmc_nand_write_buf;
+ this->read_buf = tx4938ndfmc_nand_read_buf;
+ this->verify_buf = tx4938ndfmc_nand_verify_buf;
+
+ /* Scan to find existence of the device */
+ if (nand_scan (tx4938ndfmc_mtd, 1)) {
+ kfree (tx4938ndfmc_mtd);
+ return -ENXIO;
+ }
+
+ if (protected) {
+ printk(KERN_INFO "TX4938 NDFMC: write protected.\n");
+ tx4938ndfmc_mtd->flags &= ~(MTD_WRITEABLE | MTD_ERASEABLE);
+ }
+
+#ifdef CONFIG_MTD_CMDLINE_PARTS
+ {
+ int mtd_parts_nb = 0;
+ struct mtd_partition *mtd_parts = 0;
+ mtd_parts_nb = parse_cmdline_partitions(tx4938ndfmc_mtd, &mtd_parts, "tx4938ndfmc");
+ if (mtd_parts_nb > 0)
+ add_mtd_partitions(tx4938ndfmc_mtd, mtd_parts, mtd_parts_nb);
+ else
+ add_mtd_device(tx4938ndfmc_mtd);
+ }
+#else
+ add_mtd_partitions(tx4938ndfmc_mtd, partition_info, NUM_PARTITIONS );
+#endif
+
+ return 0;
+}
+module_init(tx4938ndfmc_init);
+
+/*
+ * Clean up routine
+ */
+static void __exit tx4938ndfmc_cleanup (void)
+{
+ /* Release resources, unregister device */
+ nand_release (tx4938ndfmc_mtd);
+
+ /* Free the MTD device structure */
+ kfree (tx4938ndfmc_mtd);
+}
+module_exit(tx4938ndfmc_cleanup);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alice Hennessy <ahennessy@mvista.com>");
+MODULE_DESCRIPTION("Board-specific glue layer for NAND flash on TX4938 NDFMC");
--- /dev/null
+/*
+ * FEC instantiation file for NETTA
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/bitops.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+static struct fec_platform_info fec1_info = {
+ .fec_no = 0,
+ .use_mdio = 1,
+ .phy_addr = 8,
+ .fec_irq = SIU_LEVEL1,
+ .phy_irq = CPM_IRQ_OFFSET + CPMVEC_PIO_PC6,
+ .rx_ring = 128,
+ .tx_ring = 16,
+ .rx_copybreak = 240,
+ .use_napi = 1,
+ .napi_weight = 17,
+};
+
+static struct fec_platform_info fec2_info = {
+ .fec_no = 1,
+ .use_mdio = 1,
+ .phy_addr = 2,
+ .fec_irq = SIU_LEVEL3,
+ .phy_irq = CPM_IRQ_OFFSET + CPMVEC_PIO_PC7,
+ .rx_ring = 128,
+ .tx_ring = 16,
+ .rx_copybreak = 240,
+ .use_napi = 1,
+ .napi_weight = 17,
+};
+
+static struct net_device *fec1_dev;
+static struct net_device *fec2_dev;
+
+/* XXX custom u-boot & Linux startup needed */
+extern const char *__fw_getenv(const char *var);
+
+/* access ports */
+#define setbits32(_addr, _v) __fec_out32(&(_addr), __fec_in32(&(_addr)) | (_v))
+#define clrbits32(_addr, _v) __fec_out32(&(_addr), __fec_in32(&(_addr)) & ~(_v))
+
+#define setbits16(_addr, _v) __fec_out16(&(_addr), __fec_in16(&(_addr)) | (_v))
+#define clrbits16(_addr, _v) __fec_out16(&(_addr), __fec_in16(&(_addr)) & ~(_v))
+
+int fec_8xx_platform_init(void)
+{
+ immap_t *immap = (immap_t *)IMAP_ADDR;
+ bd_t *bd = (bd_t *) __res;
+ const char *s;
+ char *e;
+ int i;
+
+ /* use MDC for MII */
+ setbits16(immap->im_ioport.iop_pdpar, 0x0080);
+ clrbits16(immap->im_ioport.iop_pddir, 0x0080);
+
+ /* configure FEC1 pins */
+ setbits16(immap->im_ioport.iop_papar, 0xe810);
+ setbits16(immap->im_ioport.iop_padir, 0x0810);
+ clrbits16(immap->im_ioport.iop_padir, 0xe000);
+
+ setbits32(immap->im_cpm.cp_pbpar, 0x00000001);
+ clrbits32(immap->im_cpm.cp_pbdir, 0x00000001);
+
+ setbits32(immap->im_cpm.cp_cptr, 0x00000100);
+ clrbits32(immap->im_cpm.cp_cptr, 0x00000050);
+
+ clrbits16(immap->im_ioport.iop_pcpar, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcdir, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcso, 0x0200);
+ setbits16(immap->im_ioport.iop_pcint, 0x0200);
+
+ /* configure FEC2 pins */
+ setbits32(immap->im_cpm.cp_pepar, 0x00039620);
+ setbits32(immap->im_cpm.cp_pedir, 0x00039620);
+ setbits32(immap->im_cpm.cp_peso, 0x00031000);
+ clrbits32(immap->im_cpm.cp_peso, 0x00008620);
+
+ setbits32(immap->im_cpm.cp_cptr, 0x00000080);
+ clrbits32(immap->im_cpm.cp_cptr, 0x00000028);
+
+ clrbits16(immap->im_ioport.iop_pcpar, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcdir, 0x0200);
+ clrbits16(immap->im_ioport.iop_pcso, 0x0200);
+ setbits16(immap->im_ioport.iop_pcint, 0x0200);
+
+ /* fill in the board-supplied system clock rate */
+ fec1_info.sys_clk = bd->bi_intfreq;
+ fec2_info.sys_clk = bd->bi_intfreq;
+
+ s = __fw_getenv("ethaddr");
+ if (s != NULL) {
+ for (i = 0; i < 6; i++) {
+ fec1_info.macaddr[i] = simple_strtoul(s, &e, 16);
+ if (*e)
+ s = e + 1;
+ }
+ }
+
+ s = __fw_getenv("eth1addr");
+ if (s != NULL) {
+ for (i = 0; i < 6; i++) {
+ fec2_info.macaddr[i] = simple_strtoul(s, &e, 16);
+ if (*e)
+ s = e + 1;
+ }
+ }
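+
+ /*
+ * Both loops above parse a colon-separated hex MAC address
+ * ("aa:bb:cc:dd:ee:ff", the usual U-Boot ethaddr format) one
+ * byte at a time, advancing past each separator.
+ */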
+
+ fec_8xx_init_one(&fec1_info, &fec1_dev);
+ fec_8xx_init_one(&fec2_info, &fec2_dev);
+
+ /*
+ * Undo any partial registration before failing the module load;
+ * fec_8xx_platform_cleanup() safely skips NULL devices.
+ */
+ if (fec1_dev == NULL || fec2_dev == NULL) {
+ fec_8xx_platform_cleanup();
+ return -1;
+ }
+
+ return 0;
+}
+
+void fec_8xx_platform_cleanup(void)
+{
+ if (fec2_dev != NULL)
+ fec_8xx_cleanup_one(fec2_dev);
+
+ if (fec1_dev != NULL)
+ fec_8xx_cleanup_one(fec1_dev);
+}
--- /dev/null
+/*
+ * Fast Ethernet Controller (FEC) driver for Motorola MPC8xx.
+ *
+ * Copyright (c) 2003 Intracom S.A.
+ * by Pantelis Antoniou <panto@intracom.gr>
+ *
+ * Heavily based on original FEC driver by Dan Malek <dan@embeddededge.com>
+ * and modifications by Joakim Tjernlund <joakim.tjernlund@lumentis.se>
+ *
+ * Released under the GPL
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/bitops.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+#include <asm/dma-mapping.h>
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+#define FEC_MAX_MULTICAST_ADDRS 64
+
+/*************************************************/
+
+static char version[] __devinitdata =
+ DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")" "\n";
+
+MODULE_AUTHOR("Pantelis Antoniou <panto@intracom.gr>");
+MODULE_DESCRIPTION("Motorola 8xx FEC ethernet driver");
+MODULE_LICENSE("GPL");
+
+MODULE_PARM(fec_8xx_debug, "i");
+MODULE_PARM_DESC(fec_8xx_debug,
+ "FEC 8xx bitmapped debugging message enable value");
+
+int fec_8xx_debug = -1; /* -1 == use FEC_8XX_DEF_MSG_ENABLE as value */
+
+/*************************************************/
+
+/*
+ * Delay to wait for FEC reset command to complete (in us)
+ */
+#define FEC_RESET_DELAY 50
+
+/*****************************************************************************************/
+
+static void fec_whack_reset(fec_t * fecp)
+{
+ int i;
+
+ /*
+ * Issue a reset and wait for it to complete.
+ */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_RESET);
+ for (i = 0;
+ (FR(fecp, ecntrl) & FEC_ECNTRL_RESET) != 0 && i < FEC_RESET_DELAY;
+ i++)
+ udelay(1);
+
+ if (i == FEC_RESET_DELAY)
+ printk(KERN_WARNING "FEC Reset timeout!\n");
+
+}
+
+/****************************************************************************/
+
+/*
+ * Transmitter timeout.
+ */
+#define TX_TIMEOUT (2*HZ)
+
+/****************************************************************************/
+
+/*
+ * Returns the CRC needed when filling in the hash table for
+ * multicast group filtering. The CRC-32 is computed bit-serially,
+ * LSB first. pAddr must point to a MAC address (6 bytes).
+ */
+static __u32 fec_multicast_calc_crc(char *pAddr)
+{
+ u8 byte;
+ int byte_count;
+ int bit_count;
+ __u32 crc = 0xffffffff;
+ u8 msb;
+
+ for (byte_count = 0; byte_count < 6; byte_count++) {
+ byte = pAddr[byte_count];
+ for (bit_count = 0; bit_count < 8; bit_count++) {
+ msb = crc >> 31;
+ crc <<= 1;
+ if (msb ^ (byte & 0x1)) {
+ crc ^= FEC_CRC_POLY;
+ }
+ byte >>= 1;
+ }
+ }
+ return (crc);
+}
+
+/*
+ * Set or clear the multicast filter for this adaptor.
+ * Skeleton taken from sunlance driver.
+ * The CPM Ethernet implementation allows Multicast as well as individual
+ * MAC address filtering. Some of the drivers check to make sure it is
+ * a group multicast address, and discard those that are not. I guess I
+ * will do the same for now, but just remove the test if you want
+ * individual filtering as well (do the upper net layers want or support
+ * this kind of feature?).
+ */
+static void fec_set_multicast_list(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ struct dev_mc_list *pmc;
+ __u32 crc;
+ int temp;
+ __u32 csrVal;
+ int hash_index;
+ __u32 hthi, htlo;
+ unsigned long flags;
+
+
+ if ((dev->flags & IFF_PROMISC) != 0) {
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FS(fecp, r_cntrl, FEC_RCNTRL_PROM);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ /*
+ * Log any net taps.
+ */
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s: Promiscuous mode enabled.\n", dev->name);
+ return;
+
+ }
+
+ if ((dev->flags & IFF_ALLMULTI) != 0 ||
+ dev->mc_count > FEC_MAX_MULTICAST_ADDRS) {
+ /*
+ * Catch all multicast addresses, set the filter to all 1's.
+ */
+ hthi = 0xffffffffU;
+ htlo = 0xffffffffU;
+ } else {
+ hthi = 0;
+ htlo = 0;
+
+ /*
+ * Now populate the hash table
+ */
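+ /*
+ * The low six CRC bits select the filter bit: bit 0 picks the
+ * high or low hash register, and the remaining five bits are
+ * bit-reversed to index one of its 32 bits.
+ */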
+ for (pmc = dev->mc_list; pmc != NULL; pmc = pmc->next) {
+ crc = fec_multicast_calc_crc(pmc->dmi_addr);
+ temp = (crc & 0x3f) >> 1;
+ hash_index = ((temp & 0x01) << 4) |
+ ((temp & 0x02) << 2) |
+ ((temp & 0x04)) |
+ ((temp & 0x08) >> 2) |
+ ((temp & 0x10) >> 4);
+ csrVal = (1 << hash_index);
+ if (crc & 1)
+ hthi |= csrVal;
+ else
+ htlo |= csrVal;
+ }
+ }
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FC(fecp, r_cntrl, FEC_RCNTRL_PROM);
+ FW(fecp, hash_table_high, hthi);
+ FW(fecp, hash_table_low, htlo);
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+static int fec_set_mac_address(struct net_device *dev, void *addr)
+{
+ struct sockaddr *mac = addr;
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec *fecp = fep->fecp;
+ int i;
+ __u32 addrhi, addrlo;
+ unsigned long flags;
+
+ /* Copy the new station address into the device structure. */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = mac->sa_data[i];
+
+ /*
+ * Set station address.
+ */
+ addrhi = ((__u32) dev->dev_addr[0] << 24) |
+ ((__u32) dev->dev_addr[1] << 16) |
+ ((__u32) dev->dev_addr[2] << 8) |
+ (__u32) dev->dev_addr[3];
+ addrlo = ((__u32) dev->dev_addr[4] << 24) |
+ ((__u32) dev->dev_addr[5] << 16);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ FW(fecp, addr_low, addrhi);
+ FW(fecp, addr_high, addrlo);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return 0;
+}
+
+/*
+ * This function is called to start or restart the FEC during a link
+ * change. This only happens when switching between half and full
+ * duplex.
+ */
+void fec_restart(struct net_device *dev, int duplex, int speed)
+{
+#ifdef CONFIG_DUET
+ immap_t *immap = (immap_t *) IMAP_ADDR;
+ __u32 cptr;
+#endif
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct fec *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+ cbd_t *bdp;
+ struct sk_buff *skb;
+ int i;
+ __u32 addrhi, addrlo;
+
+ fec_whack_reset(fep->fecp);
+
+ /*
+ * Set station address.
+ */
+ addrhi = ((__u32) dev->dev_addr[0] << 24) |
+ ((__u32) dev->dev_addr[1] << 16) |
+ ((__u32) dev->dev_addr[2] << 8) |
+ (__u32) dev->dev_addr[3];
+ addrlo = ((__u32) dev->dev_addr[4] << 24) |
+ ((__u32) dev->dev_addr[5] << 16);
+ FW(fecp, addr_low, addrhi);
+ FW(fecp, addr_high, addrlo);
+
+ /*
+ * Reset all multicast.
+ */
+ FW(fecp, hash_table_high, 0);
+ FW(fecp, hash_table_low, 0);
+
+ /*
+ * Set maximum receive buffer size.
+ */
+ FW(fecp, r_buff_size, PKT_MAXBLR_SIZE);
+ FW(fecp, r_hash, PKT_MAXBUF_SIZE);
+
+ /*
+ * Set receive and transmit descriptor base.
+ */
+ FW(fecp, r_des_start, iopa((__u32) (fep->rx_bd_base)));
+ FW(fecp, x_des_start, iopa((__u32) (fep->tx_bd_base)));
+
+ fep->dirty_tx = fep->cur_tx = fep->tx_bd_base;
+ fep->tx_free = fep->tx_ring;
+ fep->cur_rx = fep->rx_bd_base;
+
+ /*
+ * Reset SKB receive buffers
+ */
+ for (i = 0; i < fep->rx_ring; i++) {
+ if ((skb = fep->rx_skbuff[i]) == NULL)
+ continue;
+ fep->rx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * Initialize the receive buffer descriptors.
+ */
+ for (i = 0, bdp = fep->rx_bd_base; i < fep->rx_ring; i++, bdp++) {
+ skb = dev_alloc_skb(ENET_RX_FRSIZE);
+ if (skb == NULL) {
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s Memory squeeze, unable to allocate skb\n",
+ dev->name);
+ fep->stats.rx_dropped++;
+ break;
+ }
+ fep->rx_skbuff[i] = skb;
+ skb->dev = dev;
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skb->data,
+ L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
+ DMA_FROM_DEVICE));
+ CBDW_DATLEN(bdp, 0); /* zero */
+ CBDW_SC(bdp, BD_ENET_RX_EMPTY |
+ ((i < fep->rx_ring - 1) ? 0 : BD_SC_WRAP));
+ }
+ /*
+ * If an skb allocation failed, mark the remaining descriptors unused
+ */
+ for (; i < fep->rx_ring; i++, bdp++) {
+ fep->rx_skbuff[i] = NULL;
+ CBDW_SC(bdp, (i < fep->rx_ring - 1) ? 0 : BD_SC_WRAP);
+ }
+
+ /*
+ * Reset SKB transmit buffers.
+ */
+ for (i = 0; i < fep->tx_ring; i++) {
+ if ((skb = fep->tx_skbuff[i]) == NULL)
+ continue;
+ fep->tx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * Initialize the transmit buffer descriptors.
+ */
+ for (i = 0, bdp = fep->tx_bd_base; i < fep->tx_ring; i++, bdp++) {
+ fep->tx_skbuff[i] = NULL;
+ CBDW_BUFADDR(bdp, virt_to_bus(NULL));
+ CBDW_DATLEN(bdp, 0);
+ CBDW_SC(bdp, (i < fep->tx_ring - 1) ? 0 : BD_SC_WRAP);
+ }
+
+ /*
+ * Enable big endian and don't care about SDMA FC.
+ */
+ FW(fecp, fun_code, 0x78000000);
+
+ /*
+ * Set MII speed.
+ */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+
+ /*
+ * Clear any outstanding interrupt.
+ */
+ FW(fecp, ievent, 0xffc0);
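+ /*
+ * The top three bits of IVEC hold the interrupt level; SIU vector
+ * numbers interleave IRQn and LEVELn, hence (presumably) the /2.
+ */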
+ FW(fecp, ivec, (fpi->fec_irq / 2) << 29);
+
+ /*
+ * adjust to speed (only for DUET & RMII)
+ */
+#ifdef CONFIG_DUET
+ cptr = in_be32(&immap->im_cpm.cp_cptr);
+ switch (fpi->fec_no) {
+ case 0:
+ /*
+ * check if in RMII mode
+ */
+ if ((cptr & 0x100) == 0)
+ break;
+
+ if (speed == 10)
+ cptr |= 0x0000010;
+ else if (speed == 100)
+ cptr &= ~0x0000010;
+ break;
+ case 1:
+ /*
+ * check if in RMII mode
+ */
+ if ((cptr & 0x80) == 0)
+ break;
+
+ if (speed == 10)
+ cptr |= 0x0000008;
+ else if (speed == 100)
+ cptr &= ~0x0000008;
+ break;
+ default:
+ break;
+ }
+ out_be32(&immap->im_cpm.cp_cptr, cptr);
+#endif
+
+ FW(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ /*
+ * adjust to duplex mode
+ */
+ if (duplex) {
+ FC(fecp, r_cntrl, FEC_RCNTRL_DRT);
+ FS(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD enable */
+ } else {
+ FS(fecp, r_cntrl, FEC_RCNTRL_DRT);
+ FC(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD disable */
+ }
+
+ /*
+ * Enable interrupts we wish to service.
+ */
+ FW(fecp, imask, FEC_ENET_TXF | FEC_ENET_TXB |
+ FEC_ENET_RXF | FEC_ENET_RXB);
+
+ /*
+ * And last, enable the transmit and receive processing.
+ */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+ FW(fecp, r_des_active, 0x01000000);
+}
+
+void fec_stop(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ struct sk_buff *skb;
+ int i;
+
+ if ((FR(fecp, ecntrl) & FEC_ECNTRL_ETHER_EN) == 0)
+ return; /* already down */
+
+ FW(fecp, x_cntrl, 0x01); /* Graceful transmit stop */
+ for (i = 0; ((FR(fecp, ievent) & 0x10000000) == 0) &&
+ i < FEC_RESET_DELAY; i++)
+ udelay(1);
+
+ if (i == FEC_RESET_DELAY)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s FEC timeout on graceful transmit stop\n",
+ dev->name);
+ /*
+ * Disable the FEC and mask all interrupts.
+ */
+ FW(fecp, imask, 0);
+ FW(fecp, ecntrl, ~FEC_ECNTRL_ETHER_EN);
+
+ /*
+ * Reset SKB transmit buffers.
+ */
+ for (i = 0; i < fep->tx_ring; i++) {
+ if ((skb = fep->tx_skbuff[i]) == NULL)
+ continue;
+ fep->tx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+
+ /*
+ * Reset SKB receive buffers
+ */
+ for (i = 0; i < fep->rx_ring; i++) {
+ if ((skb = fep->rx_skbuff[i]) == NULL)
+ continue;
+ fep->rx_skbuff[i] = NULL;
+ dev_kfree_skb(skb);
+ }
+}
+
+/* common receive function */
+static int fec_enet_rx_common(struct net_device *dev, int *budget)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+ cbd_t *bdp;
+ struct sk_buff *skb, *skbn, *skbt;
+ int received = 0;
+ __u16 pkt_len, sc;
+ int curidx;
+ int rx_work_limit;
+
+ if (fpi->use_napi) {
+ rx_work_limit = min(dev->quota, *budget);
+
+ if (!netif_running(dev))
+ return 0;
+ }
+
+ /*
+ * First, grab all of the stats for the incoming packet.
+ * These get messed up if we get called due to a busy condition.
+ */
+ bdp = fep->cur_rx;
+
+ /* clear RX status bits for NAPI */
+ if (fpi->use_napi)
+ FW(fecp, ievent, FEC_ENET_RXF | FEC_ENET_RXB);
+
+ while (((sc = CBDR_SC(bdp)) & BD_ENET_RX_EMPTY) == 0) {
+
+ curidx = bdp - fep->rx_bd_base;
+
+ /*
+ * Since we have allocated space to hold a complete frame,
+ * the last indicator should be set.
+ */
+ if ((sc & BD_ENET_RX_LAST) == 0)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s rcv is not +last\n",
+ dev->name);
+
+ /*
+ * Check for errors.
+ */
+ if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH | BD_ENET_RX_CL |
+ BD_ENET_RX_NO | BD_ENET_RX_CR | BD_ENET_RX_OV)) {
+ fep->stats.rx_errors++;
+ /* Frame too long or too short. */
+ if (sc & (BD_ENET_RX_LG | BD_ENET_RX_SH))
+ fep->stats.rx_length_errors++;
+ /* Frame alignment */
+ if (sc & (BD_ENET_RX_NO | BD_ENET_RX_CL))
+ fep->stats.rx_frame_errors++;
+ /* CRC Error */
+ if (sc & BD_ENET_RX_CR)
+ fep->stats.rx_crc_errors++;
+ /* FIFO overrun */
+ if (sc & BD_ENET_RX_OV)
+ fep->stats.rx_fifo_errors++;
+
+ skbn = fep->rx_skbuff[curidx];
+ BUG_ON(skbn == NULL);
+
+ } else {
+
+ /* napi, got packet but no quota */
+ if (fpi->use_napi && --rx_work_limit < 0)
+ break;
+
+ skb = fep->rx_skbuff[curidx];
+ BUG_ON(skb == NULL);
+
+ /*
+ * Process the incoming frame.
+ */
+ fep->stats.rx_packets++;
+ pkt_len = CBDR_DATLEN(bdp) - 4; /* remove CRC */
+ fep->stats.rx_bytes += pkt_len + 4;
+
+ if (pkt_len <= fpi->rx_copybreak) {
+ /* reserve 2 bytes so the 14-byte Ethernet header leaves IP aligned */
+ skbn = dev_alloc_skb(pkt_len + 2);
+ if (skbn != NULL) {
+ skb_reserve(skbn, 2); /* align IP header */
+ memcpy(skbn->data, skb->data, pkt_len);
+ /* swap */
+ skbt = skb;
+ skb = skbn;
+ skbn = skbt;
+ }
+ } else
+ skbn = dev_alloc_skb(ENET_RX_FRSIZE);
+
+ if (skbn != NULL) {
+ skb->dev = dev;
+ skb_put(skb, pkt_len); /* Make room */
+ skb->protocol = eth_type_trans(skb, dev);
+ received++;
+ if (!fpi->use_napi)
+ netif_rx(skb);
+ else
+ netif_receive_skb(skb);
+ } else {
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s Memory squeeze, dropping packet.\n",
+ dev->name);
+ fep->stats.rx_dropped++;
+ skbn = skb;
+ }
+ }
+
+ fep->rx_skbuff[curidx] = skbn;
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skbn->data,
+ L1_CACHE_ALIGN(PKT_MAXBUF_SIZE),
+ DMA_FROM_DEVICE));
+ CBDW_DATLEN(bdp, 0);
+ CBDW_SC(bdp, (sc & ~BD_ENET_RX_STATS) | BD_ENET_RX_EMPTY);
+
+ /*
+ * Update BD pointer to next entry.
+ */
+ if ((sc & BD_ENET_RX_WRAP) == 0)
+ bdp++;
+ else
+ bdp = fep->rx_bd_base;
+
+ /*
+ * Doing this here will keep the FEC running while we process
+ * incoming frames. On a heavily loaded network, we should be
+ * able to keep up at the expense of system resources.
+ */
+ FW(fecp, r_des_active, 0x01000000);
+ }
+
+ fep->cur_rx = bdp;
+
+ if (fpi->use_napi) {
+ dev->quota -= received;
+ *budget -= received;
+
+ if (rx_work_limit < 0)
+ return 1; /* not done */
+
+ /* done */
+ netif_rx_complete(dev);
+
+ /* enable RX interrupt bits */
+ FS(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ }
+
+ return 0;
+}
+
+static void fec_enet_tx(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ cbd_t *bdp;
+ struct sk_buff *skb;
+ int dirtyidx, do_wake;
+ __u16 sc;
+
+ spin_lock(&fep->lock);
+ bdp = fep->dirty_tx;
+
+ do_wake = 0;
+ while (((sc = CBDR_SC(bdp)) & BD_ENET_TX_READY) == 0) {
+
+ dirtyidx = bdp - fep->tx_bd_base;
+
+ if (fep->tx_free == fep->tx_ring)
+ break;
+
+ skb = fep->tx_skbuff[dirtyidx];
+
+ /*
+ * Check for errors.
+ */
+ if (sc & (BD_ENET_TX_HB | BD_ENET_TX_LC |
+ BD_ENET_TX_RL | BD_ENET_TX_UN | BD_ENET_TX_CSL)) {
+ fep->stats.tx_errors++;
+ if (sc & BD_ENET_TX_HB) /* No heartbeat */
+ fep->stats.tx_heartbeat_errors++;
+ if (sc & BD_ENET_TX_LC) /* Late collision */
+ fep->stats.tx_window_errors++;
+ if (sc & BD_ENET_TX_RL) /* Retrans limit */
+ fep->stats.tx_aborted_errors++;
+ if (sc & BD_ENET_TX_UN) /* Underrun */
+ fep->stats.tx_fifo_errors++;
+ if (sc & BD_ENET_TX_CSL) /* Carrier lost */
+ fep->stats.tx_carrier_errors++;
+ } else
+ fep->stats.tx_packets++;
+
+ if (sc & BD_ENET_TX_READY)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s HEY! Enet xmit interrupt and TX_READY.\n",
+ dev->name);
+
+ /*
+ * Deferred means some collisions occurred during transmit,
+ * but we eventually sent the packet OK.
+ */
+ if (sc & BD_ENET_TX_DEF)
+ fep->stats.collisions++;
+
+ /*
+ * Free the sk buffer associated with this last transmit.
+ */
+ dev_kfree_skb_irq(skb);
+ fep->tx_skbuff[dirtyidx] = NULL;
+
+ /*
+ * Update pointer to next buffer descriptor to be transmitted.
+ */
+ if ((sc & BD_ENET_TX_WRAP) == 0)
+ bdp++;
+ else
+ bdp = fep->tx_bd_base;
+
+ /*
+ * Since we have freed up a buffer, the ring is no longer
+ * full.
+ */
+ if (!fep->tx_free++)
+ do_wake = 1;
+ }
+
+ fep->dirty_tx = bdp;
+
+ spin_unlock(&fep->lock);
+
+ if (do_wake && netif_queue_stopped(dev))
+ netif_wake_queue(dev);
+}
+
+/*
+ * The interrupt handler.
+ * This is called from the MPC core interrupt.
+ */
+static irqreturn_t
+fec_enet_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ struct fec_enet_private *fep;
+ const struct fec_platform_info *fpi;
+ fec_t *fecp;
+ __u32 int_events;
+ __u32 int_events_napi;
+
+ if (unlikely(dev == NULL))
+ return IRQ_NONE;
+
+ fep = netdev_priv(dev);
+ fecp = fep->fecp;
+ fpi = fep->fpi;
+
+ /*
+ * Get the interrupt events that caused us to be here.
+ */
+ while ((int_events = FR(fecp, ievent) & FR(fecp, imask)) != 0) {
+
+ if (!fpi->use_napi)
+ FW(fecp, ievent, int_events);
+ else {
+ int_events_napi = int_events & ~(FEC_ENET_RXF | FEC_ENET_RXB);
+ FW(fecp, ievent, int_events_napi);
+ }
+
+ if ((int_events & (FEC_ENET_HBERR | FEC_ENET_BABR |
+ FEC_ENET_BABT | FEC_ENET_EBERR)) != 0)
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s FEC ERROR(s) 0x%x\n",
+ dev->name, int_events);
+
+ if ((int_events & FEC_ENET_RXF) != 0) {
+ if (!fpi->use_napi)
+ fec_enet_rx_common(dev, NULL);
+ else {
+ if (netif_rx_schedule_prep(dev)) {
+ /* disable rx interrupts */
+ FC(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ __netif_rx_schedule(dev);
+ } else {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s driver bug! interrupt while in poll!\n",
+ dev->name);
+ FC(fecp, imask, FEC_ENET_RXF | FEC_ENET_RXB);
+ }
+ }
+ }
+
+ if ((int_events & FEC_ENET_TXF) != 0)
+ fec_enet_tx(dev);
+ }
+
+ return IRQ_HANDLED;
+}
+
+/* This interrupt occurs when the PHY detects a link change. */
+static irqreturn_t
+fec_mii_link_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ struct fec_enet_private *fep;
+ const struct fec_platform_info *fpi;
+
+ if (unlikely(dev == NULL))
+ return IRQ_NONE;
+
+ fep = netdev_priv(dev);
+ fpi = fep->fpi;
+
+ if (!fpi->use_mdio)
+ return IRQ_NONE;
+
+ /*
+ * Acknowledge the interrupt if possible. If we have not
+ * found the PHY yet we can't process or acknowledge the
+ * interrupt now. Instead we ignore this interrupt for now,
+ * which we can do since it is edge triggered. It will be
+ * acknowledged later by fec_enet_open().
+ */
+ if (!fep->phy)
+ return IRQ_NONE;
+
+ fec_mii_ack_int(dev);
+ fec_mii_link_status_change_check(dev, 0);
+
+ return IRQ_HANDLED;
+}
+
+
+/**********************************************************************************/
+
+static int fec_enet_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ cbd_t *bdp;
+ int curidx;
+ unsigned long flags;
+
+ spin_lock_irqsave(&fep->tx_lock, flags);
+
+ /*
+ * Fill in a Tx ring entry
+ */
+ bdp = fep->cur_tx;
+
+ if (!fep->tx_free || (CBDR_SC(bdp) & BD_ENET_TX_READY)) {
+ netif_stop_queue(dev);
+ spin_unlock_irqrestore(&fep->tx_lock, flags);
+
+ /*
+ * Oops. All transmit buffers are full. Bail out.
+ * This should not happen, since the tx queue should be stopped.
+ */
+ printk(KERN_WARNING DRV_MODULE_NAME
+ ": %s tx queue full!.\n", dev->name);
+ return 1;
+ }
+
+ curidx = bdp - fep->tx_bd_base;
+ /*
+ * Clear all of the status flags.
+ */
+ CBDC_SC(bdp, BD_ENET_TX_STATS);
+
+ /*
+ * Save skb pointer.
+ */
+ fep->tx_skbuff[curidx] = skb;
+
+ fep->stats.tx_bytes += skb->len;
+
+ /*
+ * Flush the data cache so the FEC DMA does not see stale data.
+ */
+ CBDW_BUFADDR(bdp, dma_map_single(NULL, skb->data,
+ skb->len, DMA_TO_DEVICE));
+ CBDW_DATLEN(bdp, skb->len);
+
+ dev->trans_start = jiffies;
+
+ /*
+ * If this was the last BD in the ring, start at the beginning again.
+ */
+ if ((CBDR_SC(bdp) & BD_ENET_TX_WRAP) == 0)
+ fep->cur_tx++;
+ else
+ fep->cur_tx = fep->tx_bd_base;
+
+ if (!--fep->tx_free)
+ netif_stop_queue(dev);
+
+ /*
+ * Trigger transmission start
+ */
+ CBDS_SC(bdp, BD_ENET_TX_READY | BD_ENET_TX_INTR |
+ BD_ENET_TX_LAST | BD_ENET_TX_TC);
+ FW(fecp, x_des_active, 0x01000000);
+
+ spin_unlock_irqrestore(&fep->tx_lock, flags);
+
+ return 0;
+}
+
+static void fec_timeout(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->stats.tx_errors++;
+
+ if (fep->tx_free)
+ netif_wake_queue(dev);
+
+ /* check link status again */
+ fec_mii_link_status_change_check(dev, 0);
+}
+
+static int fec_enet_open(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ unsigned long flags;
+
+ /* Install our interrupt handler. */
+ if (request_irq(fpi->fec_irq, fec_enet_interrupt, 0, "fec", dev) != 0) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s Could not allocate FEC IRQ!", dev->name);
+ return -EINVAL;
+ }
+
+ /* Install our phy interrupt handler */
+ if (fpi->phy_irq != -1 &&
+ request_irq(fpi->phy_irq, fec_mii_link_interrupt, 0, "fec-phy",
+ dev) != 0) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s Could not allocate PHY IRQ!", dev->name);
+ free_irq(fpi->fec_irq, dev);
+ return -EINVAL;
+ }
+
+ if (fpi->use_mdio) {
+ fec_mii_startup(dev);
+ netif_carrier_off(dev);
+ fec_mii_link_status_change_check(dev, 1);
+ } else {
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_restart(dev, 1, 100); /* XXX this sucks */
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ netif_carrier_on(dev);
+ netif_start_queue(dev);
+ }
+ return 0;
+}
+
+static int fec_enet_close(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ unsigned long flags;
+
+ netif_stop_queue(dev);
+ netif_carrier_off(dev);
+
+ if (fpi->use_mdio)
+ fec_mii_shutdown(dev);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_stop(dev);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ /* release any irqs */
+ if (fpi->phy_irq != -1)
+ free_irq(fpi->phy_irq, dev);
+ free_irq(fpi->fec_irq, dev);
+
+ return 0;
+}
+
+static struct net_device_stats *fec_enet_get_stats(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return &fep->stats;
+}
+
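+/*
+ * NAPI ->poll entry point: consume up to *budget received frames.
+ * fec_enet_rx_common() returns nonzero while work remains and, once the
+ * ring is drained, calls netif_rx_complete() and unmasks RX interrupts.
+ */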
+static int fec_enet_poll(struct net_device *dev, int *budget)
+{
+ return fec_enet_rx_common(dev, budget);
+}
+
+/*************************************************************************/
+
+static void fec_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ strcpy(info->driver, DRV_MODULE_NAME);
+ strcpy(info->version, DRV_MODULE_VERSION);
+}
+
+static int fec_get_regs_len(struct net_device *dev)
+{
+ return sizeof(fec_t);
+}
+
+static void fec_get_regs(struct net_device *dev, struct ethtool_regs *regs,
+ void *p)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+
+ if (regs->len < sizeof(fec_t))
+ return;
+
+ regs->version = 0;
+ spin_lock_irqsave(&fep->lock, flags);
+ memcpy_fromio(p, fep->fecp, sizeof(fec_t));
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+static int fec_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = mii_ethtool_gset(&fep->mii_if, cmd);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return rc;
+}
+
+static int fec_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned long flags;
+ int rc;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = mii_ethtool_sset(&fep->mii_if, cmd);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return rc;
+}
+
+static int fec_nway_reset(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return mii_nway_restart(&fep->mii_if);
+}
+
+static __u32 fec_get_msglevel(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ return fep->msg_enable;
+}
+
+static void fec_set_msglevel(struct net_device *dev, __u32 value)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fep->msg_enable = value;
+}
+
+static struct ethtool_ops fec_ethtool_ops = {
+ .get_drvinfo = fec_get_drvinfo,
+ .get_regs_len = fec_get_regs_len,
+ .get_settings = fec_get_settings,
+ .set_settings = fec_set_settings,
+ .nway_reset = fec_nway_reset,
+ .get_link = ethtool_op_get_link,
+ .get_msglevel = fec_get_msglevel,
+ .set_msglevel = fec_set_msglevel,
+ .get_tx_csum = ethtool_op_get_tx_csum,
+ .set_tx_csum = ethtool_op_set_tx_csum, /* local! */
+ .get_sg = ethtool_op_get_sg,
+ .set_sg = ethtool_op_set_sg,
+ .get_regs = fec_get_regs,
+};
+
+static int fec_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&rq->ifr_data;
+ unsigned long flags;
+ int rc;
+
+ if (!netif_running(dev))
+ return -EINVAL;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ rc = generic_mii_ioctl(&fep->mii_if, mii, cmd, NULL);
+ spin_unlock_irqrestore(&fep->lock, flags);
+ return rc;
+}
+
+int fec_8xx_init_one(const struct fec_platform_info *fpi,
+ struct net_device **devp)
+{
+ immap_t *immap = (immap_t *) IMAP_ADDR;
+ static int fec_8xx_version_printed = 0;
+ struct net_device *dev = NULL;
+ struct fec_enet_private *fep = NULL;
+ fec_t *fecp = NULL;
+ int i;
+ int err = 0;
+ int registered = 0;
+ __u32 siel;
+
+ *devp = NULL;
+
+ switch (fpi->fec_no) {
+ case 0:
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+ break;
+#ifdef CONFIG_DUET
+ case 1:
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec2;
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+
+ if (fec_8xx_version_printed++ == 0)
+ printk(KERN_INFO "%s", version);
+
+ i = sizeof(*fep) + (sizeof(struct sk_buff **) *
+ (fpi->rx_ring + fpi->tx_ring));
+
+ dev = alloc_etherdev(i);
+ if (!dev) {
+ err = -ENOMEM;
+ goto err;
+ }
+ SET_MODULE_OWNER(dev);
+
+ fep = netdev_priv(dev);
+
+ /* partial reset of FEC */
+ fec_whack_reset(fecp);
+
+ /* point rx_skbuff, tx_skbuff */
+ fep->rx_skbuff = (struct sk_buff **)&fep[1];
+ fep->tx_skbuff = fep->rx_skbuff + fpi->rx_ring;
+
+ fep->fecp = fecp;
+ fep->fpi = fpi;
+
+ /* init locks */
+ spin_lock_init(&fep->lock);
+ spin_lock_init(&fep->tx_lock);
+
+ /*
+ * Set the Ethernet address.
+ */
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = fpi->macaddr[i];
+
+ fep->ring_base = dma_alloc_coherent(NULL,
+ (fpi->tx_ring + fpi->rx_ring) *
+ sizeof(cbd_t), &fep->ring_mem_addr,
+ GFP_KERNEL);
+ if (fep->ring_base == NULL) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s dma alloc failed.\n", dev->name);
+ err = -ENOMEM;
+ goto err;
+ }
+
+ /*
+ * Set receive and transmit descriptor base.
+ */
+ fep->rx_bd_base = fep->ring_base;
+ fep->tx_bd_base = fep->rx_bd_base + fpi->rx_ring;
+
+ /* initialize ring size variables */
+ fep->tx_ring = fpi->tx_ring;
+ fep->rx_ring = fpi->rx_ring;
+
+ /* SIU interrupt */
+ if (fpi->phy_irq != -1 &&
+ (fpi->phy_irq >= SIU_IRQ0 && fpi->phy_irq < SIU_LEVEL7)) {
+
+ siel = in_be32(&immap->im_siu_conf.sc_siel);
+ if ((fpi->phy_irq & 1) == 0)
+ siel |= (0x80000000 >> fpi->phy_irq);
+ else
+ siel &= ~(0x80000000 >> (fpi->phy_irq & ~1));
+ out_be32(&immap->im_siu_conf.sc_siel, siel);
+ }
+
+ /*
+ * The FEC Ethernet specific entries in the device structure.
+ */
+ dev->open = fec_enet_open;
+ dev->hard_start_xmit = fec_enet_start_xmit;
+ dev->tx_timeout = fec_timeout;
+ dev->watchdog_timeo = TX_TIMEOUT;
+ dev->stop = fec_enet_close;
+ dev->get_stats = fec_enet_get_stats;
+ dev->set_multicast_list = fec_set_multicast_list;
+ dev->set_mac_address = fec_set_mac_address;
+ if (fpi->use_napi) {
+ dev->poll = fec_enet_poll;
+ dev->weight = fpi->napi_weight;
+ }
+ dev->ethtool_ops = &fec_ethtool_ops;
+ dev->do_ioctl = fec_ioctl;
+
+ fep->fec_phy_speed =
+ ((((fpi->sys_clk + 4999999) / 2500000) / 2) & 0x3F) << 1;
+
+ init_timer(&fep->phy_timer_list);
+
+ /* partial reset of FEC so that only MII works */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+ FW(fecp, ievent, 0xffc0);
+ FW(fecp, ivec, (fpi->fec_irq / 2) << 29);
+ FW(fecp, imask, 0);
+ FW(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FW(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+
+ netif_carrier_off(dev);
+
+ err = register_netdev(dev);
+ if (err != 0)
+ goto err;
+ registered = 1;
+
+ if (fpi->use_mdio) {
+ fep->mii_if.dev = dev;
+ fep->mii_if.mdio_read = fec_mii_read;
+ fep->mii_if.mdio_write = fec_mii_write;
+ fep->mii_if.phy_id_mask = 0x1f;
+ fep->mii_if.reg_num_mask = 0x1f;
+ fep->mii_if.phy_id = fec_mii_phy_id_detect(dev);
+ }
+
+ *devp = dev;
+
+ return 0;
+
+ err:
+ if (dev != NULL) {
+ if (fecp != NULL)
+ fec_whack_reset(fecp);
+
+ if (registered)
+ unregister_netdev(dev);
+
+ if (fep != NULL) {
+ if (fep->ring_base)
+ dma_free_coherent(NULL,
+ (fpi->tx_ring +
+ fpi->rx_ring) *
+ sizeof(cbd_t), fep->ring_base,
+ fep->ring_mem_addr);
+ }
+ free_netdev(dev);
+ }
+ return err;
+}
+
+int fec_8xx_cleanup_one(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp = fep->fecp;
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ fec_whack_reset(fecp);
+
+ unregister_netdev(dev);
+
+ dma_free_coherent(NULL, (fpi->tx_ring + fpi->rx_ring) * sizeof(cbd_t),
+ fep->ring_base, fep->ring_mem_addr);
+
+ free_netdev(dev);
+
+ return 0;
+}
+
+/**************************************************************************************/
+/**************************************************************************************/
+/**************************************************************************************/
+
+static int __init fec_8xx_init(void)
+{
+ return fec_8xx_platform_init();
+}
+
+static void __exit fec_8xx_cleanup(void)
+{
+ fec_8xx_platform_cleanup();
+}
+
+/**************************************************************************************/
+/**************************************************************************************/
+/**************************************************************************************/
+
+module_init(fec_8xx_init);
+module_exit(fec_8xx_cleanup);
--- /dev/null
+/*
+ * Fast Ethernet Controller (FEC) driver for Motorola MPC8xx.
+ *
+ * Copyright (c) 2003 Intracom S.A.
+ * by Pantelis Antoniou <panto@intracom.gr>
+ *
+ * Heavily based on original FEC driver by Dan Malek <dan@embeddededge.com>
+ * and modifications by Joakim Tjernlund <joakim.tjernlund@lumentis.se>
+ *
+ * Released under the GPL
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/mii.h>
+#include <linux/ethtool.h>
+#include <linux/bitops.h>
+
+#include <asm/8xx_immap.h>
+#include <asm/pgtable.h>
+#include <asm/mpc8xx.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/commproc.h>
+
+/*************************************************/
+
+#include "fec_8xx.h"
+
+/*************************************************/
+
+/* Make MII read/write commands for the FEC. */
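+
+/*
+ * The 32-bit command word follows IEEE 802.3 clause 22: start (01) and
+ * opcode (10 = read, 01 = write) in the top bits, the PHY address in
+ * bits 27:23 (ORed in by the caller), the register in bits 22:18, the
+ * turnaround pattern in bits 17:16, and write data in the low 16 bits.
+ */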
+#define mk_mii_read(REG) (0x60020000 | ((REG & 0x1f) << 18))
+#define mk_mii_write(REG, VAL) (0x50020000 | ((REG & 0x1f) << 18) | (VAL & 0xffff))
+#define mk_mii_end 0
+
+/*************************************************/
+
+/* XXX both FECs use the MII interface of FEC1 */
+static spinlock_t fec_mii_lock = SPIN_LOCK_UNLOCKED;
+
+#define FEC_MII_LOOPS 10000
+
+int fec_mii_read(struct net_device *dev, int phy_id, int location)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp;
+ int i, ret = -1;
+ unsigned long flags;
+
+ /* XXX MII interface is only connected to FEC1 */
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+
+ spin_lock_irqsave(&fec_mii_lock, flags);
+
+ if ((FR(fecp, r_cntrl) & FEC_RCNTRL_MII_MODE) == 0) {
+ FS(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FS(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+ FW(fecp, ievent, FEC_ENET_MII);
+ }
+
+ /* Add PHY address to register command. */
+ FW(fecp, mii_speed, fep->fec_phy_speed);
+ FW(fecp, mii_data, (phy_id << 23) | mk_mii_read(location));
+
+ for (i = 0; i < FEC_MII_LOOPS; i++)
+ if ((FR(fecp, ievent) & FEC_ENET_MII) != 0)
+ break;
+
+ if (i < FEC_MII_LOOPS) {
+ FW(fecp, ievent, FEC_ENET_MII);
+ ret = FR(fecp, mii_data) & 0xffff;
+ }
+
+ spin_unlock_irqrestore(&fec_mii_lock, flags);
+
+ return ret;
+}
+
+void fec_mii_write(struct net_device *dev, int phy_id, int location, int value)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ fec_t *fecp;
+ unsigned long flags;
+ int i;
+
+ /* XXX MII interface is only connected to FEC1 */
+ fecp = &((immap_t *) IMAP_ADDR)->im_cpm.cp_fec;
+
+ spin_lock_irqsave(&fec_mii_lock, flags);
+
+ if ((FR(fecp, r_cntrl) & FEC_RCNTRL_MII_MODE) == 0) {
+ FS(fecp, r_cntrl, FEC_RCNTRL_MII_MODE); /* MII enable */
+ FS(fecp, ecntrl, FEC_ECNTRL_PINMUX | FEC_ECNTRL_ETHER_EN);
+ FW(fecp, ievent, FEC_ENET_MII);
+ }
+
+ /* Add PHY address to register command. */
+ FW(fecp, mii_speed, fep->fec_phy_speed); /* always adapt mii speed */
+ FW(fecp, mii_data, (phy_id << 23) | mk_mii_write(location, value));
+
+ for (i = 0; i < FEC_MII_LOOPS; i++)
+ if ((FR(fecp, ievent) & FEC_ENET_MII) != 0)
+ break;
+
+ if (i < FEC_MII_LOOPS)
+ FW(fecp, ievent, FEC_ENET_MII);
+
+ spin_unlock_irqrestore(&fec_mii_lock, flags);
+}
+
+/*************************************************/
+
+#ifdef CONFIG_FEC_8XX_GENERIC_PHY
+
+/*
+ * Generic PHY support.
+ * Should work for all PHYs, but link change is detected by polling
+ */
+
+static void generic_timer_callback(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->phy_timer_list.expires = jiffies + HZ / 2;
+
+ add_timer(&fep->phy_timer_list);
+
+ fec_mii_link_status_change_check(dev, 0);
+}
+
+static void generic_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fep->phy_timer_list.expires = jiffies + HZ / 2; /* every 500ms */
+ fep->phy_timer_list.data = (unsigned long)dev;
+ fep->phy_timer_list.function = generic_timer_callback;
+ add_timer(&fep->phy_timer_list);
+}
+
+static void generic_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ del_timer_sync(&fep->phy_timer_list);
+}
+
+#endif
+
+#ifdef CONFIG_FEC_8XX_DM9161_PHY
+
+/* ------------------------------------------------------------------------- */
+/* The Davicom DM9161 is used on the NETTA board */
+
+/* register definitions */
+
+#define MII_DM9161_ACR 16 /* Aux. Config Register */
+#define MII_DM9161_ACSR 17 /* Aux. Config/Status Register */
+#define MII_DM9161_10TCSR 18 /* 10BaseT Config/Status Reg. */
+#define MII_DM9161_INTR 21 /* Interrupt Register */
+#define MII_DM9161_RECR 22 /* Receive Error Counter Reg. */
+#define MII_DM9161_DISCR 23 /* Disconnect Counter Register */
+
+static void dm9161_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_write(dev, fep->mii_if.phy_id, MII_DM9161_INTR, 0x0000);
+}
+
+static void dm9161_ack_int(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_read(dev, fep->mii_if.phy_id, MII_DM9161_INTR);
+}
+
+static void dm9161_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+
+ fec_mii_write(dev, fep->mii_if.phy_id, MII_DM9161_INTR, 0x0f00);
+}
+
+#endif
+
+/**********************************************************************************/
+
+static const struct phy_info phy_info[] = {
+#ifdef CONFIG_FEC_8XX_DM9161_PHY
+ {
+ .id = 0x00181b88,
+ .name = "DM9161",
+ .startup = dm9161_startup,
+ .ack_int = dm9161_ack_int,
+ .shutdown = dm9161_shutdown,
+ },
+#endif
+#ifdef CONFIG_FEC_8XX_GENERIC_PHY
+ {
+ .id = 0,
+ .name = "GENERIC",
+ .startup = generic_startup,
+ .shutdown = generic_shutdown,
+ },
+#endif
+};
+
+/**********************************************************************************/
+
+int fec_mii_phy_id_detect(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+ int i, r, start, end, phytype, physubtype;
+ const struct phy_info *phy;
+ int phy_hwid, phy_id;
+
+ /* if no MDIO */
+ if (fpi->use_mdio == 0)
+ return -1;
+
+ phy_hwid = -1;
+ fep->phy = NULL;
+
+ /* auto-detect? */
+ if (fpi->phy_addr == -1) {
+ start = 0;
+ end = 32;
+ } else { /* direct */
+ start = fpi->phy_addr;
+ end = start + 1;
+ }
+
+ for (phy_id = start; phy_id < end; phy_id++) {
+ r = fec_mii_read(dev, phy_id, MII_PHYSID1);
+ if (r == -1 || (phytype = (r & 0xffff)) == 0xffff)
+ continue;
+ r = fec_mii_read(dev, phy_id, MII_PHYSID2);
+ if (r == -1 || (physubtype = (r & 0xffff)) == 0xffff)
+ continue;
+ phy_hwid = (phytype << 16) | physubtype;
+ if (phy_hwid != -1)
+ break;
+ }
+
+ if (phy_hwid == -1) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s No PHY detected!\n", dev->name);
+ return -1;
+ }
+
+ for (i = 0, phy = phy_info; i < sizeof(phy_info) / sizeof(phy_info[0]);
+ i++, phy++)
+ if (phy->id == (phy_hwid >> 4) || phy->id == 0)
+ break;
+
+ if (i >= sizeof(phy_info) / sizeof(phy_info[0])) {
+ printk(KERN_ERR DRV_MODULE_NAME
+ ": %s PHY id 0x%08x is not supported!\n",
+ dev->name, phy_hwid);
+ return -1;
+ }
+
+ fep->phy = phy;
+
+ printk(KERN_INFO DRV_MODULE_NAME
+ ": %s Phy @ 0x%x, type %s (0x%08x)\n",
+ dev->name, phy_id, fep->phy->name, phy_hwid);
+
+ return phy_id;
+}
+
+void fec_mii_startup(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->startup == NULL)
+ return;
+
+ (*fep->phy->startup) (dev);
+}
+
+void fec_mii_shutdown(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->shutdown == NULL)
+ return;
+
+ (*fep->phy->shutdown) (dev);
+}
+
+void fec_mii_ack_int(struct net_device *dev)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ const struct fec_platform_info *fpi = fep->fpi;
+
+ if (!fpi->use_mdio || fep->phy == NULL)
+ return;
+
+ if (fep->phy->ack_int == NULL)
+ return;
+
+ (*fep->phy->ack_int) (dev);
+}
+
+/*
+ * Return the negotiated media (the highest common set of advertised and
+ * link partner abilities), or 0 if the link is down or autonegotiation
+ * has not completed.
+ */
+static int mii_negotiated(struct mii_if_info *mii)
+{
+ int advert, lpa, val;
+
+ if (!mii_link_ok(mii))
+ return 0;
+
+ val = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_BMSR);
+ if ((val & BMSR_ANEGCOMPLETE) == 0)
+ return 0;
+
+ advert = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_ADVERTISE);
+ lpa = (*mii->mdio_read) (mii->dev, mii->phy_id, MII_LPA);
+
+ return mii_nway_result(advert & lpa);
+}
+
+void fec_mii_link_status_change_check(struct net_device *dev, int init_media)
+{
+ struct fec_enet_private *fep = netdev_priv(dev);
+ unsigned int media;
+ unsigned long flags;
+
+ if (mii_check_media(&fep->mii_if, netif_msg_link(fep), init_media) == 0)
+ return;
+
+ media = mii_negotiated(&fep->mii_if);
+
+ if (netif_carrier_ok(dev)) {
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_restart(dev, !!(media & ADVERTISE_FULL),
+ (media & (ADVERTISE_100FULL | ADVERTISE_100HALF)) ?
+ 100 : 10);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ netif_start_queue(dev);
+ } else {
+ netif_stop_queue(dev);
+
+ spin_lock_irqsave(&fep->lock, flags);
+ fec_stop(dev);
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ }
+}
--- /dev/null
+/*
+ * ibm_emac.h
+ *
+ *
+ * Armin Kuster akuster@mvista.com
+ * June, 2002
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_H_
+#define _IBM_EMAC_H_
+/* General defines needed for the driver */
+
+/* EMAC register block */
+typedef struct emac_regs {
+ u32 em0mr0;
+ u32 em0mr1;
+ u32 em0tmr0;
+ u32 em0tmr1;
+ u32 em0rmr;
+ u32 em0isr;
+ u32 em0iser;
+ u32 em0iahr;
+ u32 em0ialr;
+ u32 em0vtpid;
+ u32 em0vtci;
+ u32 em0ptr;
+ u32 em0iaht1;
+ u32 em0iaht2;
+ u32 em0iaht3;
+ u32 em0iaht4;
+ u32 em0gaht1;
+ u32 em0gaht2;
+ u32 em0gaht3;
+ u32 em0gaht4;
+ u32 em0lsah;
+ u32 em0lsal;
+ u32 em0ipgvr;
+ u32 em0stacr;
+ u32 em0trtr;
+ u32 em0rwmr;
+} emac_t;
+
+/* MODE REG 0 */
+#define EMAC_M0_RXI 0x80000000
+#define EMAC_M0_TXI 0x40000000
+#define EMAC_M0_SRST 0x20000000
+#define EMAC_M0_TXE 0x10000000
+#define EMAC_M0_RXE 0x08000000
+#define EMAC_M0_WKE 0x04000000
+
+/* MODE Reg 1 */
+#define EMAC_M1_FDE 0x80000000
+#define EMAC_M1_ILE 0x40000000
+#define EMAC_M1_VLE 0x20000000
+#define EMAC_M1_EIFC 0x10000000
+#define EMAC_M1_APP 0x08000000
+#define EMAC_M1_AEMI 0x02000000
+#define EMAC_M1_IST 0x01000000
+#define EMAC_M1_MF_1000GPCS 0x00c00000 /* Internal GPCS */
+#define EMAC_M1_MF_1000MBPS 0x00800000 /* External GPCS */
+#define EMAC_M1_MF_100MBPS 0x00400000
+#define EMAC_M1_RFS_16K 0x00280000 /* 000 for 512 byte */
+#define EMAC_M1_TR 0x00008000
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_M1_RFS_8K 0x00200000
+#define EMAC_M1_RFS_4K 0x00180000
+#define EMAC_M1_RFS_2K 0x00100000
+#define EMAC_M1_RFS_1K 0x00080000
+#define EMAC_M1_TX_FIFO_16K 0x00050000 /* 0's for 512 byte */
+#define EMAC_M1_TX_FIFO_8K 0x00040000
+#define EMAC_M1_TX_FIFO_4K 0x00030000
+#define EMAC_M1_TX_FIFO_2K 0x00020000
+#define EMAC_M1_TX_FIFO_1K 0x00010000
+#define EMAC_M1_TX_TR 0x00008000
+#define EMAC_M1_TX_MWSW 0x00001000 /* 0 wait for status */
+#define EMAC_M1_JUMBO_ENABLE 0x00000800 /* Up to 9K frames */
+#define EMAC_M1_OPB_CLK_66 0x00000008 /* 66 MHz */
+#define EMAC_M1_OPB_CLK_83 0x00000010 /* 83 MHz */
+#define EMAC_M1_OPB_CLK_100 0x00000018 /* 100 MHz */
+#define EMAC_M1_OPB_CLK_100P 0x00000020 /* 100 MHz+ */
+#else /* CONFIG_IBM_EMAC4 */
+#define EMAC_M1_RFS_4K 0x00300000 /* ~4k for 512 byte */
+#define EMAC_M1_RFS_2K 0x00200000
+#define EMAC_M1_RFS_1K 0x00100000
+#define EMAC_M1_TX_FIFO_2K 0x00080000 /* 0's for 512 byte */
+#define EMAC_M1_TX_FIFO_1K 0x00040000
+#define EMAC_M1_TR0_DEPEND 0x00010000 /* 0's for single packet */
+#define EMAC_M1_TR1_DEPEND 0x00004000
+#define EMAC_M1_TR1_MULTI 0x00002000
+#define EMAC_M1_JUMBO_ENABLE 0x00001000
+#endif /* CONFIG_IBM_EMAC4 */
+#define EMAC_M1_BASE (EMAC_M1_TX_FIFO_2K | \
+ EMAC_M1_APP | \
+ EMAC_M1_TR)
+
+/* Transmit Mode Register 0 */
+#define EMAC_TMR0_GNP0 0x80000000
+#define EMAC_TMR0_GNP1 0x40000000
+#define EMAC_TMR0_GNPD 0x20000000
+#define EMAC_TMR0_FC 0x10000000
+#define EMAC_TMR0_TFAE_2_32 0x00000001
+#define EMAC_TMR0_TFAE_4_64 0x00000002
+#define EMAC_TMR0_TFAE_8_128 0x00000003
+#define EMAC_TMR0_TFAE_16_256 0x00000004
+#define EMAC_TMR0_TFAE_32_512 0x00000005
+#define EMAC_TMR0_TFAE_64_1024 0x00000006
+#define EMAC_TMR0_TFAE_128_2048 0x00000007
+
+/* Receive Mode Register */
+#define EMAC_RMR_SP 0x80000000
+#define EMAC_RMR_SFCS 0x40000000
+#define EMAC_RMR_ARRP 0x20000000
+#define EMAC_RMR_ARP 0x10000000
+#define EMAC_RMR_AROP 0x08000000
+#define EMAC_RMR_ARPI 0x04000000
+#define EMAC_RMR_PPP 0x02000000
+#define EMAC_RMR_PME 0x01000000
+#define EMAC_RMR_PMME 0x00800000
+#define EMAC_RMR_IAE 0x00400000
+#define EMAC_RMR_MIAE 0x00200000
+#define EMAC_RMR_BAE 0x00100000
+#define EMAC_RMR_MAE 0x00080000
+#define EMAC_RMR_RFAF_2_32 0x00000001
+#define EMAC_RMR_RFAF_4_64 0x00000002
+#define EMAC_RMR_RFAF_8_128 0x00000003
+#define EMAC_RMR_RFAF_16_256 0x00000004
+#define EMAC_RMR_RFAF_32_512 0x00000005
+#define EMAC_RMR_RFAF_64_1024 0x00000006
+#define EMAC_RMR_RFAF_128_2048 0x00000007
+#define EMAC_RMR_BASE (EMAC_RMR_IAE | EMAC_RMR_BAE)
+
+/* Interrupt Status & Enable Registers */
+#define EMAC_ISR_OVR 0x02000000
+#define EMAC_ISR_PP 0x01000000
+#define EMAC_ISR_BP 0x00800000
+#define EMAC_ISR_RP 0x00400000
+#define EMAC_ISR_SE 0x00200000
+#define EMAC_ISR_ALE 0x00100000
+#define EMAC_ISR_BFCS 0x00080000
+#define EMAC_ISR_PTLE 0x00040000
+#define EMAC_ISR_ORE 0x00020000
+#define EMAC_ISR_IRE 0x00010000
+#define EMAC_ISR_DBDM 0x00000200
+#define EMAC_ISR_DB0 0x00000100
+#define EMAC_ISR_SE0 0x00000080
+#define EMAC_ISR_TE0 0x00000040
+#define EMAC_ISR_DB1 0x00000020
+#define EMAC_ISR_SE1 0x00000010
+#define EMAC_ISR_TE1 0x00000008
+#define EMAC_ISR_MOS 0x00000002
+#define EMAC_ISR_MOF 0x00000001
+
+/* STA CONTROL REG */
+#define EMAC_STACR_OC 0x00008000
+#define EMAC_STACR_PHYE 0x00004000
+#define EMAC_STACR_WRITE 0x00002000
+#define EMAC_STACR_READ 0x00001000
+#define EMAC_STACR_CLK_83MHZ 0x00000800 /* 0's for 50 MHz */
+#define EMAC_STACR_CLK_66MHZ 0x00000400
+#define EMAC_STACR_CLK_100MHZ 0x00000C00
+
+/* Transmit Request Threshold Register */
+#define EMAC_TRTR_1600 0x18000000 /* 0's for 64 Bytes */
+#define EMAC_TRTR_1024 0x0f000000
+#define EMAC_TRTR_512 0x07000000
+#define EMAC_TRTR_256 0x03000000
+#define EMAC_TRTR_192 0x02000000
+#define EMAC_TRTR_128 0x01000000
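+
+/*
+ * The threshold occupies the top five bits of TRTR and encodes
+ * (n + 1) * 64 bytes: 0x18 -> 1600, 0x0f -> 1024, ..., 0x02 -> 192,
+ * 0x01 -> 128.
+ */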
+
+#define EMAC_TX_CTRL_GFCS 0x0200
+#define EMAC_TX_CTRL_GP 0x0100
+#define EMAC_TX_CTRL_ISA 0x0080
+#define EMAC_TX_CTRL_RSA 0x0040
+#define EMAC_TX_CTRL_IVT 0x0020
+#define EMAC_TX_CTRL_RVT 0x0010
+#define EMAC_TX_CTRL_TAH_CSUM 0x000e /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG4 0x000a /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG3 0x0008 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG2 0x0006 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG1 0x0004 /* TAH only */
+#define EMAC_TX_CTRL_TAH_SEG0 0x0002 /* TAH only */
+#define EMAC_TX_CTRL_TAH_DIS 0x0000 /* TAH only */
+
+#define EMAC_TX_CTRL_DFLT ( \
+ MAL_TX_CTRL_INTR | EMAC_TX_CTRL_GFCS | EMAC_TX_CTRL_GP )
+
+/* madmal transmit status / Control bits */
+#define EMAC_TX_ST_BFCS 0x0200
+#define EMAC_TX_ST_BPP 0x0100
+#define EMAC_TX_ST_LCS 0x0080
+#define EMAC_TX_ST_ED 0x0040
+#define EMAC_TX_ST_EC 0x0020
+#define EMAC_TX_ST_LC 0x0010
+#define EMAC_TX_ST_MC 0x0008
+#define EMAC_TX_ST_SC 0x0004
+#define EMAC_TX_ST_UR 0x0002
+#define EMAC_TX_ST_SQE 0x0001
+
+/* madmal receive status / Control bits */
+#define EMAC_RX_ST_OE 0x0200
+#define EMAC_RX_ST_PP 0x0100
+#define EMAC_RX_ST_BP 0x0080
+#define EMAC_RX_ST_RP 0x0040
+#define EMAC_RX_ST_SE 0x0020
+#define EMAC_RX_ST_AE 0x0010
+#define EMAC_RX_ST_BFCS 0x0008
+#define EMAC_RX_ST_PTL 0x0004
+#define EMAC_RX_ST_ORE 0x0002
+#define EMAC_RX_ST_IRE 0x0001
+#define EMAC_BAD_RX_PACKET 0x02ff
+#define EMAC_CSUM_VER_ERROR 0x0003
+
+/* identify a bad rx packet dependent on emac features */
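+/*
+ * On EMAC4 the two low status bits double as a checksum-verification
+ * code, so ORE/IRE only count as errors when one of them appears as
+ * the complete two-bit code; all other error bits are checked as-is.
+ */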
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_IS_BAD_RX_PACKET(desc) \
+ (((desc & (EMAC_BAD_RX_PACKET & ~EMAC_CSUM_VER_ERROR)) || \
+ ((desc & EMAC_CSUM_VER_ERROR) == EMAC_RX_ST_ORE) || \
+ ((desc & EMAC_CSUM_VER_ERROR) == EMAC_RX_ST_IRE)))
+#else
+#define EMAC_IS_BAD_RX_PACKET(desc) \
+ (desc & EMAC_BAD_RX_PACKET)
+#endif
+
+/* SoC implementation specific EMAC register defaults */
+#if defined(CONFIG_440GP)
+#define EMAC_RWMR_DEFAULT 0x80009000
+#define EMAC_TMR0_DEFAULT 0x00000000
+#define EMAC_TMR1_DEFAULT 0xf8640000
+#elif defined(CONFIG_440GX)
+#define EMAC_RWMR_DEFAULT 0x1000a200
+#define EMAC_TMR0_DEFAULT EMAC_TMR0_TFAE_2_32
+#define EMAC_TMR1_DEFAULT 0xa00f0000
+#else
+#define EMAC_RWMR_DEFAULT 0x0f002000
+#define EMAC_TMR0_DEFAULT 0x00000000
+#define EMAC_TMR1_DEFAULT 0x380f0000
+#endif /* CONFIG_440GP */
+
+/* Revision specific EMAC register defaults */
+#ifdef CONFIG_IBM_EMAC4
+#define EMAC_M1_DEFAULT (EMAC_M1_BASE | \
+ EMAC_M1_OPB_CLK_83 | \
+ EMAC_M1_TX_MWSW)
+#define EMAC_RMR_DEFAULT (EMAC_RMR_BASE | \
+ EMAC_RMR_RFAF_128_2048)
+#define EMAC_TMR0_XMIT (EMAC_TMR0_GNP0 | \
+ EMAC_TMR0_DEFAULT)
+#define EMAC_TRTR_DEFAULT EMAC_TRTR_1024
+#else /* !CONFIG_IBM_EMAC4 */
+#define EMAC_M1_DEFAULT EMAC_M1_BASE
+#define EMAC_RMR_DEFAULT EMAC_RMR_BASE
+#define EMAC_TMR0_XMIT EMAC_TMR0_GNP0
+#define EMAC_TRTR_DEFAULT EMAC_TRTR_1600
+#endif /* CONFIG_IBM_EMAC4 */
+
+#endif
--- /dev/null
+/*
+ * ibm_emac_core.c
+ *
+ * Ethernet driver for the built-in Ethernet on the IBM 4xx PowerPC
+ * processors.
+ *
+ * (c) 2003 Benjamin Herrenschmidt <benh@kernel.crashing.org>
+ *
+ * Based on original work by
+ *
+ * Armin Kuster <akuster@mvista.com>
+ * Johnnie Peters <jpeters@mvista.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ *
+ * TODO
+ * - Check for races in the "remove" code path
+ * - Add some Power Management to the MAC and the PHY
+ * - Audit the remaining non-rewritten code (--BenH)
+ * - Clean up message display using the msglevel mechanism
+ * - Address all errata
+ * - Audit all register update paths to ensure they
+ * are being written post soft reset if required.
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/timer.h>
+#include <linux/ptrace.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/dma-mapping.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/bitops.h>
+
+#include <asm/processor.h>
+#include <asm/io.h>
+#include <asm/dma.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/ocp.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/crc32.h>
+
+#include "ibm_emac_core.h"
+
+//#define MDIO_DEBUG(fmt) printk fmt
+#define MDIO_DEBUG(fmt)
+
+//#define LINK_DEBUG(fmt) printk fmt
+#define LINK_DEBUG(fmt)
+
+//#define PKT_DEBUG(fmt) printk fmt
+#define PKT_DEBUG(fmt)
+
+#define DRV_NAME "emac"
+#define DRV_VERSION "2.0"
+#define DRV_AUTHOR "Benjamin Herrenschmidt <benh@kernel.crashing.org>"
+#define DRV_DESC "IBM EMAC Ethernet driver"
+
+/*
+ * When mdio_idx >= 0, contains a list of emac ocp_devs
+ * that have had their initialization deferred until the
+ * common MDIO controller has been initialized.
+ */
+LIST_HEAD(emac_init_list);
+
+MODULE_AUTHOR(DRV_AUTHOR);
+MODULE_DESCRIPTION(DRV_DESC);
+MODULE_LICENSE("GPL");
+
+static int skb_res = SKB_RES;
+module_param(skb_res, int, 0444);
+MODULE_PARM_DESC(skb_res, "Amount of data to reserve on skb buffs\n"
+ "The 405 handles a misaligned IP header fine but\n"
+ "this can help if you are routing to a tunnel or a\n"
+ "device that needs aligned data. 0..2");
+
+#define RGMII_PRIV(ocpdev) ((struct ibm_ocp_rgmii*)ocp_get_drvdata(ocpdev))
+
+static unsigned int rgmii_enable[] = {
+ RGMII_RTBI,
+ RGMII_RGMII,
+ RGMII_TBI,
+ RGMII_GMII
+};
+
+static unsigned int rgmii_speed_mask[] = {
+ RGMII_MII2_SPDMASK,
+ RGMII_MII3_SPDMASK
+};
+
+static unsigned int rgmii_speed100[] = {
+ RGMII_MII2_100MB,
+ RGMII_MII3_100MB
+};
+
+static unsigned int rgmii_speed1000[] = {
+ RGMII_MII2_1000MB,
+ RGMII_MII3_1000MB
+};
+
+#define ZMII_PRIV(ocpdev) ((struct ibm_ocp_zmii*)ocp_get_drvdata(ocpdev))
+
+static unsigned int zmii_enable[][4] = {
+ {ZMII_SMII0, ZMII_RMII0, ZMII_MII0,
+ ~(ZMII_MDI1 | ZMII_MDI2 | ZMII_MDI3)},
+ {ZMII_SMII1, ZMII_RMII1, ZMII_MII1,
+ ~(ZMII_MDI0 | ZMII_MDI2 | ZMII_MDI3)},
+ {ZMII_SMII2, ZMII_RMII2, ZMII_MII2,
+ ~(ZMII_MDI0 | ZMII_MDI1 | ZMII_MDI3)},
+ {ZMII_SMII3, ZMII_RMII3, ZMII_MII3, ~(ZMII_MDI0 | ZMII_MDI1 | ZMII_MDI2)}
+};
+
+static unsigned int mdi_enable[] = {
+ ZMII_MDI0,
+ ZMII_MDI1,
+ ZMII_MDI2,
+ ZMII_MDI3
+};
+
+static unsigned int zmii_speed = 0x0;
+static unsigned int zmii_speed100[] = {
+ ZMII_MII0_100MB,
+ ZMII_MII1_100MB,
+ ZMII_MII2_100MB,
+ ZMII_MII3_100MB
+};
+
+/* Since multiple EMACs share MDIO lines in various ways, we need
+ * to avoid re-using the same PHY ID in cases where the arch didn't
+ * set up precise phy_map entries.
+ */
+static u32 busy_phy_map = 0;
+
+/* If EMACs share a common MDIO device, this points to it */
+static struct net_device *mdio_ndev = NULL;
+
+struct emac_def_dev {
+ struct list_head link;
+ struct ocp_device *ocpdev;
+ struct ibm_ocp_mal *mal;
+};
+
+static struct net_device_stats *emac_stats(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ return &fep->stats;
+}
+
+static int
+emac_init_rgmii(struct ocp_device *rgmii_dev, int input, int phy_mode)
+{
+ struct ibm_ocp_rgmii *rgmii = RGMII_PRIV(rgmii_dev);
+ const char *mode_name[] = { "RTBI", "RGMII", "TBI", "GMII" };
+ int mode = -1;
+
+ if (!rgmii) {
+ rgmii = kmalloc(sizeof(struct ibm_ocp_rgmii), GFP_KERNEL);
+
+ if (rgmii == NULL) {
+ printk(KERN_ERR
+ "rgmii%d: Out of memory allocating RGMII structure!\n",
+ rgmii_dev->def->index);
+ return -ENOMEM;
+ }
+
+ memset(rgmii, 0, sizeof(*rgmii));
+
+ rgmii->base =
+ (struct rgmii_regs *)ioremap(rgmii_dev->def->paddr,
+ sizeof(*rgmii->base));
+ if (rgmii->base == NULL) {
+ printk(KERN_ERR
+ "rgmii%d: Cannot ioremap bridge registers!\n",
+ rgmii_dev->def->index);
+
+ kfree(rgmii);
+ return -ENOMEM;
+ }
+ ocp_set_drvdata(rgmii_dev, rgmii);
+ }
+
+ if (phy_mode) {
+ switch (phy_mode) {
+ case PHY_MODE_GMII:
+ mode = GMII;
+ break;
+ case PHY_MODE_TBI:
+ mode = TBI;
+ break;
+ case PHY_MODE_RTBI:
+ mode = RTBI;
+ break;
+ case PHY_MODE_RGMII:
+ default:
+ mode = RGMII;
+ }
+ rgmii->base->fer &= ~RGMII_FER_MASK(input);
+ rgmii->base->fer |= rgmii_enable[mode] << (4 * input);
+ } else {
+ switch ((rgmii->base->fer & RGMII_FER_MASK(input)) >> (4 *
+ input)) {
+ case RGMII_RTBI:
+ mode = RTBI;
+ break;
+ case RGMII_RGMII:
+ mode = RGMII;
+ break;
+ case RGMII_TBI:
+ mode = TBI;
+ break;
+ case RGMII_GMII:
+ mode = GMII;
+ }
+ }
+
+ /* Set mode to RGMII if nothing valid is detected */
+ if (mode < 0)
+ mode = RGMII;
+
+ printk(KERN_NOTICE "rgmii%d: input %d in %s mode\n",
+ rgmii_dev->def->index, input, mode_name[mode]);
+
+ rgmii->mode[input] = mode;
+ rgmii->users++;
+
+ return 0;
+}
+
+static void
+emac_rgmii_port_speed(struct ocp_device *ocpdev, int input, int speed)
+{
+ struct ibm_ocp_rgmii *rgmii = RGMII_PRIV(ocpdev);
+ unsigned int rgmii_speed;
+
+ rgmii_speed = in_be32(&rgmii->base->ssr);
+
+ rgmii_speed &= ~rgmii_speed_mask[input];
+
+ if (speed == 1000)
+ rgmii_speed |= rgmii_speed1000[input];
+ else if (speed == 100)
+ rgmii_speed |= rgmii_speed100[input];
+
+ out_be32(&rgmii->base->ssr, rgmii_speed);
+}
+
+static void emac_close_rgmii(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_rgmii *rgmii = RGMII_PRIV(ocpdev);
+ BUG_ON(!rgmii || rgmii->users == 0);
+
+ if (!--rgmii->users) {
+ ocp_set_drvdata(ocpdev, NULL);
+ iounmap((void *)rgmii->base);
+ kfree(rgmii);
+ }
+}
+
+static int emac_init_zmii(struct ocp_device *zmii_dev, int input, int phy_mode)
+{
+ struct ibm_ocp_zmii *zmii = ZMII_PRIV(zmii_dev);
+ const char *mode_name[] = { "SMII", "RMII", "MII" };
+ int mode = -1;
+
+ if (!zmii) {
+ zmii = kmalloc(sizeof(struct ibm_ocp_zmii), GFP_KERNEL);
+ if (zmii == NULL) {
+ printk(KERN_ERR
+ "zmii%d: Out of memory allocating ZMII structure!\n",
+ zmii_dev->def->index);
+ return -ENOMEM;
+ }
+ memset(zmii, 0, sizeof(*zmii));
+
+ zmii->base =
+ (struct zmii_regs *)ioremap(zmii_dev->def->paddr,
+ sizeof(*zmii->base));
+ if (zmii->base == NULL) {
+ printk(KERN_ERR
+ "zmii%d: Cannot ioremap bridge registers!\n",
+ zmii_dev->def->index);
+
+ kfree(zmii);
+ return -ENOMEM;
+ }
+ ocp_set_drvdata(zmii_dev, zmii);
+ }
+
+ if (phy_mode) {
+ switch (phy_mode) {
+ case PHY_MODE_MII:
+ mode = MII;
+ break;
+ case PHY_MODE_RMII:
+ mode = RMII;
+ break;
+ case PHY_MODE_SMII:
+ default:
+ mode = SMII;
+ }
+ zmii->base->fer &= ~ZMII_FER_MASK(input);
+ zmii->base->fer |= zmii_enable[input][mode];
+ } else {
+ switch ((zmii->base->fer & ZMII_FER_MASK(input)) << (4 * input)) {
+ case ZMII_MII0:
+ mode = MII;
+ break;
+ case ZMII_RMII0:
+ mode = RMII;
+ break;
+ case ZMII_SMII0:
+ mode = SMII;
+ }
+ }
+
+ /* Set mode to SMII if nothing valid is detected */
+ if (mode < 0)
+ mode = SMII;
+
+ printk(KERN_NOTICE "zmii%d: input %d in %s mode\n",
+ zmii_dev->def->index, input, mode_name[mode]);
+
+ zmii->mode[input] = mode;
+ zmii->users++;
+
+ return 0;
+}
+
+static void emac_enable_zmii_port(struct ocp_device *ocpdev, int input)
+{
+ u32 mask;
+ struct ibm_ocp_zmii *zmii = ZMII_PRIV(ocpdev);
+
+ mask = in_be32(&zmii->base->fer);
+ mask &= zmii_enable[input][MDI]; /* turn off all non-enabled MDIs */
+ mask |= zmii_enable[input][zmii->mode[input]] | mdi_enable[input];
+ out_be32(&zmii->base->fer, mask);
+}
+
+static void
+emac_zmii_port_speed(struct ocp_device *ocpdev, int input, int speed)
+{
+ struct ibm_ocp_zmii *zmii = ZMII_PRIV(ocpdev);
+
+ if (speed == 100)
+ zmii_speed |= zmii_speed100[input];
+ else
+ zmii_speed &= ~zmii_speed100[input];
+
+ out_be32(&zmii->base->ssr, zmii_speed);
+}
+
+static void emac_close_zmii(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_zmii *zmii = ZMII_PRIV(ocpdev);
+ BUG_ON(!zmii || zmii->users == 0);
+
+ if (!--zmii->users) {
+ ocp_set_drvdata(ocpdev, NULL);
+ iounmap((void *)zmii->base);
+ kfree(zmii);
+ }
+}
+
+int emac_phy_read(struct net_device *dev, int mii_id, int reg)
+{
+ int count;
+ uint32_t stacr;
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+
+ MDIO_DEBUG(("%s: phy_read, id: 0x%x, reg: 0x%x\n", dev->name, mii_id,
+ reg));
+
+ /* Enable proper ZMII port */
+ if (fep->zmii_dev)
+ emac_enable_zmii_port(fep->zmii_dev, fep->zmii_input);
+
+ /* Use the EMAC that has the MDIO port */
+ if (fep->mdio_dev) {
+ dev = fep->mdio_dev;
+ fep = dev->priv;
+ emacp = fep->emacp;
+ }
+
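+ /* MDIO goes through the EMAC's STACR register: the low five bits
+ * carry the PHY register number, bits 5-9 the PHY address, and the
+ * opcode selects read or write; read data comes back in the upper
+ * 16 bits. The hardware sets OC (operation complete) when the bus
+ * is idle, so wait for it both before issuing the command and
+ * before trusting the result.
+ */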
+ count = 0;
+ while ((((stacr = in_be32(&emacp->em0stacr)) & EMAC_STACR_OC) == 0)
+ && (count++ < MDIO_DELAY))
+ udelay(1);
+ MDIO_DEBUG((" (count was %d)\n", count));
+
+ if ((stacr & EMAC_STACR_OC) == 0) {
+ printk(KERN_WARNING "%s: PHY read timeout #1!\n", dev->name);
+ return -1;
+ }
+
+ /* Clear the speed bits and make a read request to the PHY */
+ stacr = ((EMAC_STACR_READ | (reg & 0x1f)) & ~EMAC_STACR_CLK_100MHZ);
+ stacr |= ((mii_id & 0x1F) << 5);
+
+ out_be32(&emacp->em0stacr, stacr);
+
+ count = 0;
+ while ((((stacr = in_be32(&emacp->em0stacr)) & EMAC_STACR_OC) == 0)
+ && (count++ < MDIO_DELAY))
+ udelay(1);
+ MDIO_DEBUG((" (count was %d)\n", count));
+
+ if ((stacr & EMAC_STACR_OC) == 0) {
+ printk(KERN_WARNING "%s: PHY read timeout #2!\n", dev->name);
+ return -1;
+ }
+
+ /* Check for a read error */
+ if (stacr & EMAC_STACR_PHYE) {
+ MDIO_DEBUG(("EMAC MDIO PHY error !\n"));
+ return -1;
+ }
+
+ MDIO_DEBUG((" -> 0x%x\n", stacr >> 16));
+
+ return (stacr >> 16);
+}
+
+void emac_phy_write(struct net_device *dev, int mii_id, int reg, int data)
+{
+ int count;
+ uint32_t stacr;
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+
+ MDIO_DEBUG(("%s phy_write, id: 0x%x, reg: 0x%x, data: 0x%x\n",
+ dev->name, mii_id, reg, data));
+
+ /* Enable proper ZMII port */
+ if (fep->zmii_dev)
+ emac_enable_zmii_port(fep->zmii_dev, fep->zmii_input);
+
+ /* Use the EMAC that has the MDIO port */
+ if (fep->mdio_dev) {
+ dev = fep->mdio_dev;
+ fep = dev->priv;
+ emacp = fep->emacp;
+ }
+
+ count = 0;
+ while ((((stacr = in_be32(&emacp->em0stacr)) & EMAC_STACR_OC) == 0)
+ && (count++ < MDIO_DELAY))
+ udelay(1);
+ MDIO_DEBUG((" (count was %d)\n", count));
+
+ if ((stacr & EMAC_STACR_OC) == 0) {
+ printk(KERN_WARNING "%s: PHY write timeout #1!\n", dev->name);
+ return;
+ }
+
+ /* Clear the speed bits and make a write request to the PHY */
+ stacr = ((EMAC_STACR_WRITE | (reg & 0x1f)) & ~EMAC_STACR_CLK_100MHZ);
+ stacr |= ((mii_id & 0x1f) << 5) | ((data & 0xffff) << 16);
+
+ out_be32(&emacp->em0stacr, stacr);
+
+ count = 0;
+ while ((((stacr = in_be32(&emacp->em0stacr)) & EMAC_STACR_OC) == 0)
+ && (count++ < MDIO_DELAY))
+ udelay(1);
+ MDIO_DEBUG((" (count was %d)\n", count));
+
+ if ((stacr & EMAC_STACR_OC) == 0)
+ printk(KERN_WARNING "%s: PHY write timeout #2!\n", dev->name);
+
+ /* Check for a write error */
+ if ((stacr & EMAC_STACR_PHYE) != 0) {
+ MDIO_DEBUG(("EMAC MDIO PHY error !\n"));
+ }
+}
+
+static void emac_txeob_dev(void *param, u32 chanmask)
+{
+ struct net_device *dev = param;
+ struct ocp_enet_private *fep = dev->priv;
+ unsigned long flags;
+
+ spin_lock_irqsave(&fep->lock, flags);
+
+ PKT_DEBUG(("emac_txeob_dev() entry, tx_cnt: %d\n", fep->tx_cnt));
+
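+ /* Reap finished TX descriptors starting at ack_slot: the MAL clears
+ * READY once it has sent a buffer, and the skb is unmapped and freed
+ * on the LAST descriptor of each packet.
+ */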
+ while (fep->tx_cnt &&
+ !(fep->tx_desc[fep->ack_slot].ctrl & MAL_TX_CTRL_READY)) {
+
+ if (fep->tx_desc[fep->ack_slot].ctrl & MAL_TX_CTRL_LAST) {
+ /* Tell the system the transmit completed. */
+ dma_unmap_single(&fep->ocpdev->dev,
+ fep->tx_desc[fep->ack_slot].data_ptr,
+ fep->tx_desc[fep->ack_slot].data_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb_irq(fep->tx_skb[fep->ack_slot]);
+
+ if (fep->tx_desc[fep->ack_slot].ctrl &
+ (EMAC_TX_ST_EC | EMAC_TX_ST_MC | EMAC_TX_ST_SC))
+ fep->stats.collisions++;
+ }
+
+ fep->tx_skb[fep->ack_slot] = (struct sk_buff *)NULL;
+ if (++fep->ack_slot == NUM_TX_BUFF)
+ fep->ack_slot = 0;
+
+ fep->tx_cnt--;
+ }
+ if (fep->tx_cnt < NUM_TX_BUFF)
+ netif_wake_queue(dev);
+
+ PKT_DEBUG(("emac_txeob_dev() exit, tx_cnt: %d\n", fep->tx_cnt));
+
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+/*
+ Fill/Re-fill the rx chain with valid ctrl/ptrs.
+ This function will fill from rx_slot up to the parm end.
+ So to completely fill the chain pre-set rx_slot to 0 and
+ pass in an end of 0.
+ */
+static void emac_rx_fill(struct net_device *dev, int end)
+{
+ int i;
+ struct ocp_enet_private *fep = dev->priv;
+
+ i = fep->rx_slot;
+ do {
+ /* We don't want the 16 bytes skb_reserve done by dev_alloc_skb,
+ * it breaks our cache line alignment. However, we still allocate
+ * +16 so that we end up allocating the exact same size as
+ * dev_alloc_skb() would do.
+ * Also, because of the skb_res, the max DMA size we give to EMAC
+ * is slightly wrong, causing it to potentially DMA 2 more bytes
+ * from a broken/oversized packet. These 16 bytes ensure that
+ * we don't walk on somebody else's toes with that.
+ */
+ fep->rx_skb[i] =
+ alloc_skb(fep->rx_buffer_size + 16, GFP_ATOMIC);
+
+ if (fep->rx_skb[i] == NULL) {
+ /* Keep rx_slot here, the next time clean/fill is called
+ * we will try again before the MAL wraps back here
+ * If the MAL tries to use this descriptor with
+ * the EMPTY bit off it will cause the
+ * rxde interrupt. That is where we will
+ * try again to allocate an sk_buff.
+ */
+ break;
+
+ }
+
+ if (skb_res)
+ skb_reserve(fep->rx_skb[i], skb_res);
+
+ /* We must NOT dma_map_single the cache line right after the
+ * buffer, so we must crop our sync size to account for the
+ * reserved space
+ */
+ fep->rx_desc[i].data_ptr =
+ (unsigned char *)dma_map_single(&fep->ocpdev->dev,
+ (void *)fep->rx_skb[i]->
+ data,
+ fep->rx_buffer_size -
+ skb_res, DMA_FROM_DEVICE);
+
+ /*
+ * Some 4xx implementations use the previously
+ * reserved bits in data_len to encode the MS
+ * 4-bits of a 36-bit physical address (ERPN)
+ * This must be initialized.
+ */
+ fep->rx_desc[i].data_len = 0;
+ fep->rx_desc[i].ctrl = MAL_RX_CTRL_EMPTY | MAL_RX_CTRL_INTR |
+ (i == (NUM_RX_BUFF - 1) ? MAL_RX_CTRL_WRAP : 0);
+
+ } while ((i = (i + 1) % NUM_RX_BUFF) != end);
+
+ fep->rx_slot = i;
+}
+
+static void
+emac_rx_csum(struct net_device *dev, unsigned short ctrl, struct sk_buff *skb)
+{
+ struct ocp_enet_private *fep = dev->priv;
+
+ /* Exit if interface has no TAH engine */
+ if (!fep->tah_dev) {
+ skb->ip_summed = CHECKSUM_NONE;
+ return;
+ }
+
+ /* Check for TCP/UDP/IP csum error */
+ if (ctrl & EMAC_CSUM_VER_ERROR) {
+ /* Let the stack verify checksum errors */
+ skb->ip_summed = CHECKSUM_NONE;
+/* adapter->hw_csum_err++; */
+ } else {
+ /* Csum is good */
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+/* adapter->hw_csum_good++; */
+ }
+}
+
+static int emac_rx_clean(struct net_device *dev)
+{
+ int i, b, bnum = 0, buf[6];
+ int error, frame_length;
+ struct ocp_enet_private *fep = dev->priv;
+ unsigned short ctrl;
+
+ i = fep->rx_slot;
+
+ PKT_DEBUG(("emac_rx_clean() entry, rx_slot: %d\n", fep->rx_slot));
+
+ do {
+ if (fep->rx_skb[i] == NULL)
+ continue; /* we have already handled the packet but failed to alloc */
+ /*
+ * Since rx_desc is in uncached memory we don't keep reading it
+ * directly; we pull out a local copy of ctrl and do the checks
+ * on the copy.
+ */
+ ctrl = fep->rx_desc[i].ctrl;
+ if (ctrl & MAL_RX_CTRL_EMPTY)
+ break; /*we don't have any more ready packets */
+
+ if (EMAC_IS_BAD_RX_PACKET(ctrl)) {
+ fep->stats.rx_errors++;
+ fep->stats.rx_dropped++;
+
+ if (ctrl & EMAC_RX_ST_OE)
+ fep->stats.rx_fifo_errors++;
+ if (ctrl & EMAC_RX_ST_AE)
+ fep->stats.rx_frame_errors++;
+ if (ctrl & EMAC_RX_ST_BFCS)
+ fep->stats.rx_crc_errors++;
+ if (ctrl & (EMAC_RX_ST_RP | EMAC_RX_ST_PTL |
+ EMAC_RX_ST_ORE | EMAC_RX_ST_IRE))
+ fep->stats.rx_length_errors++;
+ } else {
+ if ((ctrl & (MAL_RX_CTRL_FIRST | MAL_RX_CTRL_LAST)) ==
+ (MAL_RX_CTRL_FIRST | MAL_RX_CTRL_LAST)) {
+ /* Single descriptor packet */
+ emac_rx_csum(dev, ctrl, fep->rx_skb[i]);
+ /* Send the skb up the chain. */
+ frame_length = fep->rx_desc[i].data_len - 4;
+ skb_put(fep->rx_skb[i], frame_length);
+ fep->rx_skb[i]->dev = dev;
+ fep->rx_skb[i]->protocol =
+ eth_type_trans(fep->rx_skb[i], dev);
+ error = netif_rx(fep->rx_skb[i]);
+
+ if ((error == NET_RX_DROP) ||
+ (error == NET_RX_BAD)) {
+ fep->stats.rx_dropped++;
+ } else {
+ fep->stats.rx_packets++;
+ fep->stats.rx_bytes += frame_length;
+ }
+ fep->rx_skb[i] = NULL;
+ } else {
+ /* Multiple descriptor packet */
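+ /* A frame spanning several descriptors: record the
+ * indices in buf[] from FIRST until LAST is seen, then
+ * coalesce all pieces into the first skb. If the next
+ * descriptor is still EMPTY, the rest of the frame has
+ * not landed yet, so bail out and retry on the next
+ * interrupt.
+ */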
+ if (ctrl & MAL_RX_CTRL_FIRST) {
+ if (fep->rx_desc[(i + 1) % NUM_RX_BUFF].
+ ctrl & MAL_RX_CTRL_EMPTY)
+ break;
+ bnum = 0;
+ buf[bnum] = i;
+ ++bnum;
+ continue;
+ }
+ if (((ctrl & MAL_RX_CTRL_FIRST) !=
+ MAL_RX_CTRL_FIRST) &&
+ ((ctrl & MAL_RX_CTRL_LAST) !=
+ MAL_RX_CTRL_LAST)) {
+ if (fep->rx_desc[(i + 1) %
+ NUM_RX_BUFF].ctrl &
+ MAL_RX_CTRL_EMPTY) {
+ i = buf[0];
+ break;
+ }
+ buf[bnum] = i;
+ ++bnum;
+ continue;
+ }
+ if (ctrl & MAL_RX_CTRL_LAST) {
+ buf[bnum] = i;
+ ++bnum;
+ skb_put(fep->rx_skb[buf[0]],
+ fep->rx_desc[buf[0]].data_len);
+ for (b = 1; b < bnum; b++) {
+ /*
+ * MAL is braindead, we need
+ * to copy the remainder
+ * of the packet from the
+ * latter descriptor buffers
+ * to the first skb. Then
+ * dispose of the source
+ * skbs.
+ *
+ * Once the stack is fixed
+ * to handle frags on most
+ * protocols we can generate
+ * a fragmented skb with
+ * no copies.
+ */
+ memcpy(fep->rx_skb[buf[0]]->
+ data +
+ fep->rx_skb[buf[0]]->len,
+ fep->rx_skb[buf[b]]->
+ data,
+ fep->rx_desc[buf[b]].
+ data_len);
+ skb_put(fep->rx_skb[buf[0]],
+ fep->rx_desc[buf[b]].
+ data_len);
+ dma_unmap_single(&fep->ocpdev->
+ dev,
+ fep->
+ rx_desc[buf
+ [b]].
+ data_ptr,
+ fep->
+ rx_desc[buf
+ [b]].
+ data_len,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(fep->
+ rx_skb[buf[b]]);
+ }
+ emac_rx_csum(dev, ctrl,
+ fep->rx_skb[buf[0]]);
+
+ fep->rx_skb[buf[0]]->dev = dev;
+ fep->rx_skb[buf[0]]->protocol =
+ eth_type_trans(fep->rx_skb[buf[0]],
+ dev);
+ error = netif_rx(fep->rx_skb[buf[0]]);
+
+ if ((error == NET_RX_DROP)
+ || (error == NET_RX_BAD)) {
+ fep->stats.rx_dropped++;
+ } else {
+ fep->stats.rx_packets++;
+ fep->stats.rx_bytes +=
+ fep->rx_skb[buf[0]]->len;
+ }
+ for (b = 0; b < bnum; b++)
+ fep->rx_skb[buf[b]] = NULL;
+ }
+ }
+ }
+ } while ((i = (i + 1) % NUM_RX_BUFF) != fep->rx_slot);
+
+ PKT_DEBUG(("emac_rx_clean() exit, rx_slot: %d\n", fep->rx_slot));
+
+ return i;
+}
+
+static void emac_rxeob_dev(void *param, u32 chanmask)
+{
+ struct net_device *dev = param;
+ struct ocp_enet_private *fep = dev->priv;
+ unsigned long flags;
+ int n;
+
+ spin_lock_irqsave(&fep->lock, flags);
+ if ((n = emac_rx_clean(dev)) != fep->rx_slot)
+ emac_rx_fill(dev, n);
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+/*
+ * This interrupt should never occur; we don't program
+ * the MAL for continuous mode.
+ */
+static void emac_txde_dev(void *param, u32 chanmask)
+{
+ struct net_device *dev = param;
+ struct ocp_enet_private *fep = dev->priv;
+
+ printk(KERN_WARNING "%s: transmit descriptor error\n", dev->name);
+
+ emac_mac_dump(dev);
+ emac_mal_dump(dev);
+
+ /* Reenable the transmit channel */
+ mal_enable_tx_channels(fep->mal, fep->commac.tx_chan_mask);
+}
+
+/*
+ * This interrupt should be very rare at best. This occurs when
+ * the hardware has a problem with the receive descriptors. The manual
+ * states that it occurs when the hardware encounters a receive
+ * descriptor whose empty bit is not set. The recovery mechanism will be to
+ * traverse through the descriptors, handle any that are marked to be
+ * handled and reinitialize each along the way. At that point the driver
+ * will be restarted.
+ */
+static void emac_rxde_dev(void *param, u32 chanmask)
+{
+ struct net_device *dev = param;
+ struct ocp_enet_private *fep = dev->priv;
+ unsigned long flags;
+
+ if (net_ratelimit()) {
+ printk(KERN_WARNING "%s: receive descriptor error\n",
+ fep->ndev->name);
+
+ emac_mac_dump(dev);
+ emac_mal_dump(dev);
+ emac_desc_dump(dev);
+ }
+
+ /* Disable RX channel */
+ spin_lock_irqsave(&fep->lock, flags);
+ mal_disable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+
+ /* For now, charge the error against all emacs */
+ fep->stats.rx_errors++;
+
+ /* so do we have any good packets still? */
+ emac_rx_clean(dev);
+
+ /* When the interface is restarted it resets processing to the
+ * first descriptor in the table.
+ */
+
+ fep->rx_slot = 0;
+ emac_rx_fill(dev, 0);
+
+ set_mal_dcrn(fep->mal, DCRN_MALRXEOBISR, fep->commac.rx_chan_mask);
+ set_mal_dcrn(fep->mal, DCRN_MALRXDEIR, fep->commac.rx_chan_mask);
+
+ /* Reenable the receive channels */
+ mal_enable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+ spin_unlock_irqrestore(&fep->lock, flags);
+}
+
+static irqreturn_t
+emac_mac_irq(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_instance;
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+ unsigned long tmp_em0isr;
+
+ /* EMAC interrupt */
+ tmp_em0isr = in_be32(&emacp->em0isr);
+ if (tmp_em0isr & (EMAC_ISR_TE0 | EMAC_ISR_TE1)) {
+ /* This error is a hard transmit error - could retransmit */
+ fep->stats.tx_errors++;
+
+ /* Reenable the transmit channel */
+ mal_enable_tx_channels(fep->mal, fep->commac.tx_chan_mask);
+
+ } else {
+ fep->stats.rx_errors++;
+ }
+
+ if (tmp_em0isr & EMAC_ISR_RP)
+ fep->stats.rx_length_errors++;
+ if (tmp_em0isr & EMAC_ISR_ALE)
+ fep->stats.rx_frame_errors++;
+ if (tmp_em0isr & EMAC_ISR_BFCS)
+ fep->stats.rx_crc_errors++;
+ if (tmp_em0isr & EMAC_ISR_PTLE)
+ fep->stats.rx_length_errors++;
+ if (tmp_em0isr & EMAC_ISR_ORE)
+ fep->stats.rx_length_errors++;
+ if (tmp_em0isr & EMAC_ISR_TE0)
+ fep->stats.tx_aborted_errors++;
+
+ emac_err_dump(dev, tmp_em0isr);
+
+ out_be32(&emacp->em0isr, tmp_em0isr);
+
+ return IRQ_HANDLED;
+}
+
+static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ unsigned short ctrl;
+ unsigned long flags;
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+ int len = skb->len;
+ unsigned int offset = 0, size, f, tx_slot_first;
+ unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+
+ spin_lock_irqsave(&fep->lock, flags);
+
+ len -= skb->data_len;
+
+ if ((fep->tx_cnt + nr_frags + len / DESC_BUF_SIZE + 1) > NUM_TX_BUFF) {
+ PKT_DEBUG(("emac_start_xmit() stopping queue\n"));
+ netif_stop_queue(dev);
+ spin_unlock_irqrestore(&fep->lock, flags);
+ return -EBUSY;
+ }
+
+ tx_slot_first = fep->tx_slot;
+
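+ /* Chain the packet across as many descriptors as needed. Every
+ * descriptor except the first is marked READY immediately; the
+ * first is armed last (see below) so the MAL never sees a
+ * half-built chain.
+ */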
+ while (len) {
+ size = min(len, DESC_BUF_SIZE);
+
+ fep->tx_desc[fep->tx_slot].data_len = (short)size;
+ fep->tx_desc[fep->tx_slot].data_ptr =
+ (unsigned char *)dma_map_single(&fep->ocpdev->dev,
+ (void *)((unsigned int)skb->
+ data + offset),
+ size, DMA_TO_DEVICE);
+
+ ctrl = EMAC_TX_CTRL_DFLT;
+ if (fep->tx_slot != tx_slot_first)
+ ctrl |= MAL_TX_CTRL_READY;
+ if ((NUM_TX_BUFF - 1) == fep->tx_slot)
+ ctrl |= MAL_TX_CTRL_WRAP;
+ if (!nr_frags && (len == size)) {
+ ctrl |= MAL_TX_CTRL_LAST;
+ fep->tx_skb[fep->tx_slot] = skb;
+ }
+ if (skb->ip_summed == CHECKSUM_HW)
+ ctrl |= EMAC_TX_CTRL_TAH_CSUM;
+
+ fep->tx_desc[fep->tx_slot].ctrl = ctrl;
+
+ len -= size;
+ offset += size;
+
+ /* Bump tx count */
+ if (++fep->tx_cnt == NUM_TX_BUFF)
+ netif_stop_queue(dev);
+
+ /* Next descriptor */
+ if (++fep->tx_slot == NUM_TX_BUFF)
+ fep->tx_slot = 0;
+ }
+
+ for (f = 0; f < nr_frags; f++) {
+ struct skb_frag_struct *frag;
+
+ frag = &skb_shinfo(skb)->frags[f];
+ len = frag->size;
+ offset = 0;
+
+ while (len) {
+ size = min(len, DESC_BUF_SIZE);
+
+ fep->tx_desc[fep->tx_slot].data_ptr = (unsigned char *)
+ dma_map_page(&fep->ocpdev->dev,
+ frag->page,
+ frag->page_offset + offset,
+ size, DMA_TO_DEVICE);
+
+ ctrl = EMAC_TX_CTRL_DFLT | MAL_TX_CTRL_READY;
+ if ((NUM_TX_BUFF - 1) == fep->tx_slot)
+ ctrl |= MAL_TX_CTRL_WRAP;
+ if ((f == (nr_frags - 1)) && (len == size)) {
+ ctrl |= MAL_TX_CTRL_LAST;
+ fep->tx_skb[fep->tx_slot] = skb;
+ }
+
+ if (skb->ip_summed == CHECKSUM_HW)
+ ctrl |= EMAC_TX_CTRL_TAH_CSUM;
+
+ /* data_ptr was set above from the dma_map_page() return
+ * value; computing the bus address by hand from
+ * page_to_pfn() would bypass the DMA API.
+ */
+ fep->tx_desc[fep->tx_slot].data_len = (short)size;
+ fep->tx_desc[fep->tx_slot].ctrl = ctrl;
+
+ len -= size;
+ offset += size;
+
+ /* Bump tx count */
+ if (++fep->tx_cnt == NUM_TX_BUFF)
+ netif_stop_queue(dev);
+
+ /* Next descriptor */
+ if (++fep->tx_slot == NUM_TX_BUFF)
+ fep->tx_slot = 0;
+ }
+ }
+
+ /*
+ * Deferred set READY on first descriptor of packet to
+ * avoid TX MAL race.
+ */
+ fep->tx_desc[tx_slot_first].ctrl |= MAL_TX_CTRL_READY;
+
+ /* Send the packet out. */
+ out_be32(&emacp->em0tmr0, EMAC_TMR0_XMIT);
+
+ fep->stats.tx_packets++;
+ fep->stats.tx_bytes += skb->len;
+
+ PKT_DEBUG(("emac_start_xmit() exit\n"));
+
+ spin_unlock_irqrestore(&fep->lock, flags);
+
+ return 0;
+}
+
+static int emac_adjust_to_link(struct ocp_enet_private *fep)
+{
+ emac_t *emacp = fep->emacp;
+ unsigned long mode_reg;
+ int full_duplex, speed;
+
+ full_duplex = 0;
+ speed = SPEED_10;
+
+ /* set mode register 1 defaults */
+ mode_reg = EMAC_M1_DEFAULT;
+
+ /* Read link mode on PHY */
+ if (fep->phy_mii.def->ops->read_link(&fep->phy_mii) == 0) {
+ /* If an error occurred, we don't deal with it yet */
+ full_duplex = (fep->phy_mii.duplex == DUPLEX_FULL);
+ speed = fep->phy_mii.speed;
+ }
+
+
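+ /* Speed selection also sizes the RX FIFO: 16K with jumbo frames
+ * enabled at gigabit, 4K at 10/100.
+ */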
+ /* set speed (default is 10Mb) */
+ switch (speed) {
+ case SPEED_1000:
+ mode_reg |= EMAC_M1_JUMBO_ENABLE | EMAC_M1_RFS_16K;
+ if (fep->rgmii_dev) {
+ struct ibm_ocp_rgmii *rgmii = RGMII_PRIV(fep->rgmii_dev);
+
+ if ((rgmii->mode[fep->rgmii_input] == RTBI)
+ || (rgmii->mode[fep->rgmii_input] == TBI))
+ mode_reg |= EMAC_M1_MF_1000GPCS;
+ else
+ mode_reg |= EMAC_M1_MF_1000MBPS;
+
+ emac_rgmii_port_speed(fep->rgmii_dev, fep->rgmii_input,
+ 1000);
+ }
+ break;
+ case SPEED_100:
+ mode_reg |= EMAC_M1_MF_100MBPS | EMAC_M1_RFS_4K;
+ if (fep->rgmii_dev)
+ emac_rgmii_port_speed(fep->rgmii_dev, fep->rgmii_input,
+ 100);
+ if (fep->zmii_dev)
+ emac_zmii_port_speed(fep->zmii_dev, fep->zmii_input,
+ 100);
+ break;
+ case SPEED_10:
+ default:
+ mode_reg = (mode_reg & ~EMAC_M1_MF_100MBPS) | EMAC_M1_RFS_4K;
+ if (fep->rgmii_dev)
+ emac_rgmii_port_speed(fep->rgmii_dev, fep->rgmii_input,
+ 10);
+ if (fep->zmii_dev)
+ emac_zmii_port_speed(fep->zmii_dev, fep->zmii_input,
+ 10);
+ }
+
+ if (full_duplex)
+ mode_reg |= EMAC_M1_FDE | EMAC_M1_EIFC | EMAC_M1_IST;
+ else
+ mode_reg &= ~(EMAC_M1_FDE | EMAC_M1_EIFC | EMAC_M1_ILE);
+
+ LINK_DEBUG(("%s: adjust to link, speed: %d, duplex: %d, opened: %d\n",
+ fep->ndev->name, speed, full_duplex, fep->opened));
+
+ printk(KERN_INFO "%s: Speed: %d, %s duplex.\n",
+ fep->ndev->name, speed, full_duplex ? "Full" : "Half");
+ if (fep->opened)
+ out_be32(&emacp->em0mr1, mode_reg);
+
+ return 0;
+}
+
+static int emac_set_mac_address(struct net_device *ndev, void *p)
+{
+ struct ocp_enet_private *fep = ndev->priv;
+ emac_t *emacp = fep->emacp;
+ struct sockaddr *addr = p;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ memcpy(ndev->dev_addr, addr->sa_data, ndev->addr_len);
+
+ /* set the high address */
+ out_be32(&emacp->em0iahr,
+ (fep->ndev->dev_addr[0] << 8) | fep->ndev->dev_addr[1]);
+
+ /* set the low address */
+ out_be32(&emacp->em0ialr,
+ (fep->ndev->dev_addr[2] << 24) | (fep->ndev->dev_addr[3] << 16)
+ | (fep->ndev->dev_addr[4] << 8) | fep->ndev->dev_addr[5]);
+
+ return 0;
+}
+
+static int emac_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ int old_mtu = dev->mtu;
+ emac_t *emacp = fep->emacp;
+ u32 em0mr0;
+ int i, full;
+ unsigned long flags;
+
+ if ((new_mtu < EMAC_MIN_MTU) || (new_mtu > EMAC_MAX_MTU)) {
+ printk(KERN_ERR
+ "emac: Invalid MTU setting, MTU must be between %d and %d\n",
+ EMAC_MIN_MTU, EMAC_MAX_MTU);
+ return -EINVAL;
+ }
+
+ if (old_mtu != new_mtu && netif_running(dev)) {
+ /* Stop rx engine */
+ em0mr0 = in_be32(&emacp->em0mr0);
+ out_be32(&emacp->em0mr0, em0mr0 & ~EMAC_M0_RXE);
+
+ /* Wait for descriptors to be empty */
+ do {
+ full = 0;
+ for (i = 0; i < NUM_RX_BUFF; i++)
+ if (!(fep->rx_desc[i].ctrl & MAL_RX_CTRL_EMPTY)) {
+ printk(KERN_NOTICE
+ "emac: RX ring is still full\n");
+ full = 1;
+ }
+ } while (full);
+
+ spin_lock_irqsave(&fep->lock, flags);
+
+ mal_disable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+
+ /* Destroy all old rx skbs */
+ for (i = 0; i < NUM_RX_BUFF; i++) {
+ dma_unmap_single(&fep->ocpdev->dev,
+ fep->rx_desc[i].data_ptr,
+ fep->rx_desc[i].data_len,
+ DMA_FROM_DEVICE);
+ dev_kfree_skb(fep->rx_skb[i]);
+ fep->rx_skb[i] = NULL;
+ }
+
+ /* Set new rx_buffer_size and advertise new mtu */
+ fep->rx_buffer_size =
+ new_mtu + ENET_HEADER_SIZE + ENET_FCS_SIZE;
+ dev->mtu = new_mtu;
+
+ /* Re-init rx skbs */
+ fep->rx_slot = 0;
+ emac_rx_fill(dev, 0);
+
+ /* Restart the rx engine */
+ mal_enable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+ out_be32(&emacp->em0mr0, em0mr0 | EMAC_M0_RXE);
+
+ spin_unlock_irqrestore(&fep->lock, flags);
+ }
+
+ return 0;
+}
+
+static void __emac_set_multicast_list(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+ u32 rmr = in_be32(&emacp->em0rmr);
+
+ /* First clear all special bits, they can be set later */
+ rmr &= ~(EMAC_RMR_PME | EMAC_RMR_PMME | EMAC_RMR_MAE);
+
+ if (dev->flags & IFF_PROMISC) {
+ rmr |= EMAC_RMR_PME;
+ } else if (dev->flags & IFF_ALLMULTI || 32 < dev->mc_count) {
+ /*
+ * Must be setting up to use multicast
+ * Now check for promiscuous multicast
+ */
+ rmr |= EMAC_RMR_PMME;
+ } else if (dev->flags & IFF_MULTICAST && 0 < dev->mc_count) {
+ unsigned short em0gaht[4] = { 0, 0, 0, 0 };
+ struct dev_mc_list *dmi;
+
+ /* Need to hash on the multicast address. */
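+ /* The EMAC filters multicast with a 64-bit hash spread
+ * across the four 16-bit GAHT registers: the top six bits
+ * of the Ethernet CRC of each address pick one of the 64
+ * bits, counted MSB-first.
+ */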
+ for (dmi = dev->mc_list; dmi; dmi = dmi->next) {
+ unsigned long mc_crc;
+ unsigned int bit_number;
+
+ mc_crc = ether_crc(6, (char *)dmi->dmi_addr);
+ bit_number = 63 - (mc_crc >> 26); /* MSB: 0 LSB: 63 */
+ em0gaht[bit_number >> 4] |=
+ 0x8000 >> (bit_number & 0x0f);
+ }
+ emacp->em0gaht1 = em0gaht[0];
+ emacp->em0gaht2 = em0gaht[1];
+ emacp->em0gaht3 = em0gaht[2];
+ emacp->em0gaht4 = em0gaht[3];
+
+ /* Turn on multicast addressing */
+ rmr |= EMAC_RMR_MAE;
+ }
+ out_be32(&emacp->em0rmr, rmr);
+}
+
+static int emac_init_tah(struct ocp_enet_private *fep)
+{
+ tah_t *tahp;
+
+ /* Initialize TAH and enable checksum verification */
+ tahp = (tah_t *) ioremap(fep->tah_dev->def->paddr, sizeof(*tahp));
+
+ if (tahp == NULL) {
+ printk(KERN_ERR "tah%d: Cannot ioremap TAH registers!\n",
+ fep->tah_dev->def->index);
+
+ return -ENOMEM;
+ }
+
+ out_be32(&tahp->tah_mr, TAH_MR_SR);
+
+ /* wait for reset to complete */
+ while (in_be32(&tahp->tah_mr) & TAH_MR_SR) ;
+
+ /* 10KB TAH TX FIFO accommodates the max MTU of 9000 */
+ out_be32(&tahp->tah_mr,
+ TAH_MR_CVR | TAH_MR_ST_768 | TAH_MR_TFS_10KB | TAH_MR_DTFP |
+ TAH_MR_DIG);
+
+ iounmap(tahp);
+
+ return 0;
+}
+
+static void emac_init_rings(struct net_device *dev)
+{
+ struct ocp_enet_private *ep = dev->priv;
+ int loop;
+
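+ /* The MAL driver allocates one MAL_DT_ALIGN-sized slice of
+ * descriptor memory per channel; index by our channel numbers to
+ * find this EMAC's tx and rx rings.
+ */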
+ ep->tx_desc = (struct mal_descriptor *)((char *)ep->mal->tx_virt_addr +
+ (ep->mal_tx_chan *
+ MAL_DT_ALIGN));
+ ep->rx_desc =
+ (struct mal_descriptor *)((char *)ep->mal->rx_virt_addr +
+ (ep->mal_rx_chan * MAL_DT_ALIGN));
+
+ /* Fill in the transmit descriptor ring. */
+ for (loop = 0; loop < NUM_TX_BUFF; loop++) {
+ if (ep->tx_skb[loop]) {
+ dma_unmap_single(&ep->ocpdev->dev,
+ ep->tx_desc[loop].data_ptr,
+ ep->tx_desc[loop].data_len,
+ DMA_TO_DEVICE);
+ dev_kfree_skb_irq(ep->tx_skb[loop]);
+ }
+ ep->tx_skb[loop] = NULL;
+ ep->tx_desc[loop].ctrl = 0;
+ ep->tx_desc[loop].data_len = 0;
+ ep->tx_desc[loop].data_ptr = NULL;
+ }
+ ep->tx_desc[loop - 1].ctrl |= MAL_TX_CTRL_WRAP;
+
+ /* Format the receive descriptor ring. */
+ ep->rx_slot = 0;
+ /* Default is MTU=1500 + Ethernet overhead */
+ ep->rx_buffer_size = ENET_DEF_BUF_SIZE;
+ emac_rx_fill(dev, 0);
+ if (ep->rx_slot != 0) {
+ printk(KERN_ERR
+ "%s: Not enough mem for RxChain during Open?\n",
+ dev->name);
+ /* We couldn't fill the ring at startup?
+ * We could clean up and fail to open, but for now we try to
+ * carry on. It may be a sign of a bad NUM_RX_BUFF value.
+ */
+ }
+
+ ep->tx_cnt = 0;
+ ep->tx_slot = 0;
+ ep->ack_slot = 0;
+}
+
+static void emac_reset_configure(struct ocp_enet_private *fep)
+{
+ emac_t *emacp = fep->emacp;
+ int i;
+
+ mal_disable_tx_channels(fep->mal, fep->commac.tx_chan_mask);
+ mal_disable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+
+ /*
+ * Check for a link, some PHYs don't provide a clock if
+ * no link is present. Some EMACs will not come out of
+ * soft reset without a PHY clock present.
+ */
+ if (fep->phy_mii.def->ops->poll_link(&fep->phy_mii)) {
+ /* Reset the EMAC */
+ out_be32(&emacp->em0mr0, EMAC_M0_SRST);
+ udelay(20);
+ for (i = 0; i < 100; i++) {
+ if ((in_be32(&emacp->em0mr0) & EMAC_M0_SRST) == 0)
+ break;
+ udelay(10);
+ }
+
+ if (i >= 100) {
+ printk(KERN_ERR "%s: Cannot reset EMAC\n",
+ fep->ndev->name);
+ return;
+ }
+ }
+
+ /* Switch IRQs off for now */
+ out_be32(&emacp->em0iser, 0);
+
+ /* Configure MAL rx channel */
+ mal_set_rcbs(fep->mal, fep->mal_rx_chan, DESC_BUF_SIZE_REG);
+
+ /* set the high address */
+ out_be32(&emacp->em0iahr,
+ (fep->ndev->dev_addr[0] << 8) | fep->ndev->dev_addr[1]);
+
+ /* set the low address */
+ out_be32(&emacp->em0ialr,
+ (fep->ndev->dev_addr[2] << 24) | (fep->ndev->dev_addr[3] << 16)
+ | (fep->ndev->dev_addr[4] << 8) | fep->ndev->dev_addr[5]);
+
+ /* Adjust to link */
+ if (netif_carrier_ok(fep->ndev))
+ emac_adjust_to_link(fep);
+
+ /* enable broadcast/individual address and RX FIFO defaults */
+ out_be32(&emacp->em0rmr, EMAC_RMR_DEFAULT);
+
+ /* set transmit request threshold register */
+ out_be32(&emacp->em0trtr, EMAC_TRTR_DEFAULT);
+
+ /* Reconfigure multicast */
+ __emac_set_multicast_list(fep->ndev);
+
+ /* Set receiver/transmitter defaults */
+ out_be32(&emacp->em0rwmr, EMAC_RWMR_DEFAULT);
+ out_be32(&emacp->em0tmr0, EMAC_TMR0_DEFAULT);
+ out_be32(&emacp->em0tmr1, EMAC_TMR1_DEFAULT);
+
+ /* set frame gap */
+ out_be32(&emacp->em0ipgvr, CONFIG_IBM_EMAC_FGAP);
+
+ /* Init ring buffers */
+ emac_init_rings(fep->ndev);
+}
+
+static void emac_kick(struct ocp_enet_private *fep)
+{
+ emac_t *emacp = fep->emacp;
+ unsigned long emac_ier;
+
+ emac_ier = EMAC_ISR_PP | EMAC_ISR_BP | EMAC_ISR_RP |
+ EMAC_ISR_SE | EMAC_ISR_PTLE | EMAC_ISR_ALE |
+ EMAC_ISR_BFCS | EMAC_ISR_ORE | EMAC_ISR_IRE;
+
+ out_be32(&emacp->em0iser, emac_ier);
+
+ /* enable all MAL transmit and receive channels */
+ mal_enable_tx_channels(fep->mal, fep->commac.tx_chan_mask);
+ mal_enable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+
+ /* set transmit and receive enable */
+ out_be32(&emacp->em0mr0, EMAC_M0_TXE | EMAC_M0_RXE);
+}
+
+static void
+emac_start_link(struct ocp_enet_private *fep, struct ethtool_cmd *ep)
+{
+ u32 advertise;
+ int autoneg;
+ int forced_speed;
+ int forced_duplex;
+
+ /* Default advertise */
+ advertise = ADVERTISED_10baseT_Half | ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half | ADVERTISED_100baseT_Full |
+ ADVERTISED_1000baseT_Half | ADVERTISED_1000baseT_Full;
+ autoneg = fep->want_autoneg;
+ forced_speed = fep->phy_mii.speed;
+ forced_duplex = fep->phy_mii.duplex;
+
+ /* Setup link parameters */
+ if (ep) {
+ if (ep->autoneg == AUTONEG_ENABLE) {
+ advertise = ep->advertising;
+ autoneg = 1;
+ } else {
+ autoneg = 0;
+ forced_speed = ep->speed;
+ forced_duplex = ep->duplex;
+ }
+ }
+
+ /* Configure PHY & start aneg */
+ fep->want_autoneg = autoneg;
+ if (autoneg) {
+ LINK_DEBUG(("%s: start link aneg, advertise: 0x%x\n",
+ fep->ndev->name, advertise));
+ fep->phy_mii.def->ops->setup_aneg(&fep->phy_mii, advertise);
+ } else {
+ LINK_DEBUG(("%s: start link forced, speed: %d, duplex: %d\n",
+ fep->ndev->name, forced_speed, forced_duplex));
+ fep->phy_mii.def->ops->setup_forced(&fep->phy_mii, forced_speed,
+ forced_duplex);
+ }
+ fep->timer_ticks = 0;
+ mod_timer(&fep->link_timer, jiffies + HZ);
+}
+
+static void emac_link_timer(unsigned long data)
+{
+ struct ocp_enet_private *fep = (struct ocp_enet_private *)data;
+ int link;
+
+ if (fep->going_away)
+ return;
+
+ spin_lock_irq(&fep->lock);
+
+ link = fep->phy_mii.def->ops->poll_link(&fep->phy_mii);
+ LINK_DEBUG(("%s: poll_link: %d\n", fep->ndev->name, link));
+
+ if (link == netif_carrier_ok(fep->ndev)) {
+ if (!link && fep->want_autoneg && (++fep->timer_ticks) > 10)
+ emac_start_link(fep, NULL);
+ goto out;
+ }
+ printk(KERN_INFO "%s: Link is %s\n", fep->ndev->name,
+ link ? "Up" : "Down");
+ if (link) {
+ netif_carrier_on(fep->ndev);
+ /* Chip needs a full reset on config change. That sucks, so I
+ * should ultimately move that to some tasklet to limit
+ * latency peaks caused by this code
+ */
+ emac_reset_configure(fep);
+ if (fep->opened)
+ emac_kick(fep);
+ } else {
+ fep->timer_ticks = 0;
+ netif_carrier_off(fep->ndev);
+ }
+ out:
+ mod_timer(&fep->link_timer, jiffies + HZ);
+ spin_unlock_irq(&fep->lock);
+}
+
+static void emac_set_multicast_list(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+
+ spin_lock_irq(&fep->lock);
+ __emac_set_multicast_list(dev);
+ spin_unlock_irq(&fep->lock);
+}
+
+static int emac_get_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
+{
+ struct ocp_enet_private *fep = ndev->priv;
+
+ cmd->supported = fep->phy_mii.def->features;
+ cmd->port = PORT_MII;
+ cmd->transceiver = XCVR_EXTERNAL;
+ cmd->phy_address = fep->mii_phy_addr;
+ spin_lock_irq(&fep->lock);
+ cmd->autoneg = fep->want_autoneg;
+ cmd->speed = fep->phy_mii.speed;
+ cmd->duplex = fep->phy_mii.duplex;
+ spin_unlock_irq(&fep->lock);
+ return 0;
+}
+
+static int emac_set_settings(struct net_device *ndev, struct ethtool_cmd *cmd)
+{
+ struct ocp_enet_private *fep = ndev->priv;
+ unsigned long features = fep->phy_mii.def->features;
+
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ if (cmd->autoneg != AUTONEG_ENABLE && cmd->autoneg != AUTONEG_DISABLE)
+ return -EINVAL;
+ if (cmd->autoneg == AUTONEG_ENABLE && cmd->advertising == 0)
+ return -EINVAL;
+ if (cmd->duplex != DUPLEX_HALF && cmd->duplex != DUPLEX_FULL)
+ return -EINVAL;
+ if (cmd->autoneg == AUTONEG_DISABLE)
+ switch (cmd->speed) {
+ case SPEED_10:
+ if (cmd->duplex == DUPLEX_HALF &&
+ (features & SUPPORTED_10baseT_Half) == 0)
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ (features & SUPPORTED_10baseT_Full) == 0)
+ return -EINVAL;
+ break;
+ case SPEED_100:
+ if (cmd->duplex == DUPLEX_HALF &&
+ (features & SUPPORTED_100baseT_Half) == 0)
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ (features & SUPPORTED_100baseT_Full) == 0)
+ return -EINVAL;
+ break;
+ case SPEED_1000:
+ if (cmd->duplex == DUPLEX_HALF &&
+ (features & SUPPORTED_1000baseT_Half) == 0)
+ return -EINVAL;
+ if (cmd->duplex == DUPLEX_FULL &&
+ (features & SUPPORTED_1000baseT_Full) == 0)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ } else if ((features & SUPPORTED_Autoneg) == 0)
+ return -EINVAL;
+ spin_lock_irq(&fep->lock);
+ emac_start_link(fep, cmd);
+ spin_unlock_irq(&fep->lock);
+ return 0;
+}
+
+static void
+emac_get_drvinfo(struct net_device *ndev, struct ethtool_drvinfo *info)
+{
+ struct ocp_enet_private *fep = ndev->priv;
+
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+ info->fw_version[0] = '\0';
+ sprintf(info->bus_info, "IBM EMAC %d", fep->ocpdev->def->index);
+ info->regdump_len = 0;
+}
+
+static int emac_nway_reset(struct net_device *ndev)
+{
+ struct ocp_enet_private *fep = ndev->priv;
+
+ if (!fep->want_autoneg)
+ return -EINVAL;
+ spin_lock_irq(&fep->lock);
+ emac_start_link(fep, NULL);
+ spin_unlock_irq(&fep->lock);
+ return 0;
+}
+
+static u32 emac_get_link(struct net_device *ndev)
+{
+ return netif_carrier_ok(ndev);
+}
+
+static struct ethtool_ops emac_ethtool_ops = {
+ .get_settings = emac_get_settings,
+ .set_settings = emac_set_settings,
+ .get_drvinfo = emac_get_drvinfo,
+ .nway_reset = emac_nway_reset,
+ .get_link = emac_get_link
+};
+
+static int emac_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ uint *data = (uint *)&rq->ifr_ifru;
+
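+ /* Legacy MII ioctl layout: data[0] is the PHY address, data[1] the
+ * register number, data[2] the value to write, data[3] the value
+ * read back.
+ */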
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data[0] = fep->mii_phy_addr;
+ /* Fall through */
+ case SIOCGMIIREG:
+ data[3] = emac_phy_read(dev, fep->mii_phy_addr, data[1]);
+ return 0;
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+
+ emac_phy_write(dev, fep->mii_phy_addr, data[1], data[2]);
+ return 0;
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static int emac_open(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ int rc;
+
+ spin_lock_irq(&fep->lock);
+
+ fep->opened = 1;
+ netif_carrier_off(dev);
+
+ /* Reset & configure the chip */
+ emac_reset_configure(fep);
+
+ spin_unlock_irq(&fep->lock);
+
+ /* Request our interrupt lines */
+ rc = request_irq(dev->irq, emac_mac_irq, 0, "IBM EMAC MAC", dev);
+ if (rc != 0) {
+ printk(KERN_ERR "%s: request_irq %d failed\n", dev->name, dev->irq);
+ goto bail;
+ }
+ /* Kick the chip rx & tx channels into life */
+ spin_lock_irq(&fep->lock);
+ emac_kick(fep);
+ spin_unlock_irq(&fep->lock);
+
+ netif_start_queue(dev);
+ bail:
+ return rc;
+}
+
+static int emac_close(struct net_device *dev)
+{
+ struct ocp_enet_private *fep = dev->priv;
+ emac_t *emacp = fep->emacp;
+
+ /* XXX Stop IRQ emitting here */
+ spin_lock_irq(&fep->lock);
+ fep->opened = 0;
+ mal_disable_tx_channels(fep->mal, fep->commac.tx_chan_mask);
+ mal_disable_rx_channels(fep->mal, fep->commac.rx_chan_mask);
+ netif_carrier_off(dev);
+ netif_stop_queue(dev);
+
+ /*
+ * Check for a link, some PHYs don't provide a clock if
+ * no link is present. Some EMACs will not come out of
+ * soft reset without a PHY clock present.
+ */
+ if (fep->phy_mii.def->ops->poll_link(&fep->phy_mii)) {
+ out_be32(&emacp->em0mr0, EMAC_M0_SRST);
+ udelay(10);
+
+ if (in_be32(&emacp->em0mr0) & EMAC_M0_SRST) {
+ /* not sure what to do here; hopefully it clears before another open */
+ printk(KERN_ERR
+ "%s: Phy SoftReset didn't clear, no link?\n",
+ dev->name);
+ }
+ }
+
+ /* Free the irq's */
+ free_irq(dev->irq, dev);
+
+ spin_unlock_irq(&fep->lock);
+
+ return 0;
+}
+
+static void emac_remove(struct ocp_device *ocpdev)
+{
+ struct net_device *dev = ocp_get_drvdata(ocpdev);
+ struct ocp_enet_private *ep = dev->priv;
+
+ /* FIXME: locking, races, ... */
+ ep->going_away = 1;
+ ocp_set_drvdata(ocpdev, NULL);
+ if (ep->rgmii_dev)
+ emac_close_rgmii(ep->rgmii_dev);
+ if (ep->zmii_dev)
+ emac_close_zmii(ep->zmii_dev);
+
+ unregister_netdev(dev);
+ del_timer_sync(&ep->link_timer);
+ mal_unregister_commac(ep->mal, &ep->commac);
+ iounmap((void *)ep->emacp);
+ free_netdev(dev);
+}
+
+struct mal_commac_ops emac_commac_ops = {
+ .txeob = &emac_txeob_dev,
+ .txde = &emac_txde_dev,
+ .rxeob = &emac_rxeob_dev,
+ .rxde = &emac_rxde_dev,
+};
+
+static int emac_init_device(struct ocp_device *ocpdev, struct ibm_ocp_mal *mal)
+{
+ int deferred_init = 0;
+ int rc = 0, i;
+ struct net_device *ndev;
+ struct ocp_enet_private *ep;
+ struct ocp_func_emac_data *emacdata;
+ int commac_reg = 0;
+ u32 phy_map;
+
+ emacdata = (struct ocp_func_emac_data *)ocpdev->def->additions;
+ if (!emacdata) {
+ printk(KERN_ERR "emac%d: Missing additional data!\n",
+ ocpdev->def->index);
+ return -ENODEV;
+ }
+
+ /* Allocate our net_device structure */
+ ndev = alloc_etherdev(sizeof(struct ocp_enet_private));
+ if (ndev == NULL) {
+ printk(KERN_ERR
+ "emac%d: Could not allocate ethernet device.\n",
+ ocpdev->def->index);
+ return -ENOMEM;
+ }
+ ep = ndev->priv;
+ ep->ndev = ndev;
+ ep->ocpdev = ocpdev;
+ ndev->irq = ocpdev->def->irq;
+ ep->wol_irq = emacdata->wol_irq;
+ if (emacdata->mdio_idx >= 0) {
+ if (emacdata->mdio_idx == ocpdev->def->index) {
+ /* Set the common MDIO net_device */
+ mdio_ndev = ndev;
+ deferred_init = 1;
+ }
+ ep->mdio_dev = mdio_ndev;
+ } else {
+ ep->mdio_dev = ndev;
+ }
+
+ ocp_set_drvdata(ocpdev, ndev);
+
+ spin_lock_init(&ep->lock);
+
+ /* Fill out MAL information and register commac */
+ ep->mal = mal;
+ ep->mal_tx_chan = emacdata->mal_tx_chan;
+ ep->mal_rx_chan = emacdata->mal_rx_chan;
+ ep->commac.ops = &emac_commac_ops;
+ ep->commac.dev = ndev;
+ ep->commac.tx_chan_mask = MAL_CHAN_MASK(ep->mal_tx_chan);
+ ep->commac.rx_chan_mask = MAL_CHAN_MASK(ep->mal_rx_chan);
+ rc = mal_register_commac(ep->mal, &ep->commac);
+ if (rc != 0)
+ goto bail;
+ commac_reg = 1;
+
+ /* Map our MMIOs */
+ ep->emacp = (emac_t *) ioremap(ocpdev->def->paddr, sizeof(emac_t));
+ if (ep->emacp == NULL) {
+ printk(KERN_ERR "emac%d: Cannot ioremap EMAC registers!\n",
+ ocpdev->def->index);
+ rc = -ENOMEM;
+ goto bail;
+ }
+
+ /* Check if we need to attach to a ZMII */
+ if (emacdata->zmii_idx >= 0) {
+ ep->zmii_input = emacdata->zmii_mux;
+ ep->zmii_dev =
+ ocp_find_device(OCP_ANY_ID, OCP_FUNC_ZMII,
+ emacdata->zmii_idx);
+ if (ep->zmii_dev == NULL)
+ printk(KERN_WARNING
+ "emac%d: ZMII %d requested but not found!\n",
+ ocpdev->def->index, emacdata->zmii_idx);
+ else if ((rc =
+ emac_init_zmii(ep->zmii_dev, ep->zmii_input,
+ emacdata->phy_mode)) != 0)
+ goto bail;
+ }
+
+ /* Check if we need to attach to a RGMII */
+ if (emacdata->rgmii_idx >= 0) {
+ ep->rgmii_input = emacdata->rgmii_mux;
+ ep->rgmii_dev =
+ ocp_find_device(OCP_ANY_ID, OCP_FUNC_RGMII,
+ emacdata->rgmii_idx);
+ if (ep->rgmii_dev == NULL)
+ printk(KERN_WARNING
+ "emac%d: RGMII %d requested but not found!\n",
+ ocpdev->def->index, emacdata->rgmii_idx);
+ else if ((rc =
+ emac_init_rgmii(ep->rgmii_dev, ep->rgmii_input,
+ emacdata->phy_mode)) != 0)
+ goto bail;
+ }
+
+ /* Check if we need to attach to a TAH */
+ if (emacdata->tah_idx >= 0) {
+ ep->tah_dev =
+ ocp_find_device(OCP_ANY_ID, OCP_FUNC_TAH,
+ emacdata->tah_idx);
+ if (ep->tah_dev == NULL)
+ printk(KERN_WARNING
+ "emac%d: TAH %d requested but not found!\n",
+ ocpdev->def->index, emacdata->tah_idx);
+ else if ((rc = emac_init_tah(ep)) != 0)
+ goto bail;
+ }
+
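+ /* If this EMAC provides the shared MDIO port, bring up any EMACs
+ * whose initialization was deferred until the MDIO device existed.
+ */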
+ if (deferred_init) {
+ if (!list_empty(&emac_init_list)) {
+ struct list_head *entry;
+ struct emac_def_dev *ddev;
+
+ list_for_each(entry, &emac_init_list) {
+ ddev =
+ list_entry(entry, struct emac_def_dev,
+ link);
+ emac_init_device(ddev->ocpdev, ddev->mal);
+ }
+ }
+ }
+
+ /* Init link monitoring timer */
+ init_timer(&ep->link_timer);
+ ep->link_timer.function = emac_link_timer;
+ ep->link_timer.data = (unsigned long)ep;
+ ep->timer_ticks = 0;
+
+ /* Fill up the mii_phy structure */
+ ep->phy_mii.dev = ndev;
+ ep->phy_mii.mdio_read = emac_phy_read;
+ ep->phy_mii.mdio_write = emac_phy_write;
+ ep->phy_mii.mode = emacdata->phy_mode;
+
+ /* Find PHY */
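+ /* Scan all 32 MII addresses, skipping any marked in the platform's
+ * phy_map or already claimed by another EMAC (busy_phy_map); a BMCR
+ * read of 0xffff or -1 means nothing answers at that address.
+ */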
+ phy_map = emacdata->phy_map | busy_phy_map;
+ for (i = 0; i <= 0x1f; i++, phy_map >>= 1) {
+ if ((phy_map & 0x1) == 0) {
+ int val = emac_phy_read(ndev, i, MII_BMCR);
+ if (val != 0xffff && val != -1)
+ break;
+ }
+ }
+ if (i == 0x20) {
+ printk(KERN_WARNING "emac%d: Can't find PHY.\n",
+ ocpdev->def->index);
+ rc = -ENODEV;
+ goto bail;
+ }
+ busy_phy_map |= 1 << i;
+ ep->mii_phy_addr = i;
+ rc = mii_phy_probe(&ep->phy_mii, i);
+ if (rc) {
+ printk(KERN_WARNING "emac%d: Failed to probe PHY type.\n",
+ ocpdev->def->index);
+ rc = -ENODEV;
+ goto bail;
+ }
+
+ /* Setup initial PHY config & startup aneg */
+ if (ep->phy_mii.def->ops->init)
+ ep->phy_mii.def->ops->init(&ep->phy_mii);
+ netif_carrier_off(ndev);
+ if (ep->phy_mii.def->features & SUPPORTED_Autoneg)
+ ep->want_autoneg = 1;
+ emac_start_link(ep, NULL);
+
+ /* read the MAC Address */
+ for (i = 0; i < 6; i++)
+ ndev->dev_addr[i] = emacdata->mac_addr[i];
+
+ /* Fill in the driver function table */
+ ndev->open = &emac_open;
+ ndev->hard_start_xmit = &emac_start_xmit;
+ ndev->stop = &emac_close;
+ ndev->get_stats = &emac_stats;
+ if (emacdata->jumbo)
+ ndev->change_mtu = &emac_change_mtu;
+ ndev->set_mac_address = &emac_set_mac_address;
+ ndev->set_multicast_list = &emac_set_multicast_list;
+ ndev->do_ioctl = &emac_ioctl;
+ SET_ETHTOOL_OPS(ndev, &emac_ethtool_ops);
+ if (emacdata->tah_idx >= 0)
+ ndev->features = NETIF_F_IP_CSUM | NETIF_F_SG;
+
+ SET_MODULE_OWNER(ndev);
+
+ rc = register_netdev(ndev);
+ if (rc != 0)
+ goto bail;
+
+ printk(KERN_INFO "%s: IBM emac, MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
+ ndev->name,
+ ndev->dev_addr[0], ndev->dev_addr[1], ndev->dev_addr[2],
+ ndev->dev_addr[3], ndev->dev_addr[4], ndev->dev_addr[5]);
+ printk(KERN_INFO "%s: Found %s PHY (0x%02x)\n",
+ ndev->name, ep->phy_mii.def->name, ep->mii_phy_addr);
+
+ bail:
+ if (rc && commac_reg)
+ mal_unregister_commac(ep->mal, &ep->commac);
+ if (rc && ndev)
+ free_netdev(ndev);
+
+ return rc;
+}
+
+static int emac_probe(struct ocp_device *ocpdev)
+{
+ struct ocp_device *maldev;
+ struct ibm_ocp_mal *mal;
+ struct ocp_func_emac_data *emacdata;
+
+ emacdata = (struct ocp_func_emac_data *)ocpdev->def->additions;
+ if (emacdata == NULL) {
+ printk(KERN_ERR "emac%d: Missing additional data!\n",
+ ocpdev->def->index);
+ return -ENODEV;
+ }
+
+ /* Get the MAL device */
+ maldev = ocp_find_device(OCP_ANY_ID, OCP_FUNC_MAL, emacdata->mal_idx);
+ if (maldev == NULL) {
+ printk(KERN_ERR "No maldev\n");
+ return -ENODEV;
+ }
+ /*
+ * Get MAL driver data, it must be here due to link order.
+ * When the driver is modularized, symbol dependencies will
+ * ensure the MAL driver is already present if built as a
+ * module.
+ */
+ mal = (struct ibm_ocp_mal *)ocp_get_drvdata(maldev);
+ if (mal == NULL) {
+ printk(KERN_ERR "No maldrv\n");
+ return -ENODEV;
+ }
+
+ /* If we depend on another EMAC for MDIO, wait for it to show up */
+ if (emacdata->mdio_idx >= 0 &&
+ (emacdata->mdio_idx != ocpdev->def->index) && !mdio_ndev) {
+ struct emac_def_dev *ddev;
+ /* Add this index to the deferred init table */
+ ddev = kmalloc(sizeof(struct emac_def_dev), GFP_KERNEL);
+ if (ddev == NULL)
+ return -ENOMEM;
+ ddev->ocpdev = ocpdev;
+ ddev->mal = mal;
+ list_add_tail(&ddev->link, &emac_init_list);
+ } else {
+ emac_init_device(ocpdev, mal);
+ }
+
+ return 0;
+}
+
+/* Structure for a device driver */
+static struct ocp_device_id emac_ids[] = {
+ {.vendor = OCP_ANY_ID,.function = OCP_FUNC_EMAC},
+ {.vendor = OCP_VENDOR_INVALID}
+};
+
+static struct ocp_driver emac_driver = {
+ .name = "emac",
+ .id_table = emac_ids,
+
+ .probe = emac_probe,
+ .remove = emac_remove,
+};
+
+static int __init emac_init(void)
+{
+ printk(KERN_INFO DRV_NAME ": " DRV_DESC ", version " DRV_VERSION "\n");
+ printk(KERN_INFO "Maintained by " DRV_AUTHOR "\n");
+
+ if (skb_res > 2) {
+ printk(KERN_WARNING "Invalid skb_res: %d, cropping to 2\n",
+ skb_res);
+ skb_res = 2;
+ }
+
+ return ocp_register_driver(&emac_driver);
+}
+
+static void __exit emac_exit(void)
+{
+ ocp_unregister_driver(&emac_driver);
+}
+
+module_init(emac_init);
+module_exit(emac_exit);
--- /dev/null
+/*
+ * ibm_emac_core.h
+ *
+ * Ethernet driver for the built-in Ethernet on the IBM 405 PowerPC
+ * processor.
+ *
+ * Armin Kuster akuster@mvista.com
+ * Sept, 2001
+ *
+ * Original driver
+ * Johnnie Peters
+ * jpeters@mvista.com
+ *
+ * Copyright 2000 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef _IBM_EMAC_CORE_H_
+#define _IBM_EMAC_CORE_H_
+
+#include <linux/netdevice.h>
+#include <asm/ocp.h>
+#include <asm/mmu.h> /* For phys_addr_t */
+
+#include "ibm_emac.h"
+#include "ibm_emac_phy.h"
+#include "ibm_emac_rgmii.h"
+#include "ibm_emac_zmii.h"
+#include "ibm_emac_mal.h"
+#include "ibm_emac_tah.h"
+
+#ifndef CONFIG_IBM_EMAC_TXB
+#define NUM_TX_BUFF 64
+#define NUM_RX_BUFF 64
+#else
+#define NUM_TX_BUFF CONFIG_IBM_EMAC_TXB
+#define NUM_RX_BUFF CONFIG_IBM_EMAC_RXB
+#endif
+
+/* This does 16 byte alignment, exactly what we need.
+ * The packet length includes FCS, but we don't want to
+ * include that when passing upstream as it messes up
+ * bridging applications.
+ */
+#ifndef CONFIG_IBM_EMAC_SKBRES
+#define SKB_RES 2
+#else
+#define SKB_RES CONFIG_IBM_EMAC_SKBRES
+#endif
+
+/* Note about alignment: alloc_skb() returns a cache line
+ * aligned buffer. However, dev_alloc_skb() will add 16 more
+ * bytes and "reserve" them, so our buffer will actually end
+ * on a half cache line. What we do is to use directly
+ * alloc_skb, allocate 16 more bytes to match the total amount
+ * allocated by dev_alloc_skb(), but we don't reserve.
+ */
+#define MAX_NUM_BUF_DESC 255
+#define DESC_BUF_SIZE 4080 /* max 4096-16 */
+#define DESC_BUF_SIZE_REG (DESC_BUF_SIZE / 16)
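+/* The MAL receive channel buffer size (RCBS) registers are programmed in
+ * units of 16 bytes, hence the divide above.
+ */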
+
+/* Transmitter timeout. */
+#define TX_TIMEOUT (2*HZ)
+
+/* MDIO latency delay */
+#define MDIO_DELAY 250
+
+/* Power management shift registers */
+#define IBM_CPM_EMMII 0 /* Shift value for MII */
+#define IBM_CPM_EMRX 1 /* Shift value for recv */
+#define IBM_CPM_EMTX 2 /* Shift value for MAC */
+#define IBM_CPM_EMAC(x) (((x)>>IBM_CPM_EMMII) | ((x)>>IBM_CPM_EMRX) | ((x)>>IBM_CPM_EMTX))
+
+#define ENET_HEADER_SIZE 14
+#define ENET_FCS_SIZE 4
+#define ENET_DEF_MTU_SIZE 1500
+#define ENET_DEF_BUF_SIZE (ENET_DEF_MTU_SIZE + ENET_HEADER_SIZE + ENET_FCS_SIZE)
+#define EMAC_MIN_FRAME 64
+#define EMAC_MAX_FRAME 9018
+#define EMAC_MIN_MTU (EMAC_MIN_FRAME - ENET_HEADER_SIZE - ENET_FCS_SIZE)
+#define EMAC_MAX_MTU (EMAC_MAX_FRAME - ENET_HEADER_SIZE - ENET_FCS_SIZE)
+
+#ifdef CONFIG_IBM_EMAC_ERRMSG
+void emac_serr_dump_0(struct net_device *dev);
+void emac_serr_dump_1(struct net_device *dev);
+void emac_err_dump(struct net_device *dev, int em0isr);
+void emac_phy_dump(struct net_device *);
+void emac_desc_dump(struct net_device *);
+void emac_mac_dump(struct net_device *);
+void emac_mal_dump(struct net_device *);
+#else
+#define emac_serr_dump_0(dev) do { } while (0)
+#define emac_serr_dump_1(dev) do { } while (0)
+#define emac_err_dump(dev,x) do { } while (0)
+#define emac_phy_dump(dev) do { } while (0)
+#define emac_desc_dump(dev) do { } while (0)
+#define emac_mac_dump(dev) do { } while (0)
+#define emac_mal_dump(dev) do { } while (0)
+#endif
+
+struct ocp_enet_private {
+ struct sk_buff *tx_skb[NUM_TX_BUFF];
+ struct sk_buff *rx_skb[NUM_RX_BUFF];
+ struct mal_descriptor *tx_desc;
+ struct mal_descriptor *rx_desc;
+ struct mal_descriptor *rx_dirty;
+ struct net_device_stats stats;
+ int tx_cnt;
+ int rx_slot;
+ int dirty_rx;
+ int tx_slot;
+ int ack_slot;
+ int rx_buffer_size;
+
+ struct mii_phy phy_mii;
+ int mii_phy_addr;
+ int want_autoneg;
+ int timer_ticks;
+ struct timer_list link_timer;
+ struct net_device *mdio_dev;
+
+ struct ocp_device *rgmii_dev;
+ int rgmii_input;
+
+ struct ocp_device *zmii_dev;
+ int zmii_input;
+
+ struct ibm_ocp_mal *mal;
+ int mal_tx_chan, mal_rx_chan;
+ struct mal_commac commac;
+
+ struct ocp_device *tah_dev;
+
+ int opened;
+ int going_away;
+ int wol_irq;
+ emac_t *emacp;
+ struct ocp_device *ocpdev;
+ struct net_device *ndev;
+ spinlock_t lock;
+};
+#endif /* _IBM_EMAC_CORE_H_ */
--- /dev/null
+/*
+ * ibm_ocp_mal.c
+ *
+ * Armin Kuster akuster@mvista.com
+ * June, 2002
+ *
+ * Copyright 2002 MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/netdevice.h>
+#include <linux/init.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/ocp.h>
+
+#include "ibm_emac_mal.h"
+
+// Locking: Should we share a lock with the client? The client could provide
+// a lock pointer (optionally) in the commac structure... I don't think this
+// is really necessary though.
+
+/* This lock protects the commac list. On today's UP implementations, it's
+ * really only used as IRQ protection in mal_{register,unregister}_commac()
+ */
+static rwlock_t mal_list_lock = RW_LOCK_UNLOCKED;
+
+int mal_register_commac(struct ibm_ocp_mal *mal, struct mal_commac *commac)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&mal_list_lock, flags);
+
+ /* Don't let multiple commacs claim the same channel */
+ if ((mal->tx_chan_mask & commac->tx_chan_mask) ||
+ (mal->rx_chan_mask & commac->rx_chan_mask)) {
+ write_unlock_irqrestore(&mal_list_lock, flags);
+ return -EBUSY;
+ }
+
+ mal->tx_chan_mask |= commac->tx_chan_mask;
+ mal->rx_chan_mask |= commac->rx_chan_mask;
+
+ list_add(&commac->list, &mal->commac);
+
+ write_unlock_irqrestore(&mal_list_lock, flags);
+
+ return 0;
+}
+
+int mal_unregister_commac(struct ibm_ocp_mal *mal, struct mal_commac *commac)
+{
+ unsigned long flags;
+
+ write_lock_irqsave(&mal_list_lock, flags);
+
+ mal->tx_chan_mask &= ~commac->tx_chan_mask;
+ mal->rx_chan_mask &= ~commac->rx_chan_mask;
+
+ list_del_init(&commac->list);
+
+ write_unlock_irqrestore(&mal_list_lock, flags);
+
+ return 0;
+}
+
+int mal_set_rcbs(struct ibm_ocp_mal *mal, int channel, unsigned long size)
+{
+ switch (channel) {
+ case 0:
+ set_mal_dcrn(mal, DCRN_MALRCBS0, size);
+ break;
+#ifdef DCRN_MALRCBS1
+ case 1:
+ set_mal_dcrn(mal, DCRN_MALRCBS1, size);
+ break;
+#endif
+#ifdef DCRN_MALRCBS2
+ case 2:
+ set_mal_dcrn(mal, DCRN_MALRCBS2, size);
+ break;
+#endif
+#ifdef DCRN_MALRCBS3
+ case 3:
+ set_mal_dcrn(mal, DCRN_MALRCBS3, size);
+ break;
+#endif
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static irqreturn_t mal_serr(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ unsigned long mal_error;
+
+ /*
+ * This SERR applies to one of the devices on the MAL, here we charge
+ * it against the first EMAC registered for the MAL.
+ */
+
+ mal_error = get_mal_dcrn(mal, DCRN_MALESR);
+
+ printk(KERN_ERR "%s: System Error (MALESR=%lx)\n",
+ "MAL" /* FIXME: get the name right */ , mal_error);
+
+ /* FIXME: decipher error */
+ /* FIXME: distribute to commacs, if possible */
+
+ /* Clear the error status register */
+ set_mal_dcrn(mal, DCRN_MALESR, mal_error);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_txeob(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long isr;
+
+ isr = get_mal_dcrn(mal, DCRN_MALTXEOBISR);
+ set_mal_dcrn(mal, DCRN_MALTXEOBISR, isr);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (isr & mc->tx_chan_mask) {
+ mc->ops->txeob(mc->dev, isr & mc->tx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_rxeob(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long isr;
+
+ isr = get_mal_dcrn(mal, DCRN_MALRXEOBISR);
+ set_mal_dcrn(mal, DCRN_MALRXEOBISR, isr);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (isr & mc->rx_chan_mask) {
+ mc->ops->rxeob(mc->dev, isr & mc->rx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t mal_txde(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long deir;
+
+ deir = get_mal_dcrn(mal, DCRN_MALTXDEIR);
+
+ /* FIXME: print which MAL correctly */
+ printk(KERN_WARNING "%s: Tx descriptor error (MALTXDEIR=%lx)\n",
+ "MAL", deir);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (deir & mc->tx_chan_mask) {
+ mc->ops->txde(mc->dev, deir & mc->tx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+/*
+ * This interrupt should be very rare at best. This occurs when
+ * the hardware has a problem with the receive descriptors. The manual
+ * states that it occurs when the hardware encounters a receive
+ * descriptor whose empty bit is not set. The recovery mechanism will be to
+ * traverse through the descriptors, handle any that are marked to be
+ * handled and reinitialize each along the way. At that point the driver
+ * will be restarted.
+ */
+static irqreturn_t mal_rxde(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ibm_ocp_mal *mal = dev_instance;
+ struct list_head *l;
+ unsigned long deir;
+
+ deir = get_mal_dcrn(mal, DCRN_MALRXDEIR);
+
+ /*
+ * This really is needed; this case was encountered in stress testing.
+ */
+ if (deir == 0)
+ return IRQ_HANDLED;
+
+ /* FIXME: print which MAL correctly */
+ printk(KERN_WARNING "%s: Rx descriptor error (MALRXDEIR=%lx)\n",
+ "MAL", deir);
+
+ read_lock(&mal_list_lock);
+ list_for_each(l, &mal->commac) {
+ struct mal_commac *mc = list_entry(l, struct mal_commac, list);
+
+ if (deir & mc->rx_chan_mask) {
+ mc->ops->rxde(mc->dev, deir & mc->rx_chan_mask);
+ }
+ }
+ read_unlock(&mal_list_lock);
+
+ return IRQ_HANDLED;
+}
+
+static int __init mal_probe(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_mal *mal = NULL;
+ struct ocp_func_mal_data *maldata;
+ int err = 0;
+
+ maldata = (struct ocp_func_mal_data *)ocpdev->def->additions;
+ if (maldata == NULL) {
+ printk(KERN_ERR "mal%d: Missing additional data!\n",
+ ocpdev->def->index);
+ return -ENODEV;
+ }
+
+ mal = kmalloc(sizeof(struct ibm_ocp_mal), GFP_KERNEL);
+ if (mal == NULL) {
+ printk(KERN_ERR
+ "mal%d: Out of memory allocating MAL structure!\n",
+ ocpdev->def->index);
+ return -ENOMEM;
+ }
+ memset(mal, 0, sizeof(*mal));
+
+ switch (ocpdev->def->index) {
+ case 0:
+ mal->dcrbase = DCRN_MAL_BASE;
+ break;
+#ifdef DCRN_MAL1_BASE
+ case 1:
+ mal->dcrbase = DCRN_MAL1_BASE;
+ break;
+#endif
+ default:
+ BUG();
+ }
+
+ /**************************/
+
+ INIT_LIST_HEAD(&mal->commac);
+
+ set_mal_dcrn(mal, DCRN_MALRXCARR, 0xFFFFFFFF);
+ set_mal_dcrn(mal, DCRN_MALTXCARR, 0xFFFFFFFF);
+
+ set_mal_dcrn(mal, DCRN_MALCR, MALCR_MMSR); /* 384 */
+ /* FIXME: Add delay */
+
+ /* Set the MAL configuration register */
+ set_mal_dcrn(mal, DCRN_MALCR,
+ MALCR_PLBB | MALCR_OPBBL | MALCR_LEA |
+ MALCR_PLBLT_DEFAULT);
+
+ /* It would be nice to allocate buffers separately for each
+ * channel, but we can't because the channels share the upper
+ * 13 bits of address lines. Each channel's buffer must also
+ * be 4k aligned, so we allocate 4k for each channel. This is
+ * inefficient FIXME: do better, if possible */
+ mal->tx_virt_addr = dma_alloc_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN *
+ maldata->num_tx_chans,
+ &mal->tx_phys_addr, GFP_KERNEL);
+ if (mal->tx_virt_addr == NULL) {
+ printk(KERN_ERR
+ "mal%d: Out of memory allocating MAL descriptors!\n",
+ ocpdev->def->index);
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ /* God, oh, god, I hate DCRs */
+ set_mal_dcrn(mal, DCRN_MALTXCTP0R, mal->tx_phys_addr);
+#ifdef DCRN_MALTXCTP1R
+ if (maldata->num_tx_chans > 1)
+ set_mal_dcrn(mal, DCRN_MALTXCTP1R,
+ mal->tx_phys_addr + MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP1R */
+#ifdef DCRN_MALTXCTP2R
+ if (maldata->num_tx_chans > 2)
+ set_mal_dcrn(mal, DCRN_MALTXCTP2R,
+ mal->tx_phys_addr + 2 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP2R */
+#ifdef DCRN_MALTXCTP3R
+ if (maldata->num_tx_chans > 3)
+ set_mal_dcrn(mal, DCRN_MALTXCTP3R,
+ mal->tx_phys_addr + 3 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP3R */
+#ifdef DCRN_MALTXCTP4R
+ if (maldata->num_tx_chans > 4)
+ set_mal_dcrn(mal, DCRN_MALTXCTP4R,
+ mal->tx_phys_addr + 4 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP4R */
+#ifdef DCRN_MALTXCTP5R
+ if (maldata->num_tx_chans > 5)
+ set_mal_dcrn(mal, DCRN_MALTXCTP5R,
+ mal->tx_phys_addr + 5 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP5R */
+#ifdef DCRN_MALTXCTP6R
+ if (maldata->num_tx_chans > 6)
+ set_mal_dcrn(mal, DCRN_MALTXCTP6R,
+ mal->tx_phys_addr + 6 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP6R */
+#ifdef DCRN_MALTXCTP7R
+ if (maldata->num_tx_chans > 7)
+ set_mal_dcrn(mal, DCRN_MALTXCTP7R,
+ mal->tx_phys_addr + 7 * MAL_DT_ALIGN);
+#endif /* DCRN_MALTXCTP7R */
+
+ mal->rx_virt_addr = dma_alloc_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN *
+ maldata->num_rx_chans,
+ &mal->rx_phys_addr, GFP_KERNEL);
+ if (mal->rx_virt_addr == NULL) {
+ printk(KERN_ERR
+ "mal%d: Out of memory allocating MAL descriptors!\n",
+ ocpdev->def->index);
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ set_mal_dcrn(mal, DCRN_MALRXCTP0R, mal->rx_phys_addr);
+#ifdef DCRN_MALRXCTP1R
+ if (maldata->num_rx_chans > 1)
+ set_mal_dcrn(mal, DCRN_MALRXCTP1R,
+ mal->rx_phys_addr + MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP1R */
+#ifdef DCRN_MALRXCTP2R
+ if (maldata->num_rx_chans > 2)
+ set_mal_dcrn(mal, DCRN_MALRXCTP2R,
+ mal->rx_phys_addr + 2 * MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP2R */
+#ifdef DCRN_MALRXCTP3R
+ if (maldata->num_rx_chans > 3)
+ set_mal_dcrn(mal, DCRN_MALRXCTP3R,
+ mal->rx_phys_addr + 3 * MAL_DT_ALIGN);
+#endif /* DCRN_MALRXCTP3R */
+
+ err = request_irq(maldata->serr_irq, mal_serr, 0, "MAL SERR", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->txde_irq, mal_txde, 0, "MAL TX DE ", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->txeob_irq, mal_txeob, 0, "MAL TX EOB", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->rxde_irq, mal_rxde, 0, "MAL RX DE", mal);
+ if (err)
+ goto fail;
+ err = request_irq(maldata->rxeob_irq, mal_rxeob, 0, "MAL RX EOB", mal);
+ if (err)
+ goto fail;
+
+ set_mal_dcrn(mal, DCRN_MALIER,
+ MALIER_DE | MALIER_NE | MALIER_TE |
+ MALIER_OPBE | MALIER_PLBE);
+
+ /* Advertise me to the rest of the world */
+ ocp_set_drvdata(ocpdev, mal);
+
+ printk(KERN_INFO "mal%d: Initialized, %d tx channels, %d rx channels\n",
+ ocpdev->def->index, maldata->num_tx_chans,
+ maldata->num_rx_chans);
+
+ return 0;
+
+ fail:
+ /* FIXME: dispose of the requested IRQs! */
+ if (err && mal)
+ kfree(mal);
+ return err;
+}
+
+static void __exit mal_remove(struct ocp_device *ocpdev)
+{
+ struct ibm_ocp_mal *mal = ocp_get_drvdata(ocpdev);
+ struct ocp_func_mal_data *maldata = ocpdev->def->additions;
+
+ BUG_ON(!maldata);
+
+ ocp_set_drvdata(ocpdev, NULL);
+
+ /* FIXME: shut down the MAL, deal with dependency with emac */
+ free_irq(maldata->serr_irq, mal);
+ free_irq(maldata->txde_irq, mal);
+ free_irq(maldata->txeob_irq, mal);
+ free_irq(maldata->rxde_irq, mal);
+ free_irq(maldata->rxeob_irq, mal);
+
+ if (mal->tx_virt_addr)
+ dma_free_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN * maldata->num_tx_chans,
+ mal->tx_virt_addr, mal->tx_phys_addr);
+
+ if (mal->rx_virt_addr)
+ dma_free_coherent(&ocpdev->dev,
+ MAL_DT_ALIGN * maldata->num_rx_chans,
+ mal->rx_virt_addr, mal->rx_phys_addr);
+
+ kfree(mal);
+}
+
+/* Structure for a device driver */
+static struct ocp_device_id mal_ids[] = {
+ {.vendor = OCP_ANY_ID,.function = OCP_FUNC_MAL},
+ {.vendor = OCP_VENDOR_INVALID}
+};
+
+static struct ocp_driver mal_driver = {
+ .name = "mal",
+ .id_table = mal_ids,
+
+ .probe = mal_probe,
+ .remove = mal_remove,
+};
+
+static int __init init_mals(void)
+{
+ int rc;
+
+ rc = ocp_register_driver(&mal_driver);
+ if (rc < 0) {
+ ocp_unregister_driver(&mal_driver);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static void __exit exit_mals(void)
+{
+ ocp_unregister_driver(&mal_driver);
+}
+
+module_init(init_mals);
+module_exit(exit_mals);
--- /dev/null
+#ifndef _IBM_EMAC_MAL_H
+#define _IBM_EMAC_MAL_H
+
+#include <linux/list.h>
+
+#define MAL_DT_ALIGN (4096) /* Alignment for each channel's descriptor table */
+
+#define MAL_CHAN_MASK(chan) (0x80000000 >> (chan))
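+/* e.g. MAL_CHAN_MASK(0) == 0x80000000, MAL_CHAN_MASK(2) == 0x20000000 */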
+
+/* MAL Buffer Descriptor structure */
+struct mal_descriptor {
+ unsigned short ctrl; /* MAL / Commac status control bits */
+ short data_len; /* Max length is 4K-1 (12 bits) */
+ unsigned char *data_ptr; /* pointer to actual data buffer */
+} __attribute__ ((packed));
+
+/* The following defines are for the MadMAL status and control registers. */
+/* MadMAL transmit and receive status/control bits */
+#define MAL_RX_CTRL_EMPTY 0x8000
+#define MAL_RX_CTRL_WRAP 0x4000
+#define MAL_RX_CTRL_CM 0x2000
+#define MAL_RX_CTRL_LAST 0x1000
+#define MAL_RX_CTRL_FIRST 0x0800
+#define MAL_RX_CTRL_INTR 0x0400
+
+#define MAL_TX_CTRL_READY 0x8000
+#define MAL_TX_CTRL_WRAP 0x4000
+#define MAL_TX_CTRL_CM 0x2000
+#define MAL_TX_CTRL_LAST 0x1000
+#define MAL_TX_CTRL_INTR 0x0400
+
+struct mal_commac_ops {
+ void (*txeob) (void *dev, u32 chanmask);
+ void (*txde) (void *dev, u32 chanmask);
+ void (*rxeob) (void *dev, u32 chanmask);
+ void (*rxde) (void *dev, u32 chanmask);
+};
+
+struct mal_commac {
+ struct mal_commac_ops *ops;
+ void *dev;
+ u32 tx_chan_mask, rx_chan_mask;
+ struct list_head list;
+};
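+
+/*
+ * Illustrative sketch (not part of this driver): how a client such as
+ * an EMAC driver might hook into the MAL interrupt dispatch. The names
+ * my_ops, my_commac, my_emac_*() and MY_CHAN are hypothetical.
+ *
+ * static struct mal_commac_ops my_ops = {
+ * .txeob = my_emac_txeob, .txde = my_emac_txde,
+ * .rxeob = my_emac_rxeob, .rxde = my_emac_rxde,
+ * };
+ * static struct mal_commac my_commac;
+ *
+ * my_commac.ops = &my_ops;
+ * my_commac.dev = my_netdev;
+ * my_commac.tx_chan_mask = MAL_CHAN_MASK(MY_CHAN);
+ * my_commac.rx_chan_mask = MAL_CHAN_MASK(MY_CHAN);
+ * mal_register_commac(mal, &my_commac);
+ */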
+
+struct ibm_ocp_mal {
+ int dcrbase;
+
+ struct list_head commac;
+ u32 tx_chan_mask, rx_chan_mask;
+
+ dma_addr_t tx_phys_addr;
+ struct mal_descriptor *tx_virt_addr;
+
+ dma_addr_t rx_phys_addr;
+ struct mal_descriptor *rx_virt_addr;
+};
+
+#define GET_MAL_STANZA(base,dcrn) \
+ case base: \
+ x = mfdcr(dcrn(base)); \
+ break;
+
+#define SET_MAL_STANZA(base,dcrn, val) \
+ case base: \
+ mtdcr(dcrn(base), (val)); \
+ break;
+
+#define GET_MAL0_STANZA(dcrn) GET_MAL_STANZA(DCRN_MAL_BASE,dcrn)
+#define SET_MAL0_STANZA(dcrn,val) SET_MAL_STANZA(DCRN_MAL_BASE,dcrn,val)
+
+#ifdef DCRN_MAL1_BASE
+#define GET_MAL1_STANZA(dcrn) GET_MAL_STANZA(DCRN_MAL1_BASE,dcrn)
+#define SET_MAL1_STANZA(dcrn,val) SET_MAL_STANZA(DCRN_MAL1_BASE,dcrn,val)
+#else /* ! DCRN_MAL1_BASE */
+#define GET_MAL1_STANZA(dcrn)
+#define SET_MAL1_STANZA(dcrn,val)
+#endif
+
+#define get_mal_dcrn(mal, dcrn) ({ \
+ u32 x; \
+ switch ((mal)->dcrbase) { \
+ GET_MAL0_STANZA(dcrn) \
+ GET_MAL1_STANZA(dcrn) \
+ default: \
+ x = 0; \
+ BUG(); \
+ } \
+x; })
+
+#define set_mal_dcrn(mal, dcrn, val) do { \
+ switch ((mal)->dcrbase) { \
+ SET_MAL0_STANZA(dcrn,val) \
+ SET_MAL1_STANZA(dcrn,val) \
+ default: \
+ BUG(); \
+ } } while (0)
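+
+/*
+ * Usage sketch: mtdcr()/mfdcr() need compile-time constant DCR numbers
+ * (they are encoded in the instruction), hence the switch-based stanzas
+ * above. A caller simply writes, for example:
+ *
+ * u32 ier = get_mal_dcrn(mal, DCRN_MALIER);
+ * set_mal_dcrn(mal, DCRN_MALIER, ier | MALIER_DE);
+ */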
+
+static inline void mal_enable_tx_channels(struct ibm_ocp_mal *mal, u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALTXCASR,
+ get_mal_dcrn(mal, DCRN_MALTXCASR) | chanmask);
+}
+
+static inline void mal_disable_tx_channels(struct ibm_ocp_mal *mal,
+ u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALTXCARR, chanmask);
+}
+
+static inline void mal_enable_rx_channels(struct ibm_ocp_mal *mal, u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALRXCASR,
+ get_mal_dcrn(mal, DCRN_MALRXCASR) | chanmask);
+}
+
+static inline void mal_disable_rx_channels(struct ibm_ocp_mal *mal,
+ u32 chanmask)
+{
+ set_mal_dcrn(mal, DCRN_MALRXCARR, chanmask);
+}
+
+extern int mal_register_commac(struct ibm_ocp_mal *mal,
+ struct mal_commac *commac);
+extern int mal_unregister_commac(struct ibm_ocp_mal *mal,
+ struct mal_commac *commac);
+
+extern int mal_set_rcbs(struct ibm_ocp_mal *mal, int channel,
+ unsigned long size);
+
+#endif /* _IBM_EMAC_MAL_H */
--- /dev/null
+/*
+ * linux/drivers/net/netdump.c
+ *
+ * Copyright (C) 2001 Ingo Molnar <mingo@redhat.com>
+ * Copyright (C) 2002 Red Hat, Inc.
+ * Copyright (C) 2004 Red Hat, Inc.
+ *
+ * This file contains the implementation of an IRQ-safe, crash-safe
+ * kernel console implementation that outputs kernel messages to the
+ * network.
+ *
+ * Modification history:
+ *
+ * 2001-09-17 started by Ingo Molnar.
+ * 2002-03-14 simultaneous syslog packet option by Michael K. Johnson
+ * 2004-04-07 port to 2.6 netpoll facility by Dave Anderson and Jeff Moyer.
+ */
+#include <linux/mm.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/random.h>
+#include <linux/reboot.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <asm/unaligned.h>
+#include <asm/pgtable.h>
+#include <linux/console.h>
+#include <linux/smp_lock.h>
+#include <linux/elf.h>
+#include <linux/preempt.h>
+
+#include "netdump.h"
+#include <linux/netpoll.h>
+
+/*
+ * prototypes.
+ */
+void netdump_rx(struct netpoll *np, short source, char *data, int dlen);
+static void send_netdump_msg(struct netpoll *np, const char *msg, unsigned int msg_len, reply_t *reply);
+static void send_netdump_mem(struct netpoll *np, req_t *req);
+static void netdump_startup_handshake(struct netpoll *np);
+static asmlinkage void netpoll_netdump(struct pt_regs *regs, void *arg);
+static void netpoll_start_netdump(struct pt_regs *regs);
+
+
+#include <asm/netdump.h>
+
+
+#undef Dprintk
+#define DEBUG 0
+#if DEBUG
+# define Dprintk(x...) printk(KERN_INFO x)
+#else
+# define Dprintk(x...)
+#endif
+
+MODULE_AUTHOR("Maintainer: Dave Anderson <anderson@redhat.com>");
+MODULE_DESCRIPTION("Network kernel crash dump module");
+MODULE_LICENSE("GPL");
+
+static char config[256];
+module_param_string(netdump, config, 256, 0);
+MODULE_PARM_DESC(netdump,
+ " netdump=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-macaddr]\n");
+
+static u32 magic1, magic2;
+module_param(magic1, uint, 000);
+module_param(magic2, uint, 000);
+
+static struct netpoll np = {
+ .name = "netdump",
+ .dev_name = "eth0",
+ .local_port = 6666,
+ .remote_port = 6666,
+ .remote_mac = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00},
+ .rx_hook = netdump_rx,
+ .dump_func = netpoll_start_netdump,
+};
+
+
+/*
+ * NOTE: security depends on the trusted path between the netconsole
+ * server and netconsole client, since none of the packets are
+ * encrypted. The random magic number protects the protocol
+ * against spoofing.
+ */
+static u64 netdump_magic;
+
+static spinlock_t req_lock = SPIN_LOCK_UNLOCKED;
+static int nr_req = 0;
+static LIST_HEAD(request_list);
+
+static unsigned long long t0, jiffy_cycles = 1000 * (1000000/HZ);
+void *netdump_stack;
+
+
+static void update_jiffies(void)
+{
+ static unsigned long long prev_tick;
+ platform_timestamp(t0);
+
+ /* maintain jiffies in a polling fashion, based on rdtsc. */
+ if (t0 - prev_tick >= jiffy_cycles) {
+ prev_tick += jiffy_cycles;
+ jiffies++;
+ }
+}
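+
+/*
+ * Note: while dumping, the box polls with interrupts disabled, so the
+ * regular timer tick cannot advance jiffies; update_jiffies() keeps
+ * time-dependent code such as print_status() below working.
+ */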
+
+static void add_new_req(req_t *req)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&req_lock, flags);
+ list_add_tail(&req->list, &request_list);
+ nr_req++;
+ Dprintk("pending requests: %d.\n", nr_req);
+ spin_unlock_irqrestore(&req_lock, flags);
+}
+
+static req_t *get_new_req(void)
+{
+ req_t *req = NULL;
+ unsigned long flags;
+
+ update_jiffies();
+
+ spin_lock_irqsave(&req_lock, flags);
+ if (nr_req) {
+ req = list_entry(request_list.next, req_t, list);
+ list_del(&req->list);
+ nr_req--;
+ }
+ spin_unlock_irqrestore(&req_lock, flags);
+
+ return req;
+}
+
+static req_t *alloc_req(void)
+{
+ req_t *req;
+
+ req = (req_t *) kmalloc(sizeof(*req), GFP_ATOMIC);
+ return req;
+}
+
+static inline void print_status (req_t *req)
+{
+ static int count = 0;
+ static unsigned long prev_jiffies = 0;
+
+ if (jiffies/HZ != prev_jiffies/HZ) {
+ prev_jiffies = jiffies;
+ count++;
+ switch (count & 3) {
+ case 0: printk("%d(%lu)/\r", nr_req, jiffies); break;
+ case 1: printk("%d(%lu)|\r", nr_req, jiffies); break;
+ case 2: printk("%d(%lu)\\\r", nr_req, jiffies); break;
+ case 3: printk("%d(%lu)-\r", nr_req, jiffies); break;
+ }
+ }
+}
+
+void netdump_rx(struct netpoll *np, short source, char *data, int dlen)
+{
+ req_t *req, *__req = (req_t *)data;
+
+ if (!netdump_mode)
+ return;
+#if DEBUG
+ {
+ static int packet_count;
+ Dprintk(" %d\r", ++packet_count);
+ }
+#endif
+
+ if (dlen < NETDUMP_REQ_SIZE) {
+ Dprintk("... netdump_rx: len not ok.\n");
+ return;
+ }
+
+ req = alloc_req();
+ if (!req) {
+ printk("no more RAM to allocate request - dropping it.\n");
+ return;
+ }
+
+ req->command = ntohl(__req->command);
+ req->from = ntohl(__req->from);
+ req->to = ntohl(__req->to);
+ req->nr = ntohl(__req->nr);
+
+ Dprintk("... netdump command: %08x.\n", req->command);
+ Dprintk("... netdump from: %08x.\n", req->from);
+ Dprintk("... netdump to: %08x.\n", req->to);
+
+ add_new_req(req);
+ return;
+}
+
+#define MAX_MSG_LEN (HEADER_LEN + 1024)
+
+static unsigned char effective_version = NETDUMP_VERSION;
+
+static void send_netdump_msg(struct netpoll *np, const char *msg, unsigned int msg_len, reply_t *reply)
+{
+ /* max len should be 1024 + HEADER_LEN */
+ static unsigned char netpoll_msg[MAX_MSG_LEN + 1];
+
+ if (msg_len + HEADER_LEN > MAX_MSG_LEN + 1) {
+ printk("CODER ERROR!!! msg_len %ud too big for send msg\n",
+ msg_len);
+ for (;;) local_irq_disable();
+ /* NOTREACHED */
+ }
+
+ netpoll_msg[0] = effective_version;
+ put_unaligned(htonl(reply->nr), (u32 *) (&netpoll_msg[1]));
+ put_unaligned(htonl(reply->code), (u32 *) (&netpoll_msg[5]));
+ put_unaligned(htonl(reply->info), (u32 *) (&netpoll_msg[9]));
+ memcpy(&netpoll_msg[HEADER_LEN], msg, msg_len);
+
+ netpoll_send_udp(np, netpoll_msg, HEADER_LEN + msg_len);
+}
+
+static void send_netdump_mem(struct netpoll *np, req_t *req)
+{
+ int i;
+ char *kaddr;
+ char str[1024];
+ struct page *page = NULL;
+ unsigned long nr = req->from;
+ int nr_chunks = PAGE_SIZE/1024;
+ reply_t reply;
+
+ Dprintk(" ... send_netdump_mem\n");
+ reply.nr = req->nr;
+ reply.info = 0;
+ if (req->from >= platform_max_pfn()) {
+ sprintf(str, "page %08lx is bigger than max page # %08lx!\n",
+ nr, platform_max_pfn());
+ reply.code = REPLY_ERROR;
+ send_netdump_msg(np, str, strlen(str), &reply);
+ return;
+ }
+ if (platform_page_is_ram(nr)) {
+ page = pfn_to_page(nr);
+ if (page_to_pfn(page) != nr)
+ page = NULL;
+ }
+ if (!page) {
+ reply.code = REPLY_RESERVED;
+ reply.info = platform_next_available(nr);
+ send_netdump_msg(np, str, 0, &reply);
+ return;
+ }
+
+ kaddr = (char *)kmap_atomic(page, KM_CRASHDUMP);
+
+ for (i = 0; i < nr_chunks; i++) {
+ unsigned int offset = i*1024;
+ reply.code = REPLY_MEM;
+ reply.info = offset;
+ Dprintk(" ... send_netdump_mem: sending message\n");
+ send_netdump_msg(np, kaddr + offset, 1024, &reply);
+ Dprintk(" ... send_netdump_mem: sent message\n");
+ }
+
+ kunmap_atomic(kaddr, KM_CRASHDUMP);
+ Dprintk(" ... send_netdump_mem: returning\n");
+}
+
+/*
+ * This function waits for the client to acknowledge the receipt
+ * of the netdump startup reply, with the possibility of packets
+ * getting lost. We resend the startup packet if no ACK is received,
+ * after a 1 second delay.
+ *
+ * (The client can test the success of the handshake via the HELLO
+ * command, and send ACKs until we enter netdump mode.)
+ */
+static void netdump_startup_handshake(struct netpoll *np)
+{
+ char tmp[200];
+ reply_t reply;
+ req_t *req = NULL;
+ int i;
+
+repeat:
+ sprintf(tmp,
+ "task_struct:0x%lx page_offset:0x%llx netdump_magic:0x%llx\n",
+ (unsigned long)current, (unsigned long long)PAGE_OFFSET,
+ (unsigned long long)netdump_magic);
+ reply.code = REPLY_START_NETDUMP;
+ reply.nr = platform_machine_type();
+ reply.info = NETDUMP_VERSION_MAX;
+
+ send_netdump_msg(np, tmp, strlen(tmp), &reply);
+
+ for (i = 0; i < 10000; i++) {
+ // wait 1 sec.
+ udelay(100);
+ Dprintk("handshake: polling controller ...\n");
+ netpoll_poll(np);
+ req = get_new_req();
+ if (req)
+ break;
+ }
+ if (!req)
+ goto repeat;
+ if (req->command != COMM_START_NETDUMP_ACK) {
+ kfree(req);
+ goto repeat;
+ }
+
+ /*
+ * Negotiate an effective version that works with the server.
+ */
+ if ((effective_version = platform_effective_version(req)) == 0) {
+ printk(KERN_ERR
+ "netdump: server cannot handle this client -- rebooting.\n");
+ netdump_mdelay(3000);
+ machine_restart(NULL);
+ }
+
+ kfree(req);
+
+ printk("NETDUMP START!\n");
+}
+
+static char cpus_frozen[NR_CPUS] = { 0 };
+
+static void freeze_cpu (void * dummy)
+{
+ cpus_frozen[smp_processor_id()] = 1;
+ platform_freeze_cpu();
+}
+
+static void netpoll_start_netdump(struct pt_regs *regs)
+{
+ int i;
+ unsigned long flags;
+
+ /*
+ * The netdump code is not re-entrant for several reasons. Most
+ * immediately, we will switch to the base of our stack and
+ * overwrite all of our call history.
+ */
+ if (netdump_mode) {
+ printk(KERN_ERR
+ "netpoll_start_netdump: called recursively. rebooting.\n");
+ netdump_mdelay(3000);
+ machine_restart(NULL);
+ }
+ netdump_mode = 1;
+
+ local_irq_save(flags);
+ preempt_disable();
+
+ smp_call_function(freeze_cpu, NULL, 1, -1);
+ netdump_mdelay(3000);
+ for (i = 0; i < NR_CPUS; i++) {
+ if (cpus_frozen[i])
+ printk("CPU#%d is frozen.\n", i);
+ else if (i == smp_processor_id())
+ printk("CPU#%d is executing netdump.\n", i);
+ }
+
+ /*
+ * Some platforms may want to execute netdump on its own stack.
+ */
+ platform_start_crashdump(netdump_stack, netpoll_netdump, regs);
+
+ preempt_enable_no_resched();
+ local_irq_restore(flags);
+ return;
+}
+
+static char command_tmp[1024];
+
+static asmlinkage void netpoll_netdump(struct pt_regs *regs, void *platform_arg)
+{
+ reply_t reply;
+ char *tmp = command_tmp;
+ extern unsigned long totalram_pages;
+ struct pt_regs myregs;
+ req_t *req;
+
+ /*
+ * Just in case we are crashing within the networking code
+ * ... attempt to fix up.
+ */
+ netpoll_reset_locks(&np);
+ platform_fix_regs();
+ platform_timestamp(t0);
+ netpoll_set_trap(1); /* bypass networking stack */
+
+ printk("< netdump activated - performing handshake with the server. >\n");
+ netdump_startup_handshake(&np);
+
+ printk("< handshake completed - listening for dump requests. >\n");
+
+ while (netdump_mode) {
+ local_irq_disable();
+ Dprintk("main netdump loop: polling controller ...\n");
+ netpoll_poll(&np);
+
+ req = get_new_req();
+ if (!req)
+ continue;
+
+ Dprintk("got new req, command %d.\n", req->command);
+ print_status(req);
+ switch (req->command) {
+ case COMM_NONE:
+ Dprintk("got NO command.\n");
+ break;
+
+ case COMM_SEND_MEM:
+ Dprintk("got MEM command.\n");
+ send_netdump_mem(&np, req);
+ break;
+
+ case COMM_EXIT:
+ Dprintk("got EXIT command.\n");
+ netdump_mode = 0;
+ netpoll_set_trap(0);
+ break;
+
+ case COMM_REBOOT:
+ Dprintk("got REBOOT command.\n");
+ printk("netdump: rebooting in 3 seconds.\n");
+ netdump_mdelay(3000);
+ machine_restart(NULL);
+ break;
+
+ case COMM_HELLO:
+ sprintf(tmp, "Hello, this is netdump version 0.%02d\n",
+ NETDUMP_VERSION);
+ reply.code = REPLY_HELLO;
+ reply.nr = req->nr;
+ reply.info = NETDUMP_VERSION;
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_GET_PAGE_SIZE:
+ sprintf(tmp, "PAGE_SIZE: %ld\n", PAGE_SIZE);
+ reply.code = REPLY_PAGE_SIZE;
+ reply.nr = req->nr;
+ reply.info = PAGE_SIZE;
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_GET_REGS:
+ reply.code = REPLY_REGS;
+ reply.nr = req->nr;
+ reply.info = (u32)totalram_pages;
+ send_netdump_msg(&np, tmp,
+ platform_get_regs(tmp, &myregs), &reply);
+ break;
+
+ case COMM_GET_NR_PAGES:
+ reply.code = REPLY_NR_PAGES;
+ reply.nr = req->nr;
+ reply.info = platform_max_pfn();
+ sprintf(tmp,
+ "Number of pages: %ld\n", platform_max_pfn());
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+ break;
+
+ case COMM_SHOW_STATE:
+ /* send response first */
+ reply.code = REPLY_SHOW_STATE;
+ reply.nr = req->nr;
+ reply.info = 0;
+
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+
+ netdump_mode = 0;
+ if (regs)
+ show_regs(regs);
+ show_state();
+ show_mem();
+ netdump_mode = 1;
+ break;
+
+ default:
+ reply.code = REPLY_ERROR;
+ reply.nr = req->nr;
+ reply.info = req->command;
+ Dprintk("got UNKNOWN command!\n");
+ sprintf(tmp, "Got unknown command code %d!\n",
+ req->command);
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+ break;
+ }
+ kfree(req);
+ req = NULL;
+ }
+ sprintf(tmp, "NETDUMP end.\n");
+ reply.code = REPLY_END_NETDUMP;
+ reply.nr = 0;
+ reply.info = 0;
+ send_netdump_msg(&np, tmp, strlen(tmp), &reply);
+ printk("NETDUMP END!\n");
+}
+
+static int option_setup(char *opt)
+{
+ return !netpoll_parse_options(&np, opt);
+}
+
+__setup("netdump=", option_setup);
+
+static int init_netdump(void)
+{
+ int configured = 0;
+
+ if (strlen(config))
+ configured = option_setup(config);
+
+ if (!configured) {
+ printk(KERN_ERR "netdump: not configured, aborting\n");
+ return -EINVAL;
+ }
+
+ if (netpoll_setup(&np))
+ return -EINVAL;
+
+ if (magic1 || magic2)
+ netdump_magic = magic1 + (((u64)magic2)<<32);
+
+ /*
+ * Allocate a separate stack for netdump.
+ */
+ platform_init_stack(&netdump_stack);
+
+ platform_jiffy_cycles(&jiffy_cycles);
+
+ printk(KERN_INFO "netdump: network crash dump enabled\n");
+ return 0;
+}
+
+static void cleanup_netdump(void)
+{
+ netpoll_cleanup(&np);
+ platform_cleanup_stack(netdump_stack);
+}
+
+module_init(init_netdump);
+module_exit(cleanup_netdump);
--- /dev/null
+/*
+ * linux/drivers/net/netdump.h
+ *
+ * Copyright (C) 2001 Ingo Molnar <mingo@redhat.com>
+ *
+ * This file contains the implementation of an IRQ-safe, crash-safe
+ * kernel console implementation that outputs kernel messages to the
+ * network.
+ *
+ * Modification history:
+ *
+ * 2001-09-17 started by Ingo Molnar.
+ */
+
+/****************************************************************
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ ****************************************************************/
+
+#define NETDUMP_VERSION 0x04
+
+#define NETDUMP_VERSION_MAX 0x5
+
+enum netdump_commands {
+ COMM_NONE = 0,
+ COMM_SEND_MEM = 1,
+ COMM_EXIT = 2,
+ COMM_REBOOT = 3,
+ COMM_HELLO = 4,
+ COMM_GET_NR_PAGES = 5,
+ COMM_GET_PAGE_SIZE = 6,
+ COMM_START_NETDUMP_ACK = 7,
+ COMM_GET_REGS = 8,
+ COMM_SHOW_STATE = 9,
+};
+
+#define NETDUMP_REQ_SIZE (8+4*4)
+
+typedef struct netdump_req_s {
+ u64 magic;
+ u32 nr;
+ u32 command;
+ u32 from;
+ u32 to;
+ struct list_head list;
+} req_t;
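+
+/*
+ * Illustrative sketch (an assumption about the client side, which is
+ * not in this file): requests travel with all 32-bit fields in network
+ * byte order, which netdump_rx() undoes with ntohl(). seq and pfn are
+ * hypothetical client-side variables:
+ *
+ * req_t req;
+ * req.nr      = htonl(seq++);
+ * req.command = htonl(COMM_SEND_MEM);
+ * req.from    = htonl(pfn);
+ * req.to      = htonl(0);
+ */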
+
+enum netdump_replies {
+ REPLY_NONE = 0,
+ REPLY_ERROR = 1,
+ REPLY_LOG = 2,
+ REPLY_MEM = 3,
+ REPLY_RESERVED = 4,
+ REPLY_HELLO = 5,
+ REPLY_NR_PAGES = 6,
+ REPLY_PAGE_SIZE = 7,
+ REPLY_START_NETDUMP = 8,
+ REPLY_END_NETDUMP = 9,
+ REPLY_REGS = 10,
+ REPLY_MAGIC = 11,
+ REPLY_SHOW_STATE = 12,
+};
+
+typedef struct netdump_reply_s {
+ u32 nr;
+ u32 code;
+ u32 info;
+} reply_t;
+
+#define HEADER_LEN (1 + sizeof(reply_t))
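+
+/* On the wire a reply is 1 version byte followed by nr, code and info
+ * as big-endian u32s; see send_netdump_msg() in netdump.c. */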
+
+#define MIN(a,b) ((a) < (b) ? (a) : (b))
+
+#define netdump_mdelay(n) ( \
+ { \
+ unsigned long __ms=(n); \
+ while (__ms--) udelay(1000); \
+ })
--- /dev/null
+/*
+ * smc91x.c
+ * This is a driver for SMSC's 91C9x/91C1xx single-chip Ethernet devices.
+ *
+ * Copyright (C) 1996 by Erik Stahlman
+ * Copyright (C) 2001 Standard Microsystems Corporation
+ * Developed by Simple Network Magic Corporation
+ * Copyright (C) 2003 Monta Vista Software, Inc.
+ * Unified SMC91x driver by Nicolas Pitre
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Arguments:
+ * io = for the base address
+ * irq = for the IRQ
+ * nowait = 0 for normal wait states, 1 eliminates additional wait states
+ *
+ * original author:
+ * Erik Stahlman <erik@vt.edu>
+ *
+ * hardware multicast code:
+ * Peter Cammaert <pc@denkart.be>
+ *
+ * contributors:
+ * Daris A Nevil <dnevil@snmc.com>
+ * Nicolas Pitre <nico@cam.org>
+ * Russell King <rmk@arm.linux.org.uk>
+ *
+ * History:
+ * 08/20/00 Arnaldo Melo fix kfree(skb) in smc_hardware_send_packet
+ * 12/15/00 Christian Jullien fix "Warning: kfree_skb on hard IRQ"
+ * 03/16/01 Daris A Nevil modified smc9194.c for use with LAN91C111
+ * 08/22/01 Scott Anderson merge changes from smc9194 to smc91111
+ * 08/21/01 Pramod B Bhardwaj added support for RevB of LAN91C111
+ * 12/20/01 Jeff Sutherland initial port to Xscale PXA with DMA support
+ * 04/07/03 Nicolas Pitre unified SMC91x driver, killed irq races,
+ * more bus abstraction, big cleanup, etc.
+ * 29/09/03 Russell King - add driver model support
+ * - ethtool support
+ * - convert to use generic MII interface
+ * - add link up/down notification
+ * - don't try to handle full negotiation in
+ * smc_phy_configure
+ * - clean up (and fix stack overrun) in PHY
+ * MII read/write functions
+ * 22/09/04 Nicolas Pitre big update (see commit log for details)
+ */
+static const char version[] =
+ "smc91x.c: v1.1, sep 22 2004 by Nicolas Pitre <nico@cam.org>\n";
+
+/* Debugging level */
+#ifndef SMC_DEBUG
+#define SMC_DEBUG 0
+#endif
+
+
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/crc32.h>
+#include <linux/device.h>
+#include <linux/spinlock.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/workqueue.h>
+
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include "smc91x.h"
+
+#ifdef CONFIG_ISA
+/*
+ * The LAN91C111 can be at any of the following port addresses. To
+ * support a slightly different card, you can add its address to the
+ * array. Keep in mind that the array must end in zero.
+ */
+static unsigned int smc_portlist[] __initdata = {
+ 0x200, 0x220, 0x240, 0x260, 0x280, 0x2A0, 0x2C0, 0x2E0,
+ 0x300, 0x320, 0x340, 0x360, 0x380, 0x3A0, 0x3C0, 0x3E0, 0
+};
+
+#ifndef SMC_IOADDR
+# define SMC_IOADDR -1
+#endif
+static unsigned long io = SMC_IOADDR;
+module_param(io, ulong, 0400);
+MODULE_PARM_DESC(io, "I/O base address");
+
+#ifndef SMC_IRQ
+# define SMC_IRQ -1
+#endif
+static int irq = SMC_IRQ;
+module_param(irq, int, 0400);
+MODULE_PARM_DESC(irq, "IRQ number");
+
+#endif /* CONFIG_ISA */
+
+#ifndef SMC_NOWAIT
+# define SMC_NOWAIT 0
+#endif
+static int nowait = SMC_NOWAIT;
+module_param(nowait, int, 0400);
+MODULE_PARM_DESC(nowait, "set to 1 for no wait state");
+
+/*
+ * Transmit timeout, default 5 seconds.
+ */
+static int watchdog = 5000;
+module_param(watchdog, int, 0400);
+MODULE_PARM_DESC(watchdog, "transmit timeout in milliseconds");
+
+MODULE_LICENSE("GPL");
+
+/*
+ * The internal workings of the driver. If you are changing anything
+ * here with the SMC stuff, you should have the datasheet and know
+ * what you are doing.
+ */
+#define CARDNAME "smc91x"
+
+/*
+ * Use power-down feature of the chip
+ */
+#define POWER_DOWN 1
+
+/*
+ * Wait time for memory to be free. This probably shouldn't be
+ * tuned that much, as waiting for this means nothing else happens
+ * in the system.
+ */
+#define MEMORY_WAIT_TIME 16
+
+/*
+ * This selects whether TX packets are sent one by one to the SMC91x internal
+ * memory and throttled until transmission completes. This may prevent
+ * RX overruns a little by keeping much of the memory free for RX packets,
+ * but at the expense of reduced TX throughput and increased IRQ overhead.
+ * Note this is not a cure for a too slow data bus or too high IRQ latency.
+ */
+#define THROTTLE_TX_PKTS 0
+
+/*
+ * The MII clock high/low times. 2x this number gives the MII clock period
+ * in microseconds. (was 50, but this gives 6.4ms for each MII transaction!)
+ */
+#define MII_DELAY 1
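+
+/* e.g. one MII read transaction (see smc_mii_out/smc_mii_in below) is
+ * 64 bit times, i.e. 64 * 2 * MII_DELAY = 128 microseconds here */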
+
+/* store this information for the driver */
+struct smc_local {
+ /*
+ * If I have to wait until memory is available to send a
+ * packet, I will store the skbuff here, until I get the
+ * desired memory. Then, I'll send it out and free it.
+ */
+ struct sk_buff *pending_tx_skb;
+ struct tasklet_struct tx_task;
+
+ /*
+ * these are things that the kernel wants me to keep, so users
+ * can find out semi-useless statistics of how well the card is
+ * performing
+ */
+ struct net_device_stats stats;
+
+ /* version/revision of the SMC91x chip */
+ int version;
+
+ /* Contains the current active transmission mode */
+ int tcr_cur_mode;
+
+ /* Contains the current active receive mode */
+ int rcr_cur_mode;
+
+ /* Contains the current active receive/phy mode */
+ int rpc_cur_mode;
+ int ctl_rfduplx;
+ int ctl_rspeed;
+
+ u32 msg_enable;
+ u32 phy_type;
+ struct mii_if_info mii;
+
+ /* work queue */
+ struct work_struct phy_configure;
+ int work_pending;
+
+ spinlock_t lock;
+
+#ifdef SMC_USE_PXA_DMA
+ /* DMA needs the physical address of the chip */
+ u_long physaddr;
+#endif
+};
+
+#if SMC_DEBUG > 0
+#define DBG(n, args...) \
+ do { \
+ if (SMC_DEBUG >= (n)) \
+ printk(args); \
+ } while (0)
+
+#define PRINTK(args...) printk(args)
+#else
+#define DBG(n, args...) do { } while(0)
+#define PRINTK(args...) printk(KERN_DEBUG args)
+#endif
+
+#if SMC_DEBUG > 3
+static void PRINT_PKT(u_char *buf, int length)
+{
+ int i;
+ int remainder;
+ int lines;
+
+ lines = length / 16;
+ remainder = length % 16;
+
+ for (i = 0; i < lines ; i ++) {
+ int cur;
+ for (cur = 0; cur < 8; cur++) {
+ u_char a, b;
+ a = *buf++;
+ b = *buf++;
+ printk("%02x%02x ", a, b);
+ }
+ printk("\n");
+ }
+ for (i = 0; i < remainder/2 ; i++) {
+ u_char a, b;
+ a = *buf++;
+ b = *buf++;
+ printk("%02x%02x ", a, b);
+ }
+ printk("\n");
+}
+#else
+#define PRINT_PKT(x...) do { } while(0)
+#endif
+
+
+/* this enables an interrupt in the interrupt mask register */
+#define SMC_ENABLE_INT(x) do { \
+ unsigned char mask; \
+ spin_lock_irq(&lp->lock); \
+ mask = SMC_GET_INT_MASK(); \
+ mask |= (x); \
+ SMC_SET_INT_MASK(mask); \
+ spin_unlock_irq(&lp->lock); \
+} while (0)
+
+/* this disables an interrupt from the interrupt mask register */
+#define SMC_DISABLE_INT(x) do { \
+ unsigned char mask; \
+ spin_lock_irq(&lp->lock); \
+ mask = SMC_GET_INT_MASK(); \
+ mask &= ~(x); \
+ SMC_SET_INT_MASK(mask); \
+ spin_unlock_irq(&lp->lock); \
+} while (0)
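+
+/*
+ * Note that, like the other SMC_* register helpers, these two macros
+ * expect 'lp' (and, for the SMC_GET/SET helpers, 'ioaddr') to be in
+ * scope at the call site.
+ */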
+
+/*
+ * Wait while MMU is busy. This is usually in the order of a few nanosecs
+ * if at all, but let's avoid deadlocking the system if the hardware
+ * decides to go south.
+ */
+#define SMC_WAIT_MMU_BUSY() do { \
+ if (unlikely(SMC_GET_MMU_CMD() & MC_BUSY)) { \
+ unsigned long timeout = jiffies + 2; \
+ while (SMC_GET_MMU_CMD() & MC_BUSY) { \
+ if (time_after(jiffies, timeout)) { \
+ printk("%s: timeout %s line %d\n", \
+ dev->name, __FILE__, __LINE__); \
+ break; \
+ } \
+ cpu_relax(); \
+ } \
+ } \
+} while (0)
+
+
+/*
+ * this does a soft reset on the device
+ */
+static void smc_reset(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int ctl, cfg;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* Disable all interrupts */
+ spin_lock(&lp->lock);
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(0);
+ spin_unlock(&lp->lock);
+
+ /*
+ * This resets the registers mostly to defaults, but doesn't
+ * affect EEPROM. That seems unnecessary
+ */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_SOFTRST);
+
+ /*
+ * Setup the Configuration Register
+ * This is necessary because the CONFIG_REG is not affected
+ * by a soft reset
+ */
+ SMC_SELECT_BANK(1);
+
+ cfg = CONFIG_DEFAULT;
+
+ /*
+ * Setup for fast accesses if requested. If the card/system
+ * can't handle it then there will be no recovery except for
+ * a hard reset or power cycle
+ */
+ if (nowait)
+ cfg |= CONFIG_NO_WAIT;
+
+ /*
+ * Release from possible power-down state
+ * Configuration register is not affected by Soft Reset
+ */
+ cfg |= CONFIG_EPH_POWER_EN;
+
+ SMC_SET_CONFIG(cfg);
+
+ /* this should pause enough for the chip to be happy */
+ /*
+ * elaborate? What does the chip _need_? --jgarzik
+ *
+ * This seems to be undocumented, but something the original
+ * driver(s) have always done. Suspect undocumented timing
+ * info/determined empirically. --rmk
+ */
+ udelay(1);
+
+ /* Disable transmit and receive functionality */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_CLEAR);
+ SMC_SET_TCR(TCR_CLEAR);
+
+ SMC_SELECT_BANK(1);
+ ctl = SMC_GET_CTL() | CTL_LE_ENABLE;
+
+ /*
+ * Set the control register to automatically release successfully
+ * transmitted packets, to make the best use out of our limited
+ * memory
+ */
+ if(!THROTTLE_TX_PKTS)
+ ctl |= CTL_AUTO_RELEASE;
+ else
+ ctl &= ~CTL_AUTO_RELEASE;
+ SMC_SET_CTL(ctl);
+
+ /* Reset the MMU */
+ SMC_SELECT_BANK(2);
+ SMC_SET_MMU_CMD(MC_RESET);
+ SMC_WAIT_MMU_BUSY();
+
+ /* clear anything saved */
+ if (lp->pending_tx_skb != NULL) {
+ dev_kfree_skb (lp->pending_tx_skb);
+ lp->pending_tx_skb = NULL;
+ lp->stats.tx_errors++;
+ lp->stats.tx_aborted_errors++;
+ }
+}
+
+/*
+ * Enable Interrupts, Receive, and Transmit
+ */
+static void smc_enable(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ int mask;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* see the header file for options in TCR/RCR DEFAULT */
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ SMC_SET_RCR(lp->rcr_cur_mode);
+
+ SMC_SELECT_BANK(1);
+ SMC_SET_MAC_ADDR(dev->dev_addr);
+
+ /* now, enable interrupts */
+ mask = IM_EPH_INT|IM_RX_OVRN_INT|IM_RCV_INT;
+ if (lp->version >= (CHIP_91100 << 4))
+ mask |= IM_MDINT;
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(mask);
+
+ /*
+ * From this point the register bank must _NOT_ be switched away
+ * to something else than bank 2 without proper locking against
+ * races with any tasklet or interrupt handlers until smc_shutdown()
+ * or smc_reset() is called.
+ */
+}
+
+/*
+ * this puts the device in an inactive state
+ */
+static void smc_shutdown(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ /* no more interrupts for me */
+ spin_lock(&lp->lock);
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(0);
+ spin_unlock(&lp->lock);
+
+ /* and tell the card to stay away from that nasty outside world */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(RCR_CLEAR);
+ SMC_SET_TCR(TCR_CLEAR);
+
+#ifdef POWER_DOWN
+ /* finally, shut the chip down */
+ SMC_SELECT_BANK(1);
+ SMC_SET_CONFIG(SMC_GET_CONFIG() & ~CONFIG_EPH_POWER_EN);
+#endif
+}
+
+/*
+ * This is the procedure to handle the receipt of a packet.
+ */
+static inline void smc_rcv(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int packet_number, status, packet_len;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ packet_number = SMC_GET_RXFIFO();
+ if (unlikely(packet_number & RXFIFO_REMPTY)) {
+ PRINTK("%s: smc_rcv with nothing on FIFO.\n", dev->name);
+ return;
+ }
+
+ /* read from start of packet */
+ SMC_SET_PTR(PTR_READ | PTR_RCV | PTR_AUTOINC);
+
+ /* First two words are status and packet length */
+ SMC_GET_PKT_HDR(status, packet_len);
+ packet_len &= 0x07ff; /* mask off top bits */
+ DBG(2, "%s: RX PNR 0x%x STATUS 0x%04x LENGTH 0x%04x (%d)\n",
+ dev->name, packet_number, status,
+ packet_len, packet_len);
+
+ if (unlikely(status & RS_ERRORS)) {
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_RELEASE);
+ lp->stats.rx_errors++;
+ if (status & RS_ALGNERR)
+ lp->stats.rx_frame_errors++;
+ if (status & (RS_TOOSHORT | RS_TOOLONG))
+ lp->stats.rx_length_errors++;
+ if (status & RS_BADCRC)
+ lp->stats.rx_crc_errors++;
+ } else {
+ struct sk_buff *skb;
+ unsigned char *data;
+ unsigned int data_len;
+
+ /* set multicast stats */
+ if (status & RS_MULTICAST)
+ lp->stats.multicast++;
+
+ /*
+ * Actual payload is packet_len - 6 (or 5 if odd byte).
+ * We want skb_reserve(2) and the final ctrl word
+ * (2 bytes, possibly containing the payload odd byte).
+ * Furthermore, we add 2 bytes to allow rounding up to
+ * a multiple of 4 bytes on 32 bit buses.
+ * Hence packet_len - 6 + 2 + 2 + 2.
+ */
+ skb = dev_alloc_skb(packet_len);
+ if (unlikely(skb == NULL)) {
+ printk(KERN_NOTICE "%s: Low memory, packet dropped.\n",
+ dev->name);
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_RELEASE);
+ lp->stats.rx_dropped++;
+ return;
+ }
+
+ /* Align IP header to 32 bits */
+ skb_reserve(skb, 2);
+
+ /* BUG: the LAN91C111 rev A never sets this bit. Force it. */
+ if (lp->version == 0x90)
+ status |= RS_ODDFRAME;
+
+ /*
+ * If odd length: packet_len - 5,
+ * otherwise packet_len - 6.
+ * With the trailing ctrl byte it's packet_len - 4.
+ */
+ data_len = packet_len - ((status & RS_ODDFRAME) ? 5 : 6);
+ data = skb_put(skb, data_len);
+ SMC_PULL_DATA(data, packet_len - 4);
+
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_RELEASE);
+
+ PRINT_PKT(data, packet_len - 4);
+
+ dev->last_rx = jiffies;
+ skb->dev = dev;
+ skb->protocol = eth_type_trans(skb, dev);
+ netif_rx(skb);
+ lp->stats.rx_packets++;
+ lp->stats.rx_bytes += data_len;
+ }
+}
+
+#ifdef CONFIG_SMP
+/*
+ * On SMP we have the following problem:
+ *
+ * A = smc_hardware_send_pkt()
+ * B = smc_hard_start_xmit()
+ * C = smc_interrupt()
+ *
+ * A and B can never be executed simultaneously. However, at least on UP,
+ * it is possible (and even desirable) for C to interrupt execution of
+ * A or B in order to have better RX reliability and avoid overruns.
+ * C, just like A and B, must have exclusive access to the chip and
+ * each of them must lock against any other concurrent access.
+ * Unfortunately it is not possible to have C suspend execution of A or
+ * B taking place on another CPU. On UP this is not an issue since A and B
+ * are run from softirq context and C from hard IRQ context, and there is
+ * no other CPU where concurrent access can happen.
+ * If ever there is a way to force at least B and C to always be executed
+ * on the same CPU then we could use read/write locks to protect against
+ * any other concurrent access and C would always interrupt B. But life
+ * isn't that easy in an SMP world...
+ */
+#define smc_special_trylock(lock) \
+({ \
+ int __ret; \
+ local_irq_disable(); \
+ __ret = spin_trylock(lock); \
+ if (!__ret) \
+ local_irq_enable(); \
+ __ret; \
+})
+#define smc_special_lock(lock) spin_lock_irq(lock)
+#define smc_special_unlock(lock) spin_unlock_irq(lock)
+#else
+#define smc_special_trylock(lock) (1)
+#define smc_special_lock(lock) do { } while (0)
+#define smc_special_unlock(lock) do { } while (0)
+#endif
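+
+/*
+ * Design note: smc_special_trylock() lets the hot TX path back off and
+ * reschedule via the tx tasklet instead of spinning with IRQs disabled,
+ * which is why smc_hardware_send_pkt() below tries the lock rather than
+ * taking it unconditionally.
+ */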
+
+/*
+ * This is called to actually send a packet to the chip.
+ */
+static void smc_hardware_send_pkt(unsigned long data)
+{
+ struct net_device *dev = (struct net_device *)data;
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ struct sk_buff *skb;
+ unsigned int packet_no, len;
+ unsigned char *buf;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ if (!smc_special_trylock(&lp->lock)) {
+ netif_stop_queue(dev);
+ tasklet_schedule(&lp->tx_task);
+ return;
+ }
+
+ skb = lp->pending_tx_skb;
+ lp->pending_tx_skb = NULL;
+ packet_no = SMC_GET_AR();
+ if (unlikely(packet_no & AR_FAILED)) {
+ printk("%s: Memory allocation failed.\n", dev->name);
+ lp->stats.tx_errors++;
+ lp->stats.tx_fifo_errors++;
+ smc_special_unlock(&lp->lock);
+ goto done;
+ }
+
+ /* point to the beginning of the packet */
+ SMC_SET_PN(packet_no);
+ SMC_SET_PTR(PTR_AUTOINC);
+
+ buf = skb->data;
+ len = skb->len;
+ DBG(2, "%s: TX PNR 0x%x LENGTH 0x%04x (%d) BUF 0x%p\n",
+ dev->name, packet_no, len, len, buf);
+ PRINT_PKT(buf, len);
+
+ /*
+ * Send the packet length (+6 for status words, length, and ctl).
+ * The card will pad to 64 bytes with zeroes if packet is too small.
+ */
+ SMC_PUT_PKT_HDR(0, len + 6);
+
+ /* send the actual data */
+ SMC_PUSH_DATA(buf, len & ~1);
+
+ /* Send final ctl word with the last byte if there is one */
+ SMC_outw(((len & 1) ? (0x2000 | buf[len-1]) : 0), ioaddr, DATA_REG);
+
+ /*
+ * If THROTTLE_TX_PKTS is set, we look at the TX_EMPTY flag
+ * before queueing this packet for TX, and if it's clear then
+ * we stop the queue here. This will have the effect of
+ * having at most 2 packets queued for TX in the chip's memory
+ * at all time. If THROTTLE_TX_PKTS is not set then the queue
+ * is stopped only when memory allocation (MC_ALLOC) does not
+ * succeed right away.
+ */
+ if (THROTTLE_TX_PKTS && !(SMC_GET_INT() & IM_TX_EMPTY_INT))
+ netif_stop_queue(dev);
+
+ /* queue the packet for TX */
+ SMC_SET_MMU_CMD(MC_ENQUEUE);
+ SMC_ACK_INT(IM_TX_EMPTY_INT);
+ smc_special_unlock(&lp->lock);
+
+ dev->trans_start = jiffies;
+ lp->stats.tx_packets++;
+ lp->stats.tx_bytes += len;
+
+ SMC_ENABLE_INT(IM_TX_INT | IM_TX_EMPTY_INT);
+
+done: if (!THROTTLE_TX_PKTS)
+ netif_wake_queue(dev);
+
+ dev_kfree_skb(skb);
+}
+
+/*
+ * Since I am not sure if I will have enough room in the chip's ram
+ * to store the packet, I call this routine, which either sends it
+ * now or sets the card to generate an interrupt when it is ready
+ * for the packet.
+ */
+static int smc_hard_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int numPages, poll_count, status;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ BUG_ON(lp->pending_tx_skb != NULL);
+ lp->pending_tx_skb = skb;
+
+ /*
+ * The MMU wants the number of pages to be the number of 256 bytes
+ * 'pages', minus 1 (since a packet can't ever have 0 pages :))
+ *
+ * The 91C111 ignores the size bits, but earlier models don't.
+ *
+ * Pkt size for allocating is data length +6 (for additional status
+ * words, length and ctl)
+ *
+ * If odd size then last byte is included in ctl word.
+ */
+ numPages = ((skb->len & ~1) + (6 - 1)) >> 8;
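+ /* e.g. a full-sized 1514-byte frame: (1514 + 5) >> 8 == 5 pages */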
+ if (unlikely(numPages > 7)) {
+ printk("%s: Far too big packet error.\n", dev->name);
+ lp->pending_tx_skb = NULL;
+ lp->stats.tx_errors++;
+ lp->stats.tx_dropped++;
+ dev_kfree_skb(skb);
+ return 0;
+ }
+
+ smc_special_lock(&lp->lock);
+
+ /* now, try to allocate the memory */
+ SMC_SET_MMU_CMD(MC_ALLOC | numPages);
+
+ /*
+ * Poll the chip for a short amount of time in case the
+ * allocation succeeds quickly.
+ */
+ poll_count = MEMORY_WAIT_TIME;
+ do {
+ status = SMC_GET_INT();
+ if (status & IM_ALLOC_INT) {
+ SMC_ACK_INT(IM_ALLOC_INT);
+ break;
+ }
+ } while (--poll_count);
+
+ smc_special_unlock(&lp->lock);
+
+ if (!poll_count) {
+ /* oh well, wait until the chip finds memory later */
+ netif_stop_queue(dev);
+ DBG(2, "%s: TX memory allocation deferred.\n", dev->name);
+ SMC_ENABLE_INT(IM_ALLOC_INT);
+ } else {
+ /*
+ * Allocation succeeded: push packet to the chip's own memory
+ * immediately.
+ */
+ smc_hardware_send_pkt((unsigned long)dev);
+ }
+
+ return 0;
+}
+
+/*
+ * This handles a TX interrupt, which is only called when:
+ * - a TX error occurred, or
+ * - CTL_AUTO_RELEASE is not set and TX of a packet completed.
+ */
+static void smc_tx(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int saved_packet, packet_no, tx_status, pkt_len;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* If the TX FIFO is empty then nothing to do */
+ packet_no = SMC_GET_TXFIFO();
+ if (unlikely(packet_no & TXFIFO_TEMPTY)) {
+ PRINTK("%s: smc_tx with nothing on FIFO.\n", dev->name);
+ return;
+ }
+
+ /* select packet to read from */
+ saved_packet = SMC_GET_PN();
+ SMC_SET_PN(packet_no);
+
+ /* read the first word (status word) from this packet */
+ SMC_SET_PTR(PTR_AUTOINC | PTR_READ);
+ SMC_GET_PKT_HDR(tx_status, pkt_len);
+ DBG(2, "%s: TX STATUS 0x%04x PNR 0x%02x\n",
+ dev->name, tx_status, packet_no);
+
+ if (!(tx_status & TS_SUCCESS))
+ lp->stats.tx_errors++;
+ if (tx_status & TS_LOSTCAR)
+ lp->stats.tx_carrier_errors++;
+
+ if (tx_status & TS_LATCOL) {
+ PRINTK("%s: late collision occurred on last xmit\n", dev->name);
+ lp->stats.tx_window_errors++;
+ if (!(lp->stats.tx_window_errors & 63) && net_ratelimit()) {
+ printk(KERN_INFO "%s: unexpectedly large numbers of "
+ "late collisions. Please check duplex "
+ "setting.\n", dev->name);
+ }
+ }
+
+ /* kill the packet */
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_MMU_CMD(MC_FREEPKT);
+
+ /* Don't restore Packet Number Reg until busy bit is cleared */
+ SMC_WAIT_MMU_BUSY();
+ SMC_SET_PN(saved_packet);
+
+ /* re-enable transmit */
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ SMC_SELECT_BANK(2);
+}
+
+
+/*---PHY CONTROL AND CONFIGURATION-----------------------------------------*/
+
+static void smc_mii_out(struct net_device *dev, unsigned int val, int bits)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int mii_reg, mask;
+
+ mii_reg = SMC_GET_MII() & ~(MII_MCLK | MII_MDOE | MII_MDO);
+ mii_reg |= MII_MDOE;
+
+ for (mask = 1 << (bits - 1); mask; mask >>= 1) {
+ if (val & mask)
+ mii_reg |= MII_MDO;
+ else
+ mii_reg &= ~MII_MDO;
+
+ SMC_SET_MII(mii_reg);
+ udelay(MII_DELAY);
+ SMC_SET_MII(mii_reg | MII_MCLK);
+ udelay(MII_DELAY);
+ }
+}
+
+static unsigned int smc_mii_in(struct net_device *dev, int bits)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int mii_reg, mask, val;
+
+ mii_reg = SMC_GET_MII() & ~(MII_MCLK | MII_MDOE | MII_MDO);
+ SMC_SET_MII(mii_reg);
+
+ for (mask = 1 << (bits - 1), val = 0; mask; mask >>= 1) {
+ if (SMC_GET_MII() & MII_MDI)
+ val |= mask;
+
+ SMC_SET_MII(mii_reg);
+ udelay(MII_DELAY);
+ SMC_SET_MII(mii_reg | MII_MCLK);
+ udelay(MII_DELAY);
+ }
+
+ return val;
+}
+
+/*
+ * Reads a register from the MII Management serial interface
+ */
+static int smc_phy_read(struct net_device *dev, int phyaddr, int phyreg)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int phydata;
+
+ SMC_SELECT_BANK(3);
+
+ /* Idle - 32 ones */
+ smc_mii_out(dev, 0xffffffff, 32);
+
+ /* Start code (01) + read (10) + phyaddr + phyreg */
+ smc_mii_out(dev, 6 << 10 | phyaddr << 5 | phyreg, 14);
+
+ /* Turnaround (2bits) + phydata */
+ phydata = smc_mii_in(dev, 18);
+
+ /* Return to idle state */
+ SMC_SET_MII(SMC_GET_MII() & ~(MII_MCLK|MII_MDOE|MII_MDO));
+
+ DBG(3, "%s: phyaddr=0x%x, phyreg=0x%x, phydata=0x%x\n",
+ __FUNCTION__, phyaddr, phyreg, phydata);
+
+ SMC_SELECT_BANK(2);
+ return phydata;
+}
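+
+/*
+ * Sketch of the clause-22 management frame built above: 6 << 10 is the
+ * start code (01) plus the read opcode (10), followed by the 5-bit PHY
+ * address and 5-bit register number, 14 bits in total; the PHY then
+ * drives 2 turnaround bits plus 16 data bits, which smc_mii_in(dev, 18)
+ * clocks back in.
+ */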
+
+/*
+ * Writes a register to the MII Management serial interface
+ */
+static void smc_phy_write(struct net_device *dev, int phyaddr, int phyreg,
+ int phydata)
+{
+ unsigned long ioaddr = dev->base_addr;
+
+ SMC_SELECT_BANK(3);
+
+ /* Idle - 32 ones */
+ smc_mii_out(dev, 0xffffffff, 32);
+
+ /* Start code (01) + write (01) + phyaddr + phyreg + turnaround + phydata */
+ smc_mii_out(dev, 5 << 28 | phyaddr << 23 | phyreg << 18 | 2 << 16 | phydata, 32);
+
+ /* Return to idle state */
+ SMC_SET_MII(SMC_GET_MII() & ~(MII_MCLK|MII_MDOE|MII_MDO));
+
+ DBG(3, "%s: phyaddr=0x%x, phyreg=0x%x, phydata=0x%x\n",
+ __FUNCTION__, phyaddr, phyreg, phydata);
+
+ SMC_SELECT_BANK(2);
+}
+
+/*
+ * Finds and reports the PHY address
+ */
+static void smc_phy_detect(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int phyaddr;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ lp->phy_type = 0;
+
+ /*
+ * Scan all 32 PHY addresses if necessary, starting at
+ * PHY#1 to PHY#31, and then PHY#0 last.
+ */
+ for (phyaddr = 1; phyaddr < 33; ++phyaddr) {
+ unsigned int id1, id2;
+
+ /* Read the PHY identifiers */
+ id1 = smc_phy_read(dev, phyaddr & 31, MII_PHYSID1);
+ id2 = smc_phy_read(dev, phyaddr & 31, MII_PHYSID2);
+
+ DBG(3, "%s: phy_id1=0x%x, phy_id2=0x%x\n",
+ dev->name, id1, id2);
+
+ /* Make sure it is a valid identifier */
+ if (id1 != 0x0000 && id1 != 0xffff && id1 != 0x8000 &&
+ id2 != 0x0000 && id2 != 0xffff && id2 != 0x8000) {
+ /* Save the PHY's address */
+ lp->mii.phy_id = phyaddr & 31;
+ lp->phy_type = id1 << 16 | id2;
+ break;
+ }
+ }
+}
+
+/*
+ * Sets the PHY to a configuration as determined by the user
+ */
+static int smc_phy_fixed(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ int phyaddr = lp->mii.phy_id;
+ int bmcr, cfg1;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /* Enter Link Disable state */
+ cfg1 = smc_phy_read(dev, phyaddr, PHY_CFG1_REG);
+ cfg1 |= PHY_CFG1_LNKDIS;
+ smc_phy_write(dev, phyaddr, PHY_CFG1_REG, cfg1);
+
+ /*
+ * Set our fixed capabilities
+ * Disable auto-negotiation
+ */
+ bmcr = 0;
+
+ if (lp->ctl_rfduplx)
+ bmcr |= BMCR_FULLDPLX;
+
+ if (lp->ctl_rspeed == 100)
+ bmcr |= BMCR_SPEED100;
+
+ /* Write our capabilities to the phy control register */
+ smc_phy_write(dev, phyaddr, MII_BMCR, bmcr);
+
+ /* Re-Configure the Receive/Phy Control register */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RPC(lp->rpc_cur_mode);
+ SMC_SELECT_BANK(2);
+
+ return 1;
+}
+
+/*
+ * smc_phy_reset - reset the phy
+ * @dev: net device
+ * @phy: phy address
+ *
+ * Issue a software reset for the specified PHY and
+ * wait up to 100ms for the reset to complete. We should
+ * not access the PHY for 50ms after issuing the reset.
+ *
+ * The time to wait appears to be dependent on the PHY.
+ *
+ * Must be called with lp->lock locked.
+ */
+static int smc_phy_reset(struct net_device *dev, int phy)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned int bmcr;
+ int timeout;
+
+ smc_phy_write(dev, phy, MII_BMCR, BMCR_RESET);
+
+ for (timeout = 2; timeout; timeout--) {
+ spin_unlock_irq(&lp->lock);
+ msleep(50);
+ spin_lock_irq(&lp->lock);
+
+ bmcr = smc_phy_read(dev, phy, MII_BMCR);
+ if (!(bmcr & BMCR_RESET))
+ break;
+ }
+
+ return bmcr & BMCR_RESET;
+}
+
+/*
+ * smc_phy_powerdown - powerdown phy
+ * @dev: net device
+ * @phy: phy address
+ *
+ * Power down the specified PHY
+ */
+static void smc_phy_powerdown(struct net_device *dev, int phy)
+{
+ unsigned int bmcr;
+
+ bmcr = smc_phy_read(dev, phy, MII_BMCR);
+ smc_phy_write(dev, phy, MII_BMCR, bmcr | BMCR_PDOWN);
+}
+
+/*
+ * smc_phy_check_media - check the media status and adjust TCR
+ * @dev: net device
+ * @init: set true for initialisation
+ *
+ * Select duplex mode depending on negotiation state. This
+ * also updates our carrier state.
+ */
+static void smc_phy_check_media(struct net_device *dev, int init)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+
+ if (mii_check_media(&lp->mii, netif_msg_link(lp), init)) {
+ /* duplex state has changed */
+ if (lp->mii.full_duplex) {
+ lp->tcr_cur_mode |= TCR_SWFDUP;
+ } else {
+ lp->tcr_cur_mode &= ~TCR_SWFDUP;
+ }
+
+ SMC_SELECT_BANK(0);
+ SMC_SET_TCR(lp->tcr_cur_mode);
+ }
+}
+
+/*
+ * Configures the specified PHY through the MII management interface
+ * using Autonegotiation.
+ * Calls smc_phy_fixed() if the user has requested a certain config.
+ * If RPC ANEG bit is set, the media selection is dependent purely on
+ * the selection by the MII (either in the MII BMCR reg or the result
+ * of autonegotiation.) If the RPC ANEG bit is cleared, the selection
+ * is controlled by the RPC SPEED and RPC DPLX bits.
+ */
+static void smc_phy_configure(void *data)
+{
+ struct net_device *dev = data;
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ int phyaddr = lp->mii.phy_id;
+ int my_phy_caps; /* My PHY capabilities */
+ int my_ad_caps; /* My Advertised capabilities */
+ int status;
+
+ DBG(3, "%s:smc_program_phy()\n", dev->name);
+
+ spin_lock_irq(&lp->lock);
+
+ /*
+ * We should not be called if phy_type is zero.
+ */
+ if (lp->phy_type == 0)
+ goto smc_phy_configure_exit;
+
+ if (smc_phy_reset(dev, phyaddr)) {
+ printk("%s: PHY reset timed out\n", dev->name);
+ goto smc_phy_configure_exit;
+ }
+
+ /*
+ * Enable PHY Interrupts (for register 18)
+ * Interrupts listed here are disabled
+ */
+ smc_phy_write(dev, phyaddr, PHY_MASK_REG,
+ PHY_INT_LOSSSYNC | PHY_INT_CWRD | PHY_INT_SSD |
+ PHY_INT_ESD | PHY_INT_RPOL | PHY_INT_JAB |
+ PHY_INT_SPDDET | PHY_INT_DPLXDET);
+
+ /* Configure the Receive/Phy Control register */
+ SMC_SELECT_BANK(0);
+ SMC_SET_RPC(lp->rpc_cur_mode);
+
+ /* If the user requested no auto neg, then go set his request */
+ if (lp->mii.force_media) {
+ smc_phy_fixed(dev);
+ goto smc_phy_configure_exit;
+ }
+
+ /* Copy our capabilities from MII_BMSR to MII_ADVERTISE */
+ my_phy_caps = smc_phy_read(dev, phyaddr, MII_BMSR);
+
+ if (!(my_phy_caps & BMSR_ANEGCAPABLE)) {
+ printk(KERN_INFO "Auto negotiation NOT supported\n");
+ smc_phy_fixed(dev);
+ goto smc_phy_configure_exit;
+ }
+
+ my_ad_caps = ADVERTISE_CSMA; /* I am CSMA capable */
+
+ if (my_phy_caps & BMSR_100BASE4)
+ my_ad_caps |= ADVERTISE_100BASE4;
+ if (my_phy_caps & BMSR_100FULL)
+ my_ad_caps |= ADVERTISE_100FULL;
+ if (my_phy_caps & BMSR_100HALF)
+ my_ad_caps |= ADVERTISE_100HALF;
+ if (my_phy_caps & BMSR_10FULL)
+ my_ad_caps |= ADVERTISE_10FULL;
+ if (my_phy_caps & BMSR_10HALF)
+ my_ad_caps |= ADVERTISE_10HALF;
+
+ /* Disable capabilities not selected by our user */
+ if (lp->ctl_rspeed != 100)
+ my_ad_caps &= ~(ADVERTISE_100BASE4|ADVERTISE_100FULL|ADVERTISE_100HALF);
+
+ if (!lp->ctl_rfduplx)
+ my_ad_caps &= ~(ADVERTISE_100FULL|ADVERTISE_10FULL);
+
+ /* Update our Auto-Neg Advertisement Register */
+ smc_phy_write(dev, phyaddr, MII_ADVERTISE, my_ad_caps);
+ lp->mii.advertising = my_ad_caps;
+
+ /*
+ * Read the register back. Without this, it appears that when
+ * auto-negotiation is restarted, sometimes it isn't ready and
+ * the link does not come up.
+ */
+ status = smc_phy_read(dev, phyaddr, MII_ADVERTISE);
+
+ DBG(2, "%s: phy caps=%x\n", dev->name, my_phy_caps);
+ DBG(2, "%s: phy advertised caps=%x\n", dev->name, my_ad_caps);
+
+ /* Restart auto-negotiation process in order to advertise my caps */
+ smc_phy_write(dev, phyaddr, MII_BMCR, BMCR_ANENABLE | BMCR_ANRESTART);
+
+ smc_phy_check_media(dev, 1);
+
+smc_phy_configure_exit:
+ spin_unlock_irq(&lp->lock);
+ lp->work_pending = 0;
+}
+
+/*
+ * smc_phy_interrupt
+ *
+ * Purpose: Handle interrupts relating to PHY register 18. This is
+ * called from the "hard" interrupt handler under our private spinlock.
+ */
+static void smc_phy_interrupt(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int phyaddr = lp->mii.phy_id;
+ int phy18;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ if (lp->phy_type == 0)
+ return;
+
+ for(;;) {
+ smc_phy_check_media(dev, 0);
+
+ /* Read PHY Register 18, Status Output */
+ phy18 = smc_phy_read(dev, phyaddr, PHY_INT_REG);
+ if ((phy18 & PHY_INT_INT) == 0)
+ break;
+ }
+}
+
+/*--- END PHY CONTROL AND CONFIGURATION-------------------------------------*/
+
+static void smc_10bt_check_media(struct net_device *dev, int init)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int old_carrier, new_carrier;
+
+ old_carrier = netif_carrier_ok(dev) ? 1 : 0;
+
+ SMC_SELECT_BANK(0);
+ new_carrier = SMC_inw(ioaddr, EPH_STATUS_REG) & ES_LINK_OK ? 1 : 0;
+ SMC_SELECT_BANK(2);
+
+ if (init || (old_carrier != new_carrier)) {
+ if (!new_carrier) {
+ netif_carrier_off(dev);
+ } else {
+ netif_carrier_on(dev);
+ }
+ if (netif_msg_link(lp))
+ printk(KERN_INFO "%s: link %s\n", dev->name,
+ new_carrier ? "up" : "down");
+ }
+}
+
+static void smc_eph_interrupt(struct net_device *dev)
+{
+ unsigned long ioaddr = dev->base_addr;
+ unsigned int ctl;
+
+ smc_10bt_check_media(dev, 0);
+
+ SMC_SELECT_BANK(1);
+ ctl = SMC_GET_CTL();
+ SMC_SET_CTL(ctl & ~CTL_LE_ENABLE);
+ SMC_SET_CTL(ctl);
+ SMC_SELECT_BANK(2);
+}
+
+/*
+ * This is the main routine of the driver, to handle the device when
+ * it needs some attention.
+ */
+static irqreturn_t smc_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_id;
+ unsigned long ioaddr = dev->base_addr;
+ struct smc_local *lp = netdev_priv(dev);
+ int status, mask, timeout, card_stats;
+ int saved_pointer;
+
+ DBG(3, "%s: %s\n", dev->name, __FUNCTION__);
+
+ spin_lock(&lp->lock);
+
+ /* A preamble may be used when there is a potential race
+ * between the interruptible transmit functions and this
+ * ISR. */
+ SMC_INTERRUPT_PREAMBLE;
+
+ saved_pointer = SMC_GET_PTR();
+ mask = SMC_GET_INT_MASK();
+ SMC_SET_INT_MASK(0);
+
+ /* set a timeout value, so I don't stay here forever */
+ timeout = 8;
+
+ do {
+ status = SMC_GET_INT();
+
+ DBG(2, "%s: INT 0x%02x MASK 0x%02x MEM 0x%04x FIFO 0x%04x\n",
+ dev->name, status, mask,
+ ({ int meminfo; SMC_SELECT_BANK(0);
+ meminfo = SMC_GET_MIR();
+ SMC_SELECT_BANK(2); meminfo; }),
+ SMC_GET_FIFO());
+
+ status &= mask;
+ if (!status)
+ break;
+
+ if (status & IM_RCV_INT) {
+ DBG(3, "%s: RX irq\n", dev->name);
+ smc_rcv(dev);
+ } else if (status & IM_TX_INT) {
+ DBG(3, "%s: TX int\n", dev->name);
+ smc_tx(dev);
+ SMC_ACK_INT(IM_TX_INT);
+ if (THROTTLE_TX_PKTS)
+ netif_wake_queue(dev);
+ } else if (status & IM_ALLOC_INT) {
+ DBG(3, "%s: Allocation irq\n", dev->name);
+ tasklet_hi_schedule(&lp->tx_task);
+ mask &= ~IM_ALLOC_INT;
+ } else if (status & IM_TX_EMPTY_INT) {
+ DBG(3, "%s: TX empty\n", dev->name);
+ mask &= ~IM_TX_EMPTY_INT;
+
+ /* update stats */
+ SMC_SELECT_BANK(0);
+ card_stats = SMC_GET_COUNTER();
+ SMC_SELECT_BANK(2);
+
+ /* single collisions */
+ lp->stats.collisions += card_stats & 0xF;
+ card_stats >>= 4;
+
+ /* multiple collisions */
+ lp->stats.collisions += card_stats & 0xF;
+ } else if (status & IM_RX_OVRN_INT) {
+ DBG(1, "%s: RX overrun\n", dev->name);
+ SMC_ACK_INT(IM_RX_OVRN_INT);
+ lp->stats.rx_errors++;
+ lp->stats.rx_fifo_errors++;
+ } else if (status & IM_EPH_INT) {
+ smc_eph_interrupt(dev);
+ } else if (status & IM_MDINT) {
+ SMC_ACK_INT(IM_MDINT);
+ smc_phy_interrupt(dev);
+ } else if (status & IM_ERCV_INT) {
+ SMC_ACK_INT(IM_ERCV_INT);
+ PRINTK("%s: UNSUPPORTED: ERCV INTERRUPT\n", dev->name);
+ }
+ } while (--timeout);
+
+ /* restore register states */
+ SMC_SET_PTR(saved_pointer);
+ SMC_SET_INT_MASK(mask);
+
+ spin_unlock(&lp->lock);
+
+ DBG(3, "%s: Interrupt done (%d loops)\n", dev->name, 8-timeout);
+
+ /*
+ * We return IRQ_HANDLED unconditionally here even if there was
+ * nothing to do. There is a possibility that a packet might
+ * get enqueued into the chip right after TX_EMPTY_INT is raised
+ * but just before the CPU acknowledges the IRQ.
+ * Better to take an unneeded IRQ on occasion than to complicate
+ * the code for every case.
+ */
+ return IRQ_HANDLED;
+}
+
+/* Our watchdog timed out. Called by the networking layer */
+static void smc_timeout(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ int status, mask, meminfo, fifo;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ spin_lock_irq(&lp->lock);
+ status = SMC_GET_INT();
+ mask = SMC_GET_INT_MASK();
+ fifo = SMC_GET_FIFO();
+ SMC_SELECT_BANK(0);
+ meminfo = SMC_GET_MIR();
+ SMC_SELECT_BANK(2);
+ spin_unlock_irq(&lp->lock);
+ PRINTK( "%s: INT 0x%02x MASK 0x%02x MEM 0x%04x FIFO 0x%04x\n",
+ dev->name, status, mask, meminfo, fifo );
+
+ smc_reset(dev);
+ smc_enable(dev);
+
+ /*
+ * Reconfiguring the PHY doesn't seem like a bad idea here, but
+ * smc_phy_configure() calls msleep() which calls schedule_timeout()
+ * which calls schedule(). Hence we use a work queue.
+ */
+ if (lp->phy_type != 0) {
+ if (schedule_work(&lp->phy_configure)) {
+ lp->work_pending = 1;
+ }
+ }
+
+ /* We can accept TX packets again */
+ dev->trans_start = jiffies;
+ netif_wake_queue(dev);
+}
+
+/*
+ * This routine will, depending on the values passed to it,
+ * either make it accept multicast packets, go into
+ * promiscuous mode (for TCPDUMP and cousins) or accept
+ * a select set of multicast packets
+ */
+static void smc_set_multicast_list(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ unsigned long ioaddr = dev->base_addr;
+ unsigned char multicast_table[8];
+ int update_multicast = 0;
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ if (dev->flags & IFF_PROMISC) {
+ DBG(2, "%s: RCR_PRMS\n", dev->name);
+ lp->rcr_cur_mode |= RCR_PRMS;
+ }
+
+/* BUG? Promiscuous mode is never disabled if multicasting was turned on.
+ Promiscuous mode is turned off below, but nothing is done about
+ multicasting when promiscuous mode is turned on.
+*/
+
+ /*
+ * Here, I am setting this to accept all multicast packets.
+ * I don't need to zero the multicast table, because the flag is
+ * checked before the table is used.
+ */
+ else if (dev->flags & IFF_ALLMULTI || dev->mc_count > 16) {
+ DBG(2, "%s: RCR_ALMUL\n", dev->name);
+ lp->rcr_cur_mode |= RCR_ALMUL;
+ }
+
+ /*
+ * This sets the internal hardware table to filter out unwanted
+ * multicast packets before they take up memory.
+ *
+ * The SMC chip uses a hash table where the high 6 bits of the CRC of
+ * the address form the offset into the table. If the bit at that
+ * offset is 1, the multicast packet is accepted; otherwise it is
+ * dropped silently.
+ *
+ * To use the 6 bits as an offset into the table, the high 3 bits are
+ * the number of the 8 bit register, while the low 3 bits are the bit
+ * within that register.
+ */
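+ /*
+ * Worked example (illustrative, not part of the original driver):
+ * if the 6-bit hash position computed below is 0x2b (0b101011), then
+ * position & 7 = 0b011 and invert3[3] = 6, selecting register 6;
+ * (position >> 3) & 7 = 0b101 and invert3[5] = 5, selecting bit 5.
+ * The net effect is multicast_table[6] |= (1 << 5).
+ */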
+ else if (dev->mc_count) {
+ int i;
+ struct dev_mc_list *cur_addr;
+
+ /* table for flipping the order of 3 bits */
+ static const unsigned char invert3[] = {0, 4, 2, 6, 1, 5, 3, 7};
+
+ /* start with a table of all zeros: reject all */
+ memset(multicast_table, 0, sizeof(multicast_table));
+
+ cur_addr = dev->mc_list;
+ for (i = 0; i < dev->mc_count; i++, cur_addr = cur_addr->next) {
+ int position;
+
+ /* do we have a pointer here? */
+ if (!cur_addr)
+ break;
+ /* make sure this is a multicast address -
+ shouldn't this be a given if we have it here? */
+ if (!(*cur_addr->dmi_addr & 1))
+ continue;
+
+ /* only use the low order bits */
+ position = crc32_le(~0, cur_addr->dmi_addr, 6) & 0x3f;
+
+ /* do some messy swapping to put the bit in the right spot */
+ multicast_table[invert3[position&7]] |=
+ (1<<invert3[(position>>3)&7]);
+ }
+
+ /* be sure I get rid of flags I might have set */
+ lp->rcr_cur_mode &= ~(RCR_PRMS | RCR_ALMUL);
+
+ /* now, the table can be loaded into the chipset */
+ update_multicast = 1;
+ } else {
+ DBG(2, "%s: ~(RCR_PRMS|RCR_ALMUL)\n", dev->name);
+ lp->rcr_cur_mode &= ~(RCR_PRMS | RCR_ALMUL);
+
+ /*
+ * since I'm disabling all multicast entirely, I need to
+ * clear the multicast list
+ */
+ memset(multicast_table, 0, sizeof(multicast_table));
+ update_multicast = 1;
+ }
+
+ spin_lock_irq(&lp->lock);
+ SMC_SELECT_BANK(0);
+ SMC_SET_RCR(lp->rcr_cur_mode);
+ if (update_multicast) {
+ SMC_SELECT_BANK(3);
+ SMC_SET_MCAST(multicast_table);
+ }
+ SMC_SELECT_BANK(2);
+ spin_unlock_irq(&lp->lock);
+}
+
+
+/*
+ * Open and Initialize the board
+ *
+ * Set up everything, reset the card, etc..
+ */
+static int
+smc_open(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ /*
+ * Check that the address is valid. If it's not, refuse
+ * to bring the device up. The user must specify an
+ * address using ifconfig eth0 hw ether xx:xx:xx:xx:xx:xx
+ */
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ PRINTK("%s: no valid ethernet hw addr\n", __FUNCTION__);
+ return -EINVAL;
+ }
+
+ /* Setup the default Register Modes */
+ lp->tcr_cur_mode = TCR_DEFAULT;
+ lp->rcr_cur_mode = RCR_DEFAULT;
+ lp->rpc_cur_mode = RPC_DEFAULT;
+
+ /*
+ * If we are not using a MII interface, we need to
+ * monitor our own carrier signal to detect faults.
+ */
+ if (lp->phy_type == 0)
+ lp->tcr_cur_mode |= TCR_MON_CSN;
+
+ /* reset the hardware */
+ smc_reset(dev);
+ smc_enable(dev);
+
+ /* Configure the PHY, initialize the link state */
+ if (lp->phy_type != 0)
+ smc_phy_configure(dev);
+ else {
+ spin_lock_irq(&lp->lock);
+ smc_10bt_check_media(dev, 1);
+ spin_unlock_irq(&lp->lock);
+ }
+
+ netif_start_queue(dev);
+ return 0;
+}
+
+/*
+ * smc_close
+ *
+ * this makes the board clean up everything that it can
+ * and not talk to the outside world. Caused by
+ * an 'ifconfig ethX down'
+ */
+static int smc_close(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ netif_stop_queue(dev);
+ netif_carrier_off(dev);
+
+ /* clear everything */
+ smc_shutdown(dev);
+
+ if (lp->phy_type != 0) {
+ /* We need to ensure that no calls to
+ smc_phy_configure are pending.
+
+ flush_scheduled_work() cannot be called because we
+ are running with the netlink semaphore held (from
+ devinet_ioctl()) and the pending work queue
+ contains linkwatch_event() (scheduled by
+ netif_carrier_off() above). linkwatch_event() also
+ wants the netlink semaphore.
+ */
+ while(lp->work_pending)
+ schedule();
+ smc_phy_powerdown(dev, lp->mii.phy_id);
+ }
+
+ if (lp->pending_tx_skb) {
+ dev_kfree_skb(lp->pending_tx_skb);
+ lp->pending_tx_skb = NULL;
+ }
+
+ return 0;
+}
+
+/*
+ * Get the current statistics.
+ * This may be called with the card open or closed.
+ */
+static struct net_device_stats *smc_query_statistics(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+
+ DBG(2, "%s: %s\n", dev->name, __FUNCTION__);
+
+ return &lp->stats;
+}
+
+/*
+ * Ethtool support
+ */
+static int
+smc_ethtool_getsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret;
+
+ cmd->maxtxpkt = 1;
+ cmd->maxrxpkt = 1;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_ethtool_gset(&lp->mii, cmd);
+ spin_unlock_irq(&lp->lock);
+ } else {
+ cmd->supported = SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_TP | SUPPORTED_AUI;
+
+ if (lp->ctl_rspeed == 10)
+ cmd->speed = SPEED_10;
+ else if (lp->ctl_rspeed == 100)
+ cmd->speed = SPEED_100;
+
+ cmd->autoneg = AUTONEG_DISABLE;
+ cmd->transceiver = XCVR_INTERNAL;
+ cmd->port = 0;
+ cmd->duplex = lp->tcr_cur_mode & TCR_SWFDUP ? DUPLEX_FULL : DUPLEX_HALF;
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static int
+smc_ethtool_setsettings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_ethtool_sset(&lp->mii, cmd);
+ spin_unlock_irq(&lp->lock);
+ } else {
+ if (cmd->autoneg != AUTONEG_DISABLE ||
+ cmd->speed != SPEED_10 ||
+ (cmd->duplex != DUPLEX_HALF && cmd->duplex != DUPLEX_FULL) ||
+ (cmd->port != PORT_TP && cmd->port != PORT_AUI))
+ return -EINVAL;
+
+// lp->port = cmd->port;
+ lp->ctl_rfduplx = cmd->duplex == DUPLEX_FULL;
+
+// if (netif_running(dev))
+// smc_set_port(dev);
+
+ ret = 0;
+ }
+
+ return ret;
+}
+
+static void
+smc_ethtool_getdrvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ strncpy(info->driver, CARDNAME, sizeof(info->driver));
+ strncpy(info->version, version, sizeof(info->version));
+ strncpy(info->bus_info, dev->class_dev.dev->bus_id, sizeof(info->bus_info));
+}
+
+static int smc_ethtool_nwayreset(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ int ret = -EINVAL;
+
+ if (lp->phy_type != 0) {
+ spin_lock_irq(&lp->lock);
+ ret = mii_nway_restart(&lp->mii);
+ spin_unlock_irq(&lp->lock);
+ }
+
+ return ret;
+}
+
+static u32 smc_ethtool_getmsglevel(struct net_device *dev)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ return lp->msg_enable;
+}
+
+static void smc_ethtool_setmsglevel(struct net_device *dev, u32 level)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ lp->msg_enable = level;
+}
+
+static struct ethtool_ops smc_ethtool_ops = {
+ .get_settings = smc_ethtool_getsettings,
+ .set_settings = smc_ethtool_setsettings,
+ .get_drvinfo = smc_ethtool_getdrvinfo,
+
+ .get_msglevel = smc_ethtool_getmsglevel,
+ .set_msglevel = smc_ethtool_setmsglevel,
+ .nway_reset = smc_ethtool_nwayreset,
+ .get_link = ethtool_op_get_link,
+// .get_eeprom = smc_ethtool_geteeprom,
+// .set_eeprom = smc_ethtool_seteeprom,
+};
+
+/*
+ * smc_findirq
+ *
+ * This routine has a simple purpose -- make the SMC chip generate an
+ * interrupt, so an auto-detect routine can detect it, and find the IRQ.
+ */
+/*
+ * does this still work?
+ *
+ * I just deleted auto_irq.c, since it was never built...
+ * --jgarzik
+ */
+static int __init smc_findirq(unsigned long ioaddr)
+{
+ int timeout = 20;
+ unsigned long cookie;
+
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ cookie = probe_irq_on();
+
+ /*
+ * What I try to do here is trigger an ALLOC_INT. This is done
+ * by allocating a small chunk of memory, which will give an interrupt
+ * when done.
+ */
+ /* enable ALLOCation interrupts ONLY */
+ SMC_SELECT_BANK(2);
+ SMC_SET_INT_MASK(IM_ALLOC_INT);
+
+ /*
+ * Allocate 512 bytes of memory. Note that the chip was just
+ * reset so all the memory is available
+ */
+ SMC_SET_MMU_CMD(MC_ALLOC | 1);
+
+ /*
+ * Wait until positive that the interrupt has been generated
+ */
+ do {
+ int int_status;
+ udelay(10);
+ int_status = SMC_GET_INT();
+ if (int_status & IM_ALLOC_INT)
+ break; /* got the interrupt */
+ } while (--timeout);
+
+ /*
+ * there is really nothing that I can do here if the timeout expires,
+ * as probe_irq_off() will return 0 anyway, which is what I
+ * want in this case. Plus, the clean up is needed in both
+ * cases.
+ */
+
+ /* and disable all interrupts again */
+ SMC_SET_INT_MASK(0);
+
+ /* and return what I found */
+ return probe_irq_off(cookie);
+}
+
+/*
+ * Function: smc_probe(unsigned long ioaddr)
+ *
+ * Purpose:
+ * Tests to see if a given ioaddr points to an SMC91x chip.
+ * Returns 0 on success
+ *
+ * Algorithm:
+ * (1) see if the high byte of BANK_SELECT is 0x33
+ * (2) compare the ioaddr with the base register's address
+ * (3) see if I recognize the chip ID in the appropriate register
+ *
+ * Here I do typical initialization tasks.
+ *
+ * o Initialize the structure if needed
+ * o print out my vanity message if not done so already
+ * o print out what type of hardware is detected
+ * o print out the ethernet address
+ * o find the IRQ
+ * o set up my private data
+ * o configure the dev structure with my subroutines
+ * o actually GRAB the irq.
+ * o GRAB the region
+ */
+static int __init smc_probe(struct net_device *dev, unsigned long ioaddr)
+{
+ struct smc_local *lp = netdev_priv(dev);
+ static int version_printed = 0;
+ int i, retval;
+ unsigned int val, revision_register;
+ const char *version_string;
+
+ DBG(2, "%s: %s\n", CARDNAME, __FUNCTION__);
+
+ /* First, see if the high byte is 0x33 */
+ val = SMC_CURRENT_BANK();
+ DBG(2, "%s: bank signature probe returned 0x%04x\n", CARDNAME, val);
+ if ((val & 0xFF00) != 0x3300) {
+ if ((val & 0xFF) == 0x33) {
+ printk(KERN_WARNING
+ "%s: Detected possible byte-swapped interface"
+ " at IOADDR 0x%lx\n", CARDNAME, ioaddr);
+ }
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /*
+ * The above MIGHT indicate a device, but I need to perform a
+ * write to test this further.
+ */
+ SMC_SELECT_BANK(0);
+ val = SMC_CURRENT_BANK();
+ if ((val & 0xFF00) != 0x3300) {
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /*
+ * well, we've already written once, so hopefully another
+ * time won't hurt. This time, I need to switch the bank
+ * register to bank 1, so I can access the base address
+ * register
+ */
+ SMC_SELECT_BANK(1);
+ val = SMC_GET_BASE();
+ val = ((val & 0x1F00) >> 3) << SMC_IO_SHIFT;
+ if ((ioaddr & ((PAGE_SIZE-1)<<SMC_IO_SHIFT)) != val) {
+ printk("%s: IOADDR %lx doesn't match configuration (%x).\n",
+ CARDNAME, ioaddr, val);
+ }
+
+ /*
+ * check if the revision register is something that I
+ * recognize. This list may need additions later,
+ * as future chip revisions appear.
+ */
+ SMC_SELECT_BANK(3);
+ revision_register = SMC_GET_REV();
+ DBG(2, "%s: revision = 0x%04x\n", CARDNAME, revision_register);
+ version_string = chip_ids[ (revision_register >> 4) & 0xF];
+ if (!version_string || (revision_register & 0xff00) != 0x3300) {
+ /* I don't recognize this chip, so... */
+ printk("%s: IO 0x%lx: Unrecognized revision register 0x%04x"
+ ", Contact author.\n", CARDNAME,
+ ioaddr, revision_register);
+
+ retval = -ENODEV;
+ goto err_out;
+ }
+
+ /* At this point I'll assume that the chip is an SMC91x. */
+ if (version_printed++ == 0)
+ printk("%s", version);
+
+ /* fill in some of the fields */
+ dev->base_addr = ioaddr;
+ lp->version = revision_register & 0xff;
+ spin_lock_init(&lp->lock);
+
+ /* Get the MAC address */
+ SMC_SELECT_BANK(1);
+ SMC_GET_MAC_ADDR(dev->dev_addr);
+
+ /* now, reset the chip, and put it into a known state */
+ smc_reset(dev);
+
+ /*
+ * If dev->irq is 0, then the device has to be banged on to see
+ * what the IRQ is.
+ *
+ * This banging doesn't always detect the IRQ, for unknown reasons.
+ * A workaround is to reset the chip and try again.
+ *
+ * Interestingly, the DOS packet driver *SETS* the IRQ on the card to
+ * be what is requested on the command line. I don't do that, mostly
+ * because the card that I have uses a non-standard method of accessing
+ * the IRQs, and because this _should_ work in most configurations.
+ *
+ * Specifying an IRQ is done with the assumption that the user knows
+ * what (s)he is doing. No checking is done!!!!
+ */
+ if (dev->irq < 1) {
+ int trials;
+
+ trials = 3;
+ while (trials--) {
+ dev->irq = smc_findirq(ioaddr);
+ if (dev->irq)
+ break;
+ /* kick the card and try again */
+ smc_reset(dev);
+ }
+ }
+ if (dev->irq == 0) {
+ printk("%s: Couldn't autodetect your IRQ. Use irq=xx.\n",
+ dev->name);
+ retval = -ENODEV;
+ goto err_out;
+ }
+ dev->irq = irq_canonicalize(dev->irq);
+
+ /* Fill in the fields of the device structure with ethernet values. */
+ ether_setup(dev);
+
+ dev->open = smc_open;
+ dev->stop = smc_close;
+ dev->hard_start_xmit = smc_hard_start_xmit;
+ dev->tx_timeout = smc_timeout;
+ dev->watchdog_timeo = msecs_to_jiffies(watchdog);
+ dev->get_stats = smc_query_statistics;
+ dev->set_multicast_list = smc_set_multicast_list;
+ dev->ethtool_ops = &smc_ethtool_ops;
+
+ tasklet_init(&lp->tx_task, smc_hardware_send_pkt, (unsigned long)dev);
+ INIT_WORK(&lp->phy_configure, smc_phy_configure, dev);
+ lp->mii.phy_id_mask = 0x1f;
+ lp->mii.reg_num_mask = 0x1f;
+ lp->mii.force_media = 0;
+ lp->mii.full_duplex = 0;
+ lp->mii.dev = dev;
+ lp->mii.mdio_read = smc_phy_read;
+ lp->mii.mdio_write = smc_phy_write;
+
+ /*
+ * Locate the phy, if any.
+ */
+ if (lp->version >= (CHIP_91100 << 4))
+ smc_phy_detect(dev);
+
+ /* Set default parameters */
+ lp->msg_enable = NETIF_MSG_LINK;
+ lp->ctl_rfduplx = 0;
+ lp->ctl_rspeed = 10;
+
+ if (lp->version >= (CHIP_91100 << 4)) {
+ lp->ctl_rfduplx = 1;
+ lp->ctl_rspeed = 100;
+ }
+
+ /* Grab the IRQ */
+ retval = request_irq(dev->irq, &smc_interrupt, 0, dev->name, dev);
+ if (retval)
+ goto err_out;
+
+ set_irq_type(dev->irq, IRQT_RISING);
+
+#ifdef SMC_USE_PXA_DMA
+ {
+ int dma = pxa_request_dma(dev->name, DMA_PRIO_LOW,
+ smc_pxa_dma_irq, NULL);
+ if (dma >= 0)
+ dev->dma = dma;
+ }
+#endif
+
+ retval = register_netdev(dev);
+ if (retval == 0) {
+ /* now, print out the card info, in a short format.. */
+ printk("%s: %s (rev %d) at %#lx IRQ %d",
+ dev->name, version_string, revision_register & 0x0f,
+ dev->base_addr, dev->irq);
+
+ if (dev->dma != (unsigned char)-1)
+ printk(" DMA %d", dev->dma);
+
+ printk("%s%s\n", nowait ? " [nowait]" : "",
+ THROTTLE_TX_PKTS ? " [throttle_tx]" : "");
+
+ if (!is_valid_ether_addr(dev->dev_addr)) {
+ printk("%s: Invalid ethernet MAC address. Please "
+ "set using ifconfig\n", dev->name);
+ } else {
+ /* Print the Ethernet address */
+ printk("%s: Ethernet addr: ", dev->name);
+ for (i = 0; i < 5; i++)
+ printk("%2.2x:", dev->dev_addr[i]);
+ printk("%2.2x\n", dev->dev_addr[5]);
+ }
+
+ if (lp->phy_type == 0) {
+ PRINTK("%s: No PHY found\n", dev->name);
+ } else if ((lp->phy_type & 0xfffffff0) == 0x0016f840) {
+ PRINTK("%s: PHY LAN83C183 (LAN91C111 Internal)\n", dev->name);
+ } else if ((lp->phy_type & 0xfffffff0) == 0x02821c50) {
+ PRINTK("%s: PHY LAN83C180\n", dev->name);
+ }
+ }
+
+err_out:
+#ifdef SMC_USE_PXA_DMA
+ if (retval && dev->dma != (unsigned char)-1)
+ pxa_free_dma(dev->dma);
+#endif
+ return retval;
+}
+
+static int smc_enable_device(unsigned long attrib_phys)
+{
+ unsigned long flags;
+ unsigned char ecor, ecsr;
+ void *addr;
+
+ /*
+ * Map the attribute space. This is overkill, but clean.
+ */
+ addr = ioremap(attrib_phys, ATTRIB_SIZE);
+ if (!addr)
+ return -ENOMEM;
+
+ /*
+ * Reset the device. We must disable IRQs around this
+ * since a reset causes the IRQ line to become active.
+ */
+ local_irq_save(flags);
+ ecor = readb(addr + (ECOR << SMC_IO_SHIFT)) & ~ECOR_RESET;
+ writeb(ecor | ECOR_RESET, addr + (ECOR << SMC_IO_SHIFT));
+ readb(addr + (ECOR << SMC_IO_SHIFT));
+
+ /*
+ * Wait 100us for the chip to reset.
+ */
+ udelay(100);
+
+ /*
+ * The device will ignore all writes to the enable bit while
+ * reset is asserted, even if the reset bit is cleared in the
+ * same write. Must clear reset first, then enable the device.
+ */
+ writeb(ecor, addr + (ECOR << SMC_IO_SHIFT));
+ writeb(ecor | ECOR_ENABLE, addr + (ECOR << SMC_IO_SHIFT));
+
+ /*
+ * Set the appropriate byte/word mode.
+ */
+ ecsr = readb(addr + (ECSR << SMC_IO_SHIFT)) & ~ECSR_IOIS8;
+#ifndef SMC_CAN_USE_16BIT
+ ecsr |= ECSR_IOIS8;
+#endif
+ writeb(ecsr, addr + (ECSR << SMC_IO_SHIFT));
+ local_irq_restore(flags);
+
+ iounmap(addr);
+
+ /*
+ * Wait for the chip to wake up. We could poll the control
+ * register in the main register space, but that isn't mapped
+ * yet. We know this is going to take 750us.
+ */
+ msleep(1);
+
+ return 0;
+}
+
+/*
+ * smc_init(void)
+ * Input parameters:
+ * dev->base_addr == 0, try to find all possible locations
+ * dev->base_addr > 0x1ff, this is the address to check
+ * dev->base_addr == <anything else>, return failure code
+ *
+ * Output:
+ * 0 --> there is a device
+ * anything else, error
+ */
+static int smc_drv_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev;
+ struct resource *res, *ext = NULL;
+ unsigned int *addr;
+ int ret;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ /*
+ * Request the regions.
+ */
+ if (!request_mem_region(res->start, SMC_IO_EXTENT, "smc91x")) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ ndev = alloc_etherdev(sizeof(struct smc_local));
+ if (!ndev) {
+ printk("%s: could not allocate device.\n", CARDNAME);
+ ret = -ENOMEM;
+ goto release_1;
+ }
+ SET_MODULE_OWNER(ndev);
+ SET_NETDEV_DEV(ndev, dev);
+
+ ndev->dma = (unsigned char)-1;
+ ndev->irq = platform_get_irq(pdev, 0);
+
+ ext = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (ext) {
+ if (!request_mem_region(ext->start, ATTRIB_SIZE, ndev->name)) {
+ ret = -EBUSY;
+ goto release_1;
+ }
+
+#if defined(CONFIG_SA1100_ASSABET)
+ NCR_0 |= NCR_ENET_OSC_EN;
+#endif
+
+ ret = smc_enable_device(ext->start);
+ if (ret)
+ goto release_both;
+ }
+
+ addr = ioremap(res->start, SMC_IO_EXTENT);
+ if (!addr) {
+ ret = -ENOMEM;
+ goto release_both;
+ }
+
+ dev_set_drvdata(dev, ndev);
+ ret = smc_probe(ndev, (unsigned long)addr);
+ if (ret != 0) {
+ dev_set_drvdata(dev, NULL);
+ iounmap(addr);
+ release_both:
+ if (ext)
+ release_mem_region(ext->start, ATTRIB_SIZE);
+ free_netdev(ndev);
+ release_1:
+ release_mem_region(res->start, SMC_IO_EXTENT);
+ out:
+ printk("%s: not found (%d).\n", CARDNAME, ret);
+ }
+#ifdef SMC_USE_PXA_DMA
+ else {
+ struct smc_local *lp = netdev_priv(ndev);
+ lp->physaddr = res->start;
+ }
+#endif
+
+ return ret;
+}
+
+static int smc_drv_remove(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct resource *res;
+
+ dev_set_drvdata(dev, NULL);
+
+ unregister_netdev(ndev);
+
+ free_irq(ndev->irq, ndev);
+
+#ifdef SMC_USE_PXA_DMA
+ if (ndev->dma != (unsigned char)-1)
+ pxa_free_dma(ndev->dma);
+#endif
+ iounmap((void *)ndev->base_addr);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ if (res)
+ release_mem_region(res->start, ATTRIB_SIZE);
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ release_mem_region(res->start, SMC_IO_EXTENT);
+
+ free_netdev(ndev);
+
+ return 0;
+}
+
+static int smc_drv_suspend(struct device *dev, u32 state, u32 level)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+
+ if (ndev && level == SUSPEND_DISABLE) {
+ if (netif_running(ndev)) {
+ netif_device_detach(ndev);
+ smc_shutdown(ndev);
+ }
+ }
+ return 0;
+}
+
+static int smc_drv_resume(struct device *dev, u32 level)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct net_device *ndev = dev_get_drvdata(dev);
+
+ if (ndev && level == RESUME_ENABLE) {
+ struct smc_local *lp = netdev_priv(ndev);
+
+ if (pdev->num_resources == 3)
+ smc_enable_device(pdev->resource[2].start);
+ if (netif_running(ndev)) {
+ smc_reset(ndev);
+ smc_enable(ndev);
+ if (lp->phy_type != 0)
+ smc_phy_configure(ndev);
+ netif_device_attach(ndev);
+ }
+ }
+ return 0;
+}
+
+static struct device_driver smc_driver = {
+ .name = CARDNAME,
+ .bus = &platform_bus_type,
+ .probe = smc_drv_probe,
+ .remove = smc_drv_remove,
+ .suspend = smc_drv_suspend,
+ .resume = smc_drv_resume,
+};
+
+static int __init smc_init(void)
+{
+#ifdef MODULE
+#ifdef CONFIG_ISA
+ if (io == -1)
+ printk(KERN_WARNING
+ "%s: You shouldn't use auto-probing with insmod!\n",
+ CARDNAME);
+#endif
+#endif
+
+ return driver_register(&smc_driver);
+}
+
+static void __exit smc_cleanup(void)
+{
+ driver_unregister(&smc_driver);
+}
+
+module_init(smc_init);
+module_exit(smc_cleanup);
--- /dev/null
+/*------------------------------------------------------------------------
+ . smc91x.h - macros for SMSC's 91C9x/91C1xx single-chip Ethernet device.
+ .
+ . Copyright (C) 1996 by Erik Stahlman
+ . Copyright (C) 2001 Standard Microsystems Corporation
+ . Developed by Simple Network Magic Corporation
+ . Copyright (C) 2003 Monta Vista Software, Inc.
+ . Unified SMC91x driver by Nicolas Pitre
+ .
+ . This program is free software; you can redistribute it and/or modify
+ . it under the terms of the GNU General Public License as published by
+ . the Free Software Foundation; either version 2 of the License, or
+ . (at your option) any later version.
+ .
+ . This program is distributed in the hope that it will be useful,
+ . but WITHOUT ANY WARRANTY; without even the implied warranty of
+ . MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ . GNU General Public License for more details.
+ .
+ . You should have received a copy of the GNU General Public License
+ . along with this program; if not, write to the Free Software
+ . Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ .
+ . Information contained in this file was obtained from the LAN91C111
+ . manual from SMC. To get a copy, if you really want one, you can find
+ . information under www.smsc.com.
+ .
+ . Authors
+ . Erik Stahlman <erik@vt.edu>
+ . Daris A Nevil <dnevil@snmc.com>
+ . Nicolas Pitre <nico@cam.org>
+ .
+ ---------------------------------------------------------------------------*/
+#ifndef _SMC91X_H_
+#define _SMC91X_H_
+
+
+/*
+ * Define your architecture specific bus configuration parameters here.
+ */
+
+#if defined(CONFIG_ARCH_LUBBOCK)
+
+/* We can only do 16-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+/* The first two address lines aren't connected... */
+#define SMC_IO_SHIFT 2
+
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_insw(a, r, p, l) readsw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) writesw((a) + (r), p, l)
+
+#elif defined(CONFIG_REDWOOD_5) || defined(CONFIG_REDWOOD_6)
+
+/* We can only do 16-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+#define SMC_IO_SHIFT 0
+
+#define SMC_inw(a, r) in_be16((volatile u16 *)((a) + (r)))
+#define SMC_outw(v, a, r) out_be16((volatile u16 *)((a) + (r)), v)
+#define SMC_insw(a, r, p, l) \
+ do { \
+ unsigned long __port = (a) + (r); \
+ u16 *__p = (u16 *)(p); \
+ int __l = (l); \
+ insw(__port, __p, __l); \
+ while (__l > 0) { \
+ *__p = swab16(*__p); \
+ __p++; \
+ __l--; \
+ } \
+ } while (0)
+#define SMC_outsw(a, r, p, l) \
+ do { \
+ unsigned long __port = (a) + (r); \
+ u16 *__p = (u16 *)(p); \
+ int __l = (l); \
+ while (__l > 0) { \
+ /* Believe it or not, the swab isn't needed. */ \
+ outw( /* swab16 */ (*__p++), __port); \
+ __l--; \
+ } \
+ } while (0)
+#define set_irq_type(irq, type)
+
+#elif defined(CONFIG_SA1100_PLEB)
+/* We can do 8-bit and 16-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_IO_SHIFT 0
+#define SMC_NOWAIT 1
+
+#define SMC_inb(a, r) inb((a) + (r))
+#define SMC_insb(a, r, p, l) insb((a) + (r), p, (l))
+#define SMC_inw(a, r) inw((a) + (r))
+#define SMC_insw(a, r, p, l) insw((a) + (r), p, l)
+#define SMC_outb(v, a, r) outb(v, (a) + (r))
+#define SMC_outsb(a, r, p, l) outsb((a) + (r), p, (l))
+#define SMC_outw(v, a, r) outw(v, (a) + (r))
+#define SMC_outsw(a, r, p, l) outsw((a) + (r), p, l)
+
+#define set_irq_type(irq, type) do {} while (0)
+
+#elif defined(CONFIG_SA1100_ASSABET)
+
+#include <asm/arch/neponset.h>
+
+/* We can only do 8-bit reads and writes in the static memory space. */
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 0
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 1
+
+/* The first two address lines aren't connected... */
+#define SMC_IO_SHIFT 2
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_insb(a, r, p, l) readsb((a) + (r), p, (l))
+#define SMC_outsb(a, r, p, l) writesb((a) + (r), p, (l))
+
+#elif defined(CONFIG_ARCH_INNOKOM) || \
+ defined(CONFIG_MACH_MAINSTONE) || \
+ defined(CONFIG_ARCH_PXA_IDP) || \
+ defined(CONFIG_ARCH_RAMSES)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_IO_SHIFT 0
+#define SMC_NOWAIT 1
+#define SMC_USE_PXA_DMA 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+/* We actually can't write halfwords properly if not word aligned */
+static inline void
+SMC_outw(u16 val, unsigned long ioaddr, int reg)
+{
+ if (reg & 2) {
+ unsigned int v = val << 16;
+ v |= readl(ioaddr + (reg & ~2)) & 0xffff;
+ writel(v, ioaddr + (reg & ~2));
+ } else {
+ writew(val, ioaddr + reg);
+ }
+}
+
+#elif defined(CONFIG_ISA)
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+
+#define SMC_inb(a, r) inb((a) + (r))
+#define SMC_inw(a, r) inw((a) + (r))
+#define SMC_outb(v, a, r) outb(v, (a) + (r))
+#define SMC_outw(v, a, r) outw(v, (a) + (r))
+#define SMC_insw(a, r, p, l) insw((a) + (r), p, l)
+#define SMC_outsw(a, r, p, l) outsw((a) + (r), p, l)
+
+#elif defined(CONFIG_M32R)
+
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+
+#define SMC_inb(a, r) inb((a) + (r) - 0xa0000000)
+#define SMC_inw(a, r) inw((a) + (r) - 0xa0000000)
+#define SMC_outb(v, a, r) outb(v, (a) + (r) - 0xa0000000)
+#define SMC_outw(v, a, r) outw(v, (a) + (r) - 0xa0000000)
+#define SMC_insw(a, r, p, l) insw((a) + (r) - 0xa0000000, p, l)
+#define SMC_outsw(a, r, p, l) outsw((a) + (r) - 0xa0000000, p, l)
+
+#define set_irq_type(irq, type) do {} while(0)
+
+#define RPC_LSA_DEFAULT RPC_LED_TX_RX
+#define RPC_LSB_DEFAULT RPC_LED_100_10
+
+#elif defined(CONFIG_MACH_LPD7A400) || defined(CONFIG_MACH_LPD7A404)
+
+/* The LPD7A40X_IOBARRIER is necessary to overcome a mismatch between
+ * the way that the CPU handles chip selects and the way that the SMC
+ * chip expects the chip select to operate. Refer to
+ * Documentation/arm/Sharp-LH/IOBarrier for details. The read from
+ * IOBARRIER is a byte as a least-common denominator of possible
+ * regions to use as the barrier. It would be wasteful to read 32
+ * bits from a byte oriented region.
+ *
+ * There is no explicit protection against interrupts intervening
+ * between the writew and the IOBARRIER. In the SMC ISR there is a
+ * preamble that performs an IOBARRIER in the extremely unlikely event
+ * that the driver interrupts itself between a writew to the chip and
+ * the IOBARRIER that follows *and* the cache is large enough that the
+ * first off-chip access while handling the interrupt is to the SMC
+ * chip. Other devices in the same address space as the SMC chip must
+ * be aware of the potential for trouble and perform a similar
+ * IOBARRIER on entry to their ISR.
+ */
+
+#include <asm/arch/constants.h> /* IOBARRIER_VIRT */
+
+#define SMC_CAN_USE_8BIT 0
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 0
+#define SMC_NOWAIT 0
+#define LPD7A40X_IOBARRIER readb (IOBARRIER_VIRT)
+
+#define SMC_inw(a,r) readw ((void*) ((a) + (r)))
+#define SMC_insw(a,r,p,l) readsw ((void*) ((a) + (r)), p, l)
+#define SMC_outw(v,a,r) ({ writew ((v), (a) + (r)); LPD7A40X_IOBARRIER; })
+
+static inline void SMC_outsw (unsigned long a, int r, unsigned char* p, int l)
+{
+ unsigned short* ps = (unsigned short*) p;
+ while (l-- > 0) {
+ writew (*ps++, a + r);
+ LPD7A40X_IOBARRIER;
+ }
+}
+
+#define SMC_INTERRUPT_PREAMBLE LPD7A40X_IOBARRIER
+
+#define RPC_LSA_DEFAULT RPC_LED_TX_RX
+#define RPC_LSB_DEFAULT RPC_LED_100_10
+
+#else
+
+#define SMC_CAN_USE_8BIT 1
+#define SMC_CAN_USE_16BIT 1
+#define SMC_CAN_USE_32BIT 1
+#define SMC_NOWAIT 1
+
+#define SMC_inb(a, r) readb((a) + (r))
+#define SMC_inw(a, r) readw((a) + (r))
+#define SMC_inl(a, r) readl((a) + (r))
+#define SMC_outb(v, a, r) writeb(v, (a) + (r))
+#define SMC_outw(v, a, r) writew(v, (a) + (r))
+#define SMC_outl(v, a, r) writel(v, (a) + (r))
+#define SMC_insl(a, r, p, l) readsl((a) + (r), p, l)
+#define SMC_outsl(a, r, p, l) writesl((a) + (r), p, l)
+
+#define RPC_LSA_DEFAULT RPC_LED_100_10
+#define RPC_LSB_DEFAULT RPC_LED_TX_RX
+
+#endif
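+
+/*
+ * Sketch of a new platform entry for the chain above (illustrative only;
+ * CONFIG_MACH_MYBOARD is a hypothetical config symbol). A 16-bit-only
+ * memory-mapped bus would look like:
+ *
+ *	#elif defined(CONFIG_MACH_MYBOARD)
+ *
+ *	#define SMC_CAN_USE_8BIT	0
+ *	#define SMC_CAN_USE_16BIT	1
+ *	#define SMC_CAN_USE_32BIT	0
+ *	#define SMC_NOWAIT		1
+ *	#define SMC_IO_SHIFT		0
+ *
+ *	#define SMC_inw(a, r)		readw((a) + (r))
+ *	#define SMC_outw(v, a, r)	writew(v, (a) + (r))
+ *	#define SMC_insw(a, r, p, l)	readsw((a) + (r), p, l)
+ *	#define SMC_outsw(a, r, p, l)	writesw((a) + (r), p, l)
+ */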
+
+
+#ifdef SMC_USE_PXA_DMA
+/*
+ * Let's use the DMA engine on the XScale PXA2xx for RX packets. This is
+ * always happening in irq context so no need to worry about races. TX is
+ * different and probably not worth it for that reason, and not as critical
+ * as RX which can overrun memory and lose packets.
+ */
+#include <linux/dma-mapping.h>
+#include <asm/dma.h>
+#include <asm/arch/pxa-regs.h>
+
+#ifdef SMC_insl
+#undef SMC_insl
+#define SMC_insl(a, r, p, l) \
+ smc_pxa_dma_insl(a, lp->physaddr, r, dev->dma, p, l)
+static inline void
+smc_pxa_dma_insl(u_long ioaddr, u_long physaddr, int reg, int dma,
+ u_char *buf, int len)
+{
+ dma_addr_t dmabuf;
+
+ /* fallback if no DMA available */
+ if (dma == (unsigned char)-1) {
+ readsl(ioaddr + reg, buf, len);
+ return;
+ }
+
+ /* 64 bit alignment is required for memory to memory DMA */
+ if ((long)buf & 4) {
+ *((u32 *)buf) = SMC_inl(ioaddr, reg);
+ buf += 4;
+ len--;
+ }
+
+ len *= 4;
+ dmabuf = dma_map_single(NULL, buf, len, DMA_FROM_DEVICE);
+ DCSR(dma) = DCSR_NODESC;
+ DTADR(dma) = dmabuf;
+ DSADR(dma) = physaddr + reg;
+ DCMD(dma) = (DCMD_INCTRGADDR | DCMD_BURST32 |
+ DCMD_WIDTH4 | (DCMD_LENGTH & len));
+ DCSR(dma) = DCSR_NODESC | DCSR_RUN;
+ while (!(DCSR(dma) & DCSR_STOPSTATE))
+ cpu_relax();
+ DCSR(dma) = 0;
+ dma_unmap_single(NULL, dmabuf, len, DMA_FROM_DEVICE);
+}
+#endif
+
+#ifdef SMC_insw
+#undef SMC_insw
+#define SMC_insw(a, r, p, l) \
+ smc_pxa_dma_insw(a, lp->physaddr, r, dev->dma, p, l)
+static inline void
+smc_pxa_dma_insw(u_long ioaddr, u_long physaddr, int reg, int dma,
+ u_char *buf, int len)
+{
+ dma_addr_t dmabuf;
+
+ /* fallback if no DMA available */
+ if (dma == (unsigned char)-1) {
+ readsw(ioaddr + reg, buf, len);
+ return;
+ }
+
+ /* 64 bit alignment is required for memory to memory DMA */
+ while ((long)buf & 6) {
+ *((u16 *)buf) = SMC_inw(ioaddr, reg);
+ buf += 2;
+ len--;
+ }
+
+ len *= 2;
+ dmabuf = dma_map_single(NULL, buf, len, DMA_FROM_DEVICE);
+ DCSR(dma) = DCSR_NODESC;
+ DTADR(dma) = dmabuf;
+ DSADR(dma) = physaddr + reg;
+ DCMD(dma) = (DCMD_INCTRGADDR | DCMD_BURST32 |
+ DCMD_WIDTH2 | (DCMD_LENGTH & len));
+ DCSR(dma) = DCSR_NODESC | DCSR_RUN;
+ while (!(DCSR(dma) & DCSR_STOPSTATE))
+ cpu_relax();
+ DCSR(dma) = 0;
+ dma_unmap_single(NULL, dmabuf, len, DMA_FROM_DEVICE);
+}
+#endif
+
+static void
+smc_pxa_dma_irq(int dma, void *dummy, struct pt_regs *regs)
+{
+ DCSR(dma) = 0;
+}
+#endif /* SMC_USE_PXA_DMA */
+
+
+/* Because of bank switching, the LAN91x uses only 16 I/O ports */
+#ifndef SMC_IO_SHIFT
+#define SMC_IO_SHIFT 0
+#endif
+#define SMC_IO_EXTENT (16 << SMC_IO_SHIFT)
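+
+/*
+ * For example (illustrative): on boards with SMC_IO_SHIFT = 2, such as
+ * Lubbock above, registers are spaced 4 bytes apart and SMC_IO_EXTENT
+ * works out to 16 << 2 = 64 bytes of address space.
+ */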
+
+
+/*
+ . Bank Select Register:
+ .
+ . yyyy yyyy 0000 00xx
+ . xx = bank number
+ . yyyy yyyy = 0x33, for identification purposes.
+*/
+#define BANK_SELECT (14 << SMC_IO_SHIFT)
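+
+/*
+ * Example (illustrative): with bank 2 selected, reading the bank select
+ * register returns 0x3302 -- 0x33 in the high byte identifies the chip,
+ * and the low bits give the selected bank.
+ */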
+
+
+// Transmit Control Register
+/* BANK 0 */
+#define TCR_REG SMC_REG(0x0000, 0)
+#define TCR_ENABLE 0x0001 // When 1 we can transmit
+#define TCR_LOOP 0x0002 // Controls output pin LBK
+#define TCR_FORCOL 0x0004 // When 1 will force a collision
+#define TCR_PAD_EN 0x0080 // When 1 will pad tx frames < 64 bytes w/0
+#define TCR_NOCRC 0x0100 // When 1 will not append CRC to tx frames
+#define TCR_MON_CSN 0x0400 // When 1 tx monitors carrier
+#define TCR_FDUPLX 0x0800 // When 1 enables full duplex operation
+#define TCR_STP_SQET 0x1000 // When 1 stops tx if Signal Quality Error
+#define TCR_EPH_LOOP 0x2000 // When 1 enables EPH block loopback
+#define TCR_SWFDUP 0x8000 // When 1 enables Switched Full Duplex mode
+
+#define TCR_CLEAR 0 /* do NOTHING */
+/* the default settings for the TCR register : */
+#define TCR_DEFAULT (TCR_ENABLE | TCR_PAD_EN)
+
+
+// EPH Status Register
+/* BANK 0 */
+#define EPH_STATUS_REG SMC_REG(0x0002, 0)
+#define ES_TX_SUC 0x0001 // Last TX was successful
+#define ES_SNGL_COL 0x0002 // Single collision detected for last tx
+#define ES_MUL_COL 0x0004 // Multiple collisions detected for last tx
+#define ES_LTX_MULT 0x0008 // Last tx was a multicast
+#define ES_16COL 0x0010 // 16 Collisions Reached
+#define ES_SQET 0x0020 // Signal Quality Error Test
+#define ES_LTXBRD 0x0040 // Last tx was a broadcast
+#define ES_TXDEFR 0x0080 // Transmit Deferred
+#define ES_LATCOL 0x0200 // Late collision detected on last tx
+#define ES_LOSTCARR 0x0400 // Lost Carrier Sense
+#define ES_EXC_DEF 0x0800 // Excessive Deferral
+#define ES_CTR_ROL 0x1000 // Counter Roll Over indication
+#define ES_LINK_OK 0x4000 // Driven by inverted value of nLNK pin
+#define ES_TXUNRN 0x8000 // Tx Underrun
+
+
+// Receive Control Register
+/* BANK 0 */
+#define RCR_REG SMC_REG(0x0004, 0)
+#define RCR_RX_ABORT 0x0001 // Set if a rx frame was aborted
+#define RCR_PRMS 0x0002 // Enable promiscuous mode
+#define RCR_ALMUL 0x0004 // When set accepts all multicast frames
+#define RCR_RXEN 0x0100 // IFF this is set, we can receive packets
+#define RCR_STRIP_CRC 0x0200 // When set strips CRC from rx packets
+#define RCR_ABORT_ENB 0x0200 // When set will abort rx on collision
+#define RCR_FILT_CAR 0x0400 // When set filters leading 12 bits of carrier
+#define RCR_SOFTRST 0x8000 // resets the chip
+
+/* the normal settings for the RCR register : */
+#define RCR_DEFAULT (RCR_STRIP_CRC | RCR_RXEN)
+#define RCR_CLEAR 0x0 // set it to a base state
+
+
+// Counter Register
+/* BANK 0 */
+#define COUNTER_REG SMC_REG(0x0006, 0)
+
+
+// Memory Information Register
+/* BANK 0 */
+#define MIR_REG SMC_REG(0x0008, 0)
+
+
+// Receive/Phy Control Register
+/* BANK 0 */
+#define RPC_REG SMC_REG(0x000A, 0)
+#define RPC_SPEED 0x2000 // When 1 PHY is in 100Mbps mode.
+#define RPC_DPLX 0x1000 // When 1 PHY is in Full-Duplex Mode
+#define RPC_ANEG 0x0800 // When 1 PHY is in Auto-Negotiate Mode
+#define RPC_LSXA_SHFT 5 // Bits to shift LS2A,LS1A,LS0A to lsb
+#define RPC_LSXB_SHFT 2 // Bits to get LS2B,LS1B,LS0B to lsb
+#define RPC_LED_100_10 (0x00) // LED = 100Mbps OR's with 10Mbps link detect
+#define RPC_LED_RES (0x01) // LED = Reserved
+#define RPC_LED_10 (0x02) // LED = 10Mbps link detect
+#define RPC_LED_FD (0x03) // LED = Full Duplex Mode
+#define RPC_LED_TX_RX (0x04) // LED = TX or RX packet occurred
+#define RPC_LED_100 (0x05) // LED = 100Mbps link detect
+#define RPC_LED_TX (0x06) // LED = TX packet occurred
+#define RPC_LED_RX (0x07) // LED = RX packet occurred
+
+#ifndef RPC_LSA_DEFAULT
+#define RPC_LSA_DEFAULT RPC_LED_100
+#endif
+#ifndef RPC_LSB_DEFAULT
+#define RPC_LSB_DEFAULT RPC_LED_FD
+#endif
+
+#define RPC_DEFAULT (RPC_ANEG | (RPC_LSA_DEFAULT << RPC_LSXA_SHFT) | (RPC_LSB_DEFAULT << RPC_LSXB_SHFT) | RPC_SPEED | RPC_DPLX)
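+
+/*
+ * For example (illustrative): with the fallback LED defaults above
+ * (RPC_LSA_DEFAULT = RPC_LED_100 = 0x05, RPC_LSB_DEFAULT = RPC_LED_FD
+ * = 0x03), RPC_DEFAULT evaluates to
+ * 0x0800 | (0x05 << 5) | (0x03 << 2) | 0x2000 | 0x1000 = 0x38ac.
+ */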
+
+
+/* Bank 0 0x0C is reserved */
+
+// Bank Select Register
+/* All Banks */
+#define BSR_REG 0x000E
+
+
+// Configuration Reg
+/* BANK 1 */
+#define CONFIG_REG SMC_REG(0x0000, 1)
+#define CONFIG_EXT_PHY 0x0200 // 1=external MII, 0=internal Phy
+#define CONFIG_GPCNTRL 0x0400 // Inverse value drives pin nCNTRL
+#define CONFIG_NO_WAIT 0x1000 // When 1 no extra wait states on ISA bus
+#define CONFIG_EPH_POWER_EN 0x8000 // When 0 EPH is placed into low power mode.
+
+// Default is powered-up, Internal Phy, Wait States, and pin nCNTRL=low
+#define CONFIG_DEFAULT (CONFIG_EPH_POWER_EN)
+
+
+// Base Address Register
+/* BANK 1 */
+#define BASE_REG SMC_REG(0x0002, 1)
+
+
+// Individual Address Registers
+/* BANK 1 */
+#define ADDR0_REG SMC_REG(0x0004, 1)
+#define ADDR1_REG SMC_REG(0x0006, 1)
+#define ADDR2_REG SMC_REG(0x0008, 1)
+
+
+// General Purpose Register
+/* BANK 1 */
+#define GP_REG SMC_REG(0x000A, 1)
+
+
+// Control Register
+/* BANK 1 */
+#define CTL_REG SMC_REG(0x000C, 1)
+#define CTL_RCV_BAD 0x4000 // When 1 bad CRC packets are received
+#define CTL_AUTO_RELEASE 0x0800 // When 1 tx pages are released automatically
+#define CTL_LE_ENABLE 0x0080 // When 1 enables Link Error interrupt
+#define CTL_CR_ENABLE 0x0040 // When 1 enables Counter Rollover interrupt
+#define CTL_TE_ENABLE 0x0020 // When 1 enables Transmit Error interrupt
+#define CTL_EEPROM_SELECT 0x0004 // Controls EEPROM reload & store
+#define CTL_RELOAD 0x0002 // When set reads EEPROM into registers
+#define CTL_STORE 0x0001 // When set stores registers into EEPROM
+
+
+// MMU Command Register
+/* BANK 2 */
+#define MMU_CMD_REG SMC_REG(0x0000, 2)
+#define MC_BUSY 1 // When 1 the last release has not completed
+#define MC_NOP (0<<5) // No Op
+#define MC_ALLOC (1<<5) // OR with number of 256 byte packets
+#define MC_RESET (2<<5) // Reset MMU to initial state
+#define MC_REMOVE (3<<5) // Remove the current rx packet
+#define MC_RELEASE (4<<5) // Remove and release the current rx packet
+#define MC_FREEPKT (5<<5) // Release packet in PNR register
+#define MC_ENQUEUE (6<<5) // Enqueue the packet for transmit
+#define MC_RSTTXFIFO (7<<5) // Reset the TX FIFOs
+
+
+// Packet Number Register
+/* BANK 2 */
+#define PN_REG SMC_REG(0x0002, 2)
+
+
+// Allocation Result Register
+/* BANK 2 */
+#define AR_REG SMC_REG(0x0003, 2)
+#define AR_FAILED 0x80 // Allocation Failed
+
+
+// TX FIFO Ports Register
+/* BANK 2 */
+#define TXFIFO_REG SMC_REG(0x0004, 2)
+#define TXFIFO_TEMPTY 0x80 // TX FIFO Empty
+
+// RX FIFO Ports Register
+/* BANK 2 */
+#define RXFIFO_REG SMC_REG(0x0005, 2)
+#define RXFIFO_REMPTY 0x80 // RX FIFO Empty
+
+#define FIFO_REG SMC_REG(0x0004, 2)
+
+// Pointer Register
+/* BANK 2 */
+#define PTR_REG SMC_REG(0x0006, 2)
+#define PTR_RCV 0x8000 // 1=Receive area, 0=Transmit area
+#define PTR_AUTOINC 0x4000 // Auto increment the pointer on each access
+#define PTR_READ 0x2000 // When 1 the operation is a read
+
+
+// Data Register
+/* BANK 2 */
+#define DATA_REG SMC_REG(0x0008, 2)
+
+
+// Interrupt Status/Acknowledge Register
+/* BANK 2 */
+#define INT_REG SMC_REG(0x000C, 2)
+
+
+// Interrupt Mask Register
+/* BANK 2 */
+#define IM_REG SMC_REG(0x000D, 2)
+#define IM_MDINT 0x80 // PHY MI Register 18 Interrupt
+#define IM_ERCV_INT 0x40 // Early Receive Interrupt
+#define IM_EPH_INT 0x20 // Set by Ethernet Protocol Handler section
+#define IM_RX_OVRN_INT 0x10 // Set by Receiver Overruns
+#define IM_ALLOC_INT 0x08 // Set when allocation request is completed
+#define IM_TX_EMPTY_INT 0x04 // Set if the TX FIFO goes empty
+#define IM_TX_INT 0x02 // Transmit Interrupt
+#define IM_RCV_INT 0x01 // Receive Interrupt
+
+
+// Multicast Table Registers
+/* BANK 3 */
+#define MCAST_REG1 SMC_REG(0x0000, 3)
+#define MCAST_REG2 SMC_REG(0x0002, 3)
+#define MCAST_REG3 SMC_REG(0x0004, 3)
+#define MCAST_REG4 SMC_REG(0x0006, 3)
+
+
+// Management Interface Register (MII)
+/* BANK 3 */
+#define MII_REG SMC_REG(0x0008, 3)
+#define MII_MSK_CRS100 0x4000 // Disables CRS100 detection during tx half dup
+#define MII_MDOE 0x0008 // MII Output Enable
+#define MII_MCLK 0x0004 // MII Clock, pin MDCLK
+#define MII_MDI 0x0002 // MII Input, pin MDI
+#define MII_MDO 0x0001 // MII Output, pin MDO
+
+
+// Revision Register
+/* BANK 3 */
+/* (high byte: chip ID, low byte: revision #) */
+#define REV_REG SMC_REG(0x000A, 3)
+
+
+// Early RCV Register
+/* BANK 3 */
+/* this is NOT on SMC9192 */
+#define ERCV_REG SMC_REG(0x000C, 3)
+#define ERCV_RCV_DISCRD 0x0080 // When 1 discards a packet being received
+#define ERCV_THRESHOLD 0x001F // ERCV Threshold Mask
+
+
+// External Register
+/* BANK 7 */
+#define EXT_REG SMC_REG(0x0000, 7)
+
+
+#define CHIP_9192 3
+#define CHIP_9194 4
+#define CHIP_9195 5
+#define CHIP_9196 6
+#define CHIP_91100 7
+#define CHIP_91100FD 8
+#define CHIP_91111FD 9
+
+static const char * chip_ids[ 16 ] = {
+ NULL, NULL, NULL,
+ /* 3 */ "SMC91C90/91C92",
+ /* 4 */ "SMC91C94",
+ /* 5 */ "SMC91C95",
+ /* 6 */ "SMC91C96",
+ /* 7 */ "SMC91C100",
+ /* 8 */ "SMC91C100FD",
+ /* 9 */ "SMC91C11xFD",
+ NULL, NULL, NULL,
+ NULL, NULL, NULL};
+
+
+/*
+ . Transmit status bits
+*/
+#define TS_SUCCESS 0x0001
+#define TS_LOSTCAR 0x0400
+#define TS_LATCOL 0x0200
+#define TS_16COL 0x0010
+
+/*
+ . Receive status bits
+*/
+#define RS_ALGNERR 0x8000
+#define RS_BRODCAST 0x4000
+#define RS_BADCRC 0x2000
+#define RS_ODDFRAME 0x1000
+#define RS_TOOLONG 0x0800
+#define RS_TOOSHORT 0x0400
+#define RS_MULTICAST 0x0001
+#define RS_ERRORS (RS_ALGNERR | RS_BADCRC | RS_TOOLONG | RS_TOOSHORT)
+
+
+/*
+ * PHY IDs
+ * LAN83C183 == LAN91C111 Internal PHY
+ */
+#define PHY_LAN83C183 0x0016f840
+#define PHY_LAN83C180 0x02821c50
+
+/*
+ * PHY Register Addresses (LAN91C111 Internal PHY)
+ *
+ * Generic PHY registers can be found in <linux/mii.h>
+ *
+ * These phy registers are specific to our on-board phy.
+ */
+
+// PHY Configuration Register 1
+#define PHY_CFG1_REG 0x10
+#define PHY_CFG1_LNKDIS 0x8000 // 1=Rx Link Detect Function disabled
+#define PHY_CFG1_XMTDIS 0x4000 // 1=TP Transmitter Disabled
+#define PHY_CFG1_XMTPDN 0x2000 // 1=TP Transmitter Powered Down
+#define PHY_CFG1_BYPSCR 0x0400 // 1=Bypass scrambler/descrambler
+#define PHY_CFG1_UNSCDS 0x0200 // 1=Unscramble Idle Reception Disable
+#define PHY_CFG1_EQLZR 0x0100 // 1=Rx Equalizer Disabled
+#define PHY_CFG1_CABLE 0x0080 // 1=STP(150ohm), 0=UTP(100ohm)
+#define PHY_CFG1_RLVL0 0x0040 // 1=Rx Squelch level reduced by 4.5db
+#define PHY_CFG1_TLVL_SHIFT 2 // Transmit Output Level Adjust
+#define PHY_CFG1_TLVL_MASK 0x003C
+#define PHY_CFG1_TRF_MASK 0x0003 // Transmitter Rise/Fall time
+
+
+// PHY Configuration Register 2
+#define PHY_CFG2_REG 0x11
+#define PHY_CFG2_APOLDIS 0x0020 // 1=Auto Polarity Correction disabled
+#define PHY_CFG2_JABDIS 0x0010 // 1=Jabber disabled
+#define PHY_CFG2_MREG 0x0008 // 1=Multiple register access (MII mgt)
+#define PHY_CFG2_INTMDIO 0x0004 // 1=Interrupt signaled with MDIO pulse
+
+// PHY Status Output (and Interrupt status) Register
+#define PHY_INT_REG 0x12 // Status Output (Interrupt Status)
+#define PHY_INT_INT 0x8000 // 1=bits have changed since last read
+#define PHY_INT_LNKFAIL 0x4000 // 1=Link Not detected
+#define PHY_INT_LOSSSYNC 0x2000 // 1=Descrambler has lost sync
+#define PHY_INT_CWRD 0x1000 // 1=Invalid 4B5B code detected on rx
+#define PHY_INT_SSD 0x0800 // 1=No Start Of Stream detected on rx
+#define PHY_INT_ESD 0x0400 // 1=No End Of Stream detected on rx
+#define PHY_INT_RPOL 0x0200 // 1=Reverse Polarity detected
+#define PHY_INT_JAB 0x0100 // 1=Jabber detected
+#define PHY_INT_SPDDET 0x0080 // 1=100Base-TX mode, 0=10Base-T mode
+#define PHY_INT_DPLXDET 0x0040 // 1=Device in Full Duplex
+
+// PHY Interrupt/Status Mask Register
+#define PHY_MASK_REG 0x13 // Interrupt Mask
+// Uses the same bit definitions as PHY_INT_REG
+
+
+/*
+ * SMC91C96 ethernet config and status registers.
+ * These are in the "attribute" space.
+ */
+#define ECOR 0x8000
+#define ECOR_RESET 0x80
+#define ECOR_LEVEL_IRQ 0x40
+#define ECOR_WR_ATTRIB 0x04
+#define ECOR_ENABLE 0x01
+
+#define ECSR 0x8002
+#define ECSR_IOIS8 0x20
+#define ECSR_PWRDWN 0x04
+#define ECSR_INT 0x02
+
+#define ATTRIB_SIZE ((64*1024) << SMC_IO_SHIFT)
+
+
+/*
+ * Macros to abstract register access according to the data bus
+ * capabilities. Please use those and not the in/out primitives.
+ * Note: the following macros do *not* select the bank -- this must
+ * be done separately as needed in the main code. The SMC_REG() macro
+ * only uses the bank argument for debugging purposes (when enabled).
+ */
+
+#if SMC_DEBUG > 0
+#define SMC_REG(reg, bank) \
+ ({ \
+ int __b = SMC_CURRENT_BANK(); \
+ if (unlikely((__b & ~0xf0) != (0x3300 | bank))) { \
+ printk( "%s: bank reg screwed (0x%04x)\n", \
+ CARDNAME, __b ); \
+ BUG(); \
+ } \
+ reg<<SMC_IO_SHIFT; \
+ })
+#else
+#define SMC_REG(reg, bank) (reg<<SMC_IO_SHIFT)
+#endif
+
+#if SMC_CAN_USE_8BIT
+#define SMC_GET_PN() SMC_inb( ioaddr, PN_REG )
+#define SMC_SET_PN(x) SMC_outb( x, ioaddr, PN_REG )
+#define SMC_GET_AR() SMC_inb( ioaddr, AR_REG )
+#define SMC_GET_TXFIFO() SMC_inb( ioaddr, TXFIFO_REG )
+#define SMC_GET_RXFIFO() SMC_inb( ioaddr, RXFIFO_REG )
+#define SMC_GET_INT() SMC_inb( ioaddr, INT_REG )
+#define SMC_ACK_INT(x) SMC_outb( x, ioaddr, INT_REG )
+#define SMC_GET_INT_MASK() SMC_inb( ioaddr, IM_REG )
+#define SMC_SET_INT_MASK(x) SMC_outb( x, ioaddr, IM_REG )
+#else
+#define SMC_GET_PN() (SMC_inw( ioaddr, PN_REG ) & 0xFF)
+#define SMC_SET_PN(x) SMC_outw( x, ioaddr, PN_REG )
+#define SMC_GET_AR() (SMC_inw( ioaddr, PN_REG ) >> 8)
+#define SMC_GET_TXFIFO() (SMC_inw( ioaddr, TXFIFO_REG ) & 0xFF)
+#define SMC_GET_RXFIFO() (SMC_inw( ioaddr, TXFIFO_REG ) >> 8)
+#define SMC_GET_INT() (SMC_inw( ioaddr, INT_REG ) & 0xFF)
+#define SMC_ACK_INT(x) \
+ do { \
+ unsigned long __flags; \
+ int __mask; \
+ local_irq_save(__flags); \
+ __mask = SMC_inw( ioaddr, INT_REG ) & ~0xff; \
+ SMC_outw( __mask | (x), ioaddr, INT_REG ); \
+ local_irq_restore(__flags); \
+ } while (0)
+#define SMC_GET_INT_MASK() (SMC_inw( ioaddr, INT_REG ) >> 8)
+#define SMC_SET_INT_MASK(x) SMC_outw( (x) << 8, ioaddr, INT_REG )
+#endif
+
+#define SMC_CURRENT_BANK() SMC_inw( ioaddr, BANK_SELECT )
+#define SMC_SELECT_BANK(x) SMC_outw( x, ioaddr, BANK_SELECT )
+#define SMC_GET_BASE() SMC_inw( ioaddr, BASE_REG )
+#define SMC_SET_BASE(x) SMC_outw( x, ioaddr, BASE_REG )
+#define SMC_GET_CONFIG() SMC_inw( ioaddr, CONFIG_REG )
+#define SMC_SET_CONFIG(x) SMC_outw( x, ioaddr, CONFIG_REG )
+#define SMC_GET_COUNTER() SMC_inw( ioaddr, COUNTER_REG )
+#define SMC_GET_CTL() SMC_inw( ioaddr, CTL_REG )
+#define SMC_SET_CTL(x) SMC_outw( x, ioaddr, CTL_REG )
+#define SMC_GET_MII() SMC_inw( ioaddr, MII_REG )
+#define SMC_SET_MII(x) SMC_outw( x, ioaddr, MII_REG )
+#define SMC_GET_MIR() SMC_inw( ioaddr, MIR_REG )
+#define SMC_SET_MIR(x) SMC_outw( x, ioaddr, MIR_REG )
+#define SMC_GET_MMU_CMD() SMC_inw( ioaddr, MMU_CMD_REG )
+#define SMC_SET_MMU_CMD(x) SMC_outw( x, ioaddr, MMU_CMD_REG )
+#define SMC_GET_FIFO() SMC_inw( ioaddr, FIFO_REG )
+#define SMC_GET_PTR() SMC_inw( ioaddr, PTR_REG )
+#define SMC_SET_PTR(x) SMC_outw( x, ioaddr, PTR_REG )
+#define SMC_GET_RCR() SMC_inw( ioaddr, RCR_REG )
+#define SMC_SET_RCR(x) SMC_outw( x, ioaddr, RCR_REG )
+#define SMC_GET_REV() SMC_inw( ioaddr, REV_REG )
+#define SMC_GET_RPC() SMC_inw( ioaddr, RPC_REG )
+#define SMC_SET_RPC(x) SMC_outw( x, ioaddr, RPC_REG )
+#define SMC_GET_TCR() SMC_inw( ioaddr, TCR_REG )
+#define SMC_SET_TCR(x) SMC_outw( x, ioaddr, TCR_REG )
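+
+/*
+ * Usage sketch (illustrative): these accessors assume the correct bank
+ * has already been selected by the caller, e.g.:
+ *
+ *	SMC_SELECT_BANK(2);
+ *	saved_ptr = SMC_GET_PTR();
+ *	SMC_SET_PTR(PTR_READ | PTR_RCV | PTR_AUTOINC);
+ *
+ * With SMC_DEBUG > 0, SMC_REG() will BUG() if the currently selected
+ * bank does not match the bank the register belongs to.
+ */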
+
+#ifndef SMC_GET_MAC_ADDR
+#define SMC_GET_MAC_ADDR(addr) \
+ do { \
+ unsigned int __v; \
+ __v = SMC_inw( ioaddr, ADDR0_REG ); \
+ addr[0] = __v; addr[1] = __v >> 8; \
+ __v = SMC_inw( ioaddr, ADDR1_REG ); \
+ addr[2] = __v; addr[3] = __v >> 8; \
+ __v = SMC_inw( ioaddr, ADDR2_REG ); \
+ addr[4] = __v; addr[5] = __v >> 8; \
+ } while (0)
+#endif
+
+#define SMC_SET_MAC_ADDR(addr) \
+ do { \
+ SMC_outw( addr[0]|(addr[1] << 8), ioaddr, ADDR0_REG ); \
+ SMC_outw( addr[2]|(addr[3] << 8), ioaddr, ADDR1_REG ); \
+ SMC_outw( addr[4]|(addr[5] << 8), ioaddr, ADDR2_REG ); \
+ } while (0)
+
+#define SMC_SET_MCAST(x) \
+ do { \
+ const unsigned char *mt = (x); \
+ SMC_outw( mt[0] | (mt[1] << 8), ioaddr, MCAST_REG1 ); \
+ SMC_outw( mt[2] | (mt[3] << 8), ioaddr, MCAST_REG2 ); \
+ SMC_outw( mt[4] | (mt[5] << 8), ioaddr, MCAST_REG3 ); \
+ SMC_outw( mt[6] | (mt[7] << 8), ioaddr, MCAST_REG4 ); \
+ } while (0)
+
+#if SMC_CAN_USE_32BIT
+/*
+ * Some setups just can't write 8 or 16 bits reliably when not aligned
+ * to a 32 bit boundary; believe it or not, such hardware exists!
+ * We redefine the accesses here that are easy to work around because
+ * their low halves can be written as 0 without adverse effects.
+ */
+#undef SMC_SELECT_BANK
+#define SMC_SELECT_BANK(x) SMC_outl( (x)<<16, ioaddr, 12<<SMC_IO_SHIFT )
+#undef SMC_SET_RPC
+#define SMC_SET_RPC(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(8, 0) )
+#undef SMC_SET_PN
+#define SMC_SET_PN(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(0, 2) )
+#undef SMC_SET_PTR
+#define SMC_SET_PTR(x) SMC_outl( (x)<<16, ioaddr, SMC_REG(4, 2) )
+#endif
+
+#if SMC_CAN_USE_32BIT
+#define SMC_PUT_PKT_HDR(status, length) \
+ SMC_outl( (status) | (length) << 16, ioaddr, DATA_REG )
+#define SMC_GET_PKT_HDR(status, length) \
+ do { \
+ unsigned int __val = SMC_inl( ioaddr, DATA_REG ); \
+ (status) = __val & 0xffff; \
+ (length) = __val >> 16; \
+ } while (0)
+#else
+#define SMC_PUT_PKT_HDR(status, length) \
+ do { \
+ SMC_outw( status, ioaddr, DATA_REG ); \
+ SMC_outw( length, ioaddr, DATA_REG ); \
+ } while (0)
+#define SMC_GET_PKT_HDR(status, length) \
+ do { \
+ (status) = SMC_inw( ioaddr, DATA_REG ); \
+ (length) = SMC_inw( ioaddr, DATA_REG ); \
+ } while (0)
+#endif
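+
+/*
+ * Illustrative example: reading a header value of 0x05400000 via the
+ * 32-bit SMC_GET_PKT_HDR() yields status = 0x0000 and length = 0x0540
+ * (1344 bytes), i.e. the status word sits in the low half and the byte
+ * count in the high half of the 32-bit read.
+ */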
+
+#if SMC_CAN_USE_32BIT
+#define SMC_PUSH_DATA(p, l) \
+ do { \
+ char *__ptr = (p); \
+ int __len = (l); \
+ if (__len >= 2 && (unsigned long)__ptr & 2) { \
+ __len -= 2; \
+ SMC_outw( *(u16 *)__ptr, ioaddr, DATA_REG ); \
+ __ptr += 2; \
+ } \
+ SMC_outsl( ioaddr, DATA_REG, __ptr, __len >> 2); \
+ if (__len & 2) { \
+ __ptr += (__len & ~3); \
+ SMC_outw( *((u16 *)__ptr), ioaddr, DATA_REG ); \
+ } \
+ } while (0)
+#define SMC_PULL_DATA(p, l) \
+ do { \
+ char *__ptr = (p); \
+ int __len = (l); \
+ if ((unsigned long)__ptr & 2) { \
+ /* \
+ * We want 32bit alignment here. \
+ * Since some buses perform a full 32bit \
+ * fetch even for 16bit data we can't use \
+ * SMC_inw() here. Back up both the source (on \
+ * chip) and destination pointers by 2 bytes. \
+ */ \
+ __ptr -= 2; \
+ __len += 2; \
+ SMC_SET_PTR( 2|PTR_READ|PTR_RCV|PTR_AUTOINC ); \
+ } \
+ __len += 2; \
+ SMC_insl( ioaddr, DATA_REG, __ptr, __len >> 2); \
+ } while (0)
+#elif SMC_CAN_USE_16BIT
+#define SMC_PUSH_DATA(p, l) SMC_outsw( ioaddr, DATA_REG, p, (l) >> 1 )
+#define SMC_PULL_DATA(p, l) SMC_insw ( ioaddr, DATA_REG, p, (l) >> 1 )
+#elif SMC_CAN_USE_8BIT
+#define SMC_PUSH_DATA(p, l) SMC_outsb( ioaddr, DATA_REG, p, l )
+#define SMC_PULL_DATA(p, l) SMC_insb ( ioaddr, DATA_REG, p, l )
+#endif
+
+#if ! SMC_CAN_USE_16BIT
+#define SMC_outw(x, ioaddr, reg) \
+ do { \
+ unsigned int __val16 = (x); \
+ SMC_outb( __val16, ioaddr, reg ); \
+ SMC_outb( __val16 >> 8, ioaddr, reg + (1 << SMC_IO_SHIFT));\
+ } while (0)
+#define SMC_inw(ioaddr, reg) \
+ ({ \
+ unsigned int __val16; \
+ __val16 = SMC_inb( ioaddr, reg ); \
+ __val16 |= SMC_inb( ioaddr, reg + (1 << SMC_IO_SHIFT)) << 8; \
+ __val16; \
+ })
+#endif
+
+#if !defined (SMC_INTERRUPT_PREAMBLE)
+# define SMC_INTERRUPT_PREAMBLE
+#endif
+
+#endif /* _SMC91X_H_ */
--- /dev/null
+/*
+ * This code is derived from the VIA reference driver (copyright message
+ * below) provided to Red Hat by VIA Networking Technologies, Inc. for
+ * addition to the Linux kernel.
+ *
+ * The code has been merged into one source file, cleaned up to follow
+ * Linux coding style, ported to the Linux 2.6 kernel tree and cleaned
+ * for 64bit hardware platforms.
+ *
+ * TODO
+ * Big-endian support
+ * rx_copybreak/alignment
+ * Scatter gather
+ * More testing
+ *
+ * The changes are (c) Copyright 2004, Red Hat Inc. <alan@redhat.com>
+ * Additional fixes and clean up: Francois Romieu
+ *
+ * This source has not been verified for use in safety critical systems.
+ *
+ * Please direct queries about the revamped driver to the linux-kernel
+ * list not VIA.
+ *
+ * Original code:
+ *
+ * Copyright (c) 1996, 2003 VIA Networking Technologies, Inc.
+ * All rights reserved.
+ *
+ * This software may be redistributed and/or modified under
+ * the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * Author: Chuang Liang-Shing, AJ Jiang
+ *
+ * Date: Jan 24, 2003
+ *
+ * MODULE_LICENSE("GPL");
+ *
+ */
+
+
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/config.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/errno.h>
+#include <linux/ioport.h>
+#include <linux/pci.h>
+#include <linux/kernel.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/skbuff.h>
+#include <linux/delay.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/version.h>
+#include <linux/string.h>
+#include <linux/wait.h>
+#include <asm/io.h>
+#include <linux/if.h>
+#include <asm/uaccess.h>
+#include <linux/proc_fs.h>
+#include <linux/inetdevice.h>
+#include <linux/reboot.h>
+#include <linux/ethtool.h>
+#include <linux/mii.h>
+#include <linux/in.h>
+#include <linux/if_arp.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/udp.h>
+#include <linux/crc-ccitt.h>
+#include <linux/crc32.h>
+
+#include "via-velocity.h"
+
+
+static int velocity_nics = 0;
+static int msglevel = MSG_LEVEL_INFO;
+
+
+static int velocity_mii_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd);
+static struct ethtool_ops velocity_ethtool_ops;
+
+/*
+ Define module options
+*/
+
+MODULE_AUTHOR("VIA Networking Technologies, Inc.");
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("VIA Networking Velocity Family Gigabit Ethernet Adapter Driver");
+
+#define VELOCITY_PARAM(N,D) \
+ static const int N[MAX_UNITS]=OPTION_DEFAULT;\
+ MODULE_PARM(N, "1-" __MODULE_STRING(MAX_UNITS) "i");\
+ MODULE_PARM_DESC(N, D);
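+
+/*
+ * Usage sketch (illustrative, not from the original sources): the
+ * MODULE_PARM "1-MAX_UNITS i" form above takes a comma separated list
+ * of per-adapter values at module load time, e.g.
+ *
+ *     modprobe via-velocity RxDescriptors=128,64 TxDescriptors=64
+ *
+ * Slots left unset keep OPTION_DEFAULT which, assuming it fills each
+ * entry with -1, velocity_set_int_opt()/velocity_set_bool_opt() map to
+ * the compiled-in *_DEF default.
+ */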
+
+#define RX_DESC_MIN 64
+#define RX_DESC_MAX 255
+#define RX_DESC_DEF 64
+VELOCITY_PARAM(RxDescriptors, "Number of receive descriptors");
+
+#define TX_DESC_MIN 16
+#define TX_DESC_MAX 256
+#define TX_DESC_DEF 64
+VELOCITY_PARAM(TxDescriptors, "Number of transmit descriptors");
+
+#define VLAN_ID_MIN 0
+#define VLAN_ID_MAX 4095
+#define VLAN_ID_DEF 0
+/* VID_setting[] is used for setting the VID of the NIC.
+ 0: default VID.
+ 1-4094: other VIDs.
+*/
+VELOCITY_PARAM(VID_setting, "802.1Q VLAN ID");
+
+#define RX_THRESH_MIN 0
+#define RX_THRESH_MAX 3
+#define RX_THRESH_DEF 0
+/* rx_thresh[] is used for controlling the receive fifo threshold.
+ 0: the rxfifo threshold is 128 bytes.
+ 1: the rxfifo threshold is 512 bytes.
+ 2: the rxfifo threshold is 1024 bytes.
+ 3: the rxfifo threshold is store & forward.
+*/
+VELOCITY_PARAM(rx_thresh, "Receive fifo threshold");
+
+#define DMA_LENGTH_MIN 0
+#define DMA_LENGTH_MAX 7
+#define DMA_LENGTH_DEF 0
+
+/* DMA_length[] is used for controlling the DMA length
+ 0: 8 DWORDs
+ 1: 16 DWORDs
+ 2: 32 DWORDs
+ 3: 64 DWORDs
+ 4: 128 DWORDs
+ 5: 256 DWORDs
+ 6: SF (store & forward, flush till empty)
+ 7: SF (store & forward, flush till empty)
+*/
+VELOCITY_PARAM(DMA_length, "DMA length");
+
+#define TAGGING_DEF 0
+/* enable_tagging[] is used for enabling 802.1Q VID tagging.
+ 0: disable VID setting (default).
+ 1: enable VID setting.
+*/
+VELOCITY_PARAM(enable_tagging, "Enable 802.1Q tagging");
+
+#define IP_ALIG_DEF 0
+/* IP_byte_align[] is used for IP header DWORD byte alignment
+ 0: the IP header won't be DWORD byte aligned. (Default)
+ 1: the IP header will be DWORD byte aligned.
+ In some environments the IP header must be DWORD byte aligned,
+ or the packet will be dropped when we receive it (e.g. IPVS).
+*/
+VELOCITY_PARAM(IP_byte_align, "Enable IP header dword aligned");
+
+#define TX_CSUM_DEF 1
+/* txcsum_offload[] is used for setting the checksum offload ability of the NIC.
+ (We only support RX checksum offload now)
+ 0: disable checksum offload.
+ 1: enable checksum offload. (Default)
+*/
+VELOCITY_PARAM(txcsum_offload, "Enable transmit packet checksum offload");
+
+#define FLOW_CNTL_DEF 1
+#define FLOW_CNTL_MIN 1
+#define FLOW_CNTL_MAX 5
+
+/* flow_control[] is used for setting the flow control ability of the NIC.
+ 1: hardware default - AUTO (default). Use the hardware default value in ANAR.
+ 2: enable TX flow control.
+ 3: enable RX flow control.
+ 4: enable RX/TX flow control.
+ 5: disable flow control.
+*/
+VELOCITY_PARAM(flow_control, "Enable flow control ability");
+
+#define MED_LNK_DEF 0
+#define MED_LNK_MIN 0
+#define MED_LNK_MAX 4
+/* speed_duplex[] is used for setting the speed and duplex mode of the NIC.
+ 0: autonegotiate both speed and duplex mode
+ 1: 100Mbps half duplex mode
+ 2: 100Mbps full duplex mode
+ 3: 10Mbps half duplex mode
+ 4: 10Mbps full duplex mode
+
+ Note:
+ if the EEPROM has been set to force mode, this option is ignored
+ by the driver.
+*/
+VELOCITY_PARAM(speed_duplex, "Setting the speed and duplex mode");
+
+#define VAL_PKT_LEN_DEF 0
+/* ValPktLen[] is used for controlling how frames with an invalid
+ layer 2 length are handled.
+ 0: Receive frames with invalid layer 2 length (Default)
+ 1: Drop frames with invalid layer 2 length
+*/
+VELOCITY_PARAM(ValPktLen, "Receive or drop invalid 802.3 frames");
+
+#define WOL_OPT_DEF 0
+#define WOL_OPT_MIN 0
+#define WOL_OPT_MAX 7
+/* wol_opts[] is used for controlling wake on lan behavior.
+ 0: Wake up if a magic packet is received. (Default)
+ 1: Wake up if link status changes.
+ 2: Wake up if an arp packet is received.
+ 4: Wake up if any unicast packet is received.
+ These values can be summed to enable more than one option.
+*/
+VELOCITY_PARAM(wol_opts, "Wake On Lan options");
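+
+/*
+ * Illustrative example: the wake options are additive, so loading with
+ * wol_opts=3 arms both the link status (1) and arp (2) wake triggers.
+ */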
+
+#define INT_WORKS_DEF 20
+#define INT_WORKS_MIN 10
+#define INT_WORKS_MAX 64
+
+VELOCITY_PARAM(int_works, "Number of packets handled per interrupt service");
+
+static int rx_copybreak = 200;
+MODULE_PARM(rx_copybreak, "i");
+MODULE_PARM_DESC(rx_copybreak, "Copy breakpoint for copy-only-tiny-frames");
+
+static void velocity_init_info(struct pci_dev *pdev, struct velocity_info *vptr, struct velocity_info_tbl *info);
+static int velocity_get_pci_info(struct velocity_info *, struct pci_dev *pdev);
+static void velocity_print_info(struct velocity_info *vptr);
+static int velocity_open(struct net_device *dev);
+static int velocity_change_mtu(struct net_device *dev, int mtu);
+static int velocity_xmit(struct sk_buff *skb, struct net_device *dev);
+static int velocity_intr(int irq, void *dev_instance, struct pt_regs *regs);
+static void velocity_set_multi(struct net_device *dev);
+static struct net_device_stats *velocity_get_stats(struct net_device *dev);
+static int velocity_ioctl(struct net_device *dev, struct ifreq *rq, int cmd);
+static int velocity_close(struct net_device *dev);
+static int velocity_receive_frame(struct velocity_info *, int idx);
+static int velocity_alloc_rx_buf(struct velocity_info *, int idx);
+static void velocity_free_rd_ring(struct velocity_info *vptr);
+static void velocity_free_tx_buf(struct velocity_info *vptr, struct velocity_td_info *);
+static int velocity_soft_reset(struct velocity_info *vptr);
+static void mii_init(struct velocity_info *vptr, u32 mii_status);
+static u32 velocity_get_opt_media_mode(struct velocity_info *vptr);
+static void velocity_print_link_status(struct velocity_info *vptr);
+static void safe_disable_mii_autopoll(struct mac_regs __iomem * regs);
+static void velocity_shutdown(struct velocity_info *vptr);
+static void enable_flow_control_ability(struct velocity_info *vptr);
+static void enable_mii_autopoll(struct mac_regs __iomem * regs);
+static int velocity_mii_read(struct mac_regs __iomem *, u8 byIdx, u16 * pdata);
+static int velocity_mii_write(struct mac_regs __iomem *, u8 byMiiAddr, u16 data);
+static u32 mii_check_media_mode(struct mac_regs __iomem * regs);
+static u32 check_connection_type(struct mac_regs __iomem * regs);
+static int velocity_set_media_mode(struct velocity_info *vptr, u32 mii_status);
+
+#ifdef CONFIG_PM
+
+static int velocity_suspend(struct pci_dev *pdev, u32 state);
+static int velocity_resume(struct pci_dev *pdev);
+
+static int velocity_netdev_event(struct notifier_block *nb, unsigned long notification, void *ptr);
+
+static struct notifier_block velocity_inetaddr_notifier = {
+ .notifier_call = velocity_netdev_event,
+};
+
+static spinlock_t velocity_dev_list_lock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(velocity_dev_list);
+
+static void velocity_register_notifier(void)
+{
+ register_inetaddr_notifier(&velocity_inetaddr_notifier);
+}
+
+static void velocity_unregister_notifier(void)
+{
+ unregister_inetaddr_notifier(&velocity_inetaddr_notifier);
+}
+
+#else /* CONFIG_PM */
+
+#define velocity_register_notifier() do {} while (0)
+#define velocity_unregister_notifier() do {} while (0)
+
+#endif /* !CONFIG_PM */
+
+/*
+ * Internal board variants. At the moment we have only one
+ */
+
+static struct velocity_info_tbl chip_info_table[] = {
+ {CHIP_TYPE_VT6110, "VIA Networking Velocity Family Gigabit Ethernet Adapter", 256, 1, 0x00FFFFFFUL},
+ {0, NULL}
+};
+
+/*
+ * Describe the PCI device identifiers that we support in this
+ * device driver. Used for hotplug autoloading.
+ */
+
+static struct pci_device_id velocity_id_table[] __devinitdata = {
+ {PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_612X,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, (unsigned long) chip_info_table},
+ {0, }
+};
+
+MODULE_DEVICE_TABLE(pci, velocity_id_table);
+
+/**
+ * get_chip_name - identifier to name
+ * @id: chip identifier
+ *
+ * Given a chip identifier return a suitable description. Returns
+ * a pointer to a static string valid while the driver is loaded.
+ */
+
+static char __devinit *get_chip_name(enum chip_type chip_id)
+{
+ int i;
+ for (i = 0; chip_info_table[i].name != NULL; i++)
+ if (chip_info_table[i].chip_id == chip_id)
+ break;
+ return chip_info_table[i].name;
+}
+
+/**
+ * velocity_remove1 - device unplug
+ * @pdev: PCI device being removed
+ *
+ * Device unload callback. Called on an unplug or on module
+ * unload for each active device that is present. Disconnects
+ * the device from the network layer and frees all the resources
+ */
+
+static void __devexit velocity_remove1(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct velocity_info *vptr = dev->priv;
+
+#ifdef CONFIG_PM
+ unsigned long flags;
+
+ spin_lock_irqsave(&velocity_dev_list_lock, flags);
+ if (!list_empty(&velocity_dev_list))
+ list_del(&vptr->list);
+ spin_unlock_irqrestore(&velocity_dev_list_lock, flags);
+#endif
+ unregister_netdev(dev);
+ iounmap(vptr->mac_regs);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ free_netdev(dev);
+
+ velocity_nics--;
+}
+
+/**
+ * velocity_set_int_opt - parser for integer options
+ * @opt: pointer to option value
+ * @val: value the user requested (or -1 for default)
+ * @min: lowest value allowed
+ * @max: highest value allowed
+ * @def: default value
+ * @name: property name
+ * @devname: device name
+ *
+ * Set an integer property in the module options. This function does
+ * all the verification and checking as well as reporting so that
+ * we don't duplicate code for each option.
+ */
+
+static void __devinit velocity_set_int_opt(int *opt, int val, int min, int max, int def, char *name, char *devname)
+{
+ if (val == -1)
+ *opt = def;
+ else if (val < min || val > max) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: the value of parameter %s is invalid, the valid range is (%d-%d)\n",
+ devname, name, min, max);
+ *opt = def;
+ } else {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_INFO "%s: set value of parameter %s to %d\n",
+ devname, name, val);
+ *opt = val;
+ }
+}
+
+/**
+ * velocity_set_bool_opt - parser for boolean options
+ * @opt: pointer to option value
+ * @val: value the user requested (or -1 for default)
+ * @def: default value (yes/no)
+ * @flag: numeric value to set for true.
+ * @name: property name
+ * @devname: device name
+ *
+ * Set a boolean property in the module options. This function does
+ * all the verification and checking as well as reporting so that
+ * we don't duplicate code for each option.
+ */
+
+static void __devinit velocity_set_bool_opt(u32 * opt, int val, int def, u32 flag, char *name, char *devname)
+{
+ (*opt) &= (~flag);
+ if (val == -1)
+ *opt |= (def ? flag : 0);
+ else if (val < 0 || val > 1) {
+ printk(KERN_NOTICE "%s: the value of parameter %s is invalid, the valid range is (0-1)\n",
+ devname, name);
+ *opt |= (def ? flag : 0);
+ } else {
+ printk(KERN_INFO "%s: set parameter %s to %s\n",
+ devname, name, val ? "TRUE" : "FALSE");
+ *opt |= (val ? flag : 0);
+ }
+}
+
+/**
+ * velocity_get_options - set options on device
+ * @opts: option structure for the device
+ * @index: index of option to use in module options array
+ * @devname: device name
+ *
+ * Turn the module and command options into a single structure
+ * for the current device
+ */
+
+static void __devinit velocity_get_options(struct velocity_opt *opts, int index, char *devname)
+{
+
+ velocity_set_int_opt(&opts->rx_thresh, rx_thresh[index], RX_THRESH_MIN, RX_THRESH_MAX, RX_THRESH_DEF, "rx_thresh", devname);
+ velocity_set_int_opt(&opts->DMA_length, DMA_length[index], DMA_LENGTH_MIN, DMA_LENGTH_MAX, DMA_LENGTH_DEF, "DMA_length", devname);
+ velocity_set_int_opt(&opts->numrx, RxDescriptors[index], RX_DESC_MIN, RX_DESC_MAX, RX_DESC_DEF, "RxDescriptors", devname);
+ velocity_set_int_opt(&opts->numtx, TxDescriptors[index], TX_DESC_MIN, TX_DESC_MAX, TX_DESC_DEF, "TxDescriptors", devname);
+ velocity_set_int_opt(&opts->vid, VID_setting[index], VLAN_ID_MIN, VLAN_ID_MAX, VLAN_ID_DEF, "VID_setting", devname);
+ velocity_set_bool_opt(&opts->flags, enable_tagging[index], TAGGING_DEF, VELOCITY_FLAGS_TAGGING, "enable_tagging", devname);
+ velocity_set_bool_opt(&opts->flags, txcsum_offload[index], TX_CSUM_DEF, VELOCITY_FLAGS_TX_CSUM, "txcsum_offload", devname);
+ velocity_set_int_opt(&opts->flow_cntl, flow_control[index], FLOW_CNTL_MIN, FLOW_CNTL_MAX, FLOW_CNTL_DEF, "flow_control", devname);
+ velocity_set_bool_opt(&opts->flags, IP_byte_align[index], IP_ALIG_DEF, VELOCITY_FLAGS_IP_ALIGN, "IP_byte_align", devname);
+ velocity_set_bool_opt(&opts->flags, ValPktLen[index], VAL_PKT_LEN_DEF, VELOCITY_FLAGS_VAL_PKT_LEN, "ValPktLen", devname);
+ velocity_set_int_opt((int *) &opts->spd_dpx, speed_duplex[index], MED_LNK_MIN, MED_LNK_MAX, MED_LNK_DEF, "Media link mode", devname);
+ velocity_set_int_opt((int *) &opts->wol_opts, wol_opts[index], WOL_OPT_MIN, WOL_OPT_MAX, WOL_OPT_DEF, "Wake On Lan options", devname);
+ velocity_set_int_opt((int *) &opts->int_works, int_works[index], INT_WORKS_MIN, INT_WORKS_MAX, INT_WORKS_DEF, "Interrupt service works", devname);
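+ /* Round down to a multiple of 4 (e.g. a requested 66 becomes 64):
+ the hardware wants the RD count in multiples of 4. */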
+ opts->numrx = (opts->numrx & ~3);
+}
+
+/**
+ * velocity_init_cam_filter - initialise CAM
+ * @vptr: velocity to program
+ *
+ * Initialize the content addressable memory used for filters. Load
+ * appropriately according to the presence of VLAN
+ */
+
+static void velocity_init_cam_filter(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+
+ /* Turn on MCFG_PQEN, turn off MCFG_RTGOPT */
+ WORD_REG_BITS_SET(MCFG_PQEN, MCFG_RTGOPT, &regs->MCFG);
+ WORD_REG_BITS_ON(MCFG_VIDFR, &regs->MCFG);
+
+ /* Disable all CAMs */
+ memset(vptr->vCAMmask, 0, sizeof(u8) * 8);
+ memset(vptr->mCAMmask, 0, sizeof(u8) * 8);
+ mac_set_cam_mask(regs, vptr->vCAMmask, VELOCITY_VLAN_ID_CAM);
+ mac_set_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+
+ /* Enable first VCAM */
+ if (vptr->flags & VELOCITY_FLAGS_TAGGING) {
+ /* If Tagging option is enabled and VLAN ID is not zero, then
+ turn on MCFG_RTGOPT also */
+ if (vptr->options.vid != 0)
+ WORD_REG_BITS_ON(MCFG_RTGOPT, &regs->MCFG);
+
+ mac_set_cam(regs, 0, (u8 *) & (vptr->options.vid), VELOCITY_VLAN_ID_CAM);
+ vptr->vCAMmask[0] |= 1;
+ mac_set_cam_mask(regs, vptr->vCAMmask, VELOCITY_VLAN_ID_CAM);
+ } else {
+ u16 temp = 0;
+ mac_set_cam(regs, 0, (u8 *) &temp, VELOCITY_VLAN_ID_CAM);
+ temp = 1;
+ mac_set_cam_mask(regs, (u8 *) &temp, VELOCITY_VLAN_ID_CAM);
+ }
+}
+
+/**
+ * velocity_rx_reset - handle a receive reset
+ * @vptr: velocity we are resetting
+ *
+ * Reset the ownership and status for the receive ring side.
+ * Hand all the receive queue to the NIC.
+ */
+
+static void velocity_rx_reset(struct velocity_info *vptr)
+{
+
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ int i;
+
+ vptr->rd_dirty = vptr->rd_filled = vptr->rd_curr = 0;
+
+ /*
+ * Init state, all RD entries belong to the NIC
+ */
+ for (i = 0; i < vptr->options.numrx; ++i)
+ vptr->rd_ring[i].rdesc0.owner = OWNED_BY_NIC;
+
+ writew(vptr->options.numrx, &regs->RBRDU);
+ writel(vptr->rd_pool_dma, &regs->RDBaseLo);
+ writew(0, &regs->RDIdx);
+ writew(vptr->options.numrx - 1, &regs->RDCSize);
+}
+
+/**
+ * velocity_init_registers - initialise MAC registers
+ * @vptr: velocity to init
+ * @type: type of initialisation (hot or cold)
+ *
+ * Initialise the MAC on a reset or on first set up on the
+ * hardware.
+ */
+
+static void velocity_init_registers(struct velocity_info *vptr,
+ enum velocity_init_type type)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ int i, mii_status;
+
+ mac_wol_reset(regs);
+
+ switch (type) {
+ case VELOCITY_INIT_RESET:
+ case VELOCITY_INIT_WOL:
+
+ netif_stop_queue(vptr->dev);
+
+ /*
+ * Reset RX to keep the RX descriptor pointer on a 4-descriptor boundary
+ */
+ velocity_rx_reset(vptr);
+ mac_rx_queue_run(regs);
+ mac_rx_queue_wake(regs);
+
+ mii_status = velocity_get_opt_media_mode(vptr);
+ if (velocity_set_media_mode(vptr, mii_status) != VELOCITY_LINK_CHANGE) {
+ velocity_print_link_status(vptr);
+ if (!(vptr->mii_status & VELOCITY_LINK_FAIL))
+ netif_wake_queue(vptr->dev);
+ }
+
+ enable_flow_control_ability(vptr);
+
+ mac_clear_isr(regs);
+ writel(CR0_STOP, &regs->CR0Clr);
+ writel((CR0_DPOLL | CR0_TXON | CR0_RXON | CR0_STRT),
+ &regs->CR0Set);
+
+ break;
+
+ case VELOCITY_INIT_COLD:
+ default:
+ /*
+ * Do reset
+ */
+ velocity_soft_reset(vptr);
+ mdelay(5);
+
+ mac_eeprom_reload(regs);
+ for (i = 0; i < 6; i++) {
+ writeb(vptr->dev->dev_addr[i], &(regs->PAR[i]));
+ }
+ /*
+ * clear Pre_ACPI bit.
+ */
+ BYTE_REG_BITS_OFF(CFGA_PACPI, &(regs->CFGA));
+ mac_set_rx_thresh(regs, vptr->options.rx_thresh);
+ mac_set_dma_length(regs, vptr->options.DMA_length);
+
+ writeb(WOLCFG_SAM | WOLCFG_SAB, &regs->WOLCFGSet);
+ /*
+ * Back-off algorithm: use the original IEEE standard
+ */
+ BYTE_REG_BITS_SET(CFGB_OFSET, (CFGB_CRANDOM | CFGB_CAP | CFGB_MBA | CFGB_BAKOPT), &regs->CFGB);
+
+ /*
+ * Init CAM filter
+ */
+ velocity_init_cam_filter(vptr);
+
+ /*
+ * Set packet filter: Receive directed and broadcast address
+ */
+ velocity_set_multi(vptr->dev);
+
+ /*
+ * Enable MII auto-polling
+ */
+ enable_mii_autopoll(regs);
+
+ vptr->int_mask = INT_MASK_DEF;
+
+ writel(cpu_to_le32(vptr->rd_pool_dma), &regs->RDBaseLo);
+ writew(vptr->options.numrx - 1, &regs->RDCSize);
+ mac_rx_queue_run(regs);
+ mac_rx_queue_wake(regs);
+
+ writew(vptr->options.numtx - 1, &regs->TDCSize);
+
+ for (i = 0; i < vptr->num_txq; i++) {
+ writel(cpu_to_le32(vptr->td_pool_dma[i]), &(regs->TDBaseLo[i]));
+ mac_tx_queue_run(regs, i);
+ }
+
+ init_flow_control_register(vptr);
+
+ writel(CR0_STOP, &regs->CR0Clr);
+ writel((CR0_DPOLL | CR0_TXON | CR0_RXON | CR0_STRT), &regs->CR0Set);
+
+ mii_status = velocity_get_opt_media_mode(vptr);
+ netif_stop_queue(vptr->dev);
+
+ mii_init(vptr, mii_status);
+
+ if (velocity_set_media_mode(vptr, mii_status) != VELOCITY_LINK_CHANGE) {
+ velocity_print_link_status(vptr);
+ if (!(vptr->mii_status & VELOCITY_LINK_FAIL))
+ netif_wake_queue(vptr->dev);
+ }
+
+ enable_flow_control_ability(vptr);
+ mac_hw_mibs_init(regs);
+ mac_write_int_mask(vptr->int_mask, regs);
+ mac_clear_isr(regs);
+
+ }
+}
+
+/**
+ * velocity_soft_reset - soft reset
+ * @vptr: velocity to reset
+ *
+ * Kick off a soft reset of the velocity adapter and then poll
+ * until the reset sequence has completed before returning.
+ */
+
+static int velocity_soft_reset(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ int i = 0;
+
+ writel(CR0_SFRST, &regs->CR0Set);
+
+ for (i = 0; i < W_MAX_TIMEOUT; i++) {
+ udelay(5);
+ if (!DWORD_REG_BITS_IS_ON(CR0_SFRST, &regs->CR0Set))
+ break;
+ }
+
+ if (i == W_MAX_TIMEOUT) {
+ writel(CR0_FORSRST, &regs->CR0Set);
+ /* FIXME: PCI POSTING */
+ /* delay 2ms */
+ mdelay(2);
+ }
+ return 0;
+}
+
+/**
+ * velocity_found1 - set up discovered velocity card
+ * @pdev: PCI device
+ * @ent: PCI device table entry that matched
+ *
+ * Configure a discovered adapter from scratch. Return a negative
+ * errno error code on failure paths.
+ */
+
+static int __devinit velocity_found1(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int first = 1;
+ struct net_device *dev;
+ int i;
+ struct velocity_info_tbl *info = (struct velocity_info_tbl *) ent->driver_data;
+ struct velocity_info *vptr;
+ struct mac_regs __iomem * regs;
+ int ret = -ENOMEM;
+
+ if (velocity_nics >= MAX_UNITS) {
+ printk(KERN_NOTICE VELOCITY_NAME ": already found %d NICs.\n",
+ velocity_nics);
+ return -ENODEV;
+ }
+
+ dev = alloc_etherdev(sizeof(struct velocity_info));
+
+ if (dev == NULL) {
+ printk(KERN_ERR VELOCITY_NAME ": allocate net device failed.\n");
+ goto out;
+ }
+
+ /* Chain it all together */
+
+ SET_MODULE_OWNER(dev);
+ SET_NETDEV_DEV(dev, &pdev->dev);
+ vptr = dev->priv;
+
+
+ if (first) {
+ printk(KERN_INFO "%s Ver. %s\n",
+ VELOCITY_FULL_DRV_NAM, VELOCITY_VERSION);
+ printk(KERN_INFO "Copyright (c) 2002, 2003 VIA Networking Technologies, Inc.\n");
+ printk(KERN_INFO "Copyright (c) 2004 Red Hat Inc.\n");
+ first = 0;
+ }
+
+ velocity_init_info(pdev, vptr, info);
+
+ vptr->dev = dev;
+
+ dev->irq = pdev->irq;
+
+ ret = pci_enable_device(pdev);
+ if (ret < 0)
+ goto err_free_dev;
+
+ ret = velocity_get_pci_info(vptr, pdev);
+ if (ret < 0) {
+ printk(KERN_ERR VELOCITY_NAME ": Failed to find PCI device.\n");
+ goto err_disable;
+ }
+
+ ret = pci_request_regions(pdev, VELOCITY_NAME);
+ if (ret < 0) {
+ printk(KERN_ERR VELOCITY_NAME ": Failed to request PCI regions.\n");
+ goto err_disable;
+ }
+
+ regs = ioremap(vptr->memaddr, vptr->io_size);
+ if (regs == NULL) {
+ ret = -EIO;
+ goto err_release_res;
+ }
+
+ vptr->mac_regs = regs;
+
+ mac_wol_reset(regs);
+
+ dev->base_addr = vptr->ioaddr;
+
+ for (i = 0; i < 6; i++)
+ dev->dev_addr[i] = readb(&regs->PAR[i]);
+
+
+ velocity_get_options(&vptr->options, velocity_nics, dev->name);
+
+ /*
+ * Mask out the options that cannot be set on this chip
+ */
+
+ vptr->options.flags &= info->flags;
+
+ /*
+ * Enable the chip-specific capabilities
+ */
+
+ vptr->flags = vptr->options.flags | (info->flags & 0xFF000000UL);
+
+ vptr->wol_opts = vptr->options.wol_opts;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+
+ vptr->phy_id = MII_GET_PHY_ID(vptr->mac_regs);
+
+ dev->irq = pdev->irq;
+ dev->open = velocity_open;
+ dev->hard_start_xmit = velocity_xmit;
+ dev->stop = velocity_close;
+ dev->get_stats = velocity_get_stats;
+ dev->set_multicast_list = velocity_set_multi;
+ dev->do_ioctl = velocity_ioctl;
+ dev->ethtool_ops = &velocity_ethtool_ops;
+ dev->change_mtu = velocity_change_mtu;
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ dev->features |= NETIF_F_SG;
+#endif
+
+ if (vptr->flags & VELOCITY_FLAGS_TX_CSUM) {
+ dev->features |= NETIF_F_HW_CSUM;
+ }
+
+ ret = register_netdev(dev);
+ if (ret < 0)
+ goto err_iounmap;
+
+ velocity_print_info(vptr);
+ pci_set_drvdata(pdev, dev);
+
+ /* and leave the chip powered down */
+
+ pci_set_power_state(pdev, 3);
+#ifdef CONFIG_PM
+ {
+ unsigned long flags;
+
+ spin_lock_irqsave(&velocity_dev_list_lock, flags);
+ list_add(&vptr->list, &velocity_dev_list);
+ spin_unlock_irqrestore(&velocity_dev_list_lock, flags);
+ }
+#endif
+ velocity_nics++;
+out:
+ return ret;
+
+err_iounmap:
+ iounmap(regs);
+err_release_res:
+ pci_release_regions(pdev);
+err_disable:
+ pci_disable_device(pdev);
+err_free_dev:
+ free_netdev(dev);
+ goto out;
+}
+
+/**
+ * velocity_print_info - per driver data
+ * @vptr: velocity
+ *
+ * Print per-driver data as the kernel driver finds Velocity
+ * hardware
+ */
+
+static void __devinit velocity_print_info(struct velocity_info *vptr)
+{
+ struct net_device *dev = vptr->dev;
+
+ printk(KERN_INFO "%s: %s\n", dev->name, get_chip_name(vptr->chip_id));
+ printk(KERN_INFO "%s: Ethernet Address: %2.2X:%2.2X:%2.2X:%2.2X:%2.2X:%2.2X\n",
+ dev->name,
+ dev->dev_addr[0], dev->dev_addr[1], dev->dev_addr[2],
+ dev->dev_addr[3], dev->dev_addr[4], dev->dev_addr[5]);
+}
+
+/**
+ * velocity_init_info - init private data
+ * @pdev: PCI device
+ * @vptr: Velocity info
+ * @info: Board type
+ *
+ * Set up the initial velocity_info struct for the device that has been
+ * discovered.
+ */
+
+static void __devinit velocity_init_info(struct pci_dev *pdev, struct velocity_info *vptr, struct velocity_info_tbl *info)
+{
+ memset(vptr, 0, sizeof(struct velocity_info));
+
+ vptr->pdev = pdev;
+ vptr->chip_id = info->chip_id;
+ vptr->io_size = info->io_size;
+ vptr->num_txq = info->txqueue;
+ vptr->multicast_limit = MCAM_SIZE;
+ spin_lock_init(&vptr->lock);
+ INIT_LIST_HEAD(&vptr->list);
+}
+
+/**
+ * velocity_get_pci_info - retrieve PCI info for device
+ * @vptr: velocity device
+ * @pdev: PCI device it matches
+ *
+ * Retrieve the PCI configuration space data that interests us from
+ * the kernel PCI layer
+ */
+
+static int __devinit velocity_get_pci_info(struct velocity_info *vptr, struct pci_dev *pdev)
+{
+
+ if(pci_read_config_byte(pdev, PCI_REVISION_ID, &vptr->rev_id) < 0)
+ return -EIO;
+
+ pci_set_master(pdev);
+
+ vptr->ioaddr = pci_resource_start(pdev, 0);
+ vptr->memaddr = pci_resource_start(pdev, 1);
+
+ if(!(pci_resource_flags(pdev, 0) & IORESOURCE_IO))
+ {
+ printk(KERN_ERR "%s: region #0 is not an I/O resource, aborting.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+
+ if((pci_resource_flags(pdev, 1) & IORESOURCE_IO))
+ {
+ printk(KERN_ERR "%s: region #1 is an I/O resource, aborting.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+
+ if(pci_resource_len(pdev, 1) < 256)
+ {
+ printk(KERN_ERR "%s: region #1 is too small.\n",
+ pci_name(pdev));
+ return -EINVAL;
+ }
+ vptr->pdev = pdev;
+
+ return 0;
+}
+
+/**
+ * velocity_init_rings - set up DMA rings
+ * @vptr: Velocity to set up
+ *
+ * Allocate PCI mapped DMA rings for the receive and transmit layer
+ * to use.
+ */
+
+static int velocity_init_rings(struct velocity_info *vptr)
+{
+ int i;
+ unsigned int psize;
+ unsigned int tsize;
+ dma_addr_t pool_dma;
+ u8 *pool;
+
+ /*
+ * Allocate all RD/TD rings as a single pool
+ */
+
+ psize = vptr->options.numrx * sizeof(struct rx_desc) +
+ vptr->options.numtx * sizeof(struct tx_desc) * vptr->num_txq;
+
+ /*
+ * pci_alloc_consistent() fulfills the requirement for 64-byte
+ * alignment
+ */
+ pool = pci_alloc_consistent(vptr->pdev, psize, &pool_dma);
+
+ if (pool == NULL) {
+ printk(KERN_ERR "%s : DMA memory allocation failed.\n",
+ vptr->dev->name);
+ return -ENOMEM;
+ }
+
+ memset(pool, 0, psize);
+
+ vptr->rd_ring = (struct rx_desc *) pool;
+
+ vptr->rd_pool_dma = pool_dma;
+
+ tsize = vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq;
+ vptr->tx_bufs = pci_alloc_consistent(vptr->pdev, tsize,
+ &vptr->tx_bufs_dma);
+
+ if (vptr->tx_bufs == NULL) {
+ printk(KERN_ERR "%s: DMA memory allocation failed.\n",
+ vptr->dev->name);
+ pci_free_consistent(vptr->pdev, psize, pool, pool_dma);
+ return -ENOMEM;
+ }
+
+ memset(vptr->tx_bufs, 0, vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq);
+
+ i = vptr->options.numrx * sizeof(struct rx_desc);
+ pool += i;
+ pool_dma += i;
+ for (i = 0; i < vptr->num_txq; i++) {
+ int offset = vptr->options.numtx * sizeof(struct tx_desc);
+
+ vptr->td_pool_dma[i] = pool_dma;
+ vptr->td_rings[i] = (struct tx_desc *) pool;
+ pool += offset;
+ pool_dma += offset;
+ }
+ return 0;
+}
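+
+/*
+ * For reference, a sketch of the single pool allocated above, derived
+ * from the arithmetic in velocity_init_rings():
+ *
+ *   [ numrx * rx_desc | numtx * tx_desc (q0) | ... | numtx * tx_desc (qN-1) ]
+ *
+ * rd_ring points at the start of the pool and each td_rings[i] at its
+ * own TX slice.
+ */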
+
+/**
+ * velocity_free_rings - free PCI ring pointers
+ * @vptr: Velocity to free from
+ *
+ * Clean up the PCI ring buffers allocated to this velocity.
+ */
+
+static void velocity_free_rings(struct velocity_info *vptr)
+{
+ int size;
+
+ size = vptr->options.numrx * sizeof(struct rx_desc) +
+ vptr->options.numtx * sizeof(struct tx_desc) * vptr->num_txq;
+
+ pci_free_consistent(vptr->pdev, size, vptr->rd_ring, vptr->rd_pool_dma);
+
+ size = vptr->options.numtx * PKT_BUF_SZ * vptr->num_txq;
+
+ pci_free_consistent(vptr->pdev, size, vptr->tx_bufs, vptr->tx_bufs_dma);
+}
+
+static inline void velocity_give_many_rx_descs(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem *regs = vptr->mac_regs;
+ int avail, dirty, unusable;
+
+ /*
+ * The RD count handed back must be a multiple of 4 per the hardware spec
+ * (programming guide rev 1.20, p.13)
+ */
+ if (vptr->rd_filled < 4)
+ return;
+
+ wmb();
+
+ unusable = vptr->rd_filled & 0x0003;
+ dirty = vptr->rd_dirty - unusable;
+ for (avail = vptr->rd_filled & 0xfffc; avail; avail--) {
+ dirty = (dirty > 0) ? dirty - 1 : vptr->options.numrx - 1;
+ vptr->rd_ring[dirty].rdesc0.owner = OWNED_BY_NIC;
+ }
+
+ writew(vptr->rd_filled & 0xfffc, &regs->RBRDU);
+ vptr->rd_filled = unusable;
+}
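+
+/*
+ * Worked example of the multiple-of-4 rule above: with rd_filled == 7,
+ * 7 & 0x0003 == 3 descriptors are held back as "unusable", ownership
+ * of the remaining 7 & 0xfffc == 4 descriptors is flipped back to the
+ * NIC, and RBRDU is credited with 4.
+ */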
+
+static int velocity_rx_refill(struct velocity_info *vptr)
+{
+ int dirty = vptr->rd_dirty, done = 0, ret = 0;
+
+ do {
+ struct rx_desc *rd = vptr->rd_ring + dirty;
+
+ /* Fine for an all zero Rx desc at init time as well */
+ if (rd->rdesc0.owner == OWNED_BY_NIC)
+ break;
+
+ if (!vptr->rd_info[dirty].skb) {
+ ret = velocity_alloc_rx_buf(vptr, dirty);
+ if (ret < 0)
+ break;
+ }
+ done++;
+ dirty = (dirty < vptr->options.numrx - 1) ? dirty + 1 : 0;
+ } while (dirty != vptr->rd_curr);
+
+ if (done) {
+ vptr->rd_dirty = dirty;
+ vptr->rd_filled += done;
+ velocity_give_many_rx_descs(vptr);
+ }
+
+ return ret;
+}
+
+/**
+ * velocity_init_rd_ring - set up receive ring
+ * @vptr: velocity to configure
+ *
+ * Allocate and set up the receive buffers for each ring slot and
+ * assign them to the network adapter.
+ */
+
+static int velocity_init_rd_ring(struct velocity_info *vptr)
+{
+ int ret = -ENOMEM;
+ unsigned int rsize = sizeof(struct velocity_rd_info) *
+ vptr->options.numrx;
+
+ vptr->rd_info = kmalloc(rsize, GFP_KERNEL);
+ if(vptr->rd_info == NULL)
+ goto out;
+ memset(vptr->rd_info, 0, rsize);
+
+ vptr->rd_filled = vptr->rd_dirty = vptr->rd_curr = 0;
+
+ ret = velocity_rx_refill(vptr);
+ if (ret < 0) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_ERR
+ "%s: failed to allocate RX buffer.\n", vptr->dev->name);
+ velocity_free_rd_ring(vptr);
+ }
+out:
+ return ret;
+}
+
+/**
+ * velocity_free_rd_ring - free receive ring
+ * @vptr: velocity to clean up
+ *
+ * Free the receive buffers for each ring slot and any
+ * attached socket buffers that need to go away.
+ */
+
+static void velocity_free_rd_ring(struct velocity_info *vptr)
+{
+ int i;
+
+ if (vptr->rd_info == NULL)
+ return;
+
+ for (i = 0; i < vptr->options.numrx; i++) {
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[i]);
+
+ if (!rd_info->skb)
+ continue;
+ pci_unmap_single(vptr->pdev, rd_info->skb_dma, vptr->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+ rd_info->skb_dma = (dma_addr_t) NULL;
+
+ dev_kfree_skb(rd_info->skb);
+ rd_info->skb = NULL;
+ }
+
+ kfree(vptr->rd_info);
+ vptr->rd_info = NULL;
+}
+
+/**
+ * velocity_init_td_ring - set up transmit ring
+ * @vptr: velocity
+ *
+ * Set up the transmit ring and chain the ring pointers together.
+ * Returns zero on success or a negative posix errno code for
+ * failure.
+ */
+
+static int velocity_init_td_ring(struct velocity_info *vptr)
+{
+ int i, j;
+ dma_addr_t curr;
+ struct tx_desc *td;
+ struct velocity_td_info *td_info;
+ unsigned int tsize = sizeof(struct velocity_td_info) *
+ vptr->options.numtx;
+
+ /* Init the TD ring entries */
+ for (j = 0; j < vptr->num_txq; j++) {
+ curr = vptr->td_pool_dma[j];
+
+ vptr->td_infos[j] = kmalloc(tsize, GFP_KERNEL);
+ if(vptr->td_infos[j] == NULL)
+ {
+ while(--j >= 0)
+ kfree(vptr->td_infos[j]);
+ return -ENOMEM;
+ }
+ memset(vptr->td_infos[j], 0, tsize);
+
+ for (i = 0; i < vptr->options.numtx; i++, curr += sizeof(struct tx_desc)) {
+ td = &(vptr->td_rings[j][i]);
+ td_info = &(vptr->td_infos[j][i]);
+ td_info->buf = vptr->tx_bufs +
+ (j * vptr->options.numtx + i) * PKT_BUF_SZ;
+ td_info->buf_dma = vptr->tx_bufs_dma +
+ (j * vptr->options.numtx + i) * PKT_BUF_SZ;
+ }
+ vptr->td_tail[j] = vptr->td_curr[j] = vptr->td_used[j] = 0;
+ }
+ return 0;
+}
+
+/*
+ * FIXME: could we merge this with velocity_free_tx_buf ?
+ */
+
+static void velocity_free_td_ring_entry(struct velocity_info *vptr,
+ int q, int n)
+{
+ struct velocity_td_info * td_info = &(vptr->td_infos[q][n]);
+ int i;
+
+ if (td_info == NULL)
+ return;
+
+ if (td_info->skb) {
+ for (i = 0; i < td_info->nskb_dma; i++)
+ {
+ if (td_info->skb_dma[i]) {
+ pci_unmap_single(vptr->pdev, td_info->skb_dma[i],
+ td_info->skb->len, PCI_DMA_TODEVICE);
+ td_info->skb_dma[i] = (dma_addr_t) NULL;
+ }
+ }
+ dev_kfree_skb(td_info->skb);
+ td_info->skb = NULL;
+ }
+}
+
+/**
+ * velocity_free_td_ring - free td ring
+ * @vptr: velocity
+ *
+ * Free up the transmit ring for this particular velocity adapter.
+ * We free the ring contents but not the ring itself.
+ */
+
+static void velocity_free_td_ring(struct velocity_info *vptr)
+{
+ int i, j;
+
+ for (j = 0; j < vptr->num_txq; j++) {
+ if (vptr->td_infos[j] == NULL)
+ continue;
+ for (i = 0; i < vptr->options.numtx; i++) {
+ velocity_free_td_ring_entry(vptr, j, i);
+
+ }
+ if (vptr->td_infos[j]) {
+ kfree(vptr->td_infos[j]);
+ vptr->td_infos[j] = NULL;
+ }
+ }
+}
+
+/**
+ * velocity_rx_srv - service RX interrupt
+ * @vptr: velocity
+ * @status: adapter status (unused)
+ *
+ * Walk the receive ring of the velocity adapter and remove
+ * any received packets from the receive queue. Hand the ring
+ * slots back to the adapter for reuse.
+ */
+
+static int velocity_rx_srv(struct velocity_info *vptr, int status)
+{
+ struct net_device_stats *stats = &vptr->stats;
+ int rd_curr = vptr->rd_curr;
+ int works = 0;
+
+ do {
+ struct rx_desc *rd = vptr->rd_ring + rd_curr;
+
+ if (!vptr->rd_info[rd_curr].skb)
+ break;
+
+ if (rd->rdesc0.owner == OWNED_BY_NIC)
+ break;
+
+ rmb();
+
+ /*
+ * Don't drop CE or RL error frame although RXOK is off
+ */
+ if ((rd->rdesc0.RSR & RSR_RXOK) || (!(rd->rdesc0.RSR & RSR_RXOK) && (rd->rdesc0.RSR & (RSR_CE | RSR_RL)))) {
+ if (velocity_receive_frame(vptr, rd_curr) < 0)
+ stats->rx_dropped++;
+ } else {
+ if (rd->rdesc0.RSR & RSR_CRC)
+ stats->rx_crc_errors++;
+ if (rd->rdesc0.RSR & RSR_FAE)
+ stats->rx_frame_errors++;
+
+ stats->rx_dropped++;
+ }
+
+ rd->inten = 1;
+
+ vptr->dev->last_rx = jiffies;
+
+ rd_curr++;
+ if (rd_curr >= vptr->options.numrx)
+ rd_curr = 0;
+ } while (++works <= 15);
+
+ vptr->rd_curr = rd_curr;
+
+ if (works > 0 && velocity_rx_refill(vptr) < 0) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_ERR
+ "%s: rx buf allocation failure\n", vptr->dev->name);
+ }
+
+ VAR_USED(stats);
+ return works;
+}
+
+/**
+ * velocity_rx_csum - checksum process
+ * @rd: receive packet descriptor
+ * @skb: network layer packet buffer
+ *
+ * Process the status bits for the received packet and determine
+ * if the checksum was computed and verified by the hardware
+ */
+
+static inline void velocity_rx_csum(struct rx_desc *rd, struct sk_buff *skb)
+{
+ skb->ip_summed = CHECKSUM_NONE;
+
+ if (rd->rdesc1.CSM & CSM_IPKT) {
+ if (rd->rdesc1.CSM & CSM_IPOK) {
+ if ((rd->rdesc1.CSM & CSM_TCPKT) ||
+ (rd->rdesc1.CSM & CSM_UDPKT)) {
+ if (!(rd->rdesc1.CSM & CSM_TUPOK)) {
+ return;
+ }
+ }
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ }
+ }
+}
+
+/**
+ * velocity_rx_copy - in place Rx copy for small packets
+ * @rx_skb: network layer packet buffer candidate
+ * @pkt_size: received data size
+ * @vptr: velocity adapter
+ *
+ * Replace the current skb that is scheduled for Rx processing by a
+ * shorter, immediately allocated skb, if the received packet is small
+ * enough. This function returns a negative value if the received
+ * packet is too big or if memory is exhausted.
+ */
+static inline int velocity_rx_copy(struct sk_buff **rx_skb, int pkt_size,
+ struct velocity_info *vptr)
+{
+ int ret = -1;
+
+ if (pkt_size < rx_copybreak) {
+ struct sk_buff *new_skb;
+
+ new_skb = dev_alloc_skb(pkt_size + 2);
+ if (new_skb) {
+ new_skb->dev = vptr->dev;
+ new_skb->ip_summed = rx_skb[0]->ip_summed;
+
+ if (vptr->flags & VELOCITY_FLAGS_IP_ALIGN)
+ skb_reserve(new_skb, 2);
+
+ memcpy(new_skb->data, rx_skb[0]->tail, pkt_size);
+ *rx_skb = new_skb;
+ ret = 0;
+ }
+
+ }
+ return ret;
+}
+
+/**
+ * velocity_iph_realign - IP header alignment
+ * @vptr: velocity we are handling
+ * @skb: network layer packet buffer
+ * @pkt_size: received data size
+ *
+ * Align IP header on a 2 bytes boundary. This behavior can be
+ * configured by the user.
+ */
+static inline void velocity_iph_realign(struct velocity_info *vptr,
+ struct sk_buff *skb, int pkt_size)
+{
+ /* FIXME - memmove ? */
+ if (vptr->flags & VELOCITY_FLAGS_IP_ALIGN) {
+ int i;
+
+ for (i = pkt_size; i >= 0; i--)
+ *(skb->data + i + 2) = *(skb->data + i);
+ skb_reserve(skb, 2);
+ }
+}
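+
+/*
+ * A concrete reading of the shuffle above: the Ethernet header is 14
+ * bytes, so on a DWORD aligned buffer the IP header starts on a 2-byte
+ * boundary; copying the frame 2 bytes further into the buffer and
+ * reserving those 2 bytes leaves the IP header DWORD aligned.
+ */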
+
+/**
+ * velocity_receive_frame - received packet processor
+ * @vptr: velocity we are handling
+ * @idx: ring index
+ *
+ * A packet has arrived. We process the packet and if appropriate
+ * pass the frame up the network stack
+ */
+
+static int velocity_receive_frame(struct velocity_info *vptr, int idx)
+{
+ void (*pci_action)(struct pci_dev *, dma_addr_t, size_t, int);
+ struct net_device_stats *stats = &vptr->stats;
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[idx]);
+ struct rx_desc *rd = &(vptr->rd_ring[idx]);
+ int pkt_len = rd->rdesc0.len;
+ struct sk_buff *skb;
+
+ if (rd->rdesc0.RSR & (RSR_STP | RSR_EDP)) {
+ VELOCITY_PRT(MSG_LEVEL_VERBOSE, KERN_ERR " %s : the received frame spans multiple RDs.\n", vptr->dev->name);
+ stats->rx_length_errors++;
+ return -EINVAL;
+ }
+
+ if (rd->rdesc0.RSR & RSR_MAR)
+ vptr->stats.multicast++;
+
+ skb = rd_info->skb;
+ skb->dev = vptr->dev;
+
+ pci_dma_sync_single_for_cpu(vptr->pdev, rd_info->skb_dma,
+ vptr->rx_buf_sz, PCI_DMA_FROMDEVICE);
+
+ /*
+ * Drop frame not meeting IEEE 802.3
+ */
+
+ if (vptr->flags & VELOCITY_FLAGS_VAL_PKT_LEN) {
+ if (rd->rdesc0.RSR & RSR_RL) {
+ stats->rx_length_errors++;
+ return -EINVAL;
+ }
+ }
+
+ pci_action = pci_dma_sync_single_for_device;
+
+ velocity_rx_csum(rd, skb);
+
+ if (velocity_rx_copy(&skb, pkt_len, vptr) < 0) {
+ velocity_iph_realign(vptr, skb, pkt_len);
+ pci_action = pci_unmap_single;
+ rd_info->skb = NULL;
+ }
+
+ pci_action(vptr->pdev, rd_info->skb_dma, vptr->rx_buf_sz,
+ PCI_DMA_FROMDEVICE);
+
+ skb_put(skb, pkt_len - 4);
+ skb->protocol = eth_type_trans(skb, skb->dev);
+
+ stats->rx_bytes += pkt_len;
+ netif_rx(skb);
+
+ return 0;
+}
+
+/**
+ * velocity_alloc_rx_buf - allocate aligned receive buffer
+ * @vptr: velocity
+ * @idx: ring index
+ *
+ * Allocate a new full sized buffer for the reception of a frame and
+ * map it into PCI space for the hardware to use. The hardware
+ * requires *64* byte alignment of the buffer which makes life
+ * less fun than would be ideal.
+ */
+
+static int velocity_alloc_rx_buf(struct velocity_info *vptr, int idx)
+{
+ struct rx_desc *rd = &(vptr->rd_ring[idx]);
+ struct velocity_rd_info *rd_info = &(vptr->rd_info[idx]);
+
+ rd_info->skb = dev_alloc_skb(vptr->rx_buf_sz + 64);
+ if (rd_info->skb == NULL)
+ return -ENOMEM;
+
+ /*
+ * Do the gymnastics to get the buffer head for data at
+ * 64-byte alignment.
+ */
+ skb_reserve(rd_info->skb, (unsigned long) rd_info->skb->tail & 63);
+ rd_info->skb->dev = vptr->dev;
+ rd_info->skb_dma = pci_map_single(vptr->pdev, rd_info->skb->tail, vptr->rx_buf_sz, PCI_DMA_FROMDEVICE);
+
+ /*
+ * Fill in the descriptor to match
+ */
+
+ *((u32 *) & (rd->rdesc0)) = 0;
+ rd->len = cpu_to_le32(vptr->rx_buf_sz);
+ rd->inten = 1;
+ rd->pa_low = cpu_to_le32(rd_info->skb_dma);
+ rd->pa_high = 0;
+ return 0;
+}
+
+/**
+ * velocity_tx_srv - transmit interrupt service
+ * @vptr: velocity
+ * @status: adapter status (unused)
+ *
+ * Scan the queues looking for transmitted packets that
+ * we can complete and clean up. Update any statistics as
+ * necessary.
+ */
+
+static int velocity_tx_srv(struct velocity_info *vptr, u32 status)
+{
+ struct tx_desc *td;
+ int qnum;
+ int full = 0;
+ int idx;
+ int works = 0;
+ struct velocity_td_info *tdinfo;
+ struct net_device_stats *stats = &vptr->stats;
+
+ for (qnum = 0; qnum < vptr->num_txq; qnum++) {
+ for (idx = vptr->td_tail[qnum]; vptr->td_used[qnum] > 0;
+ idx = (idx + 1) % vptr->options.numtx) {
+
+ /*
+ * Get Tx Descriptor
+ */
+ td = &(vptr->td_rings[qnum][idx]);
+ tdinfo = &(vptr->td_infos[qnum][idx]);
+
+ if (td->tdesc0.owner == OWNED_BY_NIC)
+ break;
+
+ if ((works++ > 15))
+ break;
+
+ if (td->tdesc0.TSR & TSR0_TERR) {
+ stats->tx_errors++;
+ stats->tx_dropped++;
+ if (td->tdesc0.TSR & TSR0_CDH)
+ stats->tx_heartbeat_errors++;
+ if (td->tdesc0.TSR & TSR0_CRS)
+ stats->tx_carrier_errors++;
+ if (td->tdesc0.TSR & TSR0_ABT)
+ stats->tx_aborted_errors++;
+ if (td->tdesc0.TSR & TSR0_OWC)
+ stats->tx_window_errors++;
+ } else {
+ stats->tx_packets++;
+ stats->tx_bytes += tdinfo->skb->len;
+ }
+ velocity_free_tx_buf(vptr, tdinfo);
+ vptr->td_used[qnum]--;
+ }
+ vptr->td_tail[qnum] = idx;
+
+ if (AVAIL_TD(vptr, qnum) < 1) {
+ full = 1;
+ }
+ }
+ /*
+ * Look to see if we should kick the transmit network
+ * layer for more work.
+ */
+ if (netif_queue_stopped(vptr->dev) && (full == 0)
+ && (!(vptr->mii_status & VELOCITY_LINK_FAIL))) {
+ netif_wake_queue(vptr->dev);
+ }
+ return works;
+}
+
+/**
+ * velocity_print_link_status - link status reporting
+ * @vptr: velocity to report on
+ *
+ * Turn the link status of the velocity card into a kernel log
+ * description of the new link state, detailing speed and duplex
+ * status
+ */
+
+static void velocity_print_link_status(struct velocity_info *vptr)
+{
+
+ if (vptr->mii_status & VELOCITY_LINK_FAIL) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: failed to detect cable link\n", vptr->dev->name);
+ } else if (vptr->options.spd_dpx == SPD_DPX_AUTO) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: Link auto-negotiation", vptr->dev->name);
+
+ if (vptr->mii_status & VELOCITY_SPEED_1000)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 1000M bps");
+ else if (vptr->mii_status & VELOCITY_SPEED_100)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps");
+ else
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps");
+
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ VELOCITY_PRT(MSG_LEVEL_INFO, " full duplex\n");
+ else
+ VELOCITY_PRT(MSG_LEVEL_INFO, " half duplex\n");
+ } else {
+ VELOCITY_PRT(MSG_LEVEL_INFO, KERN_NOTICE "%s: Link forced", vptr->dev->name);
+ switch (vptr->options.spd_dpx) {
+ case SPD_DPX_100_HALF:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps half duplex\n");
+ break;
+ case SPD_DPX_100_FULL:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 100M bps full duplex\n");
+ break;
+ case SPD_DPX_10_HALF:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps half duplex\n");
+ break;
+ case SPD_DPX_10_FULL:
+ VELOCITY_PRT(MSG_LEVEL_INFO, " speed 10M bps full duplex\n");
+ break;
+ default:
+ break;
+ }
+ }
+}
+
+/**
+ * velocity_error - handle error from controller
+ * @vptr: velocity
+ * @status: card status
+ *
+ * Process an error report from the hardware and attempt to recover
+ * the card itself. At the moment we cannot recover from some
+ * theoretically impossible errors but this could be fixed using
+ * the pci_device_failed logic to bounce the hardware
+ *
+ */
+
+static void velocity_error(struct velocity_info *vptr, int status)
+{
+
+ if (status & ISR_TXSTLI) {
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+
+ printk(KERN_ERR "TD structure errror TDindex=%hx\n", readw(®s->TDIdx[0]));
+ BYTE_REG_BITS_ON(TXESR_TDSTR, ®s->TXESR);
+ writew(TRDCSR_RUN, ®s->TDCSRClr);
+ netif_stop_queue(vptr->dev);
+
+ /* FIXME: port over the pci_device_failed code and use it
+ here */
+ }
+
+ if (status & ISR_SRCI) {
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ int linked;
+
+ if (vptr->options.spd_dpx == SPD_DPX_AUTO) {
+ vptr->mii_status = check_connection_type(regs);
+
+ /*
+ * If it is a 3119, disable frame bursting in
+ * halfduplex mode and enable it in fullduplex
+ * mode
+ */
+ if (vptr->rev_id < REV_ID_VT3216_A0) {
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ BYTE_REG_BITS_ON(TCR_TB2BDIS, &regs->TCR);
+ else
+ BYTE_REG_BITS_OFF(TCR_TB2BDIS, &regs->TCR);
+ }
+ /*
+ * Only enable CD heart beat counter in 10HD mode
+ */
+ if (!(vptr->mii_status & VELOCITY_DUPLEX_FULL) && (vptr->mii_status & VELOCITY_SPEED_10)) {
+ BYTE_REG_BITS_OFF(TESTCFG_HBDIS, &regs->TESTCFG);
+ } else {
+ BYTE_REG_BITS_ON(TESTCFG_HBDIS, &regs->TESTCFG);
+ }
+ }
+ /*
+ * Get link status from PHYSR0
+ */
+ linked = readb(&regs->PHYSR0) & PHYSR0_LINKGD;
+
+ if (linked) {
+ vptr->mii_status &= ~VELOCITY_LINK_FAIL;
+ } else {
+ vptr->mii_status |= VELOCITY_LINK_FAIL;
+ }
+
+ velocity_print_link_status(vptr);
+ enable_flow_control_ability(vptr);
+
+ /*
+ * Re-enable auto-polling because SRCI will disable
+ * auto-polling
+ */
+
+ enable_mii_autopoll(regs);
+
+ if (vptr->mii_status & VELOCITY_LINK_FAIL)
+ netif_stop_queue(vptr->dev);
+ else
+ netif_wake_queue(vptr->dev);
+
+ }
+ if (status & ISR_MIBFI)
+ velocity_update_hw_mibs(vptr);
+ if (status & ISR_LSTEI)
+ mac_rx_queue_wake(vptr->mac_regs);
+}
+
+/**
+ * velocity_free_tx_buf - free transmit buffer
+ * @vptr: velocity
+ * @tdinfo: buffer
+ *
+ * Release a transmit buffer. If the buffer was preallocated then
+ * recycle it, if not then unmap the buffer.
+ */
+
+static void velocity_free_tx_buf(struct velocity_info *vptr, struct velocity_td_info *tdinfo)
+{
+ struct sk_buff *skb = tdinfo->skb;
+ int i;
+
+ /*
+ * Don't unmap the pre-allocated tx_bufs
+ */
+ if (tdinfo->skb_dma && (tdinfo->skb_dma[0] != tdinfo->buf_dma)) {
+
+ for (i = 0; i < tdinfo->nskb_dma; i++) {
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ pci_unmap_single(vptr->pdev, tdinfo->skb_dma[i], td->tdesc1.len, PCI_DMA_TODEVICE);
+#else
+ pci_unmap_single(vptr->pdev, tdinfo->skb_dma[i], skb->len, PCI_DMA_TODEVICE);
+#endif
+ tdinfo->skb_dma[i] = 0;
+ }
+ }
+ dev_kfree_skb_irq(skb);
+ tdinfo->skb = NULL;
+}
+
+/**
+ * velocity_open - interface activation callback
+ * @dev: network layer device to open
+ *
+ * Called when the network layer brings the interface up. Returns
+ * a negative posix error code on failure, or zero on success.
+ *
+ * All the ring allocation and set up is done on open for this
+ * adapter to minimise memory usage when inactive
+ */
+
+static int velocity_open(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ int ret;
+
+ vptr->rx_buf_sz = (dev->mtu <= 1504 ? PKT_BUF_SZ : dev->mtu + 32);
+
+ ret = velocity_init_rings(vptr);
+ if (ret < 0)
+ goto out;
+
+ ret = velocity_init_rd_ring(vptr);
+ if (ret < 0)
+ goto err_free_desc_rings;
+
+ ret = velocity_init_td_ring(vptr);
+ if (ret < 0)
+ goto err_free_rd_ring;
+
+ /* Ensure chip is running */
+ pci_set_power_state(vptr->pdev, 0);
+
+ velocity_init_registers(vptr, VELOCITY_INIT_COLD);
+
+ ret = request_irq(vptr->pdev->irq, &velocity_intr, SA_SHIRQ,
+ dev->name, dev);
+ if (ret < 0) {
+ /* Power down the chip */
+ pci_set_power_state(vptr->pdev, 3);
+ goto err_free_td_ring;
+ }
+
+ mac_enable_int(vptr->mac_regs);
+ netif_start_queue(dev);
+ vptr->flags |= VELOCITY_FLAGS_OPENED;
+out:
+ return ret;
+
+err_free_td_ring:
+ velocity_free_td_ring(vptr);
+err_free_rd_ring:
+ velocity_free_rd_ring(vptr);
+err_free_desc_rings:
+ velocity_free_rings(vptr);
+ goto out;
+}
+
+/**
+ * velocity_change_mtu - MTU change callback
+ * @dev: network device
+ * @new_mtu: desired MTU
+ *
+ * Handle requests from the networking layer for MTU change on
+ * this interface. It gets called on a change by the network layer.
+ * Return zero for success or negative posix error code.
+ */
+
+static int velocity_change_mtu(struct net_device *dev, int new_mtu)
+{
+ struct velocity_info *vptr = dev->priv;
+ unsigned long flags;
+ int oldmtu = dev->mtu;
+ int ret = 0;
+
+ if ((new_mtu < VELOCITY_MIN_MTU) || new_mtu > (VELOCITY_MAX_MTU)) {
+ VELOCITY_PRT(MSG_LEVEL_ERR, KERN_NOTICE "%s: Invalid MTU.\n",
+ vptr->dev->name);
+ return -EINVAL;
+ }
+
+ if (new_mtu != oldmtu) {
+ spin_lock_irqsave(&vptr->lock, flags);
+
+ netif_stop_queue(dev);
+ velocity_shutdown(vptr);
+
+ velocity_free_td_ring(vptr);
+ velocity_free_rd_ring(vptr);
+
+ dev->mtu = new_mtu;
+ if (new_mtu > 8192)
+ vptr->rx_buf_sz = 9 * 1024;
+ else if (new_mtu > 4096)
+ vptr->rx_buf_sz = 8192;
+ else
+ vptr->rx_buf_sz = 4 * 1024;
+
+ ret = velocity_init_rd_ring(vptr);
+ if (ret < 0)
+ goto out_unlock;
+
+ ret = velocity_init_td_ring(vptr);
+ if (ret < 0)
+ goto out_unlock;
+
+ velocity_init_registers(vptr, VELOCITY_INIT_COLD);
+
+ mac_enable_int(vptr->mac_regs);
+ netif_start_queue(dev);
+out_unlock:
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ }
+
+ return ret;
+}
+
+/**
+ * velocity_shutdown - shut down the chip
+ * @vptr: velocity to deactivate
+ *
+ * Shuts down the internal operations of the velocity and
+ * disables interrupts, autopolling, transmit and receive
+ */
+
+static void velocity_shutdown(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ mac_disable_int(regs);
+ writel(CR0_STOP, &regs->CR0Set);
+ writew(0xFFFF, &regs->TDCSRClr);
+ writeb(0xFF, &regs->RDCSRClr);
+ safe_disable_mii_autopoll(regs);
+ mac_clear_isr(regs);
+}
+
+/**
+ * velocity_close - close adapter callback
+ * @dev: network device
+ *
+ * Callback from the network layer when the velocity is being
+ * deactivated by the network layer
+ */
+
+static int velocity_close(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+
+ netif_stop_queue(dev);
+ velocity_shutdown(vptr);
+
+ if (vptr->flags & VELOCITY_FLAGS_WOL_ENABLED)
+ velocity_get_ip(vptr);
+ if (dev->irq != 0)
+ free_irq(dev->irq, dev);
+
+ /* Power down the chip */
+ pci_set_power_state(vptr->pdev, 3);
+
+ /* Free the resources */
+ velocity_free_td_ring(vptr);
+ velocity_free_rd_ring(vptr);
+ velocity_free_rings(vptr);
+
+ vptr->flags &= (~VELOCITY_FLAGS_OPENED);
+ return 0;
+}
+
+/**
+ * velocity_xmit - transmit packet callback
+ * @skb: buffer to transmit
+ * @dev: network device
+ *
+ * Called by the network layer to request a packet is queued to
+ * the velocity. Returns zero on success.
+ */
+
+static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ int qnum = 0;
+ struct tx_desc *td_ptr;
+ struct velocity_td_info *tdinfo;
+ unsigned long flags;
+ int index;
+
+ int pktlen = skb->len;
+
+ spin_lock_irqsave(&vptr->lock, flags);
+
+ index = vptr->td_curr[qnum];
+ td_ptr = &(vptr->td_rings[qnum][index]);
+ tdinfo = &(vptr->td_infos[qnum][index]);
+
+ td_ptr->tdesc1.TCPLS = TCPLS_NORMAL;
+ td_ptr->tdesc1.TCR = TCR0_TIC;
+ td_ptr->td_buf[0].queue = 0;
+
+ /*
+ * Pad short frames.
+ */
+ if (pktlen < ETH_ZLEN) {
+ /* Cannot occur until ZC support */
+ if (skb_linearize(skb, GFP_ATOMIC)) {
+ /* don't return with vptr->lock still held */
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ return 0;
+ }
+ pktlen = ETH_ZLEN;
+ memcpy(tdinfo->buf, skb->data, skb->len);
+ memset(tdinfo->buf + skb->len, 0, ETH_ZLEN - skb->len);
+ tdinfo->skb = skb;
+ tdinfo->skb_dma[0] = tdinfo->buf_dma;
+ td_ptr->tdesc0.pktsize = pktlen;
+ td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ } else
+#ifdef VELOCITY_ZERO_COPY_SUPPORT
+ if (skb_shinfo(skb)->nr_frags > 0) {
+ int nfrags = skb_shinfo(skb)->nr_frags;
+ tdinfo->skb = skb;
+ if (nfrags > 6) {
+ skb_linearize(skb, GFP_ATOMIC);
+ memcpy(tdinfo->buf, skb->data, skb->len);
+ tdinfo->skb_dma[0] = tdinfo->buf_dma;
+ td_ptr->tdesc0.pktsize = pktlen;
+ td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ } else {
+ int i = 0;
+ tdinfo->nskb_dma = 0;
+ tdinfo->skb_dma[i] = pci_map_single(vptr->pdev, skb->data, skb->len - skb->data_len, PCI_DMA_TODEVICE);
+
+ td_ptr->tdesc0.pktsize = pktlen;
+
+ /* FIXME: support 48bit DMA later */
+ td_ptr->td_buf[i].pa_low = cpu_to_le32(tdinfo->skb_dma[i]);
+ td_ptr->td_buf[i].pa_high = 0;
+ td_ptr->td_buf[i].bufsize = skb->len - skb->data_len;
+
+ for (i = 0; i < nfrags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+ void *addr = (void *) page_address(frag->page) + frag->page_offset;
+
+ tdinfo->skb_dma[i + 1] = pci_map_single(vptr->pdev, addr, frag->size, PCI_DMA_TODEVICE);
+
+ td_ptr->td_buf[i + 1].pa_low = cpu_to_le32(tdinfo->skb_dma[i + 1]);
+ td_ptr->td_buf[i + 1].pa_high = 0;
+ td_ptr->td_buf[i + 1].bufsize = frag->size;
+ }
+ /* one head mapping plus nfrags fragment mappings */
+ tdinfo->nskb_dma = i + 1;
+ td_ptr->tdesc1.CMDZ = i;
+ }
+
+ } else
+#endif
+ {
+ /*
+ * Map the linear network buffer into PCI space and
+ * add it to the transmit ring.
+ */
+ tdinfo->skb = skb;
+ tdinfo->skb_dma[0] = pci_map_single(vptr->pdev, skb->data, pktlen, PCI_DMA_TODEVICE);
+ td_ptr->tdesc0.pktsize = pktlen;
+ td_ptr->td_buf[0].pa_low = cpu_to_le32(tdinfo->skb_dma[0]);
+ td_ptr->td_buf[0].pa_high = 0;
+ td_ptr->td_buf[0].bufsize = td_ptr->tdesc0.pktsize;
+ tdinfo->nskb_dma = 1;
+ td_ptr->tdesc1.CMDZ = 2;
+ }
+
+ if (vptr->flags & VELOCITY_FLAGS_TAGGING) {
+ td_ptr->tdesc1.pqinf.VID = (vptr->options.vid & 0xfff);
+ td_ptr->tdesc1.pqinf.priority = 0;
+ td_ptr->tdesc1.pqinf.CFI = 0;
+ td_ptr->tdesc1.TCR |= TCR0_VETAG;
+ }
+
+ /*
+ * Handle hardware checksum
+ */
+ if ((vptr->flags & VELOCITY_FLAGS_TX_CSUM)
+ && (skb->ip_summed == CHECKSUM_HW)) {
+ struct iphdr *ip = skb->nh.iph;
+ if (ip->protocol == IPPROTO_TCP)
+ td_ptr->tdesc1.TCR |= TCR0_TCPCK;
+ else if (ip->protocol == IPPROTO_UDP)
+ td_ptr->tdesc1.TCR |= (TCR0_UDPCK);
+ td_ptr->tdesc1.TCR |= TCR0_IPCK;
+ }
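+ /*
+ * A reading of the hand-off below: the new tail descriptor was queued
+ * with its Q bit clear (td_buf[0].queue = 0 above); once ownership is
+ * passed to the NIC, the previous descriptor's Q bit is raised to link
+ * it to the new tail before the queue is kicked.
+ */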
+ {
+
+ int prev = index - 1;
+
+ if (prev < 0)
+ prev = vptr->options.numtx - 1;
+ td_ptr->tdesc0.owner = OWNED_BY_NIC;
+ vptr->td_used[qnum]++;
+ vptr->td_curr[qnum] = (index + 1) % vptr->options.numtx;
+
+ if (AVAIL_TD(vptr, qnum) < 1)
+ netif_stop_queue(dev);
+
+ td_ptr = &(vptr->td_rings[qnum][prev]);
+ td_ptr->td_buf[0].queue = 1;
+ mac_tx_queue_wake(vptr->mac_regs, qnum);
+ }
+ dev->trans_start = jiffies;
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ return 0;
+}
+
+/**
+ * velocity_intr - interrupt callback
+ * @irq: interrupt number
+ * @dev_instance: interrupting device
+ * @pt_regs: CPU register state at interrupt
+ *
+ * Called whenever an interrupt is generated by the velocity
+ * adapter IRQ line. We may not be the source of the interrupt
+ * and need to identify initially if we are, and if not exit as
+ * efficiently as possible.
+ */
+
+static int velocity_intr(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct net_device *dev = dev_instance;
+ struct velocity_info *vptr = dev->priv;
+ u32 isr_status;
+ int max_count = 0;
+
+
+ spin_lock(&vptr->lock);
+ isr_status = mac_read_isr(vptr->mac_regs);
+
+ /* Not us ? */
+ if (isr_status == 0) {
+ spin_unlock(&vptr->lock);
+ return IRQ_NONE;
+ }
+
+ mac_disable_int(vptr->mac_regs);
+
+ /*
+ * Keep processing the ISR until we have completed
+ * processing and the isr_status becomes zero
+ */
+
+ while (isr_status != 0) {
+ mac_write_isr(vptr->mac_regs, isr_status);
+ if (isr_status & (~(ISR_PRXI | ISR_PPRXI | ISR_PTXI | ISR_PPTXI)))
+ velocity_error(vptr, isr_status);
+ if (isr_status & (ISR_PRXI | ISR_PPRXI))
+ max_count += velocity_rx_srv(vptr, isr_status);
+ if (isr_status & (ISR_PTXI | ISR_PPTXI))
+ max_count += velocity_tx_srv(vptr, isr_status);
+ isr_status = mac_read_isr(vptr->mac_regs);
+ if (max_count > vptr->options.int_works)
+ {
+ printk(KERN_WARNING "%s: excessive work at interrupt.\n",
+ dev->name);
+ max_count = 0;
+ }
+ }
+ spin_unlock(&vptr->lock);
+ mac_enable_int(vptr->mac_regs);
+ return IRQ_HANDLED;
+
+}
+
+
+/**
+ * velocity_set_multi - filter list change callback
+ * @dev: network device
+ *
+ * Called by the network layer when the filter lists need to change
+ * for a velocity adapter. Reload the CAMs with the new address
+ * filter ruleset.
+ */
+
+static void velocity_set_multi(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ u8 rx_mode;
+ int i;
+ struct dev_mc_list *mclist;
+
+ if (dev->flags & IFF_PROMISC) { /* Set promiscuous. */
+ /* Unconditionally log net taps. */
+ printk(KERN_NOTICE "%s: Promiscuous mode enabled.\n", dev->name);
+ writel(0xffffffff, &regs->MARCAM[0]);
+ writel(0xffffffff, &regs->MARCAM[4]);
+ rx_mode = (RCR_AM | RCR_AB | RCR_PROM);
+ } else if ((dev->mc_count > vptr->multicast_limit)
+ || (dev->flags & IFF_ALLMULTI)) {
+ writel(0xffffffff, &regs->MARCAM[0]);
+ writel(0xffffffff, &regs->MARCAM[4]);
+ rx_mode = (RCR_AM | RCR_AB);
+ } else {
+ int offset = MCAM_SIZE - vptr->multicast_limit;
+ mac_get_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+
+ for (i = 0, mclist = dev->mc_list; mclist && i < dev->mc_count; i++, mclist = mclist->next) {
+ mac_set_cam(regs, i + offset, mclist->dmi_addr, VELOCITY_MULTICAST_CAM);
+ vptr->mCAMmask[(offset + i) / 8] |= 1 << ((offset + i) & 7);
+ }
+
+ mac_set_cam_mask(regs, vptr->mCAMmask, VELOCITY_MULTICAST_CAM);
+ rx_mode = (RCR_AM | RCR_AB);
+ }
+ if (dev->mtu > 1500)
+ rx_mode |= RCR_AL;
+
+ BYTE_REG_BITS_ON(rx_mode, &regs->RCR);
+
+}
+
+/**
+ * velocity_get_stats - statistics callback
+ * @dev: network device
+ *
+ * Callback from the network layer to allow driver statistics
+ * to be resynchronized with hardware collected state. In the
+ * case of the velocity we need to pull the MIB counters from
+ * the hardware into the counters before letting the network
+ * layer display them.
+ */
+
+static struct net_device_stats *velocity_get_stats(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+
+ /* If the hardware is down, don't touch the MIB counters */
+ if (!netif_running(dev))
+ return &vptr->stats;
+
+ spin_lock_irq(&vptr->lock);
+ velocity_update_hw_mibs(vptr);
+ spin_unlock_irq(&vptr->lock);
+
+ vptr->stats.rx_packets = vptr->mib_counter[HW_MIB_ifRxAllPkts];
+ vptr->stats.rx_errors = vptr->mib_counter[HW_MIB_ifRxErrorPkts];
+ vptr->stats.rx_length_errors = vptr->mib_counter[HW_MIB_ifInRangeLengthErrors];
+
+// unsigned long rx_dropped; /* no space in linux buffers */
+ vptr->stats.collisions = vptr->mib_counter[HW_MIB_ifTxEtherCollisions];
+ /* detailed rx_errors: */
+// unsigned long rx_length_errors;
+// unsigned long rx_over_errors; /* receiver ring buff overflow */
+ vptr->stats.rx_crc_errors = vptr->mib_counter[HW_MIB_ifRxPktCRCE];
+// unsigned long rx_frame_errors; /* recv'd frame alignment error */
+// unsigned long rx_fifo_errors; /* recv'r fifo overrun */
+// unsigned long rx_missed_errors; /* receiver missed packet */
+
+ /* detailed tx_errors */
+// unsigned long tx_fifo_errors;
+
+ return &vptr->stats;
+}
+
+
+/**
+ * velocity_ioctl - ioctl entry point
+ * @dev: network device
+ * @rq: interface request ioctl
+ * @cmd: command code
+ *
+ * Called when the user issues an ioctl request to the network
+ * device in question. The velocity interface supports MII.
+ */
+
+static int velocity_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ int ret;
+
+ /* If we are asked for information and the device is power
+ saving then we need to bring the device back up to talk to it */
+
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 0);
+
+ switch (cmd) {
+ case SIOCGMIIPHY: /* Get address of MII PHY in use. */
+ case SIOCGMIIREG: /* Read MII PHY register. */
+ case SIOCSMIIREG: /* Write to MII PHY register. */
+ ret = velocity_mii_ioctl(dev, rq, cmd);
+ break;
+
+ default:
+ ret = -EOPNOTSUPP;
+ }
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 3);
+
+
+ return ret;
+}
+
+/*
+ * Definition for our device driver. The PCI layer interface
+ * uses this to handle all our card discovery and plugging.
+ */
+
+static struct pci_driver velocity_driver = {
+ .name = VELOCITY_NAME,
+ .id_table = velocity_id_table,
+ .probe = velocity_found1,
+ .remove = __devexit_p(velocity_remove1),
+#ifdef CONFIG_PM
+ .suspend = velocity_suspend,
+ .resume = velocity_resume,
+#endif
+};
+
+/**
+ * velocity_init_module - load time function
+ *
+ * Called when the velocity module is loaded. The PCI driver
+ * is registered with the PCI layer, and in turn will call
+ * the probe functions for each velocity adapter installed
+ * in the system.
+ */
+
+static int __init velocity_init_module(void)
+{
+ int ret;
+
+ velocity_register_notifier();
+ ret = pci_module_init(&velocity_driver);
+ if (ret < 0)
+ velocity_unregister_notifier();
+ return ret;
+}
+
+/**
+ * velocity_cleanup - module unload
+ *
+ * Called when the velocity module is unloaded. It cleans up the
+ * notifiers and then unregisters the PCI driver interface for
+ * this hardware. This in turn cleans up all discovered
+ * interfaces before returning from the function.
+ */
+
+static void __exit velocity_cleanup_module(void)
+{
+ velocity_unregister_notifier();
+ pci_unregister_driver(&velocity_driver);
+}
+
+module_init(velocity_init_module);
+module_exit(velocity_cleanup_module);
+
+
+/*
+ * MII access, media link mode setting functions
+ */
+
+
+/**
+ * mii_init - set up MII
+ * @vptr: velocity adapter
+ * @mii_status: link status
+ *
+ * Set up the PHY for the current link state.
+ */
+
+static void mii_init(struct velocity_info *vptr, u32 mii_status)
+{
+ u16 BMCR;
+
+ switch (PHYID_GET_PHY_ID(vptr->phy_id)) {
+ case PHYID_CICADA_CS8201:
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_OFF((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ /*
+ * Turn on the ECHODIS bit in NWay-forced full mode and turn it
+ * off in NWay-forced half mode, to work around the NWay-forced
+ * vs. legacy-forced issue.
+ */
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ MII_REG_BITS_ON(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ else
+ MII_REG_BITS_OFF(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ /*
+ * Turn on Link/Activity LED enable bit for CIS8201
+ */
+ MII_REG_BITS_ON(PLED_LALBE, MII_REG_PLED, vptr->mac_regs);
+ break;
+ case PHYID_VT3216_32BIT:
+ case PHYID_VT3216_64BIT:
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_ON((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ /*
+ * Turn on the ECHODIS bit in NWay-forced full mode and turn it
+ * off in NWay-forced half mode, to work around the NWay-forced
+ * vs. legacy-forced issue.
+ */
+ if (vptr->mii_status & VELOCITY_DUPLEX_FULL)
+ MII_REG_BITS_ON(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ else
+ MII_REG_BITS_OFF(TCSR_ECHODIS, MII_REG_TCSR, vptr->mac_regs);
+ break;
+
+ case PHYID_MARVELL_1000:
+ case PHYID_MARVELL_1000S:
+ /*
+ * Assert CRS on Transmit
+ */
+ MII_REG_BITS_ON(PSCR_ACRSTX, MII_REG_PSCR, vptr->mac_regs);
+ /*
+ * Reset to hardware default
+ */
+ MII_REG_BITS_ON((ANAR_ASMDIR | ANAR_PAUSE), MII_REG_ANAR, vptr->mac_regs);
+ break;
+ default:
+ ;
+ }
+ velocity_mii_read(vptr->mac_regs, MII_REG_BMCR, &BMCR);
+ if (BMCR & BMCR_ISO) {
+ BMCR &= ~BMCR_ISO;
+ velocity_mii_write(vptr->mac_regs, MII_REG_BMCR, BMCR);
+ }
+}
+
+/**
+ * safe_disable_mii_autopoll - autopoll off
+ * @regs: velocity registers
+ *
+ * Turn off the autopoll and wait for it to disable on the chip
+ */
+
+static void safe_disable_mii_autopoll(struct mac_regs __iomem * regs)
+{
+ u16 ww;
+
+ /* turn off MAUTO */
+ writeb(0, &regs->MIICR);
+ for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+ udelay(1);
+ if (BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+ break;
+ }
+}
+
+/**
+ * enable_mii_autopoll - turn on autopolling
+ * @regs: velocity registers
+ *
+ * Enable the MII link status autopoll feature on the Velocity
+ * hardware. Wait for it to enable.
+ */
+
+static void enable_mii_autopoll(struct mac_regs __iomem * regs)
+{
+ int ii;
+
+ writeb(0, &regs->MIICR);
+ writeb(MIIADR_SWMPL, &regs->MIIADR);
+
+ for (ii = 0; ii < W_MAX_TIMEOUT; ii++) {
+ udelay(1);
+ if (BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+ break;
+ }
+
+ writeb(MIICR_MAUTO, &regs->MIICR);
+
+ for (ii = 0; ii < W_MAX_TIMEOUT; ii++) {
+ udelay(1);
+ if (!BYTE_REG_BITS_IS_ON(MIISR_MIDLE, &regs->MIISR))
+ break;
+ }
+
+}
+
+/**
+ * velocity_mii_read - read MII data
+ * @regs: velocity registers
+ * @index: MII register index
+ * @data: buffer for received data
+ *
+ * Perform a single read of an MII 16bit register. Returns zero
+ * on success or -ETIMEDOUT if the PHY did not respond.
+ */
+
+static int velocity_mii_read(struct mac_regs __iomem *regs, u8 index, u16 *data)
+{
+ u16 ww;
+
+ /*
+ * Disable MIICR_MAUTO, so that mii addr can be set normally
+ */
+ safe_disable_mii_autopoll(regs);
+
+ writeb(index, &regs->MIIADR);
+
+ BYTE_REG_BITS_ON(MIICR_RCMD, &regs->MIICR);
+
+ for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+ if (!(readb(&regs->MIICR) & MIICR_RCMD))
+ break;
+ }
+
+ *data = readw(&regs->MIIDATA);
+
+ enable_mii_autopoll(regs);
+ if (ww == W_MAX_TIMEOUT)
+ return -ETIMEDOUT;
+ return 0;
+}
+
+/**
+ * velocity_mii_write - write MII data
+ * @regs: velocity registers
+ * @index: MII register index
+ * @data: 16bit data for the MII register
+ *
+ * Perform a single write to an MII 16bit register. Returns zero
+ * on success or -ETIMEDOUT if the PHY did not respond.
+ */
+
+static int velocity_mii_write(struct mac_regs __iomem *regs, u8 mii_addr, u16 data)
+{
+ u16 ww;
+
+ /*
+ * Disable MIICR_MAUTO, so that mii addr can be set normally
+ */
+ safe_disable_mii_autopoll(regs);
+
+ /* MII reg offset */
+ writeb(mii_addr, &regs->MIIADR);
+ /* set MII data */
+ writew(data, &regs->MIIDATA);
+
+ /* turn on MIICR_WCMD */
+ BYTE_REG_BITS_ON(MIICR_WCMD, &regs->MIICR);
+
+ /* W_MAX_TIMEOUT is the timeout period */
+ for (ww = 0; ww < W_MAX_TIMEOUT; ww++) {
+ udelay(5);
+ if (!(readb(&regs->MIICR) & MIICR_WCMD))
+ break;
+ }
+ enable_mii_autopoll(regs);
+
+ if (ww == W_MAX_TIMEOUT)
+ return -ETIMEDOUT;
+ return 0;
+}
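+
+/*
+ * Usage sketch (illustrative only, not driver code): a caller could
+ * read-modify-write a PHY register with the two helpers above, e.g.
+ *
+ * u16 bmcr;
+ * if (velocity_mii_read(vptr->mac_regs, MII_REG_BMCR, &bmcr) == 0) {
+ * bmcr |= BMCR_AUTO;
+ * velocity_mii_write(vptr->mac_regs, MII_REG_BMCR, bmcr);
+ * }
+ *
+ * Both helpers suspend MII autopolling for the duration of the access
+ * and return -ETIMEDOUT if the PHY does not complete the command.
+ */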
+
+/**
+ * velocity_get_opt_media_mode - get media selection
+ * @vptr: velocity adapter
+ *
+ * Get the media mode stored in EEPROM or module options and load
+ * mii_status accordingly. The requested link state information
+ * is also returned.
+ */
+
+static u32 velocity_get_opt_media_mode(struct velocity_info *vptr)
+{
+ u32 status = 0;
+
+ switch (vptr->options.spd_dpx) {
+ case SPD_DPX_AUTO:
+ status = VELOCITY_AUTONEG_ENABLE;
+ break;
+ case SPD_DPX_100_FULL:
+ status = VELOCITY_SPEED_100 | VELOCITY_DUPLEX_FULL;
+ break;
+ case SPD_DPX_10_FULL:
+ status = VELOCITY_SPEED_10 | VELOCITY_DUPLEX_FULL;
+ break;
+ case SPD_DPX_100_HALF:
+ status = VELOCITY_SPEED_100;
+ break;
+ case SPD_DPX_10_HALF:
+ status = VELOCITY_SPEED_10;
+ break;
+ }
+ vptr->mii_status = status;
+ return status;
+}
+
+/**
+ * mii_set_auto_on - autonegotiate on
+ * @vptr: velocity
+ *
+ * Enable autonegotiation on this interface
+ */
+
+static void mii_set_auto_on(struct velocity_info *vptr)
+{
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs))
+ MII_REG_BITS_ON(BMCR_REAUTO, MII_REG_BMCR, vptr->mac_regs);
+ else
+ MII_REG_BITS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs);
+}
+
+
+/*
+static void mii_set_auto_off(struct velocity_info * vptr)
+{
+ MII_REG_BITS_OFF(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs);
+}
+*/
+
+/**
+ * set_mii_flow_control - flow control setup
+ * @vptr: velocity interface
+ *
+ * Set up the flow control on this interface according to
+ * the supplied user/eeprom options.
+ */
+
+static void set_mii_flow_control(struct velocity_info *vptr)
+{
+ /* Enable or disable PAUSE in ANAR */
+ switch (vptr->options.flow_cntl) {
+ case FLOW_CNTL_TX:
+ MII_REG_BITS_OFF(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_RX:
+ MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_TX_RX:
+ MII_REG_BITS_ON(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+
+ case FLOW_CNTL_DISABLE:
+ MII_REG_BITS_OFF(ANAR_PAUSE, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_OFF(ANAR_ASMDIR, MII_REG_ANAR, vptr->mac_regs);
+ break;
+ default:
+ break;
+ }
+}
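+
+/*
+ * For reference (IEEE 802.3 autonegotiation): PAUSE advertises
+ * symmetric flow control and ASM_DIR advertises asymmetric pause,
+ * which is why the RX and TX_RX cases above both set the two bits
+ * while the TX-only case sets ASM_DIR alone.
+ */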
+
+/**
+ * velocity_set_media_mode - set media mode
+ * @vptr: velocity adapter
+ * @mii_status: requested MII link state
+ *
+ * Check the media link state, configure flow control on the PHY
+ * and set up the velocity hardware accordingly. In particular
+ * we need to set up CD polling and frame bursting.
+ */
+
+static int velocity_set_media_mode(struct velocity_info *vptr, u32 mii_status)
+{
+ u32 curr_status;
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+
+ vptr->mii_status = mii_check_media_mode(vptr->mac_regs);
+ curr_status = vptr->mii_status & (~VELOCITY_LINK_FAIL);
+
+ /* Set mii link status */
+ set_mii_flow_control(vptr);
+
+ /*
+ Check if new status is consistent with current status
+ if (((mii_status & curr_status) & VELOCITY_AUTONEG_ENABLE)
+ || (mii_status==curr_status)) {
+ vptr->mii_status=mii_check_media_mode(vptr->mac_regs);
+ vptr->mii_status=check_connection_type(vptr->mac_regs);
+ VELOCITY_PRT(MSG_LEVEL_INFO, "Velocity link no change\n");
+ return 0;
+ }
+ */
+
+ if (PHYID_GET_PHY_ID(vptr->phy_id) == PHYID_CICADA_CS8201) {
+ MII_REG_BITS_ON(AUXCR_MDPPS, MII_REG_AUXCR, vptr->mac_regs);
+ }
+
+ /*
+ * If connection type is AUTO
+ */
+ if (mii_status & VELOCITY_AUTONEG_ENABLE) {
+ VELOCITY_PRT(MSG_LEVEL_INFO, "Velocity is AUTO mode\n");
+ /* clear force MAC mode bit */
+ BYTE_REG_BITS_OFF(CHIPGCR_FCMODE, &regs->CHIPGCR);
+ /* set duplex mode of MAC according to duplex mode of MII */
+ MII_REG_BITS_ON(ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10, MII_REG_ANAR, vptr->mac_regs);
+ MII_REG_BITS_ON(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+ MII_REG_BITS_ON(BMCR_SPEED1G, MII_REG_BMCR, vptr->mac_regs);
+
+ /* enable AUTO-NEGO mode */
+ mii_set_auto_on(vptr);
+ } else {
+ u16 ANAR;
+ u8 CHIPGCR;
+
+ /*
+ * 1. if it's 3119, disable frame bursting in halfduplex mode
+ * and enable it in fullduplex mode
+ * 2. set correct MII/GMII and half/full duplex mode in CHIPGCR
+ * 3. only enable CD heart beat counter in 10HD mode
+ */
+
+ /* set force MAC mode bit */
+ BYTE_REG_BITS_ON(CHIPGCR_FCMODE, &regs->CHIPGCR);
+
+ CHIPGCR = readb(&regs->CHIPGCR);
+ CHIPGCR &= ~CHIPGCR_FCGMII;
+
+ if (mii_status & VELOCITY_DUPLEX_FULL) {
+ CHIPGCR |= CHIPGCR_FCFDX;
+ writeb(CHIPGCR, &regs->CHIPGCR);
+ VELOCITY_PRT(MSG_LEVEL_INFO, "set Velocity to forced full mode\n");
+ if (vptr->rev_id < REV_ID_VT3216_A0)
+ BYTE_REG_BITS_OFF(TCR_TB2BDIS, &regs->TCR);
+ } else {
+ CHIPGCR &= ~CHIPGCR_FCFDX;
+ VELOCITY_PRT(MSG_LEVEL_INFO, "set Velocity to forced half mode\n");
+ writeb(CHIPGCR, &regs->CHIPGCR);
+ if (vptr->rev_id < REV_ID_VT3216_A0)
+ BYTE_REG_BITS_ON(TCR_TB2BDIS, &regs->TCR);
+ }
+
+ MII_REG_BITS_OFF(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+
+ if (!(mii_status & VELOCITY_DUPLEX_FULL) && (mii_status & VELOCITY_SPEED_10)) {
+ BYTE_REG_BITS_OFF(TESTCFG_HBDIS, &regs->TESTCFG);
+ } else {
+ BYTE_REG_BITS_ON(TESTCFG_HBDIS, &regs->TESTCFG);
+ }
+ /* MII_REG_BITS_OFF(BMCR_SPEED1G, MII_REG_BMCR, vptr->mac_regs); */
+ velocity_mii_read(vptr->mac_regs, MII_REG_ANAR, &ANAR);
+ ANAR &= (~(ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10));
+ if (mii_status & VELOCITY_SPEED_100) {
+ if (mii_status & VELOCITY_DUPLEX_FULL)
+ ANAR |= ANAR_TXFD;
+ else
+ ANAR |= ANAR_TX;
+ } else {
+ if (mii_status & VELOCITY_DUPLEX_FULL)
+ ANAR |= ANAR_10FD;
+ else
+ ANAR |= ANAR_10;
+ }
+ velocity_mii_write(vptr->mac_regs, MII_REG_ANAR, ANAR);
+ /* enable AUTO-NEGO mode */
+ mii_set_auto_on(vptr);
+ /* MII_REG_BITS_ON(BMCR_AUTO, MII_REG_BMCR, vptr->mac_regs); */
+ }
+ /* vptr->mii_status=mii_check_media_mode(vptr->mac_regs); */
+ /* vptr->mii_status=check_connection_type(vptr->mac_regs); */
+ return VELOCITY_LINK_CHANGE;
+}
+
+/**
+ * mii_check_media_mode - check media state
+ * @regs: velocity registers
+ *
+ * Check the current MII status and determine the link status
+ * accordingly
+ */
+
+static u32 mii_check_media_mode(struct mac_regs __iomem * regs)
+{
+ u32 status = 0;
+ u16 ANAR;
+
+ if (!MII_REG_BITS_IS_ON(BMSR_LNK, MII_REG_BMSR, regs))
+ status |= VELOCITY_LINK_FAIL;
+
+ if (MII_REG_BITS_IS_ON(G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_SPEED_1000 | VELOCITY_DUPLEX_FULL;
+ else if (MII_REG_BITS_IS_ON(G1000CR_1000, MII_REG_G1000CR, regs))
+ status |= (VELOCITY_SPEED_1000);
+ else {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if (ANAR & ANAR_TXFD)
+ status |= (VELOCITY_SPEED_100 | VELOCITY_DUPLEX_FULL);
+ else if (ANAR & ANAR_TX)
+ status |= VELOCITY_SPEED_100;
+ else if (ANAR & ANAR_10FD)
+ status |= (VELOCITY_SPEED_10 | VELOCITY_DUPLEX_FULL);
+ else
+ status |= (VELOCITY_SPEED_10);
+ }
+
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, regs)) {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if ((ANAR & (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10))
+ == (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10)) {
+ if (MII_REG_BITS_IS_ON(G1000CR_1000 | G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_AUTONEG_ENABLE;
+ }
+ }
+
+ return status;
+}
+
+static u32 check_connection_type(struct mac_regs __iomem * regs)
+{
+ u32 status = 0;
+ u8 PHYSR0;
+ u16 ANAR;
+ PHYSR0 = readb(&regs->PHYSR0);
+
+ /*
+ if (!(PHYSR0 & PHYSR0_LINKGD))
+ status|=VELOCITY_LINK_FAIL;
+ */
+
+ if (PHYSR0 & PHYSR0_FDPX)
+ status |= VELOCITY_DUPLEX_FULL;
+
+ if (PHYSR0 & PHYSR0_SPDG)
+ status |= VELOCITY_SPEED_1000;
+ if (PHYSR0 & PHYSR0_SPD10)
+ status |= VELOCITY_SPEED_10;
+ else
+ status |= VELOCITY_SPEED_100;
+
+ if (MII_REG_BITS_IS_ON(BMCR_AUTO, MII_REG_BMCR, regs)) {
+ velocity_mii_read(regs, MII_REG_ANAR, &ANAR);
+ if ((ANAR & (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10))
+ == (ANAR_TXFD | ANAR_TX | ANAR_10FD | ANAR_10)) {
+ if (MII_REG_BITS_IS_ON(G1000CR_1000 | G1000CR_1000FD, MII_REG_G1000CR, regs))
+ status |= VELOCITY_AUTONEG_ENABLE;
+ }
+ }
+
+ return status;
+}
+
+/**
+ * enable_flow_control_ability - flow control
+ * @vptr: velocity to configure
+ *
+ * Set up flow control according to the flow control options
+ * determined by the eeprom/configuration.
+ */
+
+static void enable_flow_control_ability(struct velocity_info *vptr)
+{
+
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+
+ switch (vptr->options.flow_cntl) {
+
+ case FLOW_CNTL_DEFAULT:
+ if (BYTE_REG_BITS_IS_ON(PHYSR0_RXFLC, &regs->PHYSR0))
+ writel(CR0_FDXRFCEN, &regs->CR0Set);
+ else
+ writel(CR0_FDXRFCEN, &regs->CR0Clr);
+
+ if (BYTE_REG_BITS_IS_ON(PHYSR0_TXFLC, &regs->PHYSR0))
+ writel(CR0_FDXTFCEN, &regs->CR0Set);
+ else
+ writel(CR0_FDXTFCEN, &regs->CR0Clr);
+ break;
+
+ case FLOW_CNTL_TX:
+ writel(CR0_FDXTFCEN, &regs->CR0Set);
+ writel(CR0_FDXRFCEN, &regs->CR0Clr);
+ break;
+
+ case FLOW_CNTL_RX:
+ writel(CR0_FDXRFCEN, &regs->CR0Set);
+ writel(CR0_FDXTFCEN, &regs->CR0Clr);
+ break;
+
+ case FLOW_CNTL_TX_RX:
+ writel(CR0_FDXTFCEN, &regs->CR0Set);
+ writel(CR0_FDXRFCEN, &regs->CR0Set);
+ break;
+
+ case FLOW_CNTL_DISABLE:
+ writel(CR0_FDXRFCEN, &regs->CR0Clr);
+ writel(CR0_FDXTFCEN, &regs->CR0Clr);
+ break;
+
+ default:
+ break;
+ }
+
+}
+
+
+/**
+ * velocity_ethtool_up - pre hook for ethtool
+ * @dev: network device
+ *
+ * Called before an ethtool operation. We need to make sure the
+ * chip is out of D3 state before we poke at it.
+ */
+
+static int velocity_ethtool_up(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 0);
+ return 0;
+}
+
+/**
+ * velocity_ethtool_down - post hook for ethtool
+ * @dev: network device
+ *
+ * Called after an ethtool operation. Restore the chip back to D3
+ * state if it isn't running.
+ */
+
+static void velocity_ethtool_down(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ if (!netif_running(dev))
+ pci_set_power_state(vptr->pdev, 3);
+}
+
+static int velocity_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ u32 status;
+ status = check_connection_type(vptr->mac_regs);
+
+ cmd->supported = SUPPORTED_TP | SUPPORTED_Autoneg | SUPPORTED_10baseT_Half | SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Half | SUPPORTED_100baseT_Full | SUPPORTED_1000baseT_Half | SUPPORTED_1000baseT_Full;
+ if (status & VELOCITY_SPEED_1000)
+ cmd->speed = SPEED_1000;
+ else if (status & VELOCITY_SPEED_100)
+ cmd->speed = SPEED_100;
+ else
+ cmd->speed = SPEED_10;
+ cmd->autoneg = (status & VELOCITY_AUTONEG_ENABLE) ? AUTONEG_ENABLE : AUTONEG_DISABLE;
+ cmd->port = PORT_TP;
+ cmd->transceiver = XCVR_INTERNAL;
+ cmd->phy_address = readb(&regs->MIIADR) & 0x1F;
+
+ if (status & VELOCITY_DUPLEX_FULL)
+ cmd->duplex = DUPLEX_FULL;
+ else
+ cmd->duplex = DUPLEX_HALF;
+
+ return 0;
+}
+
+static int velocity_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ u32 curr_status;
+ u32 new_status = 0;
+ int ret = 0;
+
+ curr_status = check_connection_type(vptr->mac_regs);
+ curr_status &= (~VELOCITY_LINK_FAIL);
+
+ new_status |= ((cmd->autoneg) ? VELOCITY_AUTONEG_ENABLE : 0);
+ new_status |= ((cmd->speed == SPEED_100) ? VELOCITY_SPEED_100 : 0);
+ new_status |= ((cmd->speed == SPEED_10) ? VELOCITY_SPEED_10 : 0);
+ new_status |= ((cmd->duplex == DUPLEX_FULL) ? VELOCITY_DUPLEX_FULL : 0);
+
+ if ((new_status & VELOCITY_AUTONEG_ENABLE) && (new_status != (curr_status | VELOCITY_AUTONEG_ENABLE)))
+ ret = -EINVAL;
+ else
+ velocity_set_media_mode(vptr, new_status);
+
+ return ret;
+}
+
+static u32 velocity_get_link(struct net_device *dev)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ return BYTE_REG_BITS_IS_ON(PHYSR0_LINKGD, &regs->PHYSR0) ? 1 : 0;
+}
+
+static void velocity_get_drvinfo(struct net_device *dev, struct ethtool_drvinfo *info)
+{
+ struct velocity_info *vptr = dev->priv;
+ strcpy(info->driver, VELOCITY_NAME);
+ strcpy(info->version, VELOCITY_VERSION);
+ strcpy(info->bus_info, vptr->pdev->slot_name);
+}
+
+static void velocity_ethtool_get_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct velocity_info *vptr = dev->priv;
+ wol->supported = WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_ARP;
+ wol->wolopts |= WAKE_MAGIC;
+ /*
+ if (vptr->wol_opts & VELOCITY_WOL_PHY)
+ wol.wolopts|=WAKE_PHY;
+ */
+ if (vptr->wol_opts & VELOCITY_WOL_UCAST)
+ wol->wolopts |= WAKE_UCAST;
+ if (vptr->wol_opts & VELOCITY_WOL_ARP)
+ wol->wolopts |= WAKE_ARP;
+ memcpy(&wol->sopass, vptr->wol_passwd, 6);
+}
+
+static int velocity_ethtool_set_wol(struct net_device *dev, struct ethtool_wolinfo *wol)
+{
+ struct velocity_info *vptr = dev->priv;
+
+ if (!(wol->wolopts & (WAKE_PHY | WAKE_MAGIC | WAKE_UCAST | WAKE_ARP)))
+ return -EFAULT;
+ vptr->wol_opts = VELOCITY_WOL_MAGIC;
+
+ /*
+ if (wol.wolopts & WAKE_PHY) {
+ vptr->wol_opts|=VELOCITY_WOL_PHY;
+ vptr->flags |=VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ */
+
+ if (wol->wolopts & WAKE_MAGIC) {
+ vptr->wol_opts |= VELOCITY_WOL_MAGIC;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ if (wol->wolopts & WAKE_UCAST) {
+ vptr->wol_opts |= VELOCITY_WOL_UCAST;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ if (wol->wolopts & WAKE_ARP) {
+ vptr->wol_opts |= VELOCITY_WOL_ARP;
+ vptr->flags |= VELOCITY_FLAGS_WOL_ENABLED;
+ }
+ memcpy(vptr->wol_passwd, wol->sopass, 6);
+ return 0;
+}
+
+static u32 velocity_get_msglevel(struct net_device *dev)
+{
+ return msglevel;
+}
+
+static void velocity_set_msglevel(struct net_device *dev, u32 value)
+{
+ msglevel = value;
+}
+
+static struct ethtool_ops velocity_ethtool_ops = {
+ .get_settings = velocity_get_settings,
+ .set_settings = velocity_set_settings,
+ .get_drvinfo = velocity_get_drvinfo,
+ .get_wol = velocity_ethtool_get_wol,
+ .set_wol = velocity_ethtool_set_wol,
+ .get_msglevel = velocity_get_msglevel,
+ .set_msglevel = velocity_set_msglevel,
+ .get_link = velocity_get_link,
+ .begin = velocity_ethtool_up,
+ .complete = velocity_ethtool_down
+};
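+
+/*
+ * The probe path (outside this fragment) is expected to install this
+ * table with dev->ethtool_ops = &velocity_ethtool_ops; the begin and
+ * complete hooks then bracket every ethtool call so the chip is
+ * brought out of D3 first and put back afterwards when the interface
+ * is down.
+ */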
+
+/**
+ * velocity_mii_ioctl - MII ioctl handler
+ * @dev: network device
+ * @ifr: the ifreq block for the ioctl
+ * @cmd: the command
+ *
+ * Process MII requests made via ioctl from the network layer. These
+ * are used by tools like kudzu to interrogate the link state of the
+ * hardware
+ */
+
+static int velocity_mii_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ struct velocity_info *vptr = dev->priv;
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ unsigned long flags;
+ struct mii_ioctl_data *miidata = if_mii(ifr);
+ int err;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ miidata->phy_id = readb(&regs->MIIADR) & 0x1f;
+ break;
+ case SIOCGMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ if (velocity_mii_read(vptr->mac_regs, miidata->reg_num & 0x1f, &(miidata->val_out)) < 0)
+ return -ETIMEDOUT;
+ break;
+ case SIOCSMIIREG:
+ if (!capable(CAP_NET_ADMIN))
+ return -EPERM;
+ spin_lock_irqsave(&vptr->lock, flags);
+ err = velocity_mii_write(vptr->mac_regs, miidata->reg_num & 0x1f, miidata->val_in);
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ check_connection_type(vptr->mac_regs);
+ if (err)
+ return err;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+ return 0;
+}
+
+#ifdef CONFIG_PM
+
+/**
+ * velocity_save_context - save registers
+ * @vptr: velocity
+ * @context: buffer for stored context
+ *
+ * Retrieve the current configuration from the velocity hardware
+ * and stash it in the context structure, for use by the context
+ * restore functions. This allows us to save things we need across
+ * power down states
+ */
+
+static void velocity_save_context(struct velocity_info *vptr, struct velocity_context * context)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ u16 i;
+ u8 __iomem *ptr = (u8 __iomem *)regs;
+
+ for (i = MAC_REG_PAR; i < MAC_REG_CR0_CLR; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+ for (i = MAC_REG_MAR; i < MAC_REG_TDCSR_CLR; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+ for (i = MAC_REG_RDBASE_LO; i < MAC_REG_FIFO_TEST0; i += 4)
+ *((u32 *) (context->mac_reg + i)) = readl(ptr + i);
+
+}
+
+/**
+ * velocity_restore_context - restore registers
+ * @vptr: velocity
+ * @context: buffer for stored context
+ *
+ * Reload the register configuration from the velocity context
+ * created by velocity_save_context.
+ */
+
+static void velocity_restore_context(struct velocity_info *vptr, struct velocity_context *context)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ int i;
+ u8 __iomem *ptr = (u8 __iomem *)regs;
+
+ for (i = MAC_REG_PAR; i < MAC_REG_CR0_SET; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ /* Just skip cr0 */
+ for (i = MAC_REG_CR1_SET; i < MAC_REG_CR0_CLR; i++) {
+ /* Clear */
+ writeb(~(*((u8 *) (context->mac_reg + i))), ptr + i + 4);
+ /* Set */
+ writeb(*((u8 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_MAR; i < MAC_REG_IMR; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_RDBASE_LO; i < MAC_REG_FIFO_TEST0; i += 4) {
+ writel(*((u32 *) (context->mac_reg + i)), ptr + i);
+ }
+
+ for (i = MAC_REG_TDCSR_SET; i <= MAC_REG_RDCSR_SET; i++) {
+ writeb(*((u8 *) (context->mac_reg + i)), ptr + i);
+ }
+
+}
+
+/**
+ * wol_calc_crc - WOL CRC
+ * @pattern: data pattern
+ * @mask_pattern: mask
+ *
+ * Compute the wake on lan crc hashes for the packet header
+ * we are interested in.
+ */
+
+u16 wol_calc_crc(int size, u8 * pattern, u8 *mask_pattern)
+{
+ u16 crc = 0xFFFF;
+ u8 mask;
+ int i, j;
+
+ for (i = 0; i < size; i++) {
+ mask = mask_pattern[i];
+
+ /* Skip this pass if the mask byte is zero */
+ if (mask == 0x00)
+ continue;
+
+ for (j = 0; j < 8; j++) {
+ if ((mask & 0x01) == 0) {
+ mask >>= 1;
+ continue;
+ }
+ mask >>= 1;
+ crc = crc_ccitt(crc, &(pattern[i * 8 + j]), 1);
+ }
+ }
+ /* Finally, invert the result once to get the correct data */
+ crc = ~crc;
+ return bitreverse(crc) >> 16;
+}
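+
+/*
+ * Worked example (illustrative): with size = 1 and a mask byte of
+ * 0x0F, only pattern bytes 0..3 of the first 8-byte chunk are fed
+ * through crc_ccitt() and bytes 4..7 are skipped. velocity_set_wol()
+ * below relies on this to hash only the interesting header fields of
+ * the wake-up packet.
+ */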
+
+/**
+ * velocity_set_wol - set up for wake on lan
+ * @vptr: velocity to set WOL status on
+ *
+ * Set a card up for wake on lan either by unicast or by
+ * ARP packet.
+ *
+ * FIXME: check static buffer is safe here
+ */
+
+static int velocity_set_wol(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+ static u8 buf[256];
+ int i;
+
+ static u32 mask_pattern[2][4] = {
+ {0x00203000, 0x000003C0, 0x00000000, 0x0000000}, /* ARP */
+ {0xfffff000, 0xffffffff, 0xffffffff, 0x000ffff} /* Magic Packet */
+ };
+
+ writew(0xFFFF, &regs->WOLCRClr);
+ writeb(WOLCFG_SAB | WOLCFG_SAM, &regs->WOLCFGSet);
+ writew(WOLCR_MAGIC_EN, &regs->WOLCRSet);
+
+ /*
+ if (vptr->wol_opts & VELOCITY_WOL_PHY)
+ writew((WOLCR_LINKON_EN|WOLCR_LINKOFF_EN), &regs->WOLCRSet);
+ */
+
+ if (vptr->wol_opts & VELOCITY_WOL_UCAST) {
+ writew(WOLCR_UNICAST_EN, &regs->WOLCRSet);
+ }
+
+ if (vptr->wol_opts & VELOCITY_WOL_ARP) {
+ struct arp_packet *arp = (struct arp_packet *) buf;
+ u16 crc;
+ memset(buf, 0, sizeof(struct arp_packet) + 7);
+
+ for (i = 0; i < 4; i++)
+ writel(mask_pattern[0][i], &regs->ByteMask[0][i]);
+
+ arp->type = htons(ETH_P_ARP);
+ arp->ar_op = htons(1);
+
+ memcpy(arp->ar_tip, vptr->ip_addr, 4);
+
+ crc = wol_calc_crc((sizeof(struct arp_packet) + 7) / 8, buf,
+ (u8 *) & mask_pattern[0][0]);
+
+ writew(crc, &regs->PatternCRC[0]);
+ writew(WOLCR_ARP_EN, &regs->WOLCRSet);
+ }
+
+ BYTE_REG_BITS_ON(PWCFG_WOLTYPE, &regs->PWCFGSet);
+ BYTE_REG_BITS_ON(PWCFG_LEGACY_WOLEN, &regs->PWCFGSet);
+
+ writew(0x0FFF, &regs->WOLSRClr);
+
+ if (vptr->mii_status & VELOCITY_AUTONEG_ENABLE) {
+ if (PHYID_GET_PHY_ID(vptr->phy_id) == PHYID_CICADA_CS8201)
+ MII_REG_BITS_ON(AUXCR_MDPPS, MII_REG_AUXCR, vptr->mac_regs);
+
+ MII_REG_BITS_OFF(G1000CR_1000FD | G1000CR_1000, MII_REG_G1000CR, vptr->mac_regs);
+ }
+
+ if (vptr->mii_status & VELOCITY_SPEED_1000)
+ MII_REG_BITS_ON(BMCR_REAUTO, MII_REG_BMCR, vptr->mac_regs);
+
+ BYTE_REG_BITS_ON(CHIPGCR_FCMODE, &regs->CHIPGCR);
+
+ {
+ u8 GCR;
+ GCR = readb(&regs->CHIPGCR);
+ GCR = (GCR & ~CHIPGCR_FCGMII) | CHIPGCR_FCFDX;
+ writeb(GCR, &regs->CHIPGCR);
+ }
+
+ BYTE_REG_BITS_OFF(ISR_PWEI, &regs->ISR);
+ /* Turn on SWPTAG just before entering power mode */
+ BYTE_REG_BITS_ON(STICKHW_SWPTAG, &regs->STICKHW);
+ /* Go to bed ..... */
+ BYTE_REG_BITS_ON((STICKHW_DS1 | STICKHW_DS0), &regs->STICKHW);
+
+ return 0;
+}
+
+static int velocity_suspend(struct pci_dev *pdev, u32 state)
+{
+ struct velocity_info *vptr = pci_get_drvdata(pdev);
+ unsigned long flags;
+
+ if (!netif_running(vptr->dev))
+ return 0;
+
+ netif_device_detach(vptr->dev);
+
+ spin_lock_irqsave(&vptr->lock, flags);
+ pci_save_state(pdev);
+#ifdef ETHTOOL_GWOL
+ if (vptr->flags & VELOCITY_FLAGS_WOL_ENABLED) {
+ velocity_get_ip(vptr);
+ velocity_save_context(vptr, &vptr->context);
+ velocity_shutdown(vptr);
+ velocity_set_wol(vptr);
+ pci_enable_wake(pdev, 3, 1);
+ pci_set_power_state(pdev, 3);
+ } else {
+ velocity_save_context(vptr, &vptr->context);
+ velocity_shutdown(vptr);
+ pci_disable_device(pdev);
+ pci_set_power_state(pdev, state);
+ }
+#else
+ pci_set_power_state(pdev, state);
+#endif
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ return 0;
+}
+
+static int velocity_resume(struct pci_dev *pdev)
+{
+ struct velocity_info *vptr = pci_get_drvdata(pdev);
+ unsigned long flags;
+ int i;
+
+ if (!netif_running(vptr->dev))
+ return 0;
+
+ pci_set_power_state(pdev, 0);
+ pci_enable_wake(pdev, 0, 0);
+ pci_restore_state(pdev);
+
+ mac_wol_reset(vptr->mac_regs);
+
+ spin_lock_irqsave(&vptr->lock, flags);
+ velocity_restore_context(vptr, &vptr->context);
+ velocity_init_registers(vptr, VELOCITY_INIT_WOL);
+ mac_disable_int(vptr->mac_regs);
+
+ velocity_tx_srv(vptr, 0);
+
+ for (i = 0; i < vptr->num_txq; i++) {
+ if (vptr->td_used[i]) {
+ mac_tx_queue_wake(vptr->mac_regs, i);
+ }
+ }
+
+ mac_enable_int(vptr->mac_regs);
+ spin_unlock_irqrestore(&vptr->lock, flags);
+ netif_device_attach(vptr->dev);
+
+ return 0;
+}
+
+static int velocity_netdev_event(struct notifier_block *nb, unsigned long notification, void *ptr)
+{
+ struct in_ifaddr *ifa = (struct in_ifaddr *) ptr;
+
+ if (ifa) {
+ struct net_device *dev = ifa->ifa_dev->dev;
+ struct velocity_info *vptr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&velocity_dev_list_lock, flags);
+ list_for_each_entry(vptr, &velocity_dev_list, list) {
+ if (vptr->dev == dev) {
+ velocity_get_ip(vptr);
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&velocity_dev_list_lock, flags);
+ }
+ return NOTIFY_DONE;
+}
+#endif
--- /dev/null
+/*
+ * Copyright (c) 1996, 2003 VIA Networking Technologies, Inc.
+ * All rights reserved.
+ *
+ * This software may be redistributed and/or modified under
+ * the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+ * or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ *
+ * File: via-velocity.h
+ *
+ * Purpose: Header file to define driver's private structures.
+ *
+ * Author: Chuang Liang-Shing, AJ Jiang
+ *
+ * Date: Jan 24, 2003
+ */
+
+
+#ifndef VELOCITY_H
+#define VELOCITY_H
+
+#define VELOCITY_TX_CSUM_SUPPORT
+
+#define VELOCITY_NAME "via-velocity"
+#define VELOCITY_FULL_DRV_NAM "VIA Networking Velocity Family Gigabit Ethernet Adapter Driver"
+#define VELOCITY_VERSION "1.13"
+
+#define PKT_BUF_SZ 1540
+
+#define MAX_UNITS 8
+#define OPTION_DEFAULT { [0 ... MAX_UNITS-1] = -1}
+
+#define REV_ID_VT6110 (0)
+
+#define BYTE_REG_BITS_ON(x,p) do { writeb(readb((p))|(x),(p));} while (0)
+#define WORD_REG_BITS_ON(x,p) do { writew(readw((p))|(x),(p));} while (0)
+#define DWORD_REG_BITS_ON(x,p) do { writel(readl((p))|(x),(p));} while (0)
+
+#define BYTE_REG_BITS_IS_ON(x,p) (readb((p)) & (x))
+#define WORD_REG_BITS_IS_ON(x,p) (readw((p)) & (x))
+#define DWORD_REG_BITS_IS_ON(x,p) (readl((p)) & (x))
+
+#define BYTE_REG_BITS_OFF(x,p) do { writeb(readb((p)) & (~(x)),(p));} while (0)
+#define WORD_REG_BITS_OFF(x,p) do { writew(readw((p)) & (~(x)),(p));} while (0)
+#define DWORD_REG_BITS_OFF(x,p) do { writel(readl((p)) & (~(x)),(p));} while (0)
+
+#define BYTE_REG_BITS_SET(x,m,p) do { writeb( (readb((p)) & (~(m))) |(x),(p));} while (0)
+#define WORD_REG_BITS_SET(x,m,p) do { writew( (readw((p)) & (~(m))) |(x),(p));} while (0)
+#define DWORD_REG_BITS_SET(x,m,p) do { writel( (readl((p)) & (~(m)))|(x),(p));} while (0)
+
+#define VAR_USED(p) do {(p)=(p);} while (0)
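+
+/*
+ * Example (illustrative only): each helper above performs a
+ * read-modify-write of a memory mapped register, so
+ *
+ * BYTE_REG_BITS_ON(MIICR_MAUTO, &regs->MIICR);
+ * BYTE_REG_BITS_OFF(MIICR_MAUTO, &regs->MIICR);
+ *
+ * set and clear the MAUTO bit while preserving the other bits of
+ * MIICR, and BYTE_REG_BITS_SET(x,m,p) clears the bits in mask m
+ * before or-ing in x.
+ */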
+
+/*
+ * Purpose: Structures for the RX/TX descriptors.
+ */
+
+
+#define B_OWNED_BY_CHIP 1
+#define B_OWNED_BY_HOST 0
+
+/*
+ * Bits in the RSR0 register
+ */
+
+#define RSR_DETAG 0x0080
+#define RSR_SNTAG 0x0040
+#define RSR_RXER 0x0020
+#define RSR_RL 0x0010
+#define RSR_CE 0x0008
+#define RSR_FAE 0x0004
+#define RSR_CRC 0x0002
+#define RSR_VIDM 0x0001
+
+/*
+ * Bits in the RSR1 register
+ */
+
+#define RSR_RXOK 0x8000 // rx OK
+#define RSR_PFT 0x4000 // Perfect filtering address match
+#define RSR_MAR 0x2000 // MAC accept multicast address packet
+#define RSR_BAR 0x1000 // MAC accept broadcast address packet
+#define RSR_PHY 0x0800 // MAC accept physical address packet
+#define RSR_VTAG 0x0400 // 802.1p/1q tagging packet indicator
+#define RSR_STP 0x0200 // start of packet
+#define RSR_EDP 0x0100 // end of packet
+
+/*
+ * Bits in the RSR1 register (byte-wide access)
+ */
+
+#define RSR1_RXOK 0x80 // rx OK
+#define RSR1_PFT 0x40 // Perfect filtering address match
+#define RSR1_MAR 0x20 // MAC accept multicast address packet
+#define RSR1_BAR 0x10 // MAC accept broadcast address packet
+#define RSR1_PHY 0x08 // MAC accept physical address packet
+#define RSR1_VTAG 0x04 // 802.1p/1q tagging packet indicator
+#define RSR1_STP 0x02 // start of packet
+#define RSR1_EDP 0x01 // end of packet
+
+/*
+ * Bits in the CSM register
+ */
+
+#define CSM_IPOK 0x40 //IP checksum validation ok
+#define CSM_TUPOK 0x20 //TCP/UDP checksum validation ok
+#define CSM_FRAG 0x10 //Fragment IP datagram
+#define CSM_IPKT 0x04 //Received an IP packet
+#define CSM_TCPKT 0x02 //Received a TCP packet
+#define CSM_UDPKT 0x01 //Received a UDP packet
+
+/*
+ * Bits in the TSR0 register
+ */
+
+#define TSR0_ABT 0x0080 // Tx abort because of excessive collision
+#define TSR0_OWT 0x0040 // Jumbo frame Tx abort
+#define TSR0_OWC 0x0020 // Out of window collision
+#define TSR0_COLS 0x0010 // experience collision in this transmit event
+#define TSR0_NCR3 0x0008 // collision retry counter[3]
+#define TSR0_NCR2 0x0004 // collision retry counter[2]
+#define TSR0_NCR1 0x0002 // collision retry counter[1]
+#define TSR0_NCR0 0x0001 // collision retry counter[0]
+#define TSR0_TERR 0x8000 // transmit error
+#define TSR0_FDX 0x4000 // current transaction is serviced by full duplex mode
+#define TSR0_GMII 0x2000 // current transaction is serviced by GMII mode
+#define TSR0_LNKFL 0x1000 // packet serviced during link down
+#define TSR0_SHDN 0x0400 // shutdown case
+#define TSR0_CRS 0x0200 // carrier sense lost
+#define TSR0_CDH 0x0100 // AQE test fail (CD heartbeat)
+
+/*
+ * Bits in the TSR1 register
+ */
+
+#define TSR1_TERR 0x80 // transmit error
+#define TSR1_FDX 0x40 // current transaction is serviced by full duplex mode
+#define TSR1_GMII 0x20 // current transaction is serviced by GMII mode
+#define TSR1_LNKFL 0x10 // packet serviced during link down
+#define TSR1_SHDN 0x04 // shutdown case
+#define TSR1_CRS 0x02 // carrier sense lost
+#define TSR1_CDH 0x01 // AQE test fail (CD heartbeat)
+
+//
+// Bits in the TCR0 register
+//
+#define TCR0_TIC 0x80 // assert interrupt immediately once the descriptor has been sent
+#define TCR0_PIC 0x40 // priority interrupt request, INA# is issued over adaptive interrupt scheme
+#define TCR0_VETAG 0x20 // enable VLAN tag
+#define TCR0_IPCK 0x10 // request IP checksum calculation.
+#define TCR0_UDPCK 0x08 // request UDP checksum calculation.
+#define TCR0_TCPCK 0x04 // request TCP checksum calculation.
+#define TCR0_JMBO 0x02 // indicate a jumbo packet in GMAC side
+#define TCR0_CRC 0x01 // disable CRC generation
+
+#define TCPLS_NORMAL 3
+#define TCPLS_START 2
+#define TCPLS_END 1
+#define TCPLS_MED 0
+
+
+// max transmit or receive buffer size
+#define CB_RX_BUF_SIZE 2048UL // max buffer size
+ // NOTE: must be multiple of 4
+
+#define CB_MAX_RD_NUM 512 // MAX # of RD
+#define CB_MAX_TD_NUM 256 // MAX # of TD
+
+#define CB_INIT_RD_NUM_3119 128 // init # of RD, for setup VT3119
+#define CB_INIT_TD_NUM_3119 64 // init # of TD, for setup VT3119
+
+#define CB_INIT_RD_NUM 128 // init # of RD, for setup default
+#define CB_INIT_TD_NUM 64 // init # of TD, for setup default
+
+// for 3119
+#define CB_TD_RING_NUM 4 // # of TD rings.
+#define CB_MAX_SEG_PER_PKT 7 // max data seg per packet (Tx)
+
+
+/*
+ * If collisions exceed 15 times, tx will abort, and
+ * if the tx fifo underflows, tx will fail;
+ * we should try to resend the packet
+ */
+
+#define CB_MAX_TX_ABORT_RETRY 3
+
+/*
+ * Receive descriptor
+ */
+
+struct rdesc0 {
+ u16 RSR; /* Receive status */
+ u16 len:14; /* Received packet length */
+ u16 reserved:1;
+ u16 owner:1; /* Who owns this buffer ? */
+};
+
+struct rdesc1 {
+ u16 PQTAG;
+ u8 CSM;
+ u8 IPKT;
+};
+
+struct rx_desc {
+ struct rdesc0 rdesc0;
+ struct rdesc1 rdesc1;
+ u32 pa_low; /* Low 32 bit PCI address */
+ u16 pa_high; /* Next 16 bit PCI address (48 total) */
+ u16 len:15; /* Frame size */
+ u16 inten:1; /* Enable interrupt */
+} __attribute__ ((__packed__));
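+
+/*
+ * Ownership handoff sketch (illustrative, not driver code): after
+ * refilling a receive descriptor the host hands it to the NIC with
+ *
+ * rd->rdesc0.owner = B_OWNED_BY_CHIP;
+ *
+ * and the chip flips it back to B_OWNED_BY_HOST once the buffer
+ * holds a received frame.
+ */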
+
+/*
+ * Transmit descriptor
+ */
+
+struct tdesc0 {
+ u16 TSR; /* Transmit status register */
+ u16 pktsize:14; /* Size of frame */
+ u16 reserved:1;
+ u16 owner:1; /* Who owns the buffer */
+};
+
+struct pqinf { /* Priority queue info */
+ u16 VID:12;
+ u16 CFI:1;
+ u16 priority:3;
+} __attribute__ ((__packed__));
+
+struct tdesc1 {
+ struct pqinf pqinf;
+ u8 TCR;
+ u8 TCPLS:2;
+ u8 reserved:2;
+ u8 CMDZ:4;
+} __attribute__ ((__packed__));
+
+struct td_buf {
+ u32 pa_low;
+ u16 pa_high;
+ u16 bufsize:14;
+ u16 reserved:1;
+ u16 queue:1;
+} __attribute__ ((__packed__));
+
+struct tx_desc {
+ struct tdesc0 tdesc0;
+ struct tdesc1 tdesc1;
+ struct td_buf td_buf[7];
+};
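+
+/*
+ * Note: td_buf[7] matches CB_MAX_SEG_PER_PKT above, so a single
+ * tx_desc can describe up to seven data segments. The queue bit of
+ * td_buf[0] is what the transmit path sets to mark the descriptor
+ * as queued before waking the transmit queue.
+ */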
+
+struct velocity_rd_info {
+ struct sk_buff *skb;
+ dma_addr_t skb_dma;
+};
+
+/**
+ * alloc_rd_info - allocate an rd info block
+ *
+ * Allocate and initialize a receive info structure used for keeping
+ * track of kernel side information related to each receive
+ * descriptor we are using
+ */
+
+static inline struct velocity_rd_info *alloc_rd_info(void)
+{
+ struct velocity_rd_info *ptr;
+
+ ptr = kmalloc(sizeof(struct velocity_rd_info), GFP_ATOMIC);
+ if (ptr == NULL)
+ return NULL;
+ memset(ptr, 0, sizeof(struct velocity_rd_info));
+ return ptr;
+}
+
+/*
+ * Used to track transmit side buffers.
+ */
+
+struct velocity_td_info {
+ struct sk_buff *skb;
+ u8 *buf;
+ int nskb_dma;
+ dma_addr_t skb_dma[7];
+ dma_addr_t buf_dma;
+};
+
+enum velocity_owner {
+ OWNED_BY_HOST = 0,
+ OWNED_BY_NIC = 1
+};
+
+
+/*
+ * MAC registers and macros.
+ */
+
+
+#define MCAM_SIZE 64
+#define VCAM_SIZE 64
+#define TX_QUEUE_NO 4
+
+#define MAX_HW_MIB_COUNTER 32
+#define VELOCITY_MIN_MTU (1514-14)
+#define VELOCITY_MAX_MTU (9000)
+
+/*
+ * Registers in the MAC
+ */
+
+#define MAC_REG_PAR 0x00 // physical address
+#define MAC_REG_RCR 0x06
+#define MAC_REG_TCR 0x07
+#define MAC_REG_CR0_SET 0x08
+#define MAC_REG_CR1_SET 0x09
+#define MAC_REG_CR2_SET 0x0A
+#define MAC_REG_CR3_SET 0x0B
+#define MAC_REG_CR0_CLR 0x0C
+#define MAC_REG_CR1_CLR 0x0D
+#define MAC_REG_CR2_CLR 0x0E
+#define MAC_REG_CR3_CLR 0x0F
+#define MAC_REG_MAR 0x10
+#define MAC_REG_CAM 0x10
+#define MAC_REG_DEC_BASE_HI 0x18
+#define MAC_REG_DBF_BASE_HI 0x1C
+#define MAC_REG_ISR_CTL 0x20
+#define MAC_REG_ISR_HOTMR 0x20
+#define MAC_REG_ISR_TSUPTHR 0x20
+#define MAC_REG_ISR_RSUPTHR 0x20
+#define MAC_REG_ISR_CTL1 0x21
+#define MAC_REG_TXE_SR 0x22
+#define MAC_REG_RXE_SR 0x23
+#define MAC_REG_ISR 0x24
+#define MAC_REG_ISR0 0x24
+#define MAC_REG_ISR1 0x25
+#define MAC_REG_ISR2 0x26
+#define MAC_REG_ISR3 0x27
+#define MAC_REG_IMR 0x28
+#define MAC_REG_IMR0 0x28
+#define MAC_REG_IMR1 0x29
+#define MAC_REG_IMR2 0x2A
+#define MAC_REG_IMR3 0x2B
+#define MAC_REG_TDCSR_SET 0x30
+#define MAC_REG_RDCSR_SET 0x32
+#define MAC_REG_TDCSR_CLR 0x34
+#define MAC_REG_RDCSR_CLR 0x36
+#define MAC_REG_RDBASE_LO 0x38
+#define MAC_REG_RDINDX 0x3C
+#define MAC_REG_TDBASE_LO 0x40
+#define MAC_REG_RDCSIZE 0x50
+#define MAC_REG_TDCSIZE 0x52
+#define MAC_REG_TDINDX 0x54
+#define MAC_REG_TDIDX0 0x54
+#define MAC_REG_TDIDX1 0x56
+#define MAC_REG_TDIDX2 0x58
+#define MAC_REG_TDIDX3 0x5A
+#define MAC_REG_PAUSE_TIMER 0x5C
+#define MAC_REG_RBRDU 0x5E
+#define MAC_REG_FIFO_TEST0 0x60
+#define MAC_REG_FIFO_TEST1 0x64
+#define MAC_REG_CAMADDR 0x68
+#define MAC_REG_CAMCR 0x69
+#define MAC_REG_GFTEST 0x6A
+#define MAC_REG_FTSTCMD 0x6B
+#define MAC_REG_MIICFG 0x6C
+#define MAC_REG_MIISR 0x6D
+#define MAC_REG_PHYSR0 0x6E
+#define MAC_REG_PHYSR1 0x6F
+#define MAC_REG_MIICR 0x70
+#define MAC_REG_MIIADR 0x71
+#define MAC_REG_MIIDATA 0x72
+#define MAC_REG_SOFT_TIMER0 0x74
+#define MAC_REG_SOFT_TIMER1 0x76
+#define MAC_REG_CFGA 0x78
+#define MAC_REG_CFGB 0x79
+#define MAC_REG_CFGC 0x7A
+#define MAC_REG_CFGD 0x7B
+#define MAC_REG_DCFG0 0x7C
+#define MAC_REG_DCFG1 0x7D
+#define MAC_REG_MCFG0 0x7E
+#define MAC_REG_MCFG1 0x7F
+
+#define MAC_REG_TBIST 0x80
+#define MAC_REG_RBIST 0x81
+#define MAC_REG_PMCC 0x82
+#define MAC_REG_STICKHW 0x83
+#define MAC_REG_MIBCR 0x84
+#define MAC_REG_EERSV 0x85
+#define MAC_REG_REVID 0x86
+#define MAC_REG_MIBREAD 0x88
+#define MAC_REG_BPMA 0x8C
+#define MAC_REG_EEWR_DATA 0x8C
+#define MAC_REG_BPMD_WR 0x8F
+#define MAC_REG_BPCMD 0x90
+#define MAC_REG_BPMD_RD 0x91
+#define MAC_REG_EECHKSUM 0x92
+#define MAC_REG_EECSR 0x93
+#define MAC_REG_EERD_DATA 0x94
+#define MAC_REG_EADDR 0x96
+#define MAC_REG_EMBCMD 0x97
+#define MAC_REG_JMPSR0 0x98
+#define MAC_REG_JMPSR1 0x99
+#define MAC_REG_JMPSR2 0x9A
+#define MAC_REG_JMPSR3 0x9B
+#define MAC_REG_CHIPGSR 0x9C
+#define MAC_REG_TESTCFG 0x9D
+#define MAC_REG_DEBUG 0x9E
+#define MAC_REG_CHIPGCR 0x9F
+#define MAC_REG_WOLCR0_SET 0xA0
+#define MAC_REG_WOLCR1_SET 0xA1
+#define MAC_REG_PWCFG_SET 0xA2
+#define MAC_REG_WOLCFG_SET 0xA3
+#define MAC_REG_WOLCR0_CLR 0xA4
+#define MAC_REG_WOLCR1_CLR 0xA5
+#define MAC_REG_PWCFG_CLR 0xA6
+#define MAC_REG_WOLCFG_CLR 0xA7
+#define MAC_REG_WOLSR0_SET 0xA8
+#define MAC_REG_WOLSR1_SET 0xA9
+#define MAC_REG_WOLSR0_CLR 0xAC
+#define MAC_REG_WOLSR1_CLR 0xAD
+#define MAC_REG_PATRN_CRC0 0xB0
+#define MAC_REG_PATRN_CRC1 0xB2
+#define MAC_REG_PATRN_CRC2 0xB4
+#define MAC_REG_PATRN_CRC3 0xB6
+#define MAC_REG_PATRN_CRC4 0xB8
+#define MAC_REG_PATRN_CRC5 0xBA
+#define MAC_REG_PATRN_CRC6 0xBC
+#define MAC_REG_PATRN_CRC7 0xBE
+#define MAC_REG_BYTEMSK0_0 0xC0
+#define MAC_REG_BYTEMSK0_1 0xC4
+#define MAC_REG_BYTEMSK0_2 0xC8
+#define MAC_REG_BYTEMSK0_3 0xCC
+#define MAC_REG_BYTEMSK1_0 0xD0
+#define MAC_REG_BYTEMSK1_1 0xD4
+#define MAC_REG_BYTEMSK1_2 0xD8
+#define MAC_REG_BYTEMSK1_3 0xDC
+#define MAC_REG_BYTEMSK2_0 0xE0
+#define MAC_REG_BYTEMSK2_1 0xE4
+#define MAC_REG_BYTEMSK2_2 0xE8
+#define MAC_REG_BYTEMSK2_3 0xEC
+#define MAC_REG_BYTEMSK3_0 0xF0
+#define MAC_REG_BYTEMSK3_1 0xF4
+#define MAC_REG_BYTEMSK3_2 0xF8
+#define MAC_REG_BYTEMSK3_3 0xFC
+
+/*
+ * Bits in the RCR register
+ */
+
+#define RCR_AS 0x80
+#define RCR_AP 0x40
+#define RCR_AL 0x20
+#define RCR_PROM 0x10
+#define RCR_AB 0x08
+#define RCR_AM 0x04
+#define RCR_AR 0x02
+#define RCR_SEP 0x01
+
+/*
+ * Bits in the TCR register
+ */
+
+#define TCR_TB2BDIS 0x80
+#define TCR_COLTMC1 0x08
+#define TCR_COLTMC0 0x04
+#define TCR_LB1 0x02 /* loopback[1] */
+#define TCR_LB0 0x01 /* loopback[0] */
+
+/*
+ * Bits in the CR0 register
+ */
+
+#define CR0_TXON 0x00000008UL
+#define CR0_RXON 0x00000004UL
+#define CR0_STOP 0x00000002UL /* stop MAC, default = 1 */
+#define CR0_STRT 0x00000001UL /* start MAC */
+#define CR0_SFRST 0x00008000UL /* software reset */
+#define CR0_TM1EN 0x00004000UL
+#define CR0_TM0EN 0x00002000UL
+#define CR0_DPOLL 0x00000800UL /* disable rx/tx auto polling */
+#define CR0_DISAU 0x00000100UL
+#define CR0_XONEN 0x00800000UL
+#define CR0_FDXTFCEN 0x00400000UL /* full-duplex TX flow control enable */
+#define CR0_FDXRFCEN 0x00200000UL /* full-duplex RX flow control enable */
+#define CR0_HDXFCEN 0x00100000UL /* half-duplex flow control enable */
+#define CR0_XHITH1 0x00080000UL /* TX XON high threshold 1 */
+#define CR0_XHITH0 0x00040000UL /* TX XON high threshold 0 */
+#define CR0_XLTH1 0x00020000UL /* TX pause frame low threshold 1 */
+#define CR0_XLTH0 0x00010000UL /* TX pause frame low threshold 0 */
+#define CR0_GSPRST 0x80000000UL
+#define CR0_FORSRST 0x40000000UL
+#define CR0_FPHYRST 0x20000000UL
+#define CR0_DIAG 0x10000000UL
+#define CR0_INTPCTL 0x04000000UL
+#define CR0_GINTMSK1 0x02000000UL
+#define CR0_GINTMSK0 0x01000000UL
+
+/*
+ * Bits in the CR1 register
+ */
+
+#define CR1_SFRST 0x80 /* software reset */
+#define CR1_TM1EN 0x40
+#define CR1_TM0EN 0x20
+#define CR1_DPOLL 0x08 /* disable rx/tx auto polling */
+#define CR1_DISAU 0x01
+
+/*
+ * Bits in the CR2 register
+ */
+
+#define CR2_XONEN 0x80
+#define CR2_FDXTFCEN 0x40 /* full-duplex TX flow control enable */
+#define CR2_FDXRFCEN 0x20 /* full-duplex RX flow control enable */
+#define CR2_HDXFCEN 0x10 /* half-duplex flow control enable */
+#define CR2_XHITH1 0x08 /* TX XON high threshold 1 */
+#define CR2_XHITH0 0x04 /* TX XON high threshold 0 */
+#define CR2_XLTH1 0x02 /* TX pause frame low threshold 1 */
+#define CR2_XLTH0 0x01 /* TX pause frame low threshold 0 */
+
+/*
+ * Bits in the CR3 register
+ */
+
+#define CR3_GSPRST 0x80
+#define CR3_FORSRST 0x40
+#define CR3_FPHYRST 0x20
+#define CR3_DIAG 0x10
+#define CR3_INTPCTL 0x04
+#define CR3_GINTMSK1 0x02
+#define CR3_GINTMSK0 0x01
+
+#define ISRCTL_UDPINT 0x8000
+#define ISRCTL_TSUPDIS 0x4000
+#define ISRCTL_RSUPDIS 0x2000
+#define ISRCTL_PMSK1 0x1000
+#define ISRCTL_PMSK0 0x0800
+#define ISRCTL_INTPD 0x0400
+#define ISRCTL_HCRLD 0x0200
+#define ISRCTL_SCRLD 0x0100
+
+/*
+ * Bits in the ISR_CTL1 register
+ */
+
+#define ISRCTL1_UDPINT 0x80
+#define ISRCTL1_TSUPDIS 0x40
+#define ISRCTL1_RSUPDIS 0x20
+#define ISRCTL1_PMSK1 0x10
+#define ISRCTL1_PMSK0 0x08
+#define ISRCTL1_INTPD 0x04
+#define ISRCTL1_HCRLD 0x02
+#define ISRCTL1_SCRLD 0x01
+
+/*
+ * Bits in the TXE_SR register
+ */
+
+#define TXESR_TFDBS 0x08
+#define TXESR_TDWBS 0x04
+#define TXESR_TDRBS 0x02
+#define TXESR_TDSTR 0x01
+
+/*
+ * Bits in the RXE_SR register
+ */
+
+#define RXESR_RFDBS 0x08
+#define RXESR_RDWBS 0x04
+#define RXESR_RDRBS 0x02
+#define RXESR_RDSTR 0x01
+
+/*
+ * Bits in the ISR register
+ */
+
+#define ISR_ISR3 0x80000000UL
+#define ISR_ISR2 0x40000000UL
+#define ISR_ISR1 0x20000000UL
+#define ISR_ISR0 0x10000000UL
+#define ISR_TXSTLI 0x02000000UL
+#define ISR_RXSTLI 0x01000000UL
+#define ISR_HFLD 0x00800000UL
+#define ISR_UDPI 0x00400000UL
+#define ISR_MIBFI 0x00200000UL
+#define ISR_SHDNI 0x00100000UL
+#define ISR_PHYI 0x00080000UL
+#define ISR_PWEI 0x00040000UL
+#define ISR_TMR1I 0x00020000UL
+#define ISR_TMR0I 0x00010000UL
+#define ISR_SRCI 0x00008000UL
+#define ISR_LSTPEI 0x00004000UL
+#define ISR_LSTEI 0x00002000UL
+#define ISR_OVFI 0x00001000UL
+#define ISR_FLONI 0x00000800UL
+#define ISR_RACEI 0x00000400UL
+#define ISR_TXWB1I 0x00000200UL
+#define ISR_TXWB0I 0x00000100UL
+#define ISR_PTX3I 0x00000080UL
+#define ISR_PTX2I 0x00000040UL
+#define ISR_PTX1I 0x00000020UL
+#define ISR_PTX0I 0x00000010UL
+#define ISR_PTXI 0x00000008UL
+#define ISR_PRXI 0x00000004UL
+#define ISR_PPTXI 0x00000002UL
+#define ISR_PPRXI 0x00000001UL
+
+/*
+ * Bits in the IMR register
+ */
+
+#define IMR_TXSTLM 0x02000000UL
+#define IMR_UDPIM 0x00400000UL
+#define IMR_MIBFIM 0x00200000UL
+#define IMR_SHDNIM 0x00100000UL
+#define IMR_PHYIM 0x00080000UL
+#define IMR_PWEIM 0x00040000UL
+#define IMR_TMR1IM 0x00020000UL
+#define IMR_TMR0IM 0x00010000UL
+
+#define IMR_SRCIM 0x00008000UL
+#define IMR_LSTPEIM 0x00004000UL
+#define IMR_LSTEIM 0x00002000UL
+#define IMR_OVFIM 0x00001000UL
+#define IMR_FLONIM 0x00000800UL
+#define IMR_RACEIM 0x00000400UL
+#define IMR_TXWB1IM 0x00000200UL
+#define IMR_TXWB0IM 0x00000100UL
+
+#define IMR_PTX3IM 0x00000080UL
+#define IMR_PTX2IM 0x00000040UL
+#define IMR_PTX1IM 0x00000020UL
+#define IMR_PTX0IM 0x00000010UL
+#define IMR_PTXIM 0x00000008UL
+#define IMR_PRXIM 0x00000004UL
+#define IMR_PPTXIM 0x00000002UL
+#define IMR_PPRXIM 0x00000001UL
+
+/* 0x0013FB0FUL = initial value of IMR */
+
+#define INT_MASK_DEF (IMR_PPTXIM|IMR_PPRXIM|IMR_PTXIM|IMR_PRXIM|\
+ IMR_PWEIM|IMR_TXWB0IM|IMR_TXWB1IM|IMR_FLONIM|\
+ IMR_OVFIM|IMR_LSTEIM|IMR_LSTPEIM|IMR_SRCIM|IMR_MIBFIM|\
+ IMR_SHDNIM|IMR_TMR1IM|IMR_TMR0IM|IMR_TXSTLM)
+
+/*
+ * Bits in the TDCSR0/1, RDCSR0 register
+ */
+
+#define TRDCSR_DEAD 0x0008
+#define TRDCSR_WAK 0x0004
+#define TRDCSR_ACT 0x0002
+#define TRDCSR_RUN 0x0001
+
+/*
+ * Bits in the CAMADDR register
+ */
+
+#define CAMADDR_CAMEN 0x80
+#define CAMADDR_VCAMSL 0x40
+
+/*
+ * Bits in the CAMCR register
+ */
+
+#define CAMCR_PS1 0x80
+#define CAMCR_PS0 0x40
+#define CAMCR_AITRPKT 0x20
+#define CAMCR_AITR16 0x10
+#define CAMCR_CAMRD 0x08
+#define CAMCR_CAMWR 0x04
+#define CAMCR_PS_CAM_MASK 0x40
+#define CAMCR_PS_CAM_DATA 0x80
+#define CAMCR_PS_MAR 0x00
+
+/*
+ * Bits in the MIICFG register
+ */
+
+#define MIICFG_MPO1 0x80
+#define MIICFG_MPO0 0x40
+#define MIICFG_MFDC 0x20
+
+/*
+ * Bits in the MIISR register
+ */
+
+#define MIISR_MIDLE 0x80
+
+/*
+ * Bits in the PHYSR0 register
+ */
+
+#define PHYSR0_PHYRST 0x80
+#define PHYSR0_LINKGD 0x40
+#define PHYSR0_FDPX 0x10
+#define PHYSR0_SPDG 0x08
+#define PHYSR0_SPD10 0x04
+#define PHYSR0_RXFLC 0x02
+#define PHYSR0_TXFLC 0x01
+
+/*
+ * Bits in the PHYSR1 register
+ */
+
+#define PHYSR1_PHYTBI 0x01
+
+/*
+ * Bits in the MIICR register
+ */
+
+#define MIICR_MAUTO 0x80
+#define MIICR_RCMD 0x40
+#define MIICR_WCMD 0x20
+#define MIICR_MDPM 0x10
+#define MIICR_MOUT 0x08
+#define MIICR_MDO 0x04
+#define MIICR_MDI 0x02
+#define MIICR_MDC 0x01
+
+/*
+ * Bits in the MIIADR register
+ */
+
+#define MIIADR_SWMPL 0x80
+
+/*
+ * Bits in the CFGA register
+ */
+
+#define CFGA_PMHCTG 0x08
+#define CFGA_GPIO1PD 0x04
+#define CFGA_ABSHDN 0x02
+#define CFGA_PACPI 0x01
+
+/*
+ * Bits in the CFGB register
+ */
+
+#define CFGB_GTCKOPT 0x80
+#define CFGB_MIIOPT 0x40
+#define CFGB_CRSEOPT 0x20
+#define CFGB_OFSET 0x10
+#define CFGB_CRANDOM 0x08
+#define CFGB_CAP 0x04
+#define CFGB_MBA 0x02
+#define CFGB_BAKOPT 0x01
+
+/*
+ * Bits in the CFGC register
+ */
+
+#define CFGC_EELOAD 0x80
+#define CFGC_BROPT 0x40
+#define CFGC_DLYEN 0x20
+#define CFGC_DTSEL 0x10
+#define CFGC_BTSEL 0x08
+#define CFGC_BPS2 0x04 /* bootrom select[2] */
+#define CFGC_BPS1 0x02 /* bootrom select[1] */
+#define CFGC_BPS0 0x01 /* bootrom select[0] */
+
+/*
+ * Bits in the CFGD register
+ */
+
+#define CFGD_IODIS 0x80
+#define CFGD_MSLVDACEN 0x40
+#define CFGD_CFGDACEN 0x20
+#define CFGD_PCI64EN 0x10
+#define CFGD_HTMRL4 0x08
+
+/*
+ * Bits in the DCFG1 register
+ */
+
+#define DCFG_XMWI 0x8000
+#define DCFG_XMRM 0x4000
+#define DCFG_XMRL 0x2000
+#define DCFG_PERDIS 0x1000
+#define DCFG_MRWAIT 0x0400
+#define DCFG_MWWAIT 0x0200
+#define DCFG_LATMEN 0x0100
+
+/*
+ * Bits in the MCFG0 register
+ */
+
+#define MCFG_RXARB 0x0080
+#define MCFG_RFT1 0x0020
+#define MCFG_RFT0 0x0010
+#define MCFG_LOWTHOPT 0x0008
+#define MCFG_PQEN 0x0004
+#define MCFG_RTGOPT 0x0002
+#define MCFG_VIDFR 0x0001
+
+/*
+ * Bits in the MCFG1 register
+ */
+
+#define MCFG_TXARB 0x8000
+#define MCFG_TXQBK1 0x0800
+#define MCFG_TXQBK0 0x0400
+#define MCFG_TXQNOBK 0x0200
+#define MCFG_SNAPOPT 0x0100
+
+/*
+ * Bits in the PMCC register
+ */
+
+#define PMCC_DSI 0x80
+#define PMCC_D2_DIS 0x40
+#define PMCC_D1_DIS 0x20
+#define PMCC_D3C_EN 0x10
+#define PMCC_D3H_EN 0x08
+#define PMCC_D2_EN 0x04
+#define PMCC_D1_EN 0x02
+#define PMCC_D0_EN 0x01
+
+/*
+ * Bits in STICKHW
+ */
+
+#define STICKHW_SWPTAG 0x10
+#define STICKHW_WOLSR 0x08
+#define STICKHW_WOLEN 0x04
+#define STICKHW_DS1 0x02 /* R/W by software/cfg cycle */
+#define STICKHW_DS0 0x01 /* suspend well DS write port */
+
+/*
+ * Bits in the MIBCR register
+ */
+
+#define MIBCR_MIBISTOK 0x80
+#define MIBCR_MIBISTGO 0x40
+#define MIBCR_MIBINC 0x20
+#define MIBCR_MIBHI 0x10
+#define MIBCR_MIBFRZ 0x08
+#define MIBCR_MIBFLSH 0x04
+#define MIBCR_MPTRINI 0x02
+#define MIBCR_MIBCLR 0x01
+
+/*
+ * Bits in the EERSV register
+ */
+
+#define EERSV_BOOT_RPL ((u8) 0x01) /* Boot method selection for VT6110 */
+
+#define EERSV_BOOT_MASK ((u8) 0x06)
+#define EERSV_BOOT_INT19 ((u8) 0x00)
+#define EERSV_BOOT_INT18 ((u8) 0x02)
+#define EERSV_BOOT_LOCAL ((u8) 0x04)
+#define EERSV_BOOT_BEV ((u8) 0x06)
+
+
+/*
+ * Bits in BPCMD
+ */
+
+#define BPCMD_BPDNE 0x80
+#define BPCMD_EBPWR 0x02
+#define BPCMD_EBPRD 0x01
+
+/*
+ * Bits in the EECSR register
+ */
+
+#define EECSR_EMBP 0x40 /* eeprom embedded programming */
+#define EECSR_RELOAD 0x20 /* eeprom content reload */
+#define EECSR_DPM 0x10 /* eeprom direct programming */
+#define EECSR_ECS 0x08 /* eeprom CS pin */
+#define EECSR_ECK 0x04 /* eeprom CK pin */
+#define EECSR_EDI 0x02 /* eeprom DI pin */
+#define EECSR_EDO 0x01 /* eeprom DO pin */
+
+/*
+ * Bits in the EMBCMD register
+ */
+
+#define EMBCMD_EDONE 0x80
+#define EMBCMD_EWDIS 0x08
+#define EMBCMD_EWEN 0x04
+#define EMBCMD_EWR 0x02
+#define EMBCMD_ERD 0x01
+
+/*
+ * Bits in TESTCFG register
+ */
+
+#define TESTCFG_HBDIS 0x80
+
+/*
+ * Bits in CHIPGCR register
+ */
+
+#define CHIPGCR_FCGMII 0x80
+#define CHIPGCR_FCFDX 0x40
+#define CHIPGCR_FCRESV 0x20
+#define CHIPGCR_FCMODE 0x10
+#define CHIPGCR_LPSOPT 0x08
+#define CHIPGCR_TM1US 0x04
+#define CHIPGCR_TM0US 0x02
+#define CHIPGCR_PHYINTEN 0x01
+
+/*
+ * Bits in WOLCR0
+ */
+
+#define WOLCR_MSWOLEN7 0x0080 /* enable pattern match filtering */
+#define WOLCR_MSWOLEN6 0x0040
+#define WOLCR_MSWOLEN5 0x0020
+#define WOLCR_MSWOLEN4 0x0010
+#define WOLCR_MSWOLEN3 0x0008
+#define WOLCR_MSWOLEN2 0x0004
+#define WOLCR_MSWOLEN1 0x0002
+#define WOLCR_MSWOLEN0 0x0001
+#define WOLCR_ARP_EN 0x0001
+
+/*
+ * Bits in WOLCR1
+ */
+
+#define WOLCR_LINKOFF_EN 0x0800 /* link off detected enable */
+#define WOLCR_LINKON_EN 0x0400 /* link on detected enable */
+#define WOLCR_MAGIC_EN 0x0200 /* magic packet filter enable */
+#define WOLCR_UNICAST_EN 0x0100 /* unicast filter enable */
+
+
+/*
+ * Bits in PWCFG
+ */
+
+#define PWCFG_PHYPWOPT 0x80 /* internal MII I/F timing */
+#define PWCFG_PCISTICK 0x40 /* PCI sticky R/W enable */
+#define PWCFG_WOLTYPE 0x20 /* pulse(1) or button (0) */
+#define PWCFG_LEGCY_WOL 0x10
+#define PWCFG_PMCSR_PME_SR 0x08
+#define PWCFG_PMCSR_PME_EN 0x04 /* control by PCISTICK */
+#define PWCFG_LEGACY_WOLSR 0x02 /* Legacy WOL_SR shadow */
+#define PWCFG_LEGACY_WOLEN 0x01 /* Legacy WOL_EN shadow */
+
+/*
+ * Bits in WOLCFG
+ */
+
+#define WOLCFG_PMEOVR 0x80 /* for legacy use, force PMEEN always */
+#define WOLCFG_SAM 0x20 /* accept multicast case reset, default=0 */
+#define WOLCFG_SAB 0x10 /* accept broadcast case reset, default=0 */
+#define WOLCFG_SMIIACC 0x08 /* ?? */
+#define WOLCFG_SGENWH 0x02
+#define WOLCFG_PHYINTEN 0x01 /* 0:PHYINT trigger enable, 1:use internal MII
+ to report status change */
+/*
+ * Bits in WOLSR1
+ */
+
+#define WOLSR_LINKOFF_INT 0x0800
+#define WOLSR_LINKON_INT 0x0400
+#define WOLSR_MAGIC_INT 0x0200
+#define WOLSR_UNICAST_INT 0x0100
+
+/*
+ * Ethernet address filter type
+ */
+
+#define PKT_TYPE_NONE 0x0000 /* Turn off receiver */
+#define PKT_TYPE_DIRECTED 0x0001 /* obsolete, directed address is always accepted */
+#define PKT_TYPE_MULTICAST 0x0002
+#define PKT_TYPE_ALL_MULTICAST 0x0004
+#define PKT_TYPE_BROADCAST 0x0008
+#define PKT_TYPE_PROMISCUOUS 0x0020
+#define PKT_TYPE_LONG 0x2000 /* NOTE: the chip defines LONG as >2048 bytes */
+#define PKT_TYPE_RUNT 0x4000
+#define PKT_TYPE_ERROR 0x8000 /* Accept error packets, e.g. CRC error */
+
+/*
+ * Loopback mode
+ */
+
+#define MAC_LB_NONE 0x00
+#define MAC_LB_INTERNAL 0x01
+#define MAC_LB_EXTERNAL 0x02
+
+/*
+ * Enabled mask value of irq
+ */
+
+#if defined(_SIM)
+#define IMR_MASK_VALUE 0x0033FF0FUL /* initial value of IMR
+ set IMR0 to 0x0F according to spec */
+
+#else
+#define IMR_MASK_VALUE 0x0013FB0FUL /* initial value of IMR
+ ignore MIBFI,RACEI to
+ reduce intr. frequency
+ NOTE.... do not enable NoBuf int mask in the driver
+ when (1) NoBuf -> RxThreshold = SF
+ (2) OK -> RxThreshold = original value
+ */
+#endif
+
+/*
+ * Revision id
+ */
+
+#define REV_ID_VT3119_A0 0x00
+#define REV_ID_VT3119_A1 0x01
+#define REV_ID_VT3216_A0 0x10
+
+/*
+ * Max time out delay time
+ */
+
+#define W_MAX_TIMEOUT 0x0FFFU
+
+
+/*
+ * MAC registers as a structure. Cannot be directly accessed this
+ * way but generates offsets for readl/writel() calls
+ */
+
+struct mac_regs {
+ volatile u8 PAR[6]; /* 0x00 */
+ volatile u8 RCR;
+ volatile u8 TCR;
+
+ volatile u32 CR0Set; /* 0x08 */
+ volatile u32 CR0Clr; /* 0x0C */
+
+ volatile u8 MARCAM[8]; /* 0x10 */
+
+ volatile u32 DecBaseHi; /* 0x18 */
+ volatile u16 DbfBaseHi; /* 0x1C */
+ volatile u16 reserved_1E;
+
+ volatile u16 ISRCTL; /* 0x20 */
+ volatile u8 TXESR;
+ volatile u8 RXESR;
+
+ volatile u32 ISR; /* 0x24 */
+ volatile u32 IMR;
+
+ volatile u32 TDStatusPort; /* 0x2C */
+
+ volatile u16 TDCSRSet; /* 0x30 */
+ volatile u8 RDCSRSet;
+ volatile u8 reserved_33;
+ volatile u16 TDCSRClr;
+ volatile u8 RDCSRClr;
+ volatile u8 reserved_37;
+
+ volatile u32 RDBaseLo; /* 0x38 */
+ volatile u16 RDIdx; /* 0x3C */
+ volatile u16 reserved_3E;
+
+ volatile u32 TDBaseLo[4]; /* 0x40 */
+
+ volatile u16 RDCSize; /* 0x50 */
+ volatile u16 TDCSize; /* 0x52 */
+ volatile u16 TDIdx[4]; /* 0x54 */
+ volatile u16 tx_pause_timer; /* 0x5C */
+ volatile u16 RBRDU; /* 0x5E */
+
+ volatile u32 FIFOTest0; /* 0x60 */
+ volatile u32 FIFOTest1; /* 0x64 */
+
+ volatile u8 CAMADDR; /* 0x68 */
+ volatile u8 CAMCR; /* 0x69 */
+ volatile u8 GFTEST; /* 0x6A */
+ volatile u8 FTSTCMD; /* 0x6B */
+
+ volatile u8 MIICFG; /* 0x6C */
+ volatile u8 MIISR;
+ volatile u8 PHYSR0;
+ volatile u8 PHYSR1;
+ volatile u8 MIICR;
+ volatile u8 MIIADR;
+ volatile u16 MIIDATA;
+
+ volatile u16 SoftTimer0; /* 0x74 */
+ volatile u16 SoftTimer1;
+
+ volatile u8 CFGA; /* 0x78 */
+ volatile u8 CFGB;
+ volatile u8 CFGC;
+ volatile u8 CFGD;
+
+ volatile u16 DCFG; /* 0x7C */
+ volatile u16 MCFG;
+
+ volatile u8 TBIST; /* 0x80 */
+ volatile u8 RBIST;
+ volatile u8 PMCPORT;
+ volatile u8 STICKHW;
+
+ volatile u8 MIBCR; /* 0x84 */
+ volatile u8 reserved_85;
+ volatile u8 rev_id;
+ volatile u8 PORSTS;
+
+ volatile u32 MIBData; /* 0x88 */
+
+ volatile u16 EEWrData; /* 0x8C */
+
+ volatile u8 reserved_8E;
+ volatile u8 BPMDWr;
+ volatile u8 BPCMD;
+ volatile u8 BPMDRd;
+
+ volatile u8 EECHKSUM; /* 0x92 */
+ volatile u8 EECSR;
+
+ volatile u16 EERdData; /* 0x94 */
+ volatile u8 EADDR;
+ volatile u8 EMBCMD;
+
+
+ volatile u8 JMPSR0; /* 0x98 */
+ volatile u8 JMPSR1;
+ volatile u8 JMPSR2;
+ volatile u8 JMPSR3;
+ volatile u8 CHIPGSR; /* 0x9C */
+ volatile u8 TESTCFG;
+ volatile u8 DEBUG;
+ volatile u8 CHIPGCR;
+
+ volatile u16 WOLCRSet; /* 0xA0 */
+ volatile u8 PWCFGSet;
+ volatile u8 WOLCFGSet;
+
+ volatile u16 WOLCRClr; /* 0xA4 */
+ volatile u8 PWCFGCLR;
+ volatile u8 WOLCFGClr;
+
+ volatile u16 WOLSRSet; /* 0xA8 */
+ volatile u16 reserved_AA;
+
+ volatile u16 WOLSRClr; /* 0xAC */
+ volatile u16 reserved_AE;
+
+ volatile u16 PatternCRC[8]; /* 0xB0 */
+ volatile u32 ByteMask[4][4]; /* 0xC0 */
+} __attribute__ ((__packed__));
+
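+/*
+ * Usage sketch (illustrative, not part of the driver): members of the
+ * structure above are used only for their addresses, as offsets into
+ * the ioremap()ed register window:
+ *
+ *   struct mac_regs __iomem *regs = vptr->mac_regs;
+ *   u32 isr = readl(&regs->ISR);   // read interrupt status
+ *   writel(isr, &regs->ISR);       // ack it (see mac_write_isr below)
+ */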
+
+enum hw_mib {
+ HW_MIB_ifRxAllPkts = 0,
+ HW_MIB_ifRxOkPkts,
+ HW_MIB_ifTxOkPkts,
+ HW_MIB_ifRxErrorPkts,
+ HW_MIB_ifRxRuntOkPkt,
+ HW_MIB_ifRxRuntErrPkt,
+ HW_MIB_ifRx64Pkts,
+ HW_MIB_ifTx64Pkts,
+ HW_MIB_ifRx65To127Pkts,
+ HW_MIB_ifTx65To127Pkts,
+ HW_MIB_ifRx128To255Pkts,
+ HW_MIB_ifTx128To255Pkts,
+ HW_MIB_ifRx256To511Pkts,
+ HW_MIB_ifTx256To511Pkts,
+ HW_MIB_ifRx512To1023Pkts,
+ HW_MIB_ifTx512To1023Pkts,
+ HW_MIB_ifRx1024To1518Pkts,
+ HW_MIB_ifTx1024To1518Pkts,
+ HW_MIB_ifTxEtherCollisions,
+ HW_MIB_ifRxPktCRCE,
+ HW_MIB_ifRxJumboPkts,
+ HW_MIB_ifTxJumboPkts,
+ HW_MIB_ifRxMacControlFrames,
+ HW_MIB_ifTxMacControlFrames,
+ HW_MIB_ifRxPktFAE,
+ HW_MIB_ifRxLongOkPkt,
+ HW_MIB_ifRxLongPktErrPkt,
+ HW_MIB_ifTXSQEErrors,
+ HW_MIB_ifRxNobuf,
+ HW_MIB_ifRxSymbolErrors,
+ HW_MIB_ifInRangeLengthErrors,
+ HW_MIB_ifLateCollisions,
+ HW_MIB_SIZE
+};
+
+enum chip_type {
+ CHIP_TYPE_VT6110 = 1,
+};
+
+struct velocity_info_tbl {
+ enum chip_type chip_id;
+ char *name;
+ int io_size;
+ int txqueue;
+ u32 flags;
+};
+
+#define mac_hw_mibs_init(regs) {\
+ BYTE_REG_BITS_ON(MIBCR_MIBFRZ,&((regs)->MIBCR));\
+ BYTE_REG_BITS_ON(MIBCR_MIBCLR,&((regs)->MIBCR));\
+ do {}\
+ while (BYTE_REG_BITS_IS_ON(MIBCR_MIBCLR,&((regs)->MIBCR)));\
+ BYTE_REG_BITS_OFF(MIBCR_MIBFRZ,&((regs)->MIBCR));\
+}
+
+#define mac_read_isr(regs) readl(&((regs)->ISR))
+#define mac_write_isr(regs, x) writel((x),&((regs)->ISR))
+#define mac_clear_isr(regs) writel(0xffffffffL,&((regs)->ISR))
+
+#define mac_write_int_mask(mask, regs) writel((mask),&((regs)->IMR))
+#define mac_disable_int(regs) writel(CR0_GINTMSK1,&((regs)->CR0Clr))
+#define mac_enable_int(regs) writel(CR0_GINTMSK1,&((regs)->CR0Set))
+
+#define mac_hw_mibs_read(regs, MIBs) {\
+ int i;\
+ BYTE_REG_BITS_ON(MIBCR_MPTRINI,&((regs)->MIBCR));\
+ for (i=0;i<HW_MIB_SIZE;i++) {\
+ (MIBs)[i]=readl(&((regs)->MIBData));\
+ }\
+}
+
+#define mac_set_dma_length(regs, n) {\
+ BYTE_REG_BITS_SET((n),0x07,&((regs)->DCFG));\
+}
+
+#define mac_set_rx_thresh(regs, n) {\
+ BYTE_REG_BITS_SET((n),(MCFG_RFT0|MCFG_RFT1),&((regs)->MCFG));\
+}
+
+#define mac_rx_queue_run(regs) {\
+ writeb(TRDCSR_RUN, &((regs)->RDCSRSet));\
+}
+
+#define mac_rx_queue_wake(regs) {\
+ writeb(TRDCSR_WAK, &((regs)->RDCSRSet));\
+}
+
+#define mac_tx_queue_run(regs, n) {\
+ writew(TRDCSR_RUN<<((n)*4),&((regs)->TDCSRSet));\
+}
+
+#define mac_tx_queue_wake(regs, n) {\
+ writew(TRDCSR_WAK<<((n)*4),&((regs)->TDCSRSet));\
+}
+
+#define mac_eeprom_reload(regs) {\
+ int i=0;\
+ BYTE_REG_BITS_ON(EECSR_RELOAD,&((regs)->EECSR));\
+ do {\
+ udelay(10);\
+ if (i++>0x1000) {\
+ break;\
+ }\
+ }while (BYTE_REG_BITS_IS_ON(EECSR_RELOAD,&((regs)->EECSR)));\
+}
+
+enum velocity_cam_type {
+ VELOCITY_VLAN_ID_CAM = 0,
+ VELOCITY_MULTICAST_CAM
+};
+
+/**
+ * mac_get_cam_mask - Read a CAM mask
+ * @regs: register block for this velocity
+ * @mask: buffer to store mask
+ * @cam_type: CAM to fetch
+ *
+ * Fetch the mask bits of the selected CAM and store them into the
+ * provided mask buffer.
+ */
+
+static inline void mac_get_cam_mask(struct mac_regs __iomem * regs, u8 * mask, enum velocity_cam_type cam_type)
+{
+ int i;
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_MASK, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_VCAMSL, &regs->CAMADDR);
+ else
+ writeb(0, &regs->CAMADDR);
+
+ /* read mask */
+ for (i = 0; i < 8; i++)
+ *mask++ = readb(&(regs->MARCAM[i]));
+
+ /* disable CAMEN */
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+}
+
+/**
+ * mac_set_cam_mask - Set a CAM mask
+ * @regs: register block for this velocity
+ * @mask: CAM mask to load
+ * @cam_type: CAM to store
+ *
+ * Store a new mask into a CAM
+ */
+
+static inline void mac_set_cam_mask(struct mac_regs __iomem * regs, u8 * mask, enum velocity_cam_type cam_type)
+{
+ int i;
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_MASK, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN, &regs->CAMADDR);
+
+ for (i = 0; i < 8; i++) {
+ writeb(*mask++, &(regs->MARCAM[i]));
+ }
+ /* disable CAMEN */
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
+
+/**
+ * mac_set_cam - set CAM data
+ * @regs: register block of this velocity
+ * @idx: Cam index
+ * @addr: 2 or 6 bytes of CAM data
+ * @cam_type: CAM to load
+ *
+ * Load an address or vlan tag into a CAM
+ */
+
+static inline void mac_set_cam(struct mac_regs __iomem * regs, int idx, u8 *addr, enum velocity_cam_type cam_type)
+{
+ int i;
+
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_DATA, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ idx &= (64 - 1);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL | idx, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN | idx, &regs->CAMADDR);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writew(*((u16 *) addr), &regs->MARCAM[0]);
+ else {
+ for (i = 0; i < 6; i++) {
+ writeb(*addr++, &(regs->MARCAM[i]));
+ }
+ }
+ BYTE_REG_BITS_ON(CAMCR_CAMWR, &regs->CAMCR);
+
+ udelay(10);
+
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
+
+/**
+ * mac_get_cam - fetch CAM data
+ * @regs: register block of this velocity
+ * @idx: Cam index
+ * @addr: buffer to hold up to 6 bytes of CAM data
+ * @cam_type: CAM to load
+ *
+ * Load an address or vlan tag from a CAM into the buffer provided by
+ * the caller. VLAN tags are 2 bytes; address CAM entries are 6.
+ */
+
+static inline void mac_get_cam(struct mac_regs __iomem * regs, int idx, u8 *addr, enum velocity_cam_type cam_type)
+{
+ int i;
+
+ /* Select CAM mask */
+ BYTE_REG_BITS_SET(CAMCR_PS_CAM_DATA, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+
+ idx &= (64 - 1);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ writeb(CAMADDR_CAMEN | CAMADDR_VCAMSL | idx, &regs->CAMADDR);
+ else
+ writeb(CAMADDR_CAMEN | idx, &regs->CAMADDR);
+
+ BYTE_REG_BITS_ON(CAMCR_CAMRD, &regs->CAMCR);
+
+ udelay(10);
+
+ if (cam_type == VELOCITY_VLAN_ID_CAM)
+ *((u16 *) addr) = readw(&(regs->MARCAM[0]));
+ else
+ for (i = 0; i < 6; i++, addr++)
+ *((u8 *) addr) = readb(&(regs->MARCAM[i]));
+
+ writeb(0, &regs->CAMADDR);
+
+ /* Select mar */
+ BYTE_REG_BITS_SET(CAMCR_PS_MAR, CAMCR_PS1 | CAMCR_PS0, &regs->CAMCR);
+}
+
+/**
+ * mac_wol_reset - reset WOL after exiting low power
+ * @regs: register block of this velocity
+ *
+ * Called after we drop out of wake on lan mode in order to reset
+ * the Wake on LAN features. This function doesn't restore the rest
+ * of the chip logic after the sleep/wakeup transition.
+ */
+
+static inline void mac_wol_reset(struct mac_regs __iomem * regs)
+{
+
+ /* Turn off SWPTAG right after leaving power mode */
+ BYTE_REG_BITS_OFF(STICKHW_SWPTAG, &regs->STICKHW);
+ /* clear sticky bits */
+ BYTE_REG_BITS_OFF((STICKHW_DS1 | STICKHW_DS0), &regs->STICKHW);
+
+ BYTE_REG_BITS_OFF(CHIPGCR_FCGMII, &regs->CHIPGCR);
+ BYTE_REG_BITS_OFF(CHIPGCR_FCMODE, &regs->CHIPGCR);
+ /* disable force PME-enable */
+ writeb(WOLCFG_PMEOVR, &regs->WOLCFGClr);
+ /* disable power-event config bit */
+ writew(0xFFFF, &regs->WOLCRClr);
+ /* clear power status */
+ writew(0xFFFF, &regs->WOLSRClr);
+}
+
+
+/*
+ * Header for WOL definitions. Used to compute hashes.
+ */
+
+typedef u8 MCAM_ADDR[ETH_ALEN];
+
+struct arp_packet {
+ u8 dest_mac[ETH_ALEN];
+ u8 src_mac[ETH_ALEN];
+ u16 type;
+ u16 ar_hrd;
+ u16 ar_pro;
+ u8 ar_hln;
+ u8 ar_pln;
+ u16 ar_op;
+ u8 ar_sha[ETH_ALEN];
+ u8 ar_sip[4];
+ u8 ar_tha[ETH_ALEN];
+ u8 ar_tip[4];
+} __attribute__ ((__packed__));
+
+struct _magic_packet {
+ u8 dest_mac[6];
+ u8 src_mac[6];
+ u16 type;
+ u8 MAC[16][6];
+ u8 password[6];
+} __attribute__ ((__packed__));
+
+/*
+ * Store for chip context when saving and restoring status. Not
+ * all fields are saved/restored currently.
+ */
+
+struct velocity_context {
+ u8 mac_reg[256];
+ MCAM_ADDR cam_addr[MCAM_SIZE];
+ u16 vcam[VCAM_SIZE];
+ u32 cammask[2];
+ u32 patcrc[2];
+ u32 pattern[8];
+};
+
+
+/*
+ * MII registers.
+ */
+
+
+/*
+ * Registers in the MII (offset unit is WORD)
+ */
+
+#define MII_REG_BMCR 0x00 // physical address
+#define MII_REG_BMSR 0x01 //
+#define MII_REG_PHYID1 0x02 // OUI
+#define MII_REG_PHYID2 0x03 // OUI + Module ID + REV ID
+#define MII_REG_ANAR 0x04 //
+#define MII_REG_ANLPAR 0x05 //
+#define MII_REG_G1000CR 0x09 //
+#define MII_REG_G1000SR 0x0A //
+#define MII_REG_MODCFG 0x10 //
+#define MII_REG_TCSR 0x16 //
+#define MII_REG_PLED 0x1B //
+// NS, MYSON only
+#define MII_REG_PCR 0x17 //
+// ESI only
+#define MII_REG_PCSR 0x17 //
+#define MII_REG_AUXCR 0x1C //
+
+// Marvell 88E1000/88E1000S
+#define MII_REG_PSCR 0x10 // PHY specific control register
+
+//
+// Bits in the BMCR register
+//
+#define BMCR_RESET 0x8000 //
+#define BMCR_LBK 0x4000 //
+#define BMCR_SPEED100 0x2000 //
+#define BMCR_AUTO 0x1000 //
+#define BMCR_PD 0x0800 //
+#define BMCR_ISO 0x0400 //
+#define BMCR_REAUTO 0x0200 //
+#define BMCR_FDX 0x0100 //
+#define BMCR_SPEED1G 0x0040 //
+//
+// Bits in the BMSR register
+//
+#define BMSR_AUTOCM 0x0020 //
+#define BMSR_LNK 0x0004 //
+
+//
+// Bits in the ANAR register
+//
+#define ANAR_ASMDIR 0x0800 // Asymmetric PAUSE support
+#define ANAR_PAUSE 0x0400 // Symmetric PAUSE Support
+#define ANAR_T4 0x0200 //
+#define ANAR_TXFD 0x0100 //
+#define ANAR_TX 0x0080 //
+#define ANAR_10FD 0x0040 //
+#define ANAR_10 0x0020 //
+//
+// Bits in the ANLPAR register
+//
+#define ANLPAR_ASMDIR 0x0800 // Asymmetric PAUSE support
+#define ANLPAR_PAUSE 0x0400 // Symmetric PAUSE Support
+#define ANLPAR_T4 0x0200 //
+#define ANLPAR_TXFD 0x0100 //
+#define ANLPAR_TX 0x0080 //
+#define ANLPAR_10FD 0x0040 //
+#define ANLPAR_10 0x0020 //
+
+//
+// Bits in the G1000CR register
+//
+#define G1000CR_1000FD 0x0200 // PHY is 1000-T Full-duplex capable
+#define G1000CR_1000 0x0100 // PHY is 1000-T Half-duplex capable
+
+//
+// Bits in the G1000SR register
+//
+#define G1000SR_1000FD 0x0800 // LP PHY is 1000-T Full-duplex capable
+#define G1000SR_1000 0x0400 // LP PHY is 1000-T Half-duplex capable
+
+#define TCSR_ECHODIS 0x2000 //
+#define AUXCR_MDPPS 0x0004 //
+
+// Bits in the PLED register
+#define PLED_LALBE 0x0004 //
+
+// Marvell 88E1000/88E1000S Bits in the PHY specific control register (10h)
+#define PSCR_ACRSTX 0x0800 // Assert CRS on Transmit
+
+#define PHYID_CICADA_CS8201 0x000FC410UL
+#define PHYID_VT3216_32BIT 0x000FC610UL
+#define PHYID_VT3216_64BIT 0x000FC600UL
+#define PHYID_MARVELL_1000 0x01410C50UL
+#define PHYID_MARVELL_1000S 0x01410C40UL
+
+#define PHYID_REV_ID_MASK 0x0000000FUL
+
+#define PHYID_GET_PHY_REV_ID(i) ((i) & PHYID_REV_ID_MASK)
+#define PHYID_GET_PHY_ID(i) ((i) & ~PHYID_REV_ID_MASK)
+
+#define MII_REG_BITS_ON(x,i,p) do {\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ (w)|=(x);\
+ velocity_mii_write((p),(i),(w));\
+} while (0)
+
+#define MII_REG_BITS_OFF(x,i,p) do {\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ (w)&=(~(x));\
+ velocity_mii_write((p),(i),(w));\
+} while (0)
+
+#define MII_REG_BITS_IS_ON(x,i,p) ({\
+ u16 w;\
+ velocity_mii_read((p),(i),&(w));\
+ ((int) ((w) & (x)));})
+
+#define MII_GET_PHY_ID(p) ({\
+ u32 id;\
+ velocity_mii_read((p),MII_REG_PHYID2,(u16 *) &id);\
+ velocity_mii_read((p),MII_REG_PHYID1,((u16 *) &id)+1);\
+ (id);})
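+
+/*
+ * Usage sketch (illustrative only): the last argument is the register
+ * block passed through to velocity_mii_read()/velocity_mii_write():
+ *
+ *   MII_REG_BITS_ON(BMCR_REAUTO, MII_REG_BMCR, vptr->mac_regs);
+ *   if (MII_REG_BITS_IS_ON(BMSR_LNK, MII_REG_BMSR, vptr->mac_regs))
+ *           netif_carrier_on(vptr->dev);
+ */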
+
+/*
+ * Inline debug routine
+ */
+
+
+enum velocity_msg_level {
+ MSG_LEVEL_ERR = 0, //Errors that will cause abnormal operation.
+ MSG_LEVEL_NOTICE = 1, //Errors the user needs to be notified about.
+ MSG_LEVEL_INFO = 2, //Normal messages.
+ MSG_LEVEL_VERBOSE = 3, //Will report all trivial errors.
+ MSG_LEVEL_DEBUG = 4 //Only for debug purposes.
+};
+
+#ifdef VELOCITY_DEBUG
+#define ASSERT(x) { \
+ if (!(x)) { \
+ printk(KERN_ERR "assertion %s failed: file %s line %d\n", #x,\
+ __FILE__, __LINE__);\
+ BUG(); \
+ }\
+}
+#define VELOCITY_DBG(p,args...) printk(p, ##args)
+#else
+#define ASSERT(x)
+#define VELOCITY_DBG(p,args...)
+#endif
+
+#define VELOCITY_PRT(l, p, args...) do {if (l<=msglevel) printk( p ,##args);} while (0)
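+
+/*
+ * Example (illustrative; "msglevel" is the driver-wide message level
+ * variable the macro compares against):
+ *
+ *   VELOCITY_PRT(MSG_LEVEL_INFO, KERN_INFO "%s: link is up\n",
+ *                vptr->dev->name);
+ */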
+
+#define VELOCITY_PRT_CAMMASK(p,t) {\
+ int i;\
+ if ((t)==VELOCITY_MULTICAST_CAM) {\
+ for (i=0;i<(MCAM_SIZE/8);i++)\
+ printk("%02X",(p)->mCAMmask[i]);\
+ }\
+ else {\
+ for (i=0;i<(VCAM_SIZE/8);i++)\
+ printk("%02X",(p)->vCAMmask[i]);\
+ }\
+ printk("\n");\
+}
+
+
+
+#define VELOCITY_WOL_MAGIC 0x00000000UL
+#define VELOCITY_WOL_PHY 0x00000001UL
+#define VELOCITY_WOL_ARP 0x00000002UL
+#define VELOCITY_WOL_UCAST 0x00000004UL
+#define VELOCITY_WOL_BCAST 0x00000010UL
+#define VELOCITY_WOL_MCAST 0x00000020UL
+#define VELOCITY_WOL_MAGIC_SEC 0x00000040UL
+
+/*
+ * Flags for options
+ */
+
+#define VELOCITY_FLAGS_TAGGING 0x00000001UL
+#define VELOCITY_FLAGS_TX_CSUM 0x00000002UL
+#define VELOCITY_FLAGS_RX_CSUM 0x00000004UL
+#define VELOCITY_FLAGS_IP_ALIGN 0x00000008UL
+#define VELOCITY_FLAGS_VAL_PKT_LEN 0x00000010UL
+
+#define VELOCITY_FLAGS_FLOW_CTRL 0x01000000UL
+
+/*
+ * Flags for driver status
+ */
+
+#define VELOCITY_FLAGS_OPENED 0x00010000UL
+#define VELOCITY_FLAGS_VMNS_CONNECTED 0x00020000UL
+#define VELOCITY_FLAGS_VMNS_COMMITTED 0x00040000UL
+#define VELOCITY_FLAGS_WOL_ENABLED 0x00080000UL
+
+/*
+ * Flags for MII status
+ */
+
+#define VELOCITY_LINK_FAIL 0x00000001UL
+#define VELOCITY_SPEED_10 0x00000002UL
+#define VELOCITY_SPEED_100 0x00000004UL
+#define VELOCITY_SPEED_1000 0x00000008UL
+#define VELOCITY_DUPLEX_FULL 0x00000010UL
+#define VELOCITY_AUTONEG_ENABLE 0x00000020UL
+#define VELOCITY_FORCED_BY_EEPROM 0x00000040UL
+
+/*
+ * For velocity_set_media_duplex
+ */
+
+#define VELOCITY_LINK_CHANGE 0x00000001UL
+
+enum speed_opt {
+ SPD_DPX_AUTO = 0,
+ SPD_DPX_100_HALF = 1,
+ SPD_DPX_100_FULL = 2,
+ SPD_DPX_10_HALF = 3,
+ SPD_DPX_10_FULL = 4
+};
+
+enum velocity_init_type {
+ VELOCITY_INIT_COLD = 0,
+ VELOCITY_INIT_RESET,
+ VELOCITY_INIT_WOL
+};
+
+enum velocity_flow_cntl_type {
+ FLOW_CNTL_DEFAULT = 1,
+ FLOW_CNTL_TX,
+ FLOW_CNTL_RX,
+ FLOW_CNTL_TX_RX,
+ FLOW_CNTL_DISABLE,
+};
+
+struct velocity_opt {
+ int numrx; /* Number of RX descriptors */
+ int numtx; /* Number of TX descriptors */
+ enum speed_opt spd_dpx; /* Media link mode */
+ int vid; /* vlan id */
+ int DMA_length; /* DMA length */
+ int rx_thresh; /* RX_THRESH */
+ int flow_cntl;
+ int wol_opts; /* Wake on lan options */
+ int td_int_count;
+ int int_works;
+ int rx_bandwidth_hi;
+ int rx_bandwidth_lo;
+ int rx_bandwidth_en;
+ u32 flags;
+};
+
+struct velocity_info {
+ struct list_head list;
+
+ struct pci_dev *pdev;
+ struct net_device *dev;
+ struct net_device_stats stats;
+
+ dma_addr_t rd_pool_dma;
+ dma_addr_t td_pool_dma[TX_QUEUE_NO];
+
+ dma_addr_t tx_bufs_dma;
+ u8 *tx_bufs;
+
+ u8 ip_addr[4];
+ enum chip_type chip_id;
+
+ struct mac_regs __iomem * mac_regs;
+ unsigned long memaddr;
+ unsigned long ioaddr;
+ u32 io_size;
+
+ u8 rev_id;
+
+#define AVAIL_TD(p,q) ((p)->options.numtx-((p)->td_used[(q)]))
+
+ int num_txq;
+
+ volatile int td_used[TX_QUEUE_NO];
+ int td_curr[TX_QUEUE_NO];
+ int td_tail[TX_QUEUE_NO];
+ struct tx_desc *td_rings[TX_QUEUE_NO];
+ struct velocity_td_info *td_infos[TX_QUEUE_NO];
+
+ int rd_curr;
+ int rd_dirty;
+ u32 rd_filled;
+ struct rx_desc *rd_ring;
+ struct velocity_rd_info *rd_info; /* It's an array */
+
+#define GET_RD_BY_IDX(vptr, idx) ((vptr)->rd_ring[(idx)])
+ u32 mib_counter[MAX_HW_MIB_COUNTER];
+ struct velocity_opt options;
+
+ u32 int_mask;
+
+ u32 flags;
+
+ int rx_buf_sz;
+ u32 mii_status;
+ u32 phy_id;
+ int multicast_limit;
+
+ u8 vCAMmask[(VCAM_SIZE / 8)];
+ u8 mCAMmask[(MCAM_SIZE / 8)];
+
+ spinlock_t lock;
+
+ int wol_opts;
+ u8 wol_passwd[6];
+
+ struct velocity_context context;
+
+ u32 ticks;
+ u32 rx_bytes;
+
+};
+
+/**
+ * velocity_get_ip - find an IP address for the device
+ * @vptr: Velocity to query
+ *
+ * Dig out an IP address for this interface so that we can
+ * configure wakeup with WOL for ARP. If there are multiple IP
+ * addresses on this chain then we use the first - multi-IP WOL is not
+ * supported.
+ *
+ * CHECK ME: locking
+ */
+
+static inline int velocity_get_ip(struct velocity_info *vptr)
+{
+ struct in_device *in_dev = (struct in_device *) vptr->dev->ip_ptr;
+ struct in_ifaddr *ifa;
+
+ if (in_dev != NULL) {
+ ifa = (struct in_ifaddr *) in_dev->ifa_list;
+ if (ifa != NULL) {
+ memcpy(vptr->ip_addr, &ifa->ifa_address, 4);
+ return 0;
+ }
+ }
+ return -ENOENT;
+}
+
+/**
+ * velocity_update_hw_mibs - fetch MIB counters from chip
+ * @vptr: velocity to update
+ *
+ * The velocity hardware keeps certain counters in the hardware
+ * side. We need to read these when the user asks for statistics
+ * or when they overflow (causing an interrupt). The read of the
+ * statistic clears it, so we keep running master counters in the
+ * driver.
+ */
+
+static inline void velocity_update_hw_mibs(struct velocity_info *vptr)
+{
+ u32 tmp;
+ int i;
+ BYTE_REG_BITS_ON(MIBCR_MIBFLSH, &(vptr->mac_regs->MIBCR));
+
+ while (BYTE_REG_BITS_IS_ON(MIBCR_MIBFLSH, &(vptr->mac_regs->MIBCR)));
+
+ BYTE_REG_BITS_ON(MIBCR_MPTRINI, &(vptr->mac_regs->MIBCR));
+ for (i = 0; i < HW_MIB_SIZE; i++) {
+ tmp = readl(&(vptr->mac_regs->MIBData)) & 0x00FFFFFFUL;
+ vptr->mib_counter[i] += tmp;
+ }
+}
+
+/**
+ * init_flow_control_register - set up flow control
+ * @vptr: velocity to configure
+ *
+ * Configure the flow control registers for this velocity device.
+ */
+
+static inline void init_flow_control_register(struct velocity_info *vptr)
+{
+ struct mac_regs __iomem * regs = vptr->mac_regs;
+
+ /* Set {XHITH1, XHITH0, XLTH1, XLTH0} in FlowCR1 to {1, 0, 1, 1},
+ which assumes RD=64, and turn on XONEN in FlowCR1 */
+ writel((CR0_XONEN | CR0_XHITH1 | CR0_XLTH1 | CR0_XLTH0), &regs->CR0Set);
+ writel((CR0_FDXTFCEN | CR0_FDXRFCEN | CR0_HDXFCEN | CR0_XHITH0), &regs->CR0Clr);
+
+ /* Set TxPauseTimer to 0xFFFF */
+ writew(0xFFFF, &regs->tx_pause_timer);
+
+ /* Initialize RBRDU to Rx buffer count. */
+ writew(vptr->options.numrx, &regs->RBRDU);
+}
+
+
+#endif
--- /dev/null
+
+"This software program is licensed subject to the GNU General Public License
+(GPL). Version 2, June 1991, available at
+<http://www.fsf.org/copyleft/gpl.html>"
+
+GNU General Public License
+
+Version 2, June 1991
+
+Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
+
+Everyone is permitted to copy and distribute verbatim copies of this license
+document, but changing it is not allowed.
+
+Preamble
+
+The licenses for most software are designed to take away your freedom to
+share and change it. By contrast, the GNU General Public License is intended
+to guarantee your freedom to share and change free software--to make sure
+the software is free for all its users. This General Public License applies
+to most of the Free Software Foundation's software and to any other program
+whose authors commit to using it. (Some other Free Software Foundation
+software is covered by the GNU Library General Public License instead.) You
+can apply it to your programs, too.
+
+When we speak of free software, we are referring to freedom, not price. Our
+General Public Licenses are designed to make sure that you have the freedom
+to distribute copies of free software (and charge for this service if you
+wish), that you receive source code or can get it if you want it, that you
+can change the software or use pieces of it in new free programs; and that
+you know you can do these things.
+
+To protect your rights, we need to make restrictions that forbid anyone to
+deny you these rights or to ask you to surrender the rights. These
+restrictions translate to certain responsibilities for you if you distribute
+copies of the software, or if you modify it.
+
+For example, if you distribute copies of such a program, whether gratis or
+for a fee, you must give the recipients all the rights that you have. You
+must make sure that they, too, receive or can get the source code. And you
+must show them these terms so they know their rights.
+
+We protect your rights with two steps: (1) copyright the software, and (2)
+offer you this license which gives you legal permission to copy, distribute
+and/or modify the software.
+
+Also, for each author's protection and ours, we want to make certain that
+everyone understands that there is no warranty for this free software. If
+the software is modified by someone else and passed on, we want its
+recipients to know that what they have is not the original, so that any
+problems introduced by others will not reflect on the original authors'
+reputations.
+
+Finally, any free program is threatened constantly by software patents. We
+wish to avoid the danger that redistributors of a free program will
+individually obtain patent licenses, in effect making the program
+proprietary. To prevent this, we have made it clear that any patent must be
+licensed for everyone's free use or not licensed at all.
+
+The precise terms and conditions for copying, distribution and modification
+follow.
+
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+0. This License applies to any program or other work which contains a notice
+ placed by the copyright holder saying it may be distributed under the
+ terms of this General Public License. The "Program", below, refers to any
+ such program or work, and a "work based on the Program" means either the
+ Program or any derivative work under copyright law: that is to say, a
+ work containing the Program or a portion of it, either verbatim or with
+ modifications and/or translated into another language. (Hereinafter,
+ translation is included without limitation in the term "modification".)
+ Each licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are not
+ covered by this License; they are outside its scope. The act of running
+ the Program is not restricted, and the output from the Program is covered
+ only if its contents constitute a work based on the Program (independent
+ of having been made by running the Program). Whether that is true depends
+ on what the Program does.
+
+1. You may copy and distribute verbatim copies of the Program's source code
+ as you receive it, in any medium, provided that you conspicuously and
+ appropriately publish on each copy an appropriate copyright notice and
+ disclaimer of warranty; keep intact all the notices that refer to this
+ License and to the absence of any warranty; and give any other recipients
+ of the Program a copy of this License along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy, and you
+ may at your option offer warranty protection in exchange for a fee.
+
+2. You may modify your copy or copies of the Program or any portion of it,
+ thus forming a work based on the Program, and copy and distribute such
+ modifications or work under the terms of Section 1 above, provided that
+ you also meet all of these conditions:
+
+ * a) You must cause the modified files to carry prominent notices stating
+ that you changed the files and the date of any change.
+
+ * b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any part
+ thereof, to be licensed as a whole at no charge to all third parties
+ under the terms of this License.
+
+ * c) If the modified program normally reads commands interactively when
+ run, you must cause it, when started running for such interactive
+ use in the most ordinary way, to print or display an announcement
+ including an appropriate copyright notice and a notice that there is
+ no warranty (or else, saying that you provide a warranty) and that
+ users may redistribute the program under these conditions, and
+ telling the user how to view a copy of this License. (Exception: if
+ the Program itself is interactive but does not normally print such
+ an announcement, your work based on the Program is not required to
+ print an announcement.)
+
+ These requirements apply to the modified work as a whole. If identifiable
+ sections of that work are not derived from the Program, and can be
+ reasonably considered independent and separate works in themselves, then
+ this License, and its terms, do not apply to those sections when you
+ distribute them as separate works. But when you distribute the same
+ sections as part of a whole which is a work based on the Program, the
+ distribution of the whole must be on the terms of this License, whose
+ permissions for other licensees extend to the entire whole, and thus to
+ each and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or contest
+ your rights to work written entirely by you; rather, the intent is to
+ exercise the right to control the distribution of derivative or
+ collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the Program
+ with the Program (or with a work based on the Program) on a volume of a
+ storage or distribution medium does not bring the other work under the
+ scope of this License.
+
+3. You may copy and distribute the Program (or a work based on it, under
+ Section 2) in object code or executable form under the terms of Sections
+ 1 and 2 above provided that you also do one of the following:
+
+ * a) Accompany it with the complete corresponding machine-readable source
+ code, which must be distributed under the terms of Sections 1 and 2
+ above on a medium customarily used for software interchange; or,
+
+ * b) Accompany it with a written offer, valid for at least three years,
+ to give any third party, for a charge no more than your cost of
+ physically performing source distribution, a complete machine-
+ readable copy of the corresponding source code, to be distributed
+ under the terms of Sections 1 and 2 above on a medium customarily
+ used for software interchange; or,
+
+ * c) Accompany it with the information you received as to the offer to
+ distribute corresponding source code. (This alternative is allowed
+ only for noncommercial distribution and only if you received the
+ program in object code or executable form with such an offer, in
+ accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete source code
+ means all the source code for all modules it contains, plus any
+ associated interface definition files, plus the scripts used to control
+ compilation and installation of the executable. However, as a special
+ exception, the source code distributed need not include anything that is
+ normally distributed (in either source or binary form) with the major
+ components (compiler, kernel, and so on) of the operating system on which
+ the executable runs, unless that component itself accompanies the
+ executable.
+
+ If distribution of executable or object code is made by offering access
+ to copy from a designated place, then offering equivalent access to copy
+ the source code from the same place counts as distribution of the source
+ code, even though third parties are not compelled to copy the source
+ along with the object code.
+
+4. You may not copy, modify, sublicense, or distribute the Program except as
+ expressly provided under this License. Any attempt otherwise to copy,
+ modify, sublicense or distribute the Program is void, and will
+ automatically terminate your rights under this License. However, parties
+ who have received copies, or rights, from you under this License will not
+ have their licenses terminated so long as such parties remain in full
+ compliance.
+
+5. You are not required to accept this License, since you have not signed
+ it. However, nothing else grants you permission to modify or distribute
+ the Program or its derivative works. These actions are prohibited by law
+ if you do not accept this License. Therefore, by modifying or
+ distributing the Program (or any work based on the Program), you
+ indicate your acceptance of this License to do so, and all its terms and
+ conditions for copying, distributing or modifying the Program or works
+ based on it.
+
+6. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program subject to
+ these terms and conditions. You may not impose any further restrictions
+ on the recipients' exercise of the rights granted herein. You are not
+ responsible for enforcing compliance by third parties to this License.
+
+7. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent issues),
+ conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot distribute
+ so as to satisfy simultaneously your obligations under this License and
+ any other pertinent obligations, then as a consequence you may not
+ distribute the Program at all. For example, if a patent license would
+ not permit royalty-free redistribution of the Program by all those who
+ receive copies directly or indirectly through you, then the only way you
+ could satisfy both it and this License would be to refrain entirely from
+ distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable under any
+ particular circumstance, the balance of the section is intended to apply
+ and the section as a whole is intended to apply in other circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of any
+ such claims; this section has the sole purpose of protecting the
+ integrity of the free software distribution system, which is implemented
+ by public license practices. Many people have made generous contributions
+ to the wide range of software distributed through that system in
+ reliance on consistent application of that system; it is up to the
+ author/donor to decide if he or she is willing to distribute software
+ through any other system and a licensee cannot impose that choice.
+
+ This section is intended to make thoroughly clear what is believed to be
+ a consequence of the rest of this License.
+
+8. If the distribution and/or use of the Program is restricted in certain
+ countries either by patents or by copyrighted interfaces, the original
+ copyright holder who places the Program under this License may add an
+ explicit geographical distribution limitation excluding those countries,
+ so that distribution is permitted only in or among countries not thus
+ excluded. In such case, this License incorporates the limitation as if
+ written in the body of this License.
+
+9. The Free Software Foundation may publish revised and/or new versions of
+ the General Public License from time to time. Such new versions will be
+ similar in spirit to the present version, but may differ in detail to
+ address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the Program
+ specifies a version number of this License which applies to it and "any
+ later version", you have the option of following the terms and
+ conditions either of that version or of any later version published by
+ the Free Software Foundation. If the Program does not specify a version
+ number of this License, you may choose any version ever published by the
+ Free Software Foundation.
+
+10. If you wish to incorporate parts of the Program into other free programs
+ whose distribution conditions are different, write to the author to ask
+ for permission. For software which is copyrighted by the Free Software
+ Foundation, write to the Free Software Foundation; we sometimes make
+ exceptions for this. Our decision will be guided by the two goals of
+ preserving the free status of all derivatives of our free software and
+ of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+ FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+ OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+ PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
+ EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
+ ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH
+ YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
+ NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+ REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
+ DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
+ DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM
+ (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
+ INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF
+ THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR
+ OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+END OF TERMS AND CONDITIONS
+
+How to Apply These Terms to Your New Programs
+
+If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it free
+software which everyone can redistribute and change under these terms.
+
+To do so, attach the following notices to the program. It is safest to
+attach them to the start of each source file to most effectively convey the
+exclusion of warranty; and each file should have at least the "copyright"
+line and a pointer to where the full notice is found.
+
+one line to give the program's name and an idea of what it does.
+Copyright (C) yyyy name of author
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this when
+it starts in an interactive mode:
+
+Gnomovision version 69, Copyright (C) year name of author Gnomovision comes
+with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free
+software, and you are welcome to redistribute it under certain conditions;
+type 'show c' for details.
+
+The hypothetical commands 'show w' and 'show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may be
+called something other than 'show w' and 'show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+'Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+signature of Ty Coon, 1 April 1989
+Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Library General Public
+License instead of this License.
--- /dev/null
+#
+# Makefile for the Linux Wireless network device drivers.
+#
+# Original makefile by Peter Johanson
+
+list-m :=
+
+list-$(CONFIG_IEEE80211) += ieee80211
+list-$(CONFIG_IEEE80211_CRYPT) += ieee80211_crypt
+list-$(CONFIG_IEEE80211_CRYPT) += ieee80211_crypt_wep
+list-$(CONFIG_IEEE80211_WPA) += ieee80211_crypt_ccmp
+list-$(CONFIG_IEEE80211_WPA) += ieee80211_crypt_tkip
+
+obj-$(CONFIG_IEEE80211) += ieee80211.o
+obj-$(CONFIG_IEEE80211_CRYPT) += ieee80211_crypt.o
+obj-$(CONFIG_IEEE80211_CRYPT) += ieee80211_crypt_wep.o
+obj-$(CONFIG_IEEE80211_WPA) += ieee80211_crypt_ccmp.o
+obj-$(CONFIG_IEEE80211_WPA) += ieee80211_crypt_tkip.o
+ieee80211-objs := \
+ ieee80211_module.o \
+ ieee80211_tx.o \
+ ieee80211_rx.o \
+ ieee80211_wx.o
--- /dev/null
+/*
+ * Merged with mainline ieee80211.h in Aug 2004. The original
+ * ieee802_11 header remains the copyright of the original authors.
+ *
+ * Portions of the merged code are based on Host AP (software wireless
+ * LAN access point) driver for Intersil Prism2/2.5/3.
+ *
+ * Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ * <jkmaline@cc.hut.fi>
+ * Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+ *
+ * Adaptation to a generic IEEE 802.11 stack by James Ketrenos
+ * <jketreno@linux.intel.com>
+ * Copyright (c) 2004, Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+#ifndef IEEE80211_H
+#define IEEE80211_H
+#include <linux/if_ether.h> /* ETH_ALEN */
+
+#define IEEE80211_DATA_LEN 2304
+/* Maximum size for the MA-UNITDATA primitive, 802.11 standard section
+ 6.2.1.1.2.
+
+ The figure in section 7.1.2 suggests a body size of up to 2312
+ bytes is allowed, which is a bit confusing; I suspect this
+ represents the 2304 bytes of real data plus a possible 8 bytes of
+ WEP IV and ICV. (This interpretation was suggested by Ramiro Barreiro.) */
+
+
+#define IEEE80211_HLEN 30
+#define IEEE80211_FRAME_LEN (IEEE80211_DATA_LEN + IEEE80211_HLEN)
+
+struct ieee80211_hdr {
+ u16 frame_ctl;
+ u16 duration_id;
+ u8 addr1[ETH_ALEN];
+ u8 addr2[ETH_ALEN];
+ u8 addr3[ETH_ALEN];
+ u16 seq_ctl;
+ u8 addr4[ETH_ALEN];
+} __attribute__ ((packed));
+
+#define IEEE80211_3ADDR_SIZE (24)
+#define IEEE80211_4ADDR_SIZE (30)
+
+#define MIN_FRAG_THRESHOLD 256U
+#define MAX_FRAG_THRESHOLD 2342U
+
+/* Frame control field constants */
+#define IEEE80211_FCTL_VERS 0x0002
+#define IEEE80211_FCTL_FTYPE 0x000c
+#define IEEE80211_FCTL_STYPE 0x00f0
+#define IEEE80211_FCTL_TODS 0x0100
+#define IEEE80211_FCTL_FROMDS 0x0200
+#define IEEE80211_FCTL_MOREFRAGS 0x0400
+#define IEEE80211_FCTL_RETRY 0x0800
+#define IEEE80211_FCTL_PM 0x1000
+#define IEEE80211_FCTL_MOREDATA 0x2000
+#define IEEE80211_FCTL_WEP 0x4000
+#define IEEE80211_FCTL_ORDER 0x8000
+
+#define IEEE80211_FTYPE_MGMT 0x0000
+#define IEEE80211_FTYPE_CTL 0x0004
+#define IEEE80211_FTYPE_DATA 0x0008
+
+/* management */
+#define IEEE80211_STYPE_ASSOC_REQ 0x0000
+#define IEEE80211_STYPE_ASSOC_RESP 0x0010
+#define IEEE80211_STYPE_REASSOC_REQ 0x0020
+#define IEEE80211_STYPE_REASSOC_RESP 0x0030
+#define IEEE80211_STYPE_PROBE_REQ 0x0040
+#define IEEE80211_STYPE_PROBE_RESP 0x0050
+#define IEEE80211_STYPE_BEACON 0x0080
+#define IEEE80211_STYPE_ATIM 0x0090
+#define IEEE80211_STYPE_DISASSOC 0x00A0
+#define IEEE80211_STYPE_AUTH 0x00B0
+#define IEEE80211_STYPE_DEAUTH 0x00C0
+
+/* control */
+#define IEEE80211_STYPE_PSPOLL 0x00A0
+#define IEEE80211_STYPE_RTS 0x00B0
+#define IEEE80211_STYPE_CTS 0x00C0
+#define IEEE80211_STYPE_ACK 0x00D0
+#define IEEE80211_STYPE_CFEND 0x00E0
+#define IEEE80211_STYPE_CFENDACK 0x00F0
+
+/* data */
+#define IEEE80211_STYPE_DATA 0x0000
+#define IEEE80211_STYPE_DATA_CFACK 0x0010
+#define IEEE80211_STYPE_DATA_CFPOLL 0x0020
+#define IEEE80211_STYPE_DATA_CFACKPOLL 0x0030
+#define IEEE80211_STYPE_NULLFUNC 0x0040
+#define IEEE80211_STYPE_CFACK 0x0050
+#define IEEE80211_STYPE_CFPOLL 0x0060
+#define IEEE80211_STYPE_CFACKPOLL 0x0070
+
+#define IEEE80211_SCTL_FRAG 0x000F
+#define IEEE80211_SCTL_SEQ 0xFFF0
+
+
+/* debug macros */
+
+#ifdef CONFIG_IEEE80211_DEBUG
+extern u32 ieee80211_debug_level;
+#define IEEE80211_DEBUG(level, fmt, args...) \
+do { if (ieee80211_debug_level & (level)) \
+ printk(KERN_DEBUG "ieee80211: %c %s " fmt, \
+ in_interrupt() ? 'I' : 'U', __FUNCTION__, ## args); } while (0)
+#else
+#define IEEE80211_DEBUG(level, fmt, args...) do {} while (0)
+#endif /* CONFIG_IEEE80211_DEBUG */
+
+/*
+ * To use the debug system:
+ *
+ * If you are defining a new debug classification, simply add it to the #define
+ * list here in the form of:
+ *
+ * #define IEEE80211_DL_xxxx VALUE
+ *
+ * shifting the value left one bit from the previous entry. xxxx should be
+ * the name of the classification (for example, WEP).
+ *
+ * You then need to either add an IEEE80211_xxxx_DEBUG() macro definition for your
+ * classification, or use IEEE80211_DEBUG(IEEE80211_DL_xxxx, ...) whenever you want
+ * to send output to that classification.
+ *
+ * To add your debug level to the list of levels seen when you perform
+ *
+ * % cat /proc/net/ipw/debug_level
+ *
+ * you simply need to add your entry to the ipw_debug_levels array.
+ *
+ * If you do not see debug_level in /proc/net/ipw then you do not have
+ * CONFIG_IEEE80211_DEBUG defined in your kernel configuration.
+ *
+ */
+
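+/*
+ * For example (illustrative only -- no IEEE80211_DL_WEP classification
+ * exists in this header, and "keyidx" is a hypothetical variable), a
+ * new WEP classification would look like:
+ *
+ *   #define IEEE80211_DL_WEP   BIT(7)
+ *   #define IEEE80211_DEBUG_WEP(f, a...) \
+ *           IEEE80211_DEBUG(IEEE80211_DL_WEP, f, ## a)
+ *
+ *   IEEE80211_DEBUG_WEP("decryption failed, keyidx=%d\n", keyidx);
+ */
+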
+#define IEEE80211_DL_INFO BIT(0)
+#define IEEE80211_DL_WX BIT(1)
+#define IEEE80211_DL_SCAN BIT(2)
+#define IEEE80211_DL_STATE BIT(3)
+#define IEEE80211_DL_MGMT BIT(4)
+#define IEEE80211_DL_FRAG BIT(5)
+#define IEEE80211_DL_EAP BIT(6)
+
+#define IEEE80211_ERROR(f, a...) printk(KERN_ERR "ieee80211: " f, ## a)
+#define IEEE80211_WARNING(f, a...) printk(KERN_WARNING "ieee80211: " f, ## a)
+#define IEEE80211_DEBUG_INFO(f, a...) IEEE80211_DEBUG(IEEE80211_DL_INFO, f, ## a)
+
+#define IEEE80211_DEBUG_WX(f, a...) IEEE80211_DEBUG(IEEE80211_DL_WX, f, ## a)
+#define IEEE80211_DEBUG_SCAN(f, a...) IEEE80211_DEBUG(IEEE80211_DL_SCAN, f, ## a)
+#define IEEE80211_DEBUG_STATE(f, a...) IEEE80211_DEBUG(IEEE80211_DL_STATE, f, ## a)
+#define IEEE80211_DEBUG_MGMT(f, a...) IEEE80211_DEBUG(IEEE80211_DL_MGMT, f, ## a)
+#define IEEE80211_DEBUG_FRAG(f, a...) IEEE80211_DEBUG(IEEE80211_DL_FRAG, f, ## a)
+#define IEEE80211_DEBUG_EAP(f, a...) IEEE80211_DEBUG(IEEE80211_DL_EAP, f, ## a)
+#include <linux/netdevice.h>
+#include <linux/wireless.h>
+#include <linux/if_arp.h> /* ARPHRD_ETHER */
+
+#ifndef WIRELESS_SPY
+#define WIRELESS_SPY // enable iwspy support
+#endif
+#include <net/iw_handler.h> // new driver API
+
+#define BIT(x) (1 << (x))
+
+#ifndef ETH_P_PAE
+#define ETH_P_PAE 0x888E /* Port Access Entity (IEEE 802.1X) */
+#endif /* ETH_P_PAE */
+
+#define ETH_P_PREAUTH 0x88C7 /* IEEE 802.11i pre-authentication */
+
+#ifndef ETH_P_80211_RAW
+#define ETH_P_80211_RAW (ETH_P_ECONET + 1)
+#endif
+
+/* IEEE 802.11 defines */
+
+#define P80211_OUI_LEN 3
+
+struct ieee80211_snap_hdr {
+
+ u8 dsap; /* always 0xAA */
+ u8 ssap; /* always 0xAA */
+ u8 ctrl; /* always 0x03 */
+ u8 oui[P80211_OUI_LEN]; /* organizationally unique identifier */
+
+} __attribute__ ((packed));
+
+#define SNAP_SIZE sizeof(struct ieee80211_snap_hdr)
+
+#define WLAN_FC_GET_TYPE(fc) ((fc) & IEEE80211_FCTL_FTYPE)
+#define WLAN_FC_GET_STYPE(fc) ((fc) & IEEE80211_FCTL_STYPE)
+
+#define WLAN_GET_SEQ_FRAG(seq) ((seq) & IEEE80211_SCTL_FRAG)
+#define WLAN_GET_SEQ_SEQ(seq) ((seq) & IEEE80211_SCTL_SEQ)
+
+/* Authentication algorithms */
+#define WLAN_AUTH_OPEN 0
+#define WLAN_AUTH_SHARED_KEY 1
+
+#define WLAN_AUTH_CHALLENGE_LEN 128
+
+#define WLAN_CAPABILITY_BSS BIT(0)
+#define WLAN_CAPABILITY_IBSS BIT(1)
+#define WLAN_CAPABILITY_CF_POLLABLE BIT(2)
+#define WLAN_CAPABILITY_CF_POLL_REQUEST BIT(3)
+#define WLAN_CAPABILITY_PRIVACY BIT(4)
+#define WLAN_CAPABILITY_SHORT_PREAMBLE BIT(5)
+#define WLAN_CAPABILITY_PBCC BIT(6)
+#define WLAN_CAPABILITY_CHANNEL_AGILITY BIT(7)
+
+/* Status codes */
+#define WLAN_STATUS_SUCCESS 0
+#define WLAN_STATUS_UNSPECIFIED_FAILURE 1
+#define WLAN_STATUS_CAPS_UNSUPPORTED 10
+#define WLAN_STATUS_REASSOC_NO_ASSOC 11
+#define WLAN_STATUS_ASSOC_DENIED_UNSPEC 12
+#define WLAN_STATUS_NOT_SUPPORTED_AUTH_ALG 13
+#define WLAN_STATUS_UNKNOWN_AUTH_TRANSACTION 14
+#define WLAN_STATUS_CHALLENGE_FAIL 15
+#define WLAN_STATUS_AUTH_TIMEOUT 16
+#define WLAN_STATUS_AP_UNABLE_TO_HANDLE_NEW_STA 17
+#define WLAN_STATUS_ASSOC_DENIED_RATES 18
+/* 802.11b */
+#define WLAN_STATUS_ASSOC_DENIED_NOSHORT 19
+#define WLAN_STATUS_ASSOC_DENIED_NOPBCC 20
+#define WLAN_STATUS_ASSOC_DENIED_NOAGILITY 21
+
+/* Reason codes */
+#define WLAN_REASON_UNSPECIFIED 1
+#define WLAN_REASON_PREV_AUTH_NOT_VALID 2
+#define WLAN_REASON_DEAUTH_LEAVING 3
+#define WLAN_REASON_DISASSOC_DUE_TO_INACTIVITY 4
+#define WLAN_REASON_DISASSOC_AP_BUSY 5
+#define WLAN_REASON_CLASS2_FRAME_FROM_NONAUTH_STA 6
+#define WLAN_REASON_CLASS3_FRAME_FROM_NONASSOC_STA 7
+#define WLAN_REASON_DISASSOC_STA_HAS_LEFT 8
+#define WLAN_REASON_STA_REQ_ASSOC_WITHOUT_AUTH 9
+
+
+/* Information Element IDs */
+#define WLAN_EID_SSID 0
+#define WLAN_EID_SUPP_RATES 1
+#define WLAN_EID_FH_PARAMS 2
+#define WLAN_EID_DS_PARAMS 3
+#define WLAN_EID_CF_PARAMS 4
+#define WLAN_EID_TIM 5
+#define WLAN_EID_IBSS_PARAMS 6
+#define WLAN_EID_CHALLENGE 16
+#define WLAN_EID_RSN 48
+#define WLAN_EID_GENERIC 221
+
+#define IEEE80211_MGMT_HDR_LEN 24
+#define IEEE80211_DATA_HDR3_LEN 24
+#define IEEE80211_DATA_HDR4_LEN 30
+
+
+#define IEEE80211_STATMASK_SIGNAL BIT(0)
+#define IEEE80211_STATMASK_RSSI BIT(1)
+#define IEEE80211_STATMASK_NOISE BIT(2)
+#define IEEE80211_STATMASK_RATE BIT(3)
+#define IEEE80211_STATMASK_WEMASK 0x7
+
+
+#define IEEE80211_CCK_MODULATION BIT(0)
+#define IEEE80211_OFDM_MODULATION BIT(1)
+
+#define IEEE80211_24GHZ_BAND BIT(0)
+#define IEEE80211_52GHZ_BAND BIT(1)
+
+#define IEEE80211_CCK_RATE_1MB 0x02
+#define IEEE80211_CCK_RATE_2MB 0x04
+#define IEEE80211_CCK_RATE_5MB 0x0B
+#define IEEE80211_CCK_RATE_11MB 0x16
+#define IEEE80211_OFDM_RATE_6MB 0x0C
+#define IEEE80211_OFDM_RATE_9MB 0x12
+#define IEEE80211_OFDM_RATE_12MB 0x18
+#define IEEE80211_OFDM_RATE_18MB 0x24
+#define IEEE80211_OFDM_RATE_24MB 0x30
+#define IEEE80211_OFDM_RATE_36MB 0x48
+#define IEEE80211_OFDM_RATE_48MB 0x60
+#define IEEE80211_OFDM_RATE_54MB 0x6C
+#define IEEE80211_BASIC_RATE_MASK 0x80
+
+#define IEEE80211_CCK_RATE_1MB_MASK BIT(0)
+#define IEEE80211_CCK_RATE_2MB_MASK BIT(1)
+#define IEEE80211_CCK_RATE_5MB_MASK BIT(2)
+#define IEEE80211_CCK_RATE_11MB_MASK BIT(3)
+#define IEEE80211_OFDM_RATE_6MB_MASK BIT(4)
+#define IEEE80211_OFDM_RATE_9MB_MASK BIT(5)
+#define IEEE80211_OFDM_RATE_12MB_MASK BIT(6)
+#define IEEE80211_OFDM_RATE_18MB_MASK BIT(7)
+#define IEEE80211_OFDM_RATE_24MB_MASK BIT(8)
+#define IEEE80211_OFDM_RATE_36MB_MASK BIT(9)
+#define IEEE80211_OFDM_RATE_48MB_MASK BIT(10)
+#define IEEE80211_OFDM_RATE_54MB_MASK BIT(11)
+
+#define IEEE80211_CCK_RATES_MASK 0x0000000F
+#define IEEE80211_CCK_BASIC_RATES_MASK (IEEE80211_CCK_RATE_1MB_MASK | \
+ IEEE80211_CCK_RATE_2MB_MASK)
+#define IEEE80211_CCK_DEFAULT_RATES_MASK (IEEE80211_CCK_BASIC_RATES_MASK | \
+ IEEE80211_CCK_RATE_5MB_MASK | \
+ IEEE80211_CCK_RATE_11MB_MASK)
+
+#define IEEE80211_OFDM_RATES_MASK 0x00000FF0
+#define IEEE80211_OFDM_BASIC_RATES_MASK (IEEE80211_OFDM_RATE_6MB_MASK | \
+ IEEE80211_OFDM_RATE_12MB_MASK | \
+ IEEE80211_OFDM_RATE_24MB_MASK)
+#define IEEE80211_OFDM_DEFAULT_RATES_MASK (IEEE80211_OFDM_BASIC_RATES_MASK | \
+ IEEE80211_OFDM_RATE_9MB_MASK | \
+ IEEE80211_OFDM_RATE_18MB_MASK | \
+ IEEE80211_OFDM_RATE_36MB_MASK | \
+ IEEE80211_OFDM_RATE_48MB_MASK | \
+ IEEE80211_OFDM_RATE_54MB_MASK)
+#define IEEE80211_DEFAULT_RATES_MASK (IEEE80211_OFDM_DEFAULT_RATES_MASK | \
+ IEEE80211_CCK_DEFAULT_RATES_MASK)
+
+#define IEEE80211_NUM_OFDM_RATES 8
+#define IEEE80211_NUM_CCK_RATES 4
+#define IEEE80211_OFDM_SHIFT_MASK_A 4
+
+
+
+
+/* NOTE: This data is for statistical purposes; not all hardware provides this
+ * information for frames received. Not setting these will not cause
+ * any adverse effects. */
+struct ieee80211_rx_stats {
+ u32 mac_time;
+ u8 rssi;
+ u8 signal;
+ u8 noise;
+ u16 rate; /* in 100 kbps */
+ u8 received_channel;
+ u8 control;
+ u8 mask;
+ u8 freq;
+ u16 len;
+};
+
+/* IEEE 802.11 requires that a STA support concurrent reception of at least
+ * three fragmented frames. This define can be increased to support more
+ * concurrent frames, but it should be noted that each entry can consume about
+ * 2 kB of RAM and increasing cache size will slow down frame reassembly. */
+#define IEEE80211_FRAG_CACHE_LEN 4
+
+struct ieee80211_frag_entry {
+ unsigned long first_frag_time;
+ unsigned int seq;
+ unsigned int last_frag;
+ struct sk_buff *skb;
+ u8 src_addr[ETH_ALEN];
+ u8 dst_addr[ETH_ALEN];
+};
+
+struct ieee80211_stats {
+ unsigned int tx_unicast_frames;
+ unsigned int tx_multicast_frames;
+ unsigned int tx_fragments;
+ unsigned int tx_unicast_octets;
+ unsigned int tx_multicast_octets;
+ unsigned int tx_deferred_transmissions;
+ unsigned int tx_single_retry_frames;
+ unsigned int tx_multiple_retry_frames;
+ unsigned int tx_retry_limit_exceeded;
+ unsigned int tx_discards;
+ unsigned int rx_unicast_frames;
+ unsigned int rx_multicast_frames;
+ unsigned int rx_fragments;
+ unsigned int rx_unicast_octets;
+ unsigned int rx_multicast_octets;
+ unsigned int rx_fcs_errors;
+ unsigned int rx_discards_no_buffer;
+ unsigned int tx_discards_wrong_sa;
+ unsigned int rx_discards_wep_undecryptable;
+ unsigned int rx_message_in_msg_fragments;
+ unsigned int rx_message_in_bad_msg_fragments;
+};
+
+struct ieee80211_device;
+
+#include "ieee80211_crypt.h"
+
+#define SEC_KEY_1 BIT(0)
+#define SEC_KEY_2 BIT(1)
+#define SEC_KEY_3 BIT(2)
+#define SEC_KEY_4 BIT(3)
+#define SEC_ACTIVE_KEY BIT(4)
+#define SEC_AUTH_MODE BIT(5)
+#define SEC_UNICAST_GROUP BIT(6)
+#define SEC_LEVEL BIT(7)
+#define SEC_ENABLED BIT(8)
+
+#define SEC_LEVEL_0 0 /* None */
+#define SEC_LEVEL_1 1 /* WEP 40 and 104 bit */
+#define SEC_LEVEL_2 2 /* Level 1 + TKIP */
+#define SEC_LEVEL_2_CKIP 3 /* Level 1 + CKIP */
+#define SEC_LEVEL_3 4 /* Level 2 + CCMP */
+
+#define WEP_KEYS 4
+#define WEP_KEY_LEN 13
+
+struct ieee80211_security {
+ u16 active_key:1,
+ enabled:1,
+ auth_mode:2,
+ auth_algo:4,
+ unicast_uses_group:1;
+ u8 key_sizes[WEP_KEYS];
+ u8 keys[WEP_KEYS][WEP_KEY_LEN];
+ u8 level;
+ u16 flags;
+} __attribute__ ((packed));
+
+
+/*
+
+ 802.11 data frame from AP
+
+ ,-------------------------------------------------------------------.
+Bytes | 2 | 2 | 6 | 6 | 6 | 2 | 0..2312 | 4 |
+ |------|------|---------|---------|---------|------|---------|------|
+Desc. | ctrl | dura | DA/RA | TA | SA | Sequ | frame | fcs |
+ | | tion | (BSSID) | | | ence | data | |
+ `-------------------------------------------------------------------'
+
+Total: 28-2340 bytes
+
+*/
+
+struct ieee80211_header_data {
+ u16 frame_ctl;
+ u16 duration_id;
+ u8 addr1[6];
+ u8 addr2[6];
+ u8 addr3[6];
+ u16 seq_ctrl;
+};
+
+#define BEACON_PROBE_SSID_ID_POSITION 12
+
+/* Management Frame Information Element Types */
+#define MFIE_TYPE_SSID 0
+#define MFIE_TYPE_RATES 1
+#define MFIE_TYPE_FH_SET 2
+#define MFIE_TYPE_DS_SET 3
+#define MFIE_TYPE_CF_SET 4
+#define MFIE_TYPE_TIM 5
+#define MFIE_TYPE_IBSS_SET 6
+#define MFIE_TYPE_CHALLENGE 16
+#define MFIE_TYPE_RSN 48
+#define MFIE_TYPE_RATES_EX 50
+#define MFIE_TYPE_GENERIC 221
+
+struct ieee80211_info_element_hdr {
+ u8 id;
+ u8 len;
+} __attribute__ ((packed));
+
+struct ieee80211_info_element {
+ u8 id;
+ u8 len;
+ u8 data[0];
+} __attribute__ ((packed));
+
+/*
+ * These are the data types that can make up management packets
+ *
+ u16 auth_algorithm;
+ u16 auth_sequence;
+ u16 beacon_interval;
+ u16 capability;
+ u8 current_ap[ETH_ALEN];
+ u16 listen_interval;
+ struct {
+ u16 association_id:14, reserved:2;
+ } __attribute__ ((packed));
+ u32 time_stamp[2];
+ u16 reason;
+ u16 status;
+*/
+
+struct ieee80211_authentication {
+ struct ieee80211_header_data header;
+ u16 algorithm;
+ u16 transaction;
+ u16 status;
+ struct ieee80211_info_element info_element;
+} __attribute__ ((packed));
+
+
+struct ieee80211_probe_response {
+ struct ieee80211_header_data header;
+ u32 time_stamp[2];
+ u16 beacon_interval;
+ u16 capability;
+ struct ieee80211_info_element info_element;
+} __attribute__ ((packed));
+
+struct ieee80211_assoc_request_frame {
+ u16 capability;
+ u16 listen_interval;
+ u8 current_ap[ETH_ALEN];
+ struct ieee80211_info_element info_element;
+} __attribute__ ((packed));
+
+struct ieee80211_helper_functions {
+ void (*set_security)(struct ieee80211_device *ieee,
+ struct ieee80211_security *sec);
+
+ /* these functions are defined in hardware model specific files
+ * (hostap_{cs,plx,pci}.c) */
+ int (*card_present)(struct ieee80211_device *ieee);
+ void (*cor_sreset)(struct ieee80211_device *ieee);
+ int (*dev_open)(struct ieee80211_device *ieee);
+ int (*dev_close)(struct ieee80211_device *ieee);
+ void (*genesis_reset)(struct ieee80211_device *ieee, int hcr);
+
+
+ /* Turn on unencrypted packet filtering at the HW level */
+ void (*set_unencrypted_filter)(struct ieee80211_device *ieee, int flag);
+
+ /* the following functions are from hostap_hw.c, but they may have some
+ * hardware model specific code */
+
+ int (*hw_enable)(struct net_device *dev, int initial);
+ int (*hw_config)(struct net_device *dev, int initial);
+ void (*hw_reset)(struct net_device *dev);
+ void (*hw_shutdown)(struct net_device *dev, int no_disable);
+ int (*reset_port)(struct net_device *dev);
+ int (*tx)(struct sk_buff *skb, struct net_device *dev);
+ void (*schedule_reset)(struct ieee80211_device *ieee);
+ int (*tx_80211)(struct sk_buff *skb, struct net_device *dev);
+};
+
+
+
+/* Number of sweep table entries */
+#define MAX_SWEEP_TAB_ENTRIES 42
+#define MAX_SWEEP_TAB_ENTRIES_PER_PACKET 7
+/* MAX_RATES_LENGTH needs to be 12. The spec says 8, and many APs
+ * only use 8, and then use extended rates for the remaining supported
+ * rates. Other APs, however, stick all of their supported rates on the
+ * main rates information element... */
+#define MAX_RATES_LENGTH ((u8)12)
+#define MAX_RATES_EX_LENGTH ((u8)16)
+#define MAX_NETWORK_COUNT 128
+
+#define CRC_LENGTH 4U
+
+#define MAX_WPA_IE_LEN 64
+
+#define NETWORK_EMPTY_ESSID BIT(0)
+#define NETWORK_HAS_OFDM BIT(1)
+
+struct ieee80211_network {
+ u8 bssid[ETH_ALEN];
+ u8 ssid_len;
+ struct ieee80211_rx_stats stats;
+ u16 capability;
+ u8 channel;
+ u8 rates[MAX_RATES_LENGTH];
+ u8 rates_len;
+ u8 rates_ex[MAX_RATES_EX_LENGTH];
+ u8 rates_ex_len;
+ unsigned long last_scanned;
+ u8 mode;
+ u8 flags;
+ u32 last_associate;
+ /* Ensure null-terminated for any debug msgs */
+ u8 ssid[IW_ESSID_MAX_SIZE + 1];
+ u32 time_stamp[2];
+ u16 beacon_interval;
+ u16 listen_interval;
+ u16 atim_window;
+#ifdef CONFIG_IEEE80211_WPA
+ u8 wpa_ie[MAX_WPA_IE_LEN];
+ size_t wpa_ie_len;
+ u8 rsn_ie[MAX_WPA_IE_LEN];
+ size_t rsn_ie_len;
+#endif /* CONFIG_IEEE80211_WPA */
+ struct list_head list;
+};
+
+enum ieee80211_state {
+ IEEE80211_UNINITIALIZED = 0,
+ IEEE80211_INITIALIZED,
+ IEEE80211_ASSOCIATING,
+ IEEE80211_ASSOCIATED,
+ IEEE80211_AUTHENTICATING,
+ IEEE80211_AUTHENTICATED,
+ IEEE80211_SHUTDOWN
+};
+
+#define DEFAULT_MAX_SCAN_AGE (15 * HZ)
+#define DEFAULT_FTS 2342
+#define MAC_FMT "%02x:%02x:%02x:%02x:%02x:%02x"
+#define MAC_ARG(x) ((u8*)(x))[0],((u8*)(x))[1],((u8*)(x))[2],((u8*)(x))[3],((u8*)(x))[4],((u8*)(x))[5]
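+/* Typical use, as in the debug messages throughout this stack:
+ *	printk(KERN_DEBUG "STA=" MAC_FMT "\n", MAC_ARG(hdr->addr2));
+ */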
+
+
+extern inline int is_multicast_ether_addr(const u8 *addr)
+{
+ return ((addr[0] != 0xff) && (0x01 & addr[0]));
+}
+
+extern inline int is_broadcast_ether_addr(const u8 *addr)
+{
+ return ((addr[0] == 0xff) && (addr[1] == 0xff) && (addr[2] == 0xff) &&
+ (addr[3] == 0xff) && (addr[4] == 0xff) && (addr[5] == 0xff));
+}
+
+
+struct ieee80211_device {
+ struct net_device *dev;
+
+ /* Bookkeeping structures */
+ struct net_device_stats stats;
+ struct ieee80211_stats ieee_stats;
+ void *priv;
+
+ /* Probe / Beacon management */
+ struct list_head network_free_list;
+ struct list_head network_list;
+ struct ieee80211_network *networks;
+ int scans;
+ int scan_age;
+
+ int iw_mode; /* operating mode (IW_MODE_*) */
+
+ spinlock_t lock;
+
+ int tx_payload_only; /* set to 1 if HW only expects the Tx SKB
+ * to contain the payload (vs. a fully configured
+ * 802.11 header for a data frame) */
+ int tx_headroom; /* Set to size of any additional room needed at front
+ * of allocated Tx SKBs */
+
+ /* WEP and other encryption related settings at the device level */
+ int open_wep; /* Set to 1 to allow unencrypted frames */
+
+ int reset_on_keychange; /* Set to 1 if the HW needs to be reset on
+ * WEP key changes */
+
+ /* If the host performs {en,de}cryption, then set to 1 */
+ int host_encrypt;
+ int host_decrypt;
+
+#ifdef CONFIG_IEEE80211_WPA
+ /* WPA data */
+ int wpa_enabled;
+ int drop_unencrypted;
+ int tkip_countermeasures;
+ int ieee_802_1x; /* is IEEE 802.1X used */
+ int privacy_invoked;
+ size_t wpa_ie_len;
+ u8 *wpa_ie;
+#endif /* CONFIG_IEEE80211_WPA */
+
+ struct list_head crypt_deinit_list;
+ struct ieee80211_crypt_data *crypt[WEP_KEYS];
+ int tx_keyidx; /* default TX key index (crypt[tx_keyidx]) */
+ struct timer_list crypt_deinit_timer;
+
+ int bcrx_sta_key; /* use individual keys to override default keys even
+ * with RX of broad/multicast frames */
+
+ /* Fragmentation structures */
+ struct ieee80211_frag_entry frag_cache[IEEE80211_FRAG_CACHE_LEN];
+ unsigned int frag_next_idx;
+ u16 fts; /* Fragmentation Threshold */
+
+ /* Association info */
+ u8 bssid[ETH_ALEN];
+
+ /* Callback vtable */
+ struct ieee80211_helper_functions *func;
+
+ enum ieee80211_state state;
+
+ int mode; /* A, B, G */
+ int modulation; /* CCK, OFDM */
+ int freq_band; /* 2.4GHz, 5.2GHz, Mixed */
+ int abg_ture; /* ABG flag */
+};
+
+#define IEEE_A 0
+#define IEEE_B 1
+#define IEEE_G 2
+#define IEEE_MASK (BIT(IEEE_A) | BIT(IEEE_B) | BIT(IEEE_G))
+
+extern inline int ieee80211_is_empty_essid(const char *essid, int essid_len)
+{
+ /* Single white space is for Linksys APs */
+ if (essid_len == 1 && essid[0] == ' ')
+ return 1;
+
+ /* Otherwise, if the entire essid is 0, we assume it is hidden */
+ while (essid_len) {
+ essid_len--;
+ if (essid[essid_len] != '\0')
+ return 0;
+ }
+
+ return 1;
+}
+
+
+extern inline int ieee80211_is_valid_mode(struct ieee80211_device *ieee, int mode)
+{
+ int rc = 1;
+ switch(mode) {
+ case IEEE_A:
+ if (!(ieee->modulation & IEEE80211_OFDM_MODULATION) ||
+ !(ieee->freq_band & IEEE80211_52GHZ_BAND))
+ rc = 0;
+ break;
+
+ case IEEE_B:
+ if (!(ieee->modulation & IEEE80211_CCK_MODULATION) ||
+ !(ieee->freq_band & IEEE80211_24GHZ_BAND))
+ rc = 0;
+ break;
+
+ case IEEE_G:
+ if (!(ieee->modulation & IEEE80211_OFDM_MODULATION) ||
+ !(ieee->freq_band & IEEE80211_24GHZ_BAND))
+ rc = 0;
+ break;
+ }
+ return rc;
+}
+
+extern inline int ieee80211_get_hdrlen(u16 fc)
+{
+ int hdrlen = 24;
+
+ switch (WLAN_FC_GET_TYPE(fc)) {
+ case IEEE80211_FTYPE_DATA:
+ if ((fc & IEEE80211_FCTL_FROMDS) && (fc & IEEE80211_FCTL_TODS))
+ hdrlen = 30; /* Addr4 */
+ break;
+ case IEEE80211_FTYPE_CTL:
+ switch (WLAN_FC_GET_STYPE(fc)) {
+ case IEEE80211_STYPE_CTS:
+ case IEEE80211_STYPE_ACK:
+ hdrlen = 10;
+ break;
+ default:
+ hdrlen = 16;
+ break;
+ }
+ break;
+ }
+
+ return hdrlen;
+}
+
+
+
+/* ieee80211.c */
+extern struct ieee80211_device * ieee80211_alloc(struct net_device *dev,
+ void *priv);
+extern void ieee80211_free(struct ieee80211_device *ieee);
+
+extern int ieee80211_set_encryption(struct ieee80211_device *ieee);
+
+/* ieee80211_tx.c */
+
+struct ieee80211_txb {
+ u8 nr_frags;
+ u8 encrypted;
+ u16 reserved;
+ u16 frag_size;
+ u16 payload_size;
+ struct sk_buff *fragments[0];
+};
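+
+/* fragments[0] makes this a variable-sized structure: one allocation
+ * carries the header plus nr_frags skb pointers. A minimal allocation
+ * sketch (nr_frags is the caller's fragment count):
+ *
+ *	txb = kmalloc(sizeof(struct ieee80211_txb) +
+ *	              nr_frags * sizeof(struct sk_buff *), GFP_ATOMIC);
+ */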
+
+extern struct ieee80211_txb *ieee80211_skb_to_txb(struct ieee80211_device *ieee,
+ struct sk_buff *skb);
+extern void ieee80211_txb_free(struct ieee80211_txb *);
+
+
+/* ieee80211_rx.c */
+extern int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb,
+ struct ieee80211_rx_stats *rx_stats);
+extern void ieee80211_rx_mgt(struct ieee80211_device *ieee,
+ struct ieee80211_hdr *header,
+ struct ieee80211_rx_stats *stats);
+
+/* ieee80211_wx.c */
+extern int ieee80211_wx_get_scan(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key);
+extern int ieee80211_wx_set_encode(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key);
+extern int ieee80211_wx_get_encode(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key);
+
+
+extern inline void ieee80211_increment_scans(struct ieee80211_device *ieee)
+{
+ ieee->scans++;
+}
+
+extern inline int ieee80211_get_scans(struct ieee80211_device *ieee)
+{
+ return ieee->scans;
+}
+
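+/* Render an ESSID for debug output, escaping embedded NUL bytes.
+ * Note that the result lives in a single static buffer, so it is only
+ * valid until the next call and the helper is not reentrant. */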
+static inline const char *escape_essid(const char *essid, u8 essid_len) {
+ static char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
+ const char *s = essid;
+ char *d = escaped;
+
+ if (ieee80211_is_empty_essid(essid, essid_len)) {
+ memcpy(escaped, "<hidden>", sizeof("<hidden>"));
+ return escaped;
+ }
+
+ essid_len = min(essid_len, (u8)IW_ESSID_MAX_SIZE);
+ while (essid_len--) {
+ if (*s == '\0') {
+ *d++ = '\\';
+ *d++ = '0';
+ s++;
+ } else {
+ *d++ = *s++;
+ }
+ }
+ *d = '\0';
+ return escaped;
+}
+
+#ifndef offset_in_page
+#define offset_in_page(p) ((unsigned long)(p) & ~PAGE_MASK)
+#endif
+
+#endif /* IEEE80211_H */
--- /dev/null
+/*
+ * Host AP crypto routines
+ *
+ * Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+ * Portions Copyright (C) 2004, Intel Corporation <jketreno@linux.intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <asm/string.h>
+#include <asm/errno.h>
+
+#include "ieee80211.h"
+
+MODULE_AUTHOR("Jouni Malinen");
+MODULE_DESCRIPTION("HostAP crypto");
+MODULE_LICENSE("GPL");
+
+struct ieee80211_crypto_alg {
+ struct list_head list;
+ struct ieee80211_crypto_ops *ops;
+};
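+/* Note: `list' must remain the first member; the lookup loops below
+ * cast the embedded list_head pointer directly back to a
+ * struct ieee80211_crypto_alg. */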
+
+
+struct ieee80211_crypto {
+ struct list_head algs;
+ spinlock_t lock;
+};
+
+static struct ieee80211_crypto *hcrypt;
+
+void ieee80211_crypt_deinit_entries(struct ieee80211_device *ieee,
+ int force)
+{
+ struct list_head *ptr, *n;
+ struct ieee80211_crypt_data *entry;
+
+ for (ptr = ieee->crypt_deinit_list.next, n = ptr->next;
+ ptr != &ieee->crypt_deinit_list; ptr = n, n = ptr->next) {
+ entry = list_entry(ptr, struct ieee80211_crypt_data, list);
+
+ if (atomic_read(&entry->refcnt) != 0 && !force)
+ continue;
+
+ list_del(ptr);
+
+ if (entry->ops) {
+ entry->ops->deinit(entry->priv);
+ module_put(entry->ops->owner);
+ }
+ kfree(entry);
+ }
+}
+
+void ieee80211_crypt_deinit_handler(unsigned long data)
+{
+ struct ieee80211_device *ieee = (struct ieee80211_device *)data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ieee->lock, flags);
+ ieee80211_crypt_deinit_entries(ieee, 0);
+ if (!list_empty(&ieee->crypt_deinit_list)) {
+ printk(KERN_DEBUG "%s: entries remaining in delayed crypt "
+ "deletion list\n", ieee->dev->name);
+ ieee->crypt_deinit_timer.expires = jiffies + HZ;
+ add_timer(&ieee->crypt_deinit_timer);
+ }
+ spin_unlock_irqrestore(&ieee->lock, flags);
+}
+
+void ieee80211_crypt_delayed_deinit(struct ieee80211_device *ieee,
+ struct ieee80211_crypt_data **crypt)
+{
+ struct ieee80211_crypt_data *tmp;
+ unsigned long flags;
+
+ if (*crypt == NULL)
+ return;
+
+ tmp = *crypt;
+ *crypt = NULL;
+
+ /* must not run ops->deinit() while there may be pending encrypt or
+ * decrypt operations. Use a list of delayed deinits to avoid needing
+ * locking. */
+
+ spin_lock_irqsave(&ieee->lock, flags);
+ list_add(&tmp->list, &ieee->crypt_deinit_list);
+ if (!timer_pending(&ieee->crypt_deinit_timer)) {
+ ieee->crypt_deinit_timer.expires = jiffies + HZ;
+ add_timer(&ieee->crypt_deinit_timer);
+ }
+ spin_unlock_irqrestore(&ieee->lock, flags);
+}
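+
+/* Typical caller pattern when swapping in a new key (a sketch;
+ * new_crypt and idx are hypothetical):
+ *
+ *	ieee80211_crypt_delayed_deinit(ieee, &ieee->crypt[idx]);
+ *	ieee->crypt[idx] = new_crypt;
+ *
+ * The timer then tears the old context down once its refcnt drops
+ * to zero. */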
+
+int ieee80211_register_crypto_ops(struct ieee80211_crypto_ops *ops)
+{
+ unsigned long flags;
+ struct ieee80211_crypto_alg *alg;
+
+ if (hcrypt == NULL)
+ return -1;
+
+ alg = kmalloc(sizeof(*alg), GFP_KERNEL);
+ if (alg == NULL)
+ return -ENOMEM;
+
+ memset(alg, 0, sizeof(*alg));
+ alg->ops = ops;
+
+ spin_lock_irqsave(&hcrypt->lock, flags);
+ list_add(&alg->list, &hcrypt->algs);
+ spin_unlock_irqrestore(&hcrypt->lock, flags);
+
+ printk(KERN_DEBUG "ieee80211_crypt: registered algorithm '%s'\n",
+ ops->name);
+
+ return 0;
+}
+
+int ieee80211_unregister_crypto_ops(struct ieee80211_crypto_ops *ops)
+{
+ unsigned long flags;
+ struct list_head *ptr;
+ struct ieee80211_crypto_alg *del_alg = NULL;
+
+ if (hcrypt == NULL)
+ return -1;
+
+ spin_lock_irqsave(&hcrypt->lock, flags);
+ for (ptr = hcrypt->algs.next; ptr != &hcrypt->algs; ptr = ptr->next) {
+ struct ieee80211_crypto_alg *alg =
+ (struct ieee80211_crypto_alg *) ptr;
+ if (alg->ops == ops) {
+ list_del(&alg->list);
+ del_alg = alg;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&hcrypt->lock, flags);
+
+ if (del_alg) {
+ printk(KERN_DEBUG "ieee80211_crypt: unregistered algorithm "
+ "'%s'\n", ops->name);
+ kfree(del_alg);
+ }
+
+ return del_alg ? 0 : -1;
+}
+
+
+struct ieee80211_crypto_ops * ieee80211_get_crypto_ops(const char *name)
+{
+ unsigned long flags;
+ struct list_head *ptr;
+ struct ieee80211_crypto_alg *found_alg = NULL;
+
+ if (hcrypt == NULL)
+ return NULL;
+
+ spin_lock_irqsave(&hcrypt->lock, flags);
+ for (ptr = hcrypt->algs.next; ptr != &hcrypt->algs; ptr = ptr->next) {
+ struct ieee80211_crypto_alg *alg =
+ (struct ieee80211_crypto_alg *) ptr;
+ if (strcmp(alg->ops->name, name) == 0) {
+ found_alg = alg;
+ break;
+ }
+ }
+ spin_unlock_irqrestore(&hcrypt->lock, flags);
+
+ if (found_alg)
+ return found_alg->ops;
+ else
+ return NULL;
+}
+
+
+static void * ieee80211_crypt_null_init(int keyidx) { return (void *) 1; }
+static void ieee80211_crypt_null_deinit(void *priv) {}
+
+static struct ieee80211_crypto_ops ieee80211_crypt_null = {
+ .name = "NULL",
+ .init = ieee80211_crypt_null_init,
+ .deinit = ieee80211_crypt_null_deinit,
+ .encrypt_mpdu = NULL,
+ .decrypt_mpdu = NULL,
+ .encrypt_msdu = NULL,
+ .decrypt_msdu = NULL,
+ .set_key = NULL,
+ .get_key = NULL,
+ .extra_prefix_len = 0,
+ .extra_postfix_len = 0,
+ .owner = THIS_MODULE,
+};
+
+
+static int __init ieee80211_crypto_init(void)
+{
+ hcrypt = kmalloc(sizeof(*hcrypt), GFP_KERNEL);
+ if (hcrypt == NULL)
+ return -ENOMEM;
+
+ memset(hcrypt, 0, sizeof(*hcrypt));
+ INIT_LIST_HEAD(&hcrypt->algs);
+ spin_lock_init(&hcrypt->lock);
+
+ (void) ieee80211_register_crypto_ops(&ieee80211_crypt_null);
+
+ return 0;
+}
+
+
+static void __exit ieee80211_crypto_deinit(void)
+{
+ struct list_head *ptr, *n;
+
+ if (hcrypt == NULL)
+ return;
+
+ for (ptr = hcrypt->algs.next, n = ptr->next; ptr != &hcrypt->algs;
+ ptr = n, n = ptr->next) {
+ struct ieee80211_crypto_alg *alg =
+ (struct ieee80211_crypto_alg *) ptr;
+ list_del(ptr);
+ printk(KERN_DEBUG "ieee80211_crypt: unregistered algorithm "
+ "'%s' (deinit)\n", alg->ops->name);
+ kfree(alg);
+ }
+
+ kfree(hcrypt);
+}
+
+EXPORT_SYMBOL(ieee80211_crypt_deinit_entries);
+EXPORT_SYMBOL(ieee80211_crypt_deinit_handler);
+EXPORT_SYMBOL(ieee80211_crypt_delayed_deinit);
+
+EXPORT_SYMBOL(ieee80211_register_crypto_ops);
+EXPORT_SYMBOL(ieee80211_unregister_crypto_ops);
+EXPORT_SYMBOL(ieee80211_get_crypto_ops);
+
+module_init(ieee80211_crypto_init);
+module_exit(ieee80211_crypto_deinit);
--- /dev/null
+/*
+ * Original code based on Host AP (software wireless LAN access point) driver
+ * for Intersil Prism2/2.5/3.
+ *
+ * Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ * <jkmaline@cc.hut.fi>
+ * Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+ *
+ * Adaption to a generic IEEE 802.11 stack by James Ketrenos
+ * <jketreno@linux.intel.com>
+ *
+ * Copyright (c) 2004, Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+
+/*
+ * This file defines the interface to the ieee80211 crypto module.
+ */
+#ifndef IEEE80211_CRYPT_H
+#define IEEE80211_CRYPT_H
+
+#ifdef CONFIG_IEEE80211_CRYPT_MODULE
+#ifndef CONFIG_IEEE80211_CRYPT
+#define CONFIG_IEEE80211_CRYPT
+#endif
+#endif
+
+#ifdef CONFIG_IEEE80211_CRYPT
+#include <linux/skbuff.h>
+
+struct ieee80211_crypto_ops {
+ const char *name;
+
+ /* init new crypto context (e.g., allocate private data space,
+ * select IV, etc.); returns NULL on failure or pointer to allocated
+ * private data on success */
+ void * (*init)(int keyidx);
+
+ /* deinitialize crypto context and free allocated private data */
+ void (*deinit)(void *priv);
+
+ /* encrypt/decrypt return < 0 on error or >= 0 on success. The return
+ * value from decrypt_mpdu is passed as the keyidx value for
+ * decrypt_msdu. skb must have enough head and tail room for the
+ * encryption; if not, error will be returned; these functions are
+ * called for all MPDUs (i.e., fragments).
+ */
+ int (*encrypt_mpdu)(struct sk_buff *skb, int hdr_len, void *priv);
+ int (*decrypt_mpdu)(struct sk_buff *skb, int hdr_len, void *priv);
+
+ /* These functions are called for full MSDUs, i.e. full frames.
+ * These can be NULL if full MSDU operations are not needed. */
+ int (*encrypt_msdu)(struct sk_buff *skb, int hdr_len, void *priv);
+ int (*decrypt_msdu)(struct sk_buff *skb, int keyidx, int hdr_len,
+ void *priv);
+
+ int (*set_key)(void *key, int len, u8 *seq, void *priv);
+ int (*get_key)(void *key, int len, u8 *seq, void *priv);
+
+ /* procfs handler for printing out key information and possible
+ * statistics */
+ char * (*print_stats)(char *p, void *priv);
+
+ /* maximum number of bytes added by encryption; encrypt buf is
+ * allocated with extra_prefix_len bytes, copy of in_buf, and
+ * extra_postfix_len; encrypt need not use all this space, but
+ * the result must start at the beginning of the buffer and correct
+ * length must be returned */
+ int extra_prefix_len, extra_postfix_len;
+
+ struct module *owner;
+};
+
+struct ieee80211_crypt_data {
+ struct list_head list; /* delayed deletion list */
+ struct ieee80211_crypto_ops *ops;
+ void *priv;
+ atomic_t refcnt;
+};
+
+int ieee80211_register_crypto_ops(struct ieee80211_crypto_ops *ops);
+int ieee80211_unregister_crypto_ops(struct ieee80211_crypto_ops *ops);
+struct ieee80211_crypto_ops * ieee80211_get_crypto_ops(const char *name);
+void ieee80211_crypt_deinit_entries(struct ieee80211_device *, int);
+void ieee80211_crypt_deinit_handler(unsigned long);
+void ieee80211_crypt_delayed_deinit(struct ieee80211_device *ieee,
+ struct ieee80211_crypt_data **crypt);
+
+#else
+
+struct ieee80211_crypt_data {
+ struct list_head list; /* delayed deletion list */
+ void *ops;
+ void *priv;
+ atomic_t refcnt;
+};
+
+#endif /* CONFIG_IEEE80211_CRYPT */
+
+#endif /* IEEE80211_CRYPT_H */
--- /dev/null
+/*
+ * Host AP crypt: host-based CCMP encryption implementation for Host AP driver
+ *
+ * Copyright (c) 2003-2004, Jouni Malinen <jkmaline@cc.hut.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/if_ether.h>
+#include <linux/if_arp.h>
+#include <asm/string.h>
+#include <linux/wireless.h>
+
+#include "ieee80211.h"
+
+#ifndef CONFIG_CRYPTO
+#error CONFIG_CRYPTO is required to build this module.
+#endif
+
+#include <linux/crypto.h>
+#include <asm/scatterlist.h>
+
+MODULE_AUTHOR("Jouni Malinen");
+MODULE_DESCRIPTION("Host AP crypt: CCMP");
+MODULE_LICENSE("GPL");
+
+#define AES_BLOCK_LEN 16
+#define CCMP_HDR_LEN 8
+#define CCMP_MIC_LEN 8
+#define CCMP_TK_LEN 16
+#define CCMP_PN_LEN 6
+
+struct ieee80211_ccmp_data {
+ u8 key[CCMP_TK_LEN];
+ int key_set;
+
+ u8 tx_pn[CCMP_PN_LEN];
+ u8 rx_pn[CCMP_PN_LEN];
+
+ u32 dot11RSNAStatsCCMPFormatErrors;
+ u32 dot11RSNAStatsCCMPReplays;
+ u32 dot11RSNAStatsCCMPDecryptErrors;
+
+ int key_idx;
+
+ struct crypto_tfm *tfm;
+
+ /* scratch buffers for virt_to_page() (crypto API) */
+ u8 tx_b0[AES_BLOCK_LEN], tx_b[AES_BLOCK_LEN],
+ tx_e[AES_BLOCK_LEN], tx_s0[AES_BLOCK_LEN];
+ u8 rx_b0[AES_BLOCK_LEN], rx_b[AES_BLOCK_LEN], rx_a[AES_BLOCK_LEN];
+};
+
+void ieee80211_ccmp_aes_encrypt(struct crypto_tfm *tfm,
+ const u8 pt[16], u8 ct[16])
+{
+ struct scatterlist src, dst;
+
+ src.page = virt_to_page(pt);
+ src.offset = offset_in_page(pt);
+ src.length = AES_BLOCK_LEN;
+
+ dst.page = virt_to_page(ct);
+ dst.offset = offset_in_page(ct);
+ dst.length = AES_BLOCK_LEN;
+
+ crypto_cipher_encrypt(tfm, &dst, &src, AES_BLOCK_LEN);
+}
+
+static void * ieee80211_ccmp_init(int key_idx)
+{
+ struct ieee80211_ccmp_data *priv;
+
+ priv = kmalloc(sizeof(*priv), GFP_ATOMIC);
+ if (priv == NULL) {
+ goto fail;
+ }
+ memset(priv, 0, sizeof(*priv));
+ priv->key_idx = key_idx;
+
+ priv->tfm = crypto_alloc_tfm("aes", 0);
+ if (priv->tfm == NULL) {
+ printk(KERN_DEBUG "ieee80211_crypt_ccmp: could not allocate "
+ "crypto API aes\n");
+ goto fail;
+ }
+
+ return priv;
+
+fail:
+ if (priv) {
+ if (priv->tfm)
+ crypto_free_tfm(priv->tfm);
+ kfree(priv);
+ }
+
+ return NULL;
+}
+
+
+static void ieee80211_ccmp_deinit(void *priv)
+{
+ struct ieee80211_ccmp_data *_priv = priv;
+ if (_priv && _priv->tfm)
+ crypto_free_tfm(_priv->tfm);
+ kfree(priv);
+}
+
+
+static inline void xor_block(u8 *b, u8 *a, size_t len)
+{
+ int i;
+ for (i = 0; i < len; i++)
+ b[i] ^= a[i];
+}
+
+
+static void ccmp_init_blocks(struct crypto_tfm *tfm,
+ struct ieee80211_hdr *hdr,
+ u8 *pn, size_t dlen, u8 *b0, u8 *auth,
+ u8 *s0)
+{
+ u8 *pos, qc = 0;
+ size_t aad_len;
+ u16 fc;
+ int a4_included, qc_included;
+ u8 aad[2 * AES_BLOCK_LEN];
+
+ fc = le16_to_cpu(hdr->frame_ctl);
+ a4_included = ((fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
+ (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS));
+ qc_included = ((WLAN_FC_GET_TYPE(fc) == IEEE80211_FTYPE_DATA) &&
+ (WLAN_FC_GET_STYPE(fc) & 0x08));
+ aad_len = 22;
+ if (a4_included)
+ aad_len += 6;
+ if (qc_included) {
+ pos = (u8 *) &hdr->addr4;
+ if (a4_included)
+ pos += 6;
+ qc = *pos & 0x0f;
+ aad_len += 2;
+ }
+
+ /* CCM Initial Block:
+ * Flag (Include authentication header, M=3 (8-octet MIC),
+ * L=1 (2-octet Dlen))
+ * Nonce: Priority (QC for QoS frames, otherwise 0x00) | A2 | PN
+ * Dlen */
+ b0[0] = 0x59;
+ b0[1] = qc;
+ memcpy(b0 + 2, hdr->addr2, ETH_ALEN);
+ memcpy(b0 + 8, pn, CCMP_PN_LEN);
+ b0[14] = (dlen >> 8) & 0xff;
+ b0[15] = dlen & 0xff;
+
+ /* AAD:
+ * FC with bits 4..6 and 11..13 masked to zero; 14 is always one
+ * A1 | A2 | A3
+ * SC with bits 4..15 (seq#) masked to zero
+ * A4 (if present)
+ * QC (if present)
+ */
+ pos = (u8 *) hdr;
+ aad[0] = 0; /* aad_len >> 8 */
+ aad[1] = aad_len & 0xff;
+ aad[2] = pos[0] & 0x8f;
+ aad[3] = pos[1] & 0xc7;
+ memcpy(aad + 4, hdr->addr1, 3 * ETH_ALEN);
+ pos = (u8 *) &hdr->seq_ctl;
+ aad[22] = pos[0] & 0x0f;
+ aad[23] = 0; /* all bits masked */
+ memset(aad + 24, 0, 8);
+ if (a4_included)
+ memcpy(aad + 24, hdr->addr4, ETH_ALEN);
+ if (qc_included) {
+ aad[a4_included ? 30 : 24] = qc;
+ /* rest of QC masked */
+ }
+
+ /* Start with the first block and AAD */
+ ieee80211_ccmp_aes_encrypt(tfm, b0, auth);
+ xor_block(auth, aad, AES_BLOCK_LEN);
+ ieee80211_ccmp_aes_encrypt(tfm, auth, auth);
+ xor_block(auth, &aad[AES_BLOCK_LEN], AES_BLOCK_LEN);
+ ieee80211_ccmp_aes_encrypt(tfm, auth, auth);
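+ /* Turn B0 into the A0 counter block (flags reduced to the L field,
+ * counter set to 0) and precompute S0, which later masks the MIC. */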
+ b0[0] &= 0x07;
+ b0[14] = b0[15] = 0;
+ ieee80211_ccmp_aes_encrypt(tfm, b0, s0);
+}
+
+
+static int ieee80211_ccmp_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct ieee80211_ccmp_data *key = priv;
+ int data_len, i, blocks, last, len;
+ u8 *pos, *mic;
+ struct ieee80211_hdr *hdr;
+ u8 *b0 = key->tx_b0;
+ u8 *b = key->tx_b;
+ u8 *e = key->tx_e;
+ u8 *s0 = key->tx_s0;
+
+ if (skb_headroom(skb) < CCMP_HDR_LEN ||
+ skb_tailroom(skb) < CCMP_MIC_LEN ||
+ skb->len < hdr_len)
+ return -1;
+
+ data_len = skb->len - hdr_len;
+ pos = skb_push(skb, CCMP_HDR_LEN);
+ memmove(pos, pos + CCMP_HDR_LEN, hdr_len);
+ pos += hdr_len;
+ mic = skb_put(skb, CCMP_MIC_LEN);
+
+ i = CCMP_PN_LEN - 1;
+ while (i >= 0) {
+ key->tx_pn[i]++;
+ if (key->tx_pn[i] != 0)
+ break;
+ i--;
+ }
+
+ *pos++ = key->tx_pn[5];
+ *pos++ = key->tx_pn[4];
+ *pos++ = 0;
+ *pos++ = (key->key_idx << 6) | (1 << 5) /* Ext IV included */;
+ *pos++ = key->tx_pn[3];
+ *pos++ = key->tx_pn[2];
+ *pos++ = key->tx_pn[1];
+ *pos++ = key->tx_pn[0];
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ ccmp_init_blocks(key->tfm, hdr, key->tx_pn, data_len, b0, b, s0);
+
+ blocks = (data_len + AES_BLOCK_LEN - 1) / AES_BLOCK_LEN;
+ last = data_len % AES_BLOCK_LEN;
+
+ for (i = 1; i <= blocks; i++) {
+ len = (i == blocks && last) ? last : AES_BLOCK_LEN;
+ /* Authentication */
+ xor_block(b, pos, len);
+ ieee80211_ccmp_aes_encrypt(key->tfm, b, b);
+ /* Encryption, with counter */
+ b0[14] = (i >> 8) & 0xff;
+ b0[15] = i & 0xff;
+ ieee80211_ccmp_aes_encrypt(key->tfm, b0, e);
+ xor_block(pos, e, len);
+ pos += len;
+ }
+
+ for (i = 0; i < CCMP_MIC_LEN; i++)
+ mic[i] = b[i] ^ s0[i];
+
+ return 0;
+}
+
+
+static int ieee80211_ccmp_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct ieee80211_ccmp_data *key = priv;
+ u8 keyidx, *pos;
+ struct ieee80211_hdr *hdr;
+ u8 *b0 = key->rx_b0;
+ u8 *b = key->rx_b;
+ u8 *a = key->rx_a;
+ u8 pn[6];
+ int i, blocks, last, len;
+ size_t data_len = skb->len - hdr_len - CCMP_HDR_LEN - CCMP_MIC_LEN;
+ u8 *mic = skb->data + skb->len - CCMP_MIC_LEN;
+
+ if (skb->len < hdr_len + CCMP_HDR_LEN + CCMP_MIC_LEN) {
+ key->dot11RSNAStatsCCMPFormatErrors++;
+ return -1;
+ }
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ pos = skb->data + hdr_len;
+ keyidx = pos[3];
+ if (!(keyidx & (1 << 5))) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "CCMP: received packet without ExtIV"
+ " flag from " MAC_FMT "\n", MAC_ARG(hdr->addr2));
+ }
+ key->dot11RSNAStatsCCMPFormatErrors++;
+ return -2;
+ }
+ keyidx >>= 6;
+ if (key->key_idx != keyidx) {
+ printk(KERN_DEBUG "CCMP: RX tkey->key_idx=%d frame "
+ "keyidx=%d priv=%p\n", key->key_idx, keyidx, priv);
+ return -6;
+ }
+ if (!key->key_set) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "CCMP: received packet from " MAC_FMT
+ " with keyid=%d that does not have a configured"
+ " key\n", MAC_ARG(hdr->addr2), keyidx);
+ }
+ return -3;
+ }
+
+ pn[0] = pos[7];
+ pn[1] = pos[6];
+ pn[2] = pos[5];
+ pn[3] = pos[4];
+ pn[4] = pos[1];
+ pn[5] = pos[0];
+ pos += 8;
+
+ if (memcmp(pn, key->rx_pn, CCMP_PN_LEN) <= 0) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "CCMP: replay detected: STA=" MAC_FMT
+ " previous PN %02x%02x%02x%02x%02x%02x "
+ "received PN %02x%02x%02x%02x%02x%02x\n",
+ MAC_ARG(hdr->addr2), MAC_ARG(key->rx_pn),
+ MAC_ARG(pn));
+ }
+ key->dot11RSNAStatsCCMPReplays++;
+ return -4;
+ }
+
+ ccmp_init_blocks(key->tfm, hdr, pn, data_len, b0, a, b);
+ xor_block(mic, b, CCMP_MIC_LEN);
+
+ blocks = (data_len + AES_BLOCK_LEN - 1) / AES_BLOCK_LEN;
+ last = data_len % AES_BLOCK_LEN;
+
+ for (i = 1; i <= blocks; i++) {
+ len = (i == blocks && last) ? last : AES_BLOCK_LEN;
+ /* Decrypt, with counter */
+ b0[14] = (i >> 8) & 0xff;
+ b0[15] = i & 0xff;
+ ieee80211_ccmp_aes_encrypt(key->tfm, b0, b);
+ xor_block(pos, b, len);
+ /* Authentication */
+ xor_block(a, pos, len);
+ ieee80211_ccmp_aes_encrypt(key->tfm, a, a);
+ pos += len;
+ }
+
+ if (memcmp(mic, a, CCMP_MIC_LEN) != 0) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "CCMP: decrypt failed: STA="
+ MAC_FMT "\n", MAC_ARG(hdr->addr2));
+ }
+ key->dot11RSNAStatsCCMPDecryptErrors++;
+ return -5;
+ }
+
+ memcpy(key->rx_pn, pn, CCMP_PN_LEN);
+
+ /* Remove hdr and MIC */
+ memmove(skb->data + CCMP_HDR_LEN, skb->data, hdr_len);
+ skb_pull(skb, CCMP_HDR_LEN);
+ skb_trim(skb, skb->len - CCMP_MIC_LEN);
+
+ return keyidx;
+}
+
+
+static int ieee80211_ccmp_set_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct ieee80211_ccmp_data *data = priv;
+ int keyidx;
+ struct crypto_tfm *tfm = data->tfm;
+
+ keyidx = data->key_idx;
+ memset(data, 0, sizeof(*data));
+ data->key_idx = keyidx;
+ data->tfm = tfm;
+ if (len == CCMP_TK_LEN) {
+ memcpy(data->key, key, CCMP_TK_LEN);
+ data->key_set = 1;
+ if (seq) {
+ data->rx_pn[0] = seq[5];
+ data->rx_pn[1] = seq[4];
+ data->rx_pn[2] = seq[3];
+ data->rx_pn[3] = seq[2];
+ data->rx_pn[4] = seq[1];
+ data->rx_pn[5] = seq[0];
+ }
+ crypto_cipher_setkey(data->tfm, data->key, CCMP_TK_LEN);
+ } else if (len == 0) {
+ data->key_set = 0;
+ } else
+ return -1;
+
+ return 0;
+}
+
+
+static int ieee80211_ccmp_get_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct ieee80211_ccmp_data *data = priv;
+
+ if (len < CCMP_TK_LEN)
+ return -1;
+
+ if (!data->key_set)
+ return 0;
+ memcpy(key, data->key, CCMP_TK_LEN);
+
+ if (seq) {
+ seq[0] = data->tx_pn[5];
+ seq[1] = data->tx_pn[4];
+ seq[2] = data->tx_pn[3];
+ seq[3] = data->tx_pn[2];
+ seq[4] = data->tx_pn[1];
+ seq[5] = data->tx_pn[0];
+ }
+
+ return CCMP_TK_LEN;
+}
+
+
+static char * ieee80211_ccmp_print_stats(char *p, void *priv)
+{
+ struct ieee80211_ccmp_data *ccmp = priv;
+ p += sprintf(p, "key[%d] alg=CCMP key_set=%d "
+ "tx_pn=%02x%02x%02x%02x%02x%02x "
+ "rx_pn=%02x%02x%02x%02x%02x%02x "
+ "format_errors=%d replays=%d decrypt_errors=%d\n",
+ ccmp->key_idx, ccmp->key_set,
+ MAC_ARG(ccmp->tx_pn), MAC_ARG(ccmp->rx_pn),
+ ccmp->dot11RSNAStatsCCMPFormatErrors,
+ ccmp->dot11RSNAStatsCCMPReplays,
+ ccmp->dot11RSNAStatsCCMPDecryptErrors);
+
+ return p;
+}
+
+
+static struct ieee80211_crypto_ops ieee80211_crypt_ccmp = {
+ .name = "CCMP",
+ .init = ieee80211_ccmp_init,
+ .deinit = ieee80211_ccmp_deinit,
+ .encrypt_mpdu = ieee80211_ccmp_encrypt,
+ .decrypt_mpdu = ieee80211_ccmp_decrypt,
+ .encrypt_msdu = NULL,
+ .decrypt_msdu = NULL,
+ .set_key = ieee80211_ccmp_set_key,
+ .get_key = ieee80211_ccmp_get_key,
+ .print_stats = ieee80211_ccmp_print_stats,
+ .extra_prefix_len = CCMP_HDR_LEN,
+ .extra_postfix_len = CCMP_MIC_LEN,
+ .owner = THIS_MODULE,
+};
+
+
+static int __init ieee80211_crypto_ccmp_init(void)
+{
+ if (ieee80211_register_crypto_ops(&ieee80211_crypt_ccmp) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+static void __exit ieee80211_crypto_ccmp_exit(void)
+{
+ ieee80211_unregister_crypto_ops(&ieee80211_crypt_ccmp);
+}
+
+
+module_init(ieee80211_crypto_ccmp_init);
+module_exit(ieee80211_crypto_ccmp_exit);
--- /dev/null
+/*
+ * Host AP crypt: host-based TKIP encryption implementation for Host AP driver
+ *
+ * Copyright (c) 2003-2004, Jouni Malinen <jkmaline@cc.hut.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include <linux/if_ether.h>
+#include <linux/if_arp.h>
+#include <asm/string.h>
+
+#include "ieee80211.h"
+
+#ifndef CONFIG_CRYPTO
+#error CONFIG_CRYPTO is required to build this module.
+#endif
+
+#include <linux/crypto.h>
+#include <asm/scatterlist.h>
+#include <linux/crc32.h>
+
+MODULE_AUTHOR("Jouni Malinen");
+MODULE_DESCRIPTION("Host AP crypt: TKIP");
+MODULE_LICENSE("GPL");
+
+struct ieee80211_tkip_data {
+#define TKIP_KEY_LEN 32
+ u8 key[TKIP_KEY_LEN];
+ int key_set;
+
+ u32 tx_iv32;
+ u16 tx_iv16;
+ u16 tx_ttak[5];
+ int tx_phase1_done;
+
+ u32 rx_iv32;
+ u16 rx_iv16;
+ u16 rx_ttak[5];
+ int rx_phase1_done;
+ u32 rx_iv32_new;
+ u16 rx_iv16_new;
+
+ u32 dot11RSNAStatsTKIPReplays;
+ u32 dot11RSNAStatsTKIPICVErrors;
+ u32 dot11RSNAStatsTKIPLocalMICFailures;
+
+ int key_idx;
+
+ struct crypto_tfm *tfm_arc4;
+ struct crypto_tfm *tfm_michael;
+
+ /* scratch buffers for virt_to_page() (crypto API) */
+ u8 rx_hdr[16], tx_hdr[16];
+};
+
+static void * ieee80211_tkip_init(int key_idx)
+{
+ struct ieee80211_tkip_data *priv;
+
+ priv = kmalloc(sizeof(*priv), GFP_ATOMIC);
+ if (priv == NULL)
+ goto fail;
+ memset(priv, 0, sizeof(*priv));
+ priv->key_idx = key_idx;
+
+ priv->tfm_arc4 = crypto_alloc_tfm("arc4", 0);
+ if (priv->tfm_arc4 == NULL) {
+ printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
+ "crypto API arc4\n");
+ goto fail;
+ }
+
+ priv->tfm_michael = crypto_alloc_tfm("michael_mic", 0);
+ if (priv->tfm_michael == NULL) {
+ printk(KERN_DEBUG "ieee80211_crypt_tkip: could not allocate "
+ "crypto API michael_mic\n");
+ goto fail;
+ }
+
+ return priv;
+
+fail:
+ if (priv) {
+ if (priv->tfm_michael)
+ crypto_free_tfm(priv->tfm_michael);
+ if (priv->tfm_arc4)
+ crypto_free_tfm(priv->tfm_arc4);
+ kfree(priv);
+ }
+
+ return NULL;
+}
+
+
+static void ieee80211_tkip_deinit(void *priv)
+{
+ struct ieee80211_tkip_data *_priv = priv;
+ if (_priv && _priv->tfm_michael)
+ crypto_free_tfm(_priv->tfm_michael);
+ if (_priv && _priv->tfm_arc4)
+ crypto_free_tfm(_priv->tfm_arc4);
+ kfree(priv);
+}
+
+
+static inline u16 RotR1(u16 val)
+{
+ return (val >> 1) | (val << 15);
+}
+
+
+static inline u8 Lo8(u16 val)
+{
+ return val & 0xff;
+}
+
+
+static inline u8 Hi8(u16 val)
+{
+ return val >> 8;
+}
+
+
+static inline u16 Lo16(u32 val)
+{
+ return val & 0xffff;
+}
+
+
+static inline u16 Hi16(u32 val)
+{
+ return val >> 16;
+}
+
+
+static inline u16 Mk16(u8 hi, u8 lo)
+{
+ return lo | (((u16) hi) << 8);
+}
+
+
+static inline u16 Mk16_le(u16 *v)
+{
+ return le16_to_cpu(*v);
+}
+
+
+static const u16 Sbox[256] =
+{
+ 0xC6A5, 0xF884, 0xEE99, 0xF68D, 0xFF0D, 0xD6BD, 0xDEB1, 0x9154,
+ 0x6050, 0x0203, 0xCEA9, 0x567D, 0xE719, 0xB562, 0x4DE6, 0xEC9A,
+ 0x8F45, 0x1F9D, 0x8940, 0xFA87, 0xEF15, 0xB2EB, 0x8EC9, 0xFB0B,
+ 0x41EC, 0xB367, 0x5FFD, 0x45EA, 0x23BF, 0x53F7, 0xE496, 0x9B5B,
+ 0x75C2, 0xE11C, 0x3DAE, 0x4C6A, 0x6C5A, 0x7E41, 0xF502, 0x834F,
+ 0x685C, 0x51F4, 0xD134, 0xF908, 0xE293, 0xAB73, 0x6253, 0x2A3F,
+ 0x080C, 0x9552, 0x4665, 0x9D5E, 0x3028, 0x37A1, 0x0A0F, 0x2FB5,
+ 0x0E09, 0x2436, 0x1B9B, 0xDF3D, 0xCD26, 0x4E69, 0x7FCD, 0xEA9F,
+ 0x121B, 0x1D9E, 0x5874, 0x342E, 0x362D, 0xDCB2, 0xB4EE, 0x5BFB,
+ 0xA4F6, 0x764D, 0xB761, 0x7DCE, 0x527B, 0xDD3E, 0x5E71, 0x1397,
+ 0xA6F5, 0xB968, 0x0000, 0xC12C, 0x4060, 0xE31F, 0x79C8, 0xB6ED,
+ 0xD4BE, 0x8D46, 0x67D9, 0x724B, 0x94DE, 0x98D4, 0xB0E8, 0x854A,
+ 0xBB6B, 0xC52A, 0x4FE5, 0xED16, 0x86C5, 0x9AD7, 0x6655, 0x1194,
+ 0x8ACF, 0xE910, 0x0406, 0xFE81, 0xA0F0, 0x7844, 0x25BA, 0x4BE3,
+ 0xA2F3, 0x5DFE, 0x80C0, 0x058A, 0x3FAD, 0x21BC, 0x7048, 0xF104,
+ 0x63DF, 0x77C1, 0xAF75, 0x4263, 0x2030, 0xE51A, 0xFD0E, 0xBF6D,
+ 0x814C, 0x1814, 0x2635, 0xC32F, 0xBEE1, 0x35A2, 0x88CC, 0x2E39,
+ 0x9357, 0x55F2, 0xFC82, 0x7A47, 0xC8AC, 0xBAE7, 0x322B, 0xE695,
+ 0xC0A0, 0x1998, 0x9ED1, 0xA37F, 0x4466, 0x547E, 0x3BAB, 0x0B83,
+ 0x8CCA, 0xC729, 0x6BD3, 0x283C, 0xA779, 0xBCE2, 0x161D, 0xAD76,
+ 0xDB3B, 0x6456, 0x744E, 0x141E, 0x92DB, 0x0C0A, 0x486C, 0xB8E4,
+ 0x9F5D, 0xBD6E, 0x43EF, 0xC4A6, 0x39A8, 0x31A4, 0xD337, 0xF28B,
+ 0xD532, 0x8B43, 0x6E59, 0xDAB7, 0x018C, 0xB164, 0x9CD2, 0x49E0,
+ 0xD8B4, 0xACFA, 0xF307, 0xCF25, 0xCAAF, 0xF48E, 0x47E9, 0x1018,
+ 0x6FD5, 0xF088, 0x4A6F, 0x5C72, 0x3824, 0x57F1, 0x73C7, 0x9751,
+ 0xCB23, 0xA17C, 0xE89C, 0x3E21, 0x96DD, 0x61DC, 0x0D86, 0x0F85,
+ 0xE090, 0x7C42, 0x71C4, 0xCCAA, 0x90D8, 0x0605, 0xF701, 0x1C12,
+ 0xC2A3, 0x6A5F, 0xAEF9, 0x69D0, 0x1791, 0x9958, 0x3A27, 0x27B9,
+ 0xD938, 0xEB13, 0x2BB3, 0x2233, 0xD2BB, 0xA970, 0x0789, 0x33A7,
+ 0x2DB6, 0x3C22, 0x1592, 0xC920, 0x8749, 0xAAFF, 0x5078, 0xA57A,
+ 0x038F, 0x59F8, 0x0980, 0x1A17, 0x65DA, 0xD731, 0x84C6, 0xD0B8,
+ 0x82C3, 0x29B0, 0x5A77, 0x1E11, 0x7BCB, 0xA8FC, 0x6DD6, 0x2C3A,
+};
+
+
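+/* Each table entry packs both S-box output bytes; _S_() builds the
+ * 16-bit TKIP S-box result from two 8-bit lookups, byte-swapping the
+ * high-byte lookup so that one 256-entry table can stand in for the
+ * two tables of the reference design. */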
+static inline u16 _S_(u16 v)
+{
+ u16 t = Sbox[Hi8(v)];
+ return Sbox[Lo8(v)] ^ ((t << 8) | (t >> 8));
+}
+
+
+#define PHASE1_LOOP_COUNT 8
+
+static void tkip_mixing_phase1(u16 *TTAK, const u8 *TK, const u8 *TA, u32 IV32)
+{
+ int i, j;
+
+ /* Initialize the 80-bit TTAK from TSC (IV32) and TA[0..5] */
+ TTAK[0] = Lo16(IV32);
+ TTAK[1] = Hi16(IV32);
+ TTAK[2] = Mk16(TA[1], TA[0]);
+ TTAK[3] = Mk16(TA[3], TA[2]);
+ TTAK[4] = Mk16(TA[5], TA[4]);
+
+ for (i = 0; i < PHASE1_LOOP_COUNT; i++) {
+ j = 2 * (i & 1);
+ TTAK[0] += _S_(TTAK[4] ^ Mk16(TK[1 + j], TK[0 + j]));
+ TTAK[1] += _S_(TTAK[0] ^ Mk16(TK[5 + j], TK[4 + j]));
+ TTAK[2] += _S_(TTAK[1] ^ Mk16(TK[9 + j], TK[8 + j]));
+ TTAK[3] += _S_(TTAK[2] ^ Mk16(TK[13 + j], TK[12 + j]));
+ TTAK[4] += _S_(TTAK[3] ^ Mk16(TK[1 + j], TK[0 + j])) + i;
+ }
+}
+
+
+static void tkip_mixing_phase2(u8 *WEPSeed, const u8 *TK, const u16 *TTAK,
+ u16 IV16)
+{
+ /* Make temporary area overlap WEP seed so that the final copy can be
+ * avoided on little endian hosts. */
+ u16 *PPK = (u16 *) &WEPSeed[4];
+
+ /* Step 1 - make copy of TTAK and bring in TSC */
+ PPK[0] = TTAK[0];
+ PPK[1] = TTAK[1];
+ PPK[2] = TTAK[2];
+ PPK[3] = TTAK[3];
+ PPK[4] = TTAK[4];
+ PPK[5] = TTAK[4] + IV16;
+
+ /* Step 2 - 96-bit bijective mixing using S-box */
+ PPK[0] += _S_(PPK[5] ^ Mk16_le((u16 *) &TK[0]));
+ PPK[1] += _S_(PPK[0] ^ Mk16_le((u16 *) &TK[2]));
+ PPK[2] += _S_(PPK[1] ^ Mk16_le((u16 *) &TK[4]));
+ PPK[3] += _S_(PPK[2] ^ Mk16_le((u16 *) &TK[6]));
+ PPK[4] += _S_(PPK[3] ^ Mk16_le((u16 *) &TK[8]));
+ PPK[5] += _S_(PPK[4] ^ Mk16_le((u16 *) &TK[10]));
+
+ PPK[0] += RotR1(PPK[5] ^ Mk16_le((u16 *) &TK[12]));
+ PPK[1] += RotR1(PPK[0] ^ Mk16_le((u16 *) &TK[14]));
+ PPK[2] += RotR1(PPK[1]);
+ PPK[3] += RotR1(PPK[2]);
+ PPK[4] += RotR1(PPK[3]);
+ PPK[5] += RotR1(PPK[4]);
+
+ /* Step 3 - bring in last of TK bits, assign 24-bit WEP IV value
+ * WEPSeed[0..2] is transmitted as WEP IV */
+ WEPSeed[0] = Hi8(IV16);
+ WEPSeed[1] = (Hi8(IV16) | 0x20) & 0x7F;
+ WEPSeed[2] = Lo8(IV16);
+ WEPSeed[3] = Lo8((PPK[5] ^ Mk16_le((u16 *) &TK[0])) >> 1);
+
+#ifdef __BIG_ENDIAN
+ {
+ int i;
+ for (i = 0; i < 6; i++)
+ PPK[i] = (PPK[i] << 8) | (PPK[i] >> 8);
+ }
+#endif
+}
+
+static int ieee80211_tkip_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+ int len;
+ u8 rc4key[16], *pos, *icv;
+ struct ieee80211_hdr *hdr;
+ u32 crc;
+ struct scatterlist sg;
+
+ if (skb_headroom(skb) < 8 || skb_tailroom(skb) < 4 ||
+ skb->len < hdr_len)
+ return -1;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ if (!tkey->tx_phase1_done) {
+ tkip_mixing_phase1(tkey->tx_ttak, tkey->key, hdr->addr2,
+ tkey->tx_iv32);
+ tkey->tx_phase1_done = 1;
+ }
+ tkip_mixing_phase2(rc4key, tkey->key, tkey->tx_ttak, tkey->tx_iv16);
+
+ len = skb->len - hdr_len;
+ pos = skb_push(skb, 8);
+ memmove(pos, pos + 8, hdr_len);
+ pos += hdr_len;
+ icv = skb_put(skb, 4);
+
+ *pos++ = rc4key[0];
+ *pos++ = rc4key[1];
+ *pos++ = rc4key[2];
+ *pos++ = (tkey->key_idx << 6) | (1 << 5) /* Ext IV included */;
+ *pos++ = tkey->tx_iv32 & 0xff;
+ *pos++ = (tkey->tx_iv32 >> 8) & 0xff;
+ *pos++ = (tkey->tx_iv32 >> 16) & 0xff;
+ *pos++ = (tkey->tx_iv32 >> 24) & 0xff;
+
+ crc = ~crc32_le(~0, pos, len);
+ icv[0] = crc;
+ icv[1] = crc >> 8;
+ icv[2] = crc >> 16;
+ icv[3] = crc >> 24;
+
+ crypto_cipher_setkey(tkey->tfm_arc4, rc4key, 16);
+ sg.page = virt_to_page(pos);
+ sg.offset = offset_in_page(pos);
+ sg.length = len + 4;
+ crypto_cipher_encrypt(tkey->tfm_arc4, &sg, &sg, len + 4);
+
+ tkey->tx_iv16++;
+ if (tkey->tx_iv16 == 0) {
+ tkey->tx_phase1_done = 0;
+ tkey->tx_iv32++;
+ }
+
+ return 0;
+}
+
+static int ieee80211_tkip_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+ u8 rc4key[16];
+ u8 keyidx, *pos;
+ u32 iv32;
+ u16 iv16;
+ struct ieee80211_hdr *hdr;
+ u8 icv[4];
+ u32 crc;
+ struct scatterlist sg;
+ int plen;
+
+ if (skb->len < hdr_len + 8 + 4)
+ return -1;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ pos = skb->data + hdr_len;
+ keyidx = pos[3];
+ if (!(keyidx & (1 << 5))) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "TKIP: received packet without ExtIV"
+ " flag from " MAC_FMT "\n", MAC_ARG(hdr->addr2));
+ }
+ return -2;
+ }
+ keyidx >>= 6;
+ if (tkey->key_idx != keyidx) {
+ printk(KERN_DEBUG "TKIP: RX tkey->key_idx=%d frame "
+ "keyidx=%d priv=%p\n", tkey->key_idx, keyidx, priv);
+ return -6;
+ }
+ if (!tkey->key_set) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "TKIP: received packet from " MAC_FMT
+ " with keyid=%d that does not have a configured"
+ " key\n", MAC_ARG(hdr->addr2), keyidx);
+ }
+ return -3;
+ }
+ iv16 = (pos[0] << 8) | pos[2];
+ iv32 = pos[4] | (pos[5] << 8) | (pos[6] << 16) | (pos[7] << 24);
+ pos += 8;
+
+ if (iv32 < tkey->rx_iv32 ||
+ (iv32 == tkey->rx_iv32 && iv16 <= tkey->rx_iv16)) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "TKIP: replay detected: STA=" MAC_FMT
+ " previous TSC %08x%04x received TSC "
+ "%08x%04x\n", MAC_ARG(hdr->addr2),
+ tkey->rx_iv32, tkey->rx_iv16, iv32, iv16);
+ }
+ tkey->dot11RSNAStatsTKIPReplays++;
+ return -4;
+ }
+
+ if (iv32 != tkey->rx_iv32 || !tkey->rx_phase1_done) {
+ tkip_mixing_phase1(tkey->rx_ttak, tkey->key, hdr->addr2, iv32);
+ tkey->rx_phase1_done = 1;
+ }
+ tkip_mixing_phase2(rc4key, tkey->key, tkey->rx_ttak, iv16);
+
+ plen = skb->len - hdr_len - 12;
+
+ crypto_cipher_setkey(tkey->tfm_arc4, rc4key, 16);
+ sg.page = virt_to_page(pos);
+ sg.offset = offset_in_page(pos);
+ sg.length = plen + 4;
+ crypto_cipher_decrypt(tkey->tfm_arc4, &sg, &sg, plen + 4);
+
+ crc = ~crc32_le(~0, pos, plen);
+ icv[0] = crc;
+ icv[1] = crc >> 8;
+ icv[2] = crc >> 16;
+ icv[3] = crc >> 24;
+ if (memcmp(icv, pos + plen, 4) != 0) {
+ if (iv32 != tkey->rx_iv32) {
+ /* Previously cached Phase1 result was already lost, so
+ * it needs to be recalculated for the next packet. */
+ tkey->rx_phase1_done = 0;
+ }
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "TKIP: ICV error detected: STA="
+ MAC_FMT "\n", MAC_ARG(hdr->addr2));
+ }
+ tkey->dot11RSNAStatsTKIPICVErrors++;
+ return -5;
+ }
+
+ /* Update real counters only after Michael MIC verification has
+ * completed */
+ tkey->rx_iv32_new = iv32;
+ tkey->rx_iv16_new = iv16;
+
+ /* Remove IV and ICV */
+ memmove(skb->data + 8, skb->data, hdr_len);
+ skb_pull(skb, 8);
+ skb_trim(skb, skb->len - 4);
+
+ return keyidx;
+}
+
+
+static int michael_mic(struct ieee80211_tkip_data *tkey, u8 *key, u8 *hdr,
+ u8 *data, size_t data_len, u8 *mic)
+{
+ struct scatterlist sg[2];
+
+ if (tkey->tfm_michael == NULL) {
+ printk(KERN_WARNING "michael_mic: tfm_michael == NULL\n");
+ return -1;
+ }
+ sg[0].page = virt_to_page(hdr);
+ sg[0].offset = offset_in_page(hdr);
+ sg[0].length = 16;
+
+ sg[1].page = virt_to_page(data);
+ sg[1].offset = offset_in_page(data);
+ sg[1].length = data_len;
+
+ crypto_digest_init(tkey->tfm_michael);
+ crypto_digest_setkey(tkey->tfm_michael, key, 8);
+ crypto_digest_update(tkey->tfm_michael, sg, 2);
+ crypto_digest_final(tkey->tfm_michael, mic);
+
+ return 0;
+}
+
+static void michael_mic_hdr(struct sk_buff *skb, u8 *hdr)
+{
+ struct ieee80211_hdr *hdr11;
+
+ hdr11 = (struct ieee80211_hdr *) skb->data;
+ switch (le16_to_cpu(hdr11->frame_ctl) &
+ (IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS)) {
+ case IEEE80211_FCTL_TODS:
+ memcpy(hdr, hdr11->addr3, ETH_ALEN); /* DA */
+ memcpy(hdr + ETH_ALEN, hdr11->addr2, ETH_ALEN); /* SA */
+ break;
+ case IEEE80211_FCTL_FROMDS:
+ memcpy(hdr, hdr11->addr1, ETH_ALEN); /* DA */
+ memcpy(hdr + ETH_ALEN, hdr11->addr3, ETH_ALEN); /* SA */
+ break;
+ case IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS:
+ memcpy(hdr, hdr11->addr3, ETH_ALEN); /* DA */
+ memcpy(hdr + ETH_ALEN, hdr11->addr4, ETH_ALEN); /* SA */
+ break;
+ case 0:
+ memcpy(hdr, hdr11->addr1, ETH_ALEN); /* DA */
+ memcpy(hdr + ETH_ALEN, hdr11->addr2, ETH_ALEN); /* SA */
+ break;
+ }
+
+ hdr[12] = 0; /* priority */
+ hdr[13] = hdr[14] = hdr[15] = 0; /* reserved */
+}
+
+
+static int ieee80211_michael_mic_add(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+ u8 *pos;
+
+ if (skb_tailroom(skb) < 8 || skb->len < hdr_len) {
+ printk(KERN_DEBUG "Invalid packet for Michael MIC add "
+ "(tailroom=%d hdr_len=%d skb->len=%d)\n",
+ skb_tailroom(skb), hdr_len, skb->len);
+ return -1;
+ }
+
+ michael_mic_hdr(skb, tkey->tx_hdr);
+ pos = skb_put(skb, 8);
+ if (michael_mic(tkey, &tkey->key[16], tkey->tx_hdr,
+ skb->data + hdr_len, skb->len - 8 - hdr_len, pos))
+ return -1;
+
+ return 0;
+}
+
+
+#if WIRELESS_EXT >= 18
+static void ieee80211_michael_mic_failure(struct net_device *dev,
+ struct ieee80211_hdr *hdr,
+ int keyidx)
+{
+ union iwreq_data wrqu;
+ struct iw_michaelmicfailure ev;
+
+ /* TODO: needed parameters: count, keyid, key type, TSC */
+ memset(&ev, 0, sizeof(ev));
+ ev.flags = keyidx & IW_MICFAILURE_KEY_ID;
+ if (hdr->addr1[0] & 0x01)
+ ev.flags |= IW_MICFAILURE_GROUP;
+ else
+ ev.flags |= IW_MICFAILURE_PAIRWISE;
+ ev.src_addr.sa_family = ARPHRD_ETHER;
+ memcpy(ev.src_addr.sa_data, hdr->addr2, ETH_ALEN);
+ memset(&wrqu, 0, sizeof(wrqu));
+ wrqu.data.length = sizeof(ev);
+ wireless_send_event(dev, IWEVMICHAELMICFAILURE, &wrqu, (char *) &ev);
+}
+#elif WIRELESS_EXT >= 15
+static void ieee80211_michael_mic_failure(struct net_device *dev,
+ struct ieee80211_hdr *hdr,
+ int keyidx)
+{
+ union iwreq_data wrqu;
+ char buf[128];
+
+ /* TODO: needed parameters: count, keyid, key type, TSC */
+ sprintf(buf, "MLME-MICHAELMICFAILURE.indication(keyid=%d %scast addr="
+ MAC_FMT ")", keyidx, hdr->addr1[0] & 0x01 ? "broad" : "uni",
+ MAC_ARG(hdr->addr2));
+ memset(&wrqu, 0, sizeof(wrqu));
+ wrqu.data.length = strlen(buf);
+ wireless_send_event(dev, IWEVCUSTOM, &wrqu, buf);
+}
+#else /* WIRELESS_EXT >= 15 */
+static inline void ieee80211_michael_mic_failure(struct net_device *dev,
+ struct ieee80211_hdr *hdr,
+ int keyidx)
+{
+}
+#endif /* WIRELESS_EXT >= 15 */
+
+
+static int ieee80211_michael_mic_verify(struct sk_buff *skb, int keyidx,
+ int hdr_len, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+ u8 mic[8];
+
+ if (!tkey->key_set)
+ return -1;
+
+ michael_mic_hdr(skb, tkey->rx_hdr);
+ if (michael_mic(tkey, &tkey->key[24], tkey->rx_hdr,
+ skb->data + hdr_len, skb->len - 8 - hdr_len, mic))
+ return -1;
+ if (memcmp(mic, skb->data + skb->len - 8, 8) != 0) {
+ struct ieee80211_hdr *hdr;
+ hdr = (struct ieee80211_hdr *) skb->data;
+ printk(KERN_DEBUG "%s: Michael MIC verification failed for "
+ "MSDU from " MAC_FMT " keyidx=%d\n",
+ skb->dev ? skb->dev->name : "N/A", MAC_ARG(hdr->addr2),
+ keyidx);
+ if (skb->dev)
+ ieee80211_michael_mic_failure(skb->dev, hdr, keyidx);
+ tkey->dot11RSNAStatsTKIPLocalMICFailures++;
+ return -1;
+ }
+
+ /* Update TSC counters for RX now that the packet verification has
+ * completed. */
+ tkey->rx_iv32 = tkey->rx_iv32_new;
+ tkey->rx_iv16 = tkey->rx_iv16_new;
+
+ skb_trim(skb, skb->len - 8);
+
+ return 0;
+}
+
+
+static int ieee80211_tkip_set_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+ int keyidx;
+ struct crypto_tfm *tfm = tkey->tfm_michael;
+ struct crypto_tfm *tfm2 = tkey->tfm_arc4;
+
+ keyidx = tkey->key_idx;
+ memset(tkey, 0, sizeof(*tkey));
+ tkey->key_idx = keyidx;
+ tkey->tfm_michael = tfm;
+ tkey->tfm_arc4 = tfm2;
+ if (len == TKIP_KEY_LEN) {
+ memcpy(tkey->key, key, TKIP_KEY_LEN);
+ tkey->key_set = 1;
+ tkey->tx_iv16 = 1; /* TSC is initialized to 1 */
+ if (seq) {
+ tkey->rx_iv32 = (seq[5] << 24) | (seq[4] << 16) |
+ (seq[3] << 8) | seq[2];
+ tkey->rx_iv16 = (seq[1] << 8) | seq[0];
+ }
+ } else if (len == 0) {
+ tkey->key_set = 0;
+ } else
+ return -1;
+
+ return 0;
+}
+
+
+static int ieee80211_tkip_get_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct ieee80211_tkip_data *tkey = priv;
+
+ if (len < TKIP_KEY_LEN)
+ return -1;
+
+ if (!tkey->key_set)
+ return 0;
+ memcpy(key, tkey->key, TKIP_KEY_LEN);
+
+ if (seq) {
+ /* Return the sequence number of the last transmitted frame;
+ * tx_iv16/tx_iv32 have already been advanced past it, so step
+ * back by one before reporting. */
+ u16 iv16 = tkey->tx_iv16;
+ u32 iv32 = tkey->tx_iv32;
+ if (iv16 == 0)
+ iv32--;
+ iv16--;
+ seq[0] = iv16;
+ seq[1] = iv16 >> 8;
+ seq[2] = iv32;
+ seq[3] = iv32 >> 8;
+ seq[4] = iv32 >> 16;
+ seq[5] = iv32 >> 24;
+ }
+
+ return TKIP_KEY_LEN;
+}
+
+
+static char * ieee80211_tkip_print_stats(char *p, void *priv)
+{
+ struct ieee80211_tkip_data *tkip = priv;
+ p += sprintf(p, "key[%d] alg=TKIP key_set=%d "
+ "tx_pn=%02x%02x%02x%02x%02x%02x "
+ "rx_pn=%02x%02x%02x%02x%02x%02x "
+ "replays=%d icv_errors=%d local_mic_failures=%d\n",
+ tkip->key_idx, tkip->key_set,
+ (tkip->tx_iv32 >> 24) & 0xff,
+ (tkip->tx_iv32 >> 16) & 0xff,
+ (tkip->tx_iv32 >> 8) & 0xff,
+ tkip->tx_iv32 & 0xff,
+ (tkip->tx_iv16 >> 8) & 0xff,
+ tkip->tx_iv16 & 0xff,
+ (tkip->rx_iv32 >> 24) & 0xff,
+ (tkip->rx_iv32 >> 16) & 0xff,
+ (tkip->rx_iv32 >> 8) & 0xff,
+ tkip->rx_iv32 & 0xff,
+ (tkip->rx_iv16 >> 8) & 0xff,
+ tkip->rx_iv16 & 0xff,
+ tkip->dot11RSNAStatsTKIPReplays,
+ tkip->dot11RSNAStatsTKIPICVErrors,
+ tkip->dot11RSNAStatsTKIPLocalMICFailures);
+ return p;
+}
+
+
+static struct ieee80211_crypto_ops ieee80211_crypt_tkip = {
+ .name = "TKIP",
+ .init = ieee80211_tkip_init,
+ .deinit = ieee80211_tkip_deinit,
+ .encrypt_mpdu = ieee80211_tkip_encrypt,
+ .decrypt_mpdu = ieee80211_tkip_decrypt,
+ .encrypt_msdu = ieee80211_michael_mic_add,
+ .decrypt_msdu = ieee80211_michael_mic_verify,
+ .set_key = ieee80211_tkip_set_key,
+ .get_key = ieee80211_tkip_get_key,
+ .print_stats = ieee80211_tkip_print_stats,
+ .extra_prefix_len = 4 + 4, /* IV + ExtIV */
+ .extra_postfix_len = 8 + 4, /* MIC + ICV */
+ .owner = THIS_MODULE,
+};
+
+
+static int __init ieee80211_crypto_tkip_init(void)
+{
+ if (ieee80211_register_crypto_ops(&ieee80211_crypt_tkip) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+static void __exit ieee80211_crypto_tkip_exit(void)
+{
+ ieee80211_unregister_crypto_ops(&ieee80211_crypt_tkip);
+}
+
+
+module_init(ieee80211_crypto_tkip_init);
+module_exit(ieee80211_crypto_tkip_exit);
--- /dev/null
+/*
+ * Host AP crypt: host-based WEP encryption implementation for Host AP driver
+ *
+ * Copyright (c) 2002-2004, Jouni Malinen <jkmaline@cc.hut.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/version.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/random.h>
+#include <linux/skbuff.h>
+#include <asm/string.h>
+
+#include "ieee80211.h"
+
+#ifndef CONFIG_CRYPTO
+#error CONFIG_CRYPTO is required to build this module.
+#endif
+#include <linux/crypto.h>
+#include <asm/scatterlist.h>
+#include <linux/crc32.h>
+
+MODULE_AUTHOR("Jouni Malinen");
+MODULE_DESCRIPTION("Host AP crypt: WEP");
+MODULE_LICENSE("GPL");
+
+
+struct prism2_wep_data {
+ u32 iv;
+#define WEP_KEY_LEN 13
+ u8 key[WEP_KEY_LEN + 1];
+ u8 key_len;
+ u8 key_idx;
+ struct crypto_tfm *tfm;
+};
+
+
+static void * prism2_wep_init(int keyidx)
+{
+ struct prism2_wep_data *priv;
+
+ priv = kmalloc(sizeof(*priv), GFP_ATOMIC);
+ if (priv == NULL)
+ goto fail;
+ memset(priv, 0, sizeof(*priv));
+ priv->key_idx = keyidx;
+
+ priv->tfm = crypto_alloc_tfm("arc4", 0);
+ if (priv->tfm == NULL) {
+ printk(KERN_DEBUG "ieee80211_crypt_wep: could not allocate "
+ "crypto API arc4\n");
+ goto fail;
+ }
+
+ /* start WEP IV from a random value */
+ get_random_bytes(&priv->iv, 4);
+
+ return priv;
+
+fail:
+ if (priv) {
+ if (priv->tfm)
+ crypto_free_tfm(priv->tfm);
+ kfree(priv);
+ }
+ return NULL;
+}
+
+
+static void prism2_wep_deinit(void *priv)
+{
+ struct prism2_wep_data *_priv = priv;
+ if (_priv && _priv->tfm)
+ crypto_free_tfm(_priv->tfm);
+ kfree(priv);
+}
+
+
+/* Perform WEP encryption on given skb that has at least 4 bytes of headroom
+ * for IV and 4 bytes of tailroom for ICV. Both IV and ICV will be transmitted,
+ * so the payload length increases with 8 bytes.
+ *
+ * WEP frame payload: IV + TX key idx, RC4(data), ICV = RC4(CRC32(data))
+ */
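+/* A sketch of the resulting on-air layout (offsets relative to the end
+ * of the 802.11 header):
+ *
+ *	[0..2]	24-bit IV, transmitted in the clear
+ *	[3]	key index in bits 6..7, remaining bits zero
+ *	[4..]	RC4 keystream XORed over the payload followed by the
+ *		4-byte little-endian CRC32 ICV
+ */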
+static int prism2_wep_encrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct prism2_wep_data *wep = priv;
+ u32 crc, klen, len;
+ u8 key[WEP_KEY_LEN + 3];
+ u8 *pos, *icv;
+ struct scatterlist sg;
+
+ if (skb_headroom(skb) < 4 || skb_tailroom(skb) < 4 ||
+ skb->len < hdr_len)
+ return -1;
+
+ len = skb->len - hdr_len;
+ pos = skb_push(skb, 4);
+ memmove(pos, pos + 4, hdr_len);
+ pos += hdr_len;
+
+ klen = 3 + wep->key_len;
+
+ wep->iv++;
+
+ /* Fluhrer, Mantin, and Shamir have reported weaknesses in the key
+ * scheduling algorithm of RC4. At least IVs (KeyByte + 3, 0xff, N)
+ * can be used to speed up attacks, so avoid using them. */
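+ /* Example: with a 104-bit key (klen == 16), IVs whose first byte is
+ * 0x03..0x0f and whose second byte is 0xff are skipped; the bump
+ * below advances the IV past the weak class. */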
+ if ((wep->iv & 0xff00) == 0xff00) {
+ u8 B = (wep->iv >> 16) & 0xff;
+ if (B >= 3 && B < klen)
+ wep->iv += 0x0100;
+ }
+
+ /* Prepend 24-bit IV to RC4 key and TX frame */
+ *pos++ = key[0] = (wep->iv >> 16) & 0xff;
+ *pos++ = key[1] = (wep->iv >> 8) & 0xff;
+ *pos++ = key[2] = wep->iv & 0xff;
+ *pos++ = wep->key_idx << 6;
+
+ /* Copy rest of the WEP key (the secret part) */
+ memcpy(key + 3, wep->key, wep->key_len);
+
+ /* Append little-endian CRC32 and encrypt it to produce ICV */
+ crc = ~crc32_le(~0, pos, len);
+ icv = skb_put(skb, 4);
+ icv[0] = crc;
+ icv[1] = crc >> 8;
+ icv[2] = crc >> 16;
+ icv[3] = crc >> 24;
+
+ crypto_cipher_setkey(wep->tfm, key, klen);
+ sg.page = virt_to_page(pos);
+ sg.offset = offset_in_page(pos);
+ sg.length = len + 4;
+ crypto_cipher_encrypt(wep->tfm, &sg, &sg, len + 4);
+
+ return 0;
+}
+
+
+/* Perform WEP decryption on given buffer. Buffer includes whole WEP part of
+ * the frame: IV (4 bytes), encrypted payload (including SNAP header),
+ * ICV (4 bytes). len includes both IV and ICV.
+ *
+ * Returns 0 if frame was decrypted successfully and ICV was correct and -1 on
+ * failure. If frame is OK, IV and ICV will be removed.
+ */
+static int prism2_wep_decrypt(struct sk_buff *skb, int hdr_len, void *priv)
+{
+ struct prism2_wep_data *wep = priv;
+ u32 crc, klen, plen;
+ u8 key[WEP_KEY_LEN + 3];
+ u8 keyidx, *pos, icv[4];
+ struct scatterlist sg;
+
+ if (skb->len < hdr_len + 8)
+ return -1;
+
+ pos = skb->data + hdr_len;
+ key[0] = *pos++;
+ key[1] = *pos++;
+ key[2] = *pos++;
+ keyidx = *pos++ >> 6;
+ if (keyidx != wep->key_idx)
+ return -1;
+
+ klen = 3 + wep->key_len;
+
+ /* Copy rest of the WEP key (the secret part) */
+ memcpy(key + 3, wep->key, wep->key_len);
+
+ /* Apply RC4 to data and compute CRC32 over decrypted data */
+ plen = skb->len - hdr_len - 8;
+
+ crypto_cipher_setkey(wep->tfm, key, klen);
+ sg.page = virt_to_page(pos);
+ sg.offset = offset_in_page(pos);
+ sg.length = plen + 4;
+ crypto_cipher_decrypt(wep->tfm, &sg, &sg, plen + 4);
+
+ crc = ~crc32_le(~0, pos, plen);
+ icv[0] = crc;
+ icv[1] = crc >> 8;
+ icv[2] = crc >> 16;
+ icv[3] = crc >> 24;
+ if (memcmp(icv, pos + plen, 4) != 0) {
+ /* ICV mismatch - drop frame */
+ return -2;
+ }
+
+ /* Remove IV and ICV */
+ memmove(skb->data + 4, skb->data, hdr_len);
+ skb_pull(skb, 4);
+ skb_trim(skb, skb->len - 4);
+
+ return 0;
+}
+
+
+static int prism2_wep_set_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct prism2_wep_data *wep = priv;
+
+ if (len < 0 || len > WEP_KEY_LEN)
+ return -1;
+
+ memcpy(wep->key, key, len);
+ wep->key_len = len;
+
+ return 0;
+}
+
+
+static int prism2_wep_get_key(void *key, int len, u8 *seq, void *priv)
+{
+ struct prism2_wep_data *wep = priv;
+
+ if (len < wep->key_len)
+ return -1;
+
+ memcpy(key, wep->key, wep->key_len);
+
+ return wep->key_len;
+}
+
+
+static char * prism2_wep_print_stats(char *p, void *priv)
+{
+ struct prism2_wep_data *wep = priv;
+ p += sprintf(p, "key[%d] alg=WEP len=%d\n",
+ wep->key_idx, wep->key_len);
+ return p;
+}
+
+
+static struct ieee80211_crypto_ops ieee80211_crypt_wep = {
+ .name = "WEP",
+ .init = prism2_wep_init,
+ .deinit = prism2_wep_deinit,
+ .encrypt_mpdu = prism2_wep_encrypt,
+ .decrypt_mpdu = prism2_wep_decrypt,
+ .encrypt_msdu = NULL,
+ .decrypt_msdu = NULL,
+ .set_key = prism2_wep_set_key,
+ .get_key = prism2_wep_get_key,
+ .print_stats = prism2_wep_print_stats,
+ .extra_prefix_len = 4, /* IV */
+ .extra_postfix_len = 4, /* ICV */
+ .owner = THIS_MODULE,
+};
+
+
+static int __init ieee80211_crypto_wep_init(void)
+{
+ if (ieee80211_register_crypto_ops(&ieee80211_crypt_wep) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+static void __exit ieee80211_crypto_wep_exit(void)
+{
+ ieee80211_unregister_crypto_ops(&ieee80211_crypt_wep);
+}
+
+
+module_init(ieee80211_crypto_wep_init);
+module_exit(ieee80211_crypto_wep_exit);
--- /dev/null
+/*******************************************************************************
+
+ Copyright(c) 2004 Intel Corporation. All rights reserved.
+
+ Portions of this file are based on the WEP enablement code provided by the
+ Host AP project hostap-drivers v0.1.3
+ Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ <jkmaline@cc.hut.fi>
+ Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+*******************************************************************************/
+
+#include <linux/compiler.h>
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/if_arp.h>
+#include <linux/in6.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/tcp.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/wireless.h>
+#include <linux/etherdevice.h>
+#include <asm/uaccess.h>
+
+#include "ieee80211.h"
+
+MODULE_DESCRIPTION("802.11 data/management/control stack");
+MODULE_AUTHOR("Copyright (C) 2004 Intel Corporation <jketreno@linux.intel.com>");
+MODULE_LICENSE("GPL");
+
+#define DRV_NAME "ieee80211"
+
+static inline int ieee80211_networks_allocate(struct ieee80211_device *ieee)
+{
+ if (ieee->networks)
+ return 0;
+
+ ieee->networks = kmalloc(
+ MAX_NETWORK_COUNT * sizeof(struct ieee80211_network),
+ GFP_KERNEL);
+ if (!ieee->networks) {
+ printk(KERN_WARNING "%s: Out of memory allocating beacons\n",
+ ieee->dev->name);
+ return -ENOMEM;
+ }
+
+ memset(ieee->networks, 0,
+ MAX_NETWORK_COUNT * sizeof(struct ieee80211_network));
+
+ return 0;
+}
+
+static inline void ieee80211_networks_free(struct ieee80211_device *ieee)
+{
+ if (!ieee->networks)
+ return;
+ kfree(ieee->networks);
+ ieee->networks = NULL;
+}
+
+static inline void ieee80211_networks_initialize(struct ieee80211_device *ieee)
+{
+ int i;
+
+ INIT_LIST_HEAD(&ieee->network_free_list);
+ INIT_LIST_HEAD(&ieee->network_list);
+ for (i = 0; i < MAX_NETWORK_COUNT; i++)
+ list_add_tail(&ieee->networks[i].list, &ieee->network_free_list);
+}
+
+struct ieee80211_device *ieee80211_alloc(struct net_device *dev,
+ void *priv)
+{
+ struct ieee80211_device *ieee = kmalloc(
+ sizeof(struct ieee80211_device), GFP_KERNEL);
+ int err;
+ if (ieee == NULL)
+ return NULL;
+ memset(ieee, 0, sizeof(*ieee));
+ ieee->dev = dev;
+ ieee->priv = priv;
+
+ err = ieee80211_networks_allocate(ieee);
+ if (err) {
+ IEEE80211_ERROR("Unable to allocate beacon storage\n");
+ goto error;
+ }
+ ieee80211_networks_initialize(ieee);
+
+ /* Default fragmentation threshold is maximum payload size */
+ ieee->fts = DEFAULT_FTS;
+ ieee->scan_age = DEFAULT_MAX_SCAN_AGE;
+ ieee->open_wep = 1;
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ /* Default to enabling full open WEP with host based encrypt/decrypt */
+ ieee->host_encrypt = 1;
+ ieee->host_decrypt = 1;
+
+ INIT_LIST_HEAD(&ieee->crypt_deinit_list);
+ init_timer(&ieee->crypt_deinit_timer);
+ ieee->crypt_deinit_timer.data = (unsigned long)ieee;
+ ieee->crypt_deinit_timer.function = ieee80211_crypt_deinit_handler;
+#endif
+
+ spin_lock_init(&ieee->lock);
+
+ return ieee;
+
+ error:
+ kfree(ieee);
+ return NULL;
+}
+
+void ieee80211_free(struct ieee80211_device *ieee)
+{
+#ifdef CONFIG_IEEE80211_CRYPT
+ int i;
+
+ del_timer_sync(&ieee->crypt_deinit_timer);
+ ieee80211_crypt_deinit_entries(ieee, 1);
+
+ for (i = 0; i < WEP_KEYS; i++) {
+ struct ieee80211_crypt_data *crypt = ieee->crypt[i];
+ if (crypt) {
+ if (crypt->ops) {
+ crypt->ops->deinit(crypt->priv);
+ module_put(crypt->ops->owner);
+ }
+ kfree(crypt);
+ ieee->crypt[i] = NULL;
+ }
+ }
+#endif
+ ieee80211_networks_free(ieee);
+ kfree(ieee);
+}
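+
+#if 0
+/* Illustrative usage sketch only (not compiled): a driver would
+ * typically pair ieee80211_alloc()/ieee80211_free() in its
+ * probe/remove paths. The names below are placeholders. */
+static struct ieee80211_device *example_attach(struct net_device *dev,
+ void *driver_priv)
+{
+ struct ieee80211_device *ieee = ieee80211_alloc(dev, driver_priv);
+
+ if (!ieee)
+ return NULL; /* allocation or network storage failed */
+ /* ... hook up transmit/receive paths here ... */
+ return ieee;
+}
+
+static void example_detach(struct ieee80211_device *ieee)
+{
+ ieee80211_free(ieee); /* also tears down any crypt state */
+}
+#endif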
+
+#ifdef CONFIG_IEEE80211_DEBUG
+
+static int debug = 0;
+u32 ieee80211_debug_level = 0;
+struct proc_dir_entry *ieee80211_proc = NULL;
+
+static int show_debug_level(char *page, char **start, off_t offset,
+ int count, int *eof, void *data)
+{
+ return snprintf(page, count, "0x%08X\n", ieee80211_debug_level);
+}
+
+static int store_debug_level(struct file *file, const char *buffer,
+ unsigned long count, void *data)
+{
+ char buf[] = "0x00000000";
+ unsigned long len = min(sizeof(buf) - 1, (size_t)count);
+ char *p = (char *)buf;
+ unsigned long val;
+
+ if (copy_from_user(buf, buffer, len))
+ return count;
+ buf[len] = 0;
+ if (p[1] == 'x' || p[1] == 'X' || p[0] == 'x' || p[0] == 'X') {
+ p++;
+ if (p[0] == 'x' || p[0] == 'X')
+ p++;
+ val = simple_strtoul(p, &p, 16);
+ } else
+ val = simple_strtoul(p, &p, 10);
+ if (p == buf)
+ printk(KERN_INFO DRV_NAME
+ ": %s is not in hex or decimal form.\n", buf);
+ else
+ ieee80211_debug_level = val;
+
+ return strnlen(buf, count);
+}
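+
+/* Usage sketch (informational): with CONFIG_IEEE80211_DEBUG enabled the
+ * mask is exposed through procfs and can be manipulated from user space,
+ * e.g.:
+ *
+ * cat /proc/net/ieee80211/debug_level
+ * echo 0x00000FFF > /proc/net/ieee80211/debug_level
+ *
+ * Both hex (with or without a leading "0x" or "x") and decimal values
+ * are accepted by the parser above. */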
+
+static int __init ieee80211_init(void)
+{
+ struct proc_dir_entry *e;
+
+ ieee80211_debug_level = debug;
+ ieee80211_proc = create_proc_entry(DRV_NAME, S_IFDIR, proc_net);
+ if (ieee80211_proc == NULL) {
+ IEEE80211_ERROR("Unable to create " DRV_NAME
+ " proc directory\n");
+ return -EIO;
+ }
+ e = create_proc_entry("debug_level", S_IFREG | S_IRUGO | S_IWUSR,
+ ieee80211_proc);
+ if (!e) {
+ remove_proc_entry(DRV_NAME, proc_net);
+ ieee80211_proc = NULL;
+ return -EIO;
+ }
+ e->read_proc = show_debug_level;
+ e->write_proc = store_debug_level;
+ e->data = NULL;
+
+ return 0;
+}
+
+static void __exit ieee80211_exit(void)
+{
+ if (ieee80211_proc) {
+ remove_proc_entry("debug_level", ieee80211_proc);
+ remove_proc_entry(DRV_NAME, proc_net);
+ ieee80211_proc = NULL;
+ }
+}
+
+#include <linux/moduleparam.h>
+module_param(debug, int, 0444);
+MODULE_PARM_DESC(debug, "debug output mask");
+
+
+module_exit(ieee80211_exit);
+module_init(ieee80211_init);
+#endif
+
+EXPORT_SYMBOL(ieee80211_alloc);
+EXPORT_SYMBOL(ieee80211_free);
--- /dev/null
+/*
+ * Original code based Host AP (software wireless LAN access point) driver
+ * for Intersil Prism2/2.5/3 - hostap.o module, common routines
+ *
+ * Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ * <jkmaline@cc.hut.fi>
+ * Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+ * Copyright (c) 2004, Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation. See README and COPYING for
+ * more details.
+ */
+
+#include <linux/compiler.h>
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/if_arp.h>
+#include <linux/in6.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/tcp.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/wireless.h>
+#include <linux/etherdevice.h>
+#include <asm/uaccess.h>
+#include <linux/ctype.h>
+
+#include "ieee80211.h"
+
+static inline void ieee80211_monitor_rx(struct ieee80211_device *ieee,
+ struct sk_buff *skb,
+ struct ieee80211_rx_stats *rx_stats)
+{
+ struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ u16 fc = le16_to_cpu(hdr->frame_ctl);
+
+ skb->dev = ieee->dev;
+ skb->mac.raw = skb->data;
+ skb_pull(skb, ieee80211_get_hdrlen(fc));
+ skb->pkt_type = PACKET_OTHERHOST;
+ skb->protocol = __constant_htons(ETH_P_80211_RAW);
+ memset(skb->cb, 0, sizeof(skb->cb));
+ netif_rx(skb);
+}
+
+
+/* Called only as a tasklet (software IRQ) */
+static struct ieee80211_frag_entry *
+ieee80211_frag_cache_find(struct ieee80211_device *ieee, unsigned int seq,
+ unsigned int frag, u8 *src, u8 *dst)
+{
+ struct ieee80211_frag_entry *entry;
+ int i;
+
+ for (i = 0; i < IEEE80211_FRAG_CACHE_LEN; i++) {
+ entry = &ieee->frag_cache[i];
+ if (entry->skb != NULL &&
+ time_after(jiffies, entry->first_frag_time + 2 * HZ)) {
+ printk(KERN_DEBUG "%s: expiring fragment cache entry "
+ "seq=%u last_frag=%u\n",
+ ieee->dev->name, entry->seq, entry->last_frag);
+ dev_kfree_skb_any(entry->skb);
+ entry->skb = NULL;
+ }
+
+ if (entry->skb != NULL && entry->seq == seq &&
+ (entry->last_frag + 1 == frag || frag == -1) &&
+ memcmp(entry->src_addr, src, ETH_ALEN) == 0 &&
+ memcmp(entry->dst_addr, dst, ETH_ALEN) == 0)
+ return entry;
+ }
+
+ return NULL;
+}
+
+/* Called only as a tasklet (software IRQ) */
+static struct sk_buff *
+ieee80211_frag_cache_get(struct ieee80211_device *ieee,
+ struct ieee80211_hdr *hdr)
+{
+ struct sk_buff *skb = NULL;
+ u16 sc;
+ unsigned int frag, seq;
+ struct ieee80211_frag_entry *entry;
+
+ sc = le16_to_cpu(hdr->seq_ctl);
+ frag = WLAN_GET_SEQ_FRAG(sc);
+ seq = WLAN_GET_SEQ_SEQ(sc);
+
+ if (frag == 0) {
+ /* Reserve enough space to fit maximum frame length */
+ skb = dev_alloc_skb(ieee->dev->mtu +
+ sizeof(struct ieee80211_hdr) +
+ 8 /* LLC */ +
+ 2 /* alignment */ +
+ 8 /* WEP */ + ETH_ALEN /* WDS */);
+ if (skb == NULL)
+ return NULL;
+
+ entry = &ieee->frag_cache[ieee->frag_next_idx];
+ ieee->frag_next_idx++;
+ if (ieee->frag_next_idx >= IEEE80211_FRAG_CACHE_LEN)
+ ieee->frag_next_idx = 0;
+
+ if (entry->skb != NULL)
+ dev_kfree_skb_any(entry->skb);
+
+ entry->first_frag_time = jiffies;
+ entry->seq = seq;
+ entry->last_frag = frag;
+ entry->skb = skb;
+ memcpy(entry->src_addr, hdr->addr2, ETH_ALEN);
+ memcpy(entry->dst_addr, hdr->addr1, ETH_ALEN);
+ } else {
+ /* received a fragment of a frame for which the head fragment
+ * should have already been received */
+ entry = ieee80211_frag_cache_find(ieee, seq, frag, hdr->addr2,
+ hdr->addr1);
+ if (entry != NULL) {
+ entry->last_frag = frag;
+ skb = entry->skb;
+ }
+ }
+
+ return skb;
+}
+
+
+/* Called only as a tasklet (software IRQ) */
+static int ieee80211_frag_cache_invalidate(struct ieee80211_device *ieee,
+ struct ieee80211_hdr *hdr)
+{
+ u16 sc;
+ unsigned int seq;
+ struct ieee80211_frag_entry *entry;
+
+ sc = le16_to_cpu(hdr->seq_ctl);
+ seq = WLAN_GET_SEQ_SEQ(sc);
+
+ entry = ieee80211_frag_cache_find(ieee, seq, -1, hdr->addr2,
+ hdr->addr1);
+
+ if (entry == NULL) {
+ printk(KERN_DEBUG "%s: could not invalidate fragment cache "
+ "entry (seq=%u)\n",
+ ieee->dev->name, seq);
+ return -1;
+ }
+
+ entry->skb = NULL;
+ return 0;
+}
+
+
+#ifdef NOT_YET
+/* ieee80211_rx_frame_mgmt
+ *
+ * Responsible for handling management control frames
+ *
+ * Called by ieee80211_rx */
+static inline int
+ieee80211_rx_frame_mgmt(struct ieee80211_device *ieee, struct sk_buff *skb,
+ struct ieee80211_rx_stats *rx_stats, u16 type,
+ u16 stype)
+{
+ if (ieee->iw_mode == IW_MODE_MASTER) {
+ printk(KERN_DEBUG "%s: Master mode not yet suppported.\n",
+ ieee->dev->name);
+ return 0;
+/*
+ hostap_update_sta_ps(ieee, (struct hostap_ieee80211_hdr *)
+ skb->data);*/
+ }
+
+ if (ieee->hostapd && type == WLAN_FC_TYPE_MGMT) {
+ if (stype == WLAN_FC_STYPE_BEACON &&
+ ieee->iw_mode == IW_MODE_MASTER) {
+ struct sk_buff *skb2;
+ /* Process beacon frames also in kernel driver to
+ * update STA(AP) table statistics */
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+ if (skb2)
+ hostap_rx(skb2->dev, skb2, rx_stats);
+ }
+
+ /* send management frames to the user space daemon for
+ * processing */
+ ieee->apdevstats.rx_packets++;
+ ieee->apdevstats.rx_bytes += skb->len;
+ prism2_rx_80211(ieee->apdev, skb, rx_stats, PRISM2_RX_MGMT);
+ return 0;
+ }
+
+ if (ieee->iw_mode == IW_MODE_MASTER) {
+ if (type != WLAN_FC_TYPE_MGMT && type != WLAN_FC_TYPE_CTRL) {
+ printk(KERN_DEBUG "%s: unknown management frame "
+ "(type=0x%02x, stype=0x%02x) dropped\n",
+ skb->dev->name, type, stype);
+ return -1;
+ }
+
+ hostap_rx(skb->dev, skb, rx_stats);
+ return 0;
+ }
+
+ printk(KERN_DEBUG "%s: hostap_rx_frame_mgmt: management frame "
+ "received in non-Host AP mode\n", skb->dev->name);
+ return -1;
+}
+#endif
+
+
+/* See IEEE 802.1H for LLC/SNAP encapsulation/decapsulation */
+/* Ethernet-II snap header (RFC1042 for most EtherTypes) */
+static unsigned char rfc1042_header[] =
+{ 0xaa, 0xaa, 0x03, 0x00, 0x00, 0x00 };
+/* Bridge-Tunnel header (for EtherTypes ETH_P_AARP and ETH_P_IPX) */
+static unsigned char bridge_tunnel_header[] =
+{ 0xaa, 0xaa, 0x03, 0x00, 0x00, 0xf8 };
+/* No encapsulation header if EtherType < 0x600 (=length) */
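+/* Worked example (informational): an IPv4 frame (EtherType 0x0800) is
+ * RFC1042-encapsulated on the air as aa aa 03 00 00 00 08 00 in front
+ * of the IP payload; the code below strips those 8 bytes and rebuilds
+ * an Ethernet header from src/dst. AARP (0x80f3) and IPX (0x8137)
+ * keep the Bridge-Tunnel OUI 00:00:f8 instead. */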
+
+#ifdef CONFIG_IEEE80211_CRYPT
+/* Called by ieee80211_rx_frame_decrypt */
+static int ieee80211_is_eapol_frame(struct ieee80211_device *ieee,
+ struct sk_buff *skb)
+{
+ struct net_device *dev = ieee->dev;
+ u16 fc, ethertype;
+ struct ieee80211_hdr *hdr;
+ u8 *pos;
+
+ if (skb->len < 24)
+ return 0;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ fc = le16_to_cpu(hdr->frame_ctl);
+
+ /* check that the frame is a unicast frame addressed to us */
+ if ((fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
+ IEEE80211_FCTL_TODS &&
+ memcmp(hdr->addr1, dev->dev_addr, ETH_ALEN) == 0 &&
+ memcmp(hdr->addr3, dev->dev_addr, ETH_ALEN) == 0) {
+ /* ToDS frame with own addr BSSID and DA */
+ } else if ((fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
+ IEEE80211_FCTL_FROMDS &&
+ memcmp(hdr->addr1, dev->dev_addr, ETH_ALEN) == 0) {
+ /* FromDS frame with own addr as DA */
+ } else
+ return 0;
+
+ if (skb->len < 24 + 8)
+ return 0;
+
+ /* check for port access entity Ethernet type */
+ pos = skb->data + 24;
+ ethertype = (pos[6] << 8) | pos[7];
+ if (ethertype == ETH_P_PAE)
+ return 1;
+
+ return 0;
+}
+
+/* Called only as a tasklet (software IRQ), by ieee80211_rx */
+static inline int
+ieee80211_rx_frame_decrypt(struct ieee80211_device* ieee, struct sk_buff *skb,
+ struct ieee80211_crypt_data *crypt)
+{
+ struct ieee80211_hdr *hdr;
+ int res, hdrlen;
+
+ if (crypt == NULL || crypt->ops->decrypt_mpdu == NULL)
+ return 0;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ hdrlen = ieee80211_get_hdrlen(le16_to_cpu(hdr->frame_ctl));
+
+#ifdef CONFIG_IEEE80211_WPA
+ if (ieee->tkip_countermeasures &&
+ strcmp(crypt->ops->name, "TKIP") == 0) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "%s: TKIP countermeasures: dropped "
+ "received packet from " MAC_FMT "\n",
+ ieee->dev->name, MAC_ARG(hdr->addr2));
+ }
+ return -1;
+ }
+#endif
+
+ atomic_inc(&crypt->refcnt);
+ res = crypt->ops->decrypt_mpdu(skb, hdrlen, crypt->priv);
+ atomic_dec(&crypt->refcnt);
+ if (res < 0) {
+ printk(KERN_DEBUG "%s: decryption failed (SA=" MAC_FMT
+ ") res=%d\n",
+ ieee->dev->name, MAC_ARG(hdr->addr2), res);
+ if (res == -2)
+ printk(KERN_DEBUG "%s: WEP decryption failed ICV "
+ "mismatch (key %d)\n",
+ ieee->dev->name, skb->data[hdrlen + 3] >> 6);
+ ieee->ieee_stats.rx_discards_wep_undecryptable++;
+ return -1;
+ }
+
+ return res;
+}
+
+
+/* Called only as a tasklet (software IRQ), by ieee80211_rx */
+static inline int
+ieee80211_rx_frame_decrypt_msdu(struct ieee80211_device* ieee, struct sk_buff *skb,
+ int keyidx, struct ieee80211_crypt_data *crypt)
+{
+ struct ieee80211_hdr *hdr;
+ int res, hdrlen;
+
+ if (crypt == NULL || crypt->ops->decrypt_msdu == NULL)
+ return 0;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ hdrlen = ieee80211_get_hdrlen(le16_to_cpu(hdr->frame_ctl));
+
+ atomic_inc(&crypt->refcnt);
+ res = crypt->ops->decrypt_msdu(skb, keyidx, hdrlen, crypt->priv);
+ atomic_dec(&crypt->refcnt);
+ if (res < 0) {
+ printk(KERN_DEBUG "%s: MSDU decryption/MIC verification failed"
+ " (SA=" MAC_FMT " keyidx=%d)\n",
+ ieee->dev->name, MAC_ARG(hdr->addr2), keyidx);
+ return -1;
+ }
+
+ return 0;
+}
+#endif /* CONFIG_IEEE80211_CRYPT */
+
+
+/* All received frames are sent to this function. @skb contains the frame in
+ * IEEE 802.11 format, i.e., in the format it was sent over air.
+ * This function is called only as a tasklet (software IRQ). */
+int ieee80211_rx(struct ieee80211_device *ieee, struct sk_buff *skb,
+ struct ieee80211_rx_stats *rx_stats)
+{
+ struct net_device *dev = ieee->dev;
+ struct ieee80211_hdr *hdr;
+ size_t hdrlen;
+ u16 fc, type, stype, sc;
+ struct net_device_stats *stats;
+ unsigned int frag;
+ u8 *payload;
+ u16 ethertype;
+#ifdef NOT_YET
+ struct net_device *wds = NULL;
+ struct sk_buff *skb2 = NULL;
+ int frame_authorized = 0;
+ int from_assoc_ap = 0;
+ void *sta = NULL;
+#endif
+ u8 dst[ETH_ALEN];
+ u8 src[ETH_ALEN];
+#ifdef CONFIG_IEEE80211_CRYPT
+ struct ieee80211_crypt_data *crypt = NULL;
+ int keyidx = 0;
+#endif
+
+ hdr = (struct ieee80211_hdr *)skb->data;
+ stats = &ieee->stats;
+
+ if (skb->len < 10) {
+ printk(KERN_INFO "%s: SKB length < 10\n",
+ dev->name);
+ goto rx_dropped;
+ }
+
+ fc = le16_to_cpu(hdr->frame_ctl);
+ type = WLAN_FC_GET_TYPE(fc);
+ stype = WLAN_FC_GET_STYPE(fc);
+ sc = le16_to_cpu(hdr->seq_ctl);
+ frag = WLAN_GET_SEQ_FRAG(sc);
+ hdrlen = ieee80211_get_hdrlen(fc);
+
+#ifdef NOT_YET
+#if WIRELESS_EXT > 15
+ /* Put this code here so that we avoid duplicating it in all
+ * Rx paths. - Jean II */
+#ifdef IW_WIRELESS_SPY /* defined in iw_handler.h */
+ /* If spy monitoring on */
+ if (iface->spy_data.spy_number > 0) {
+ struct iw_quality wstats;
+ wstats.level = rx_stats->signal;
+ wstats.noise = rx_stats->noise;
+ wstats.updated = 6; /* No qual value */
+ /* Update spy records */
+ wireless_spy_update(dev, hdr->addr2, &wstats);
+ }
+#endif /* IW_WIRELESS_SPY */
+#endif /* WIRELESS_EXT > 15 */
+ hostap_update_rx_stats(local->ap, hdr, rx_stats);
+#endif
+
+#if WIRELESS_EXT > 15
+ if (ieee->iw_mode == IW_MODE_MONITOR) {
+ ieee80211_monitor_rx(ieee, skb, rx_stats);
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+ return 1;
+ }
+#endif
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ if (ieee->host_decrypt) {
+ int idx = 0;
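+ /* the WEP key index lives in the top two bits of the byte
+ * that follows the 3-byte IV, i.e. at offset hdrlen + 3 */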
+ if (skb->len >= hdrlen + 3)
+ idx = skb->data[hdrlen + 3] >> 6;
+ crypt = ieee->crypt[idx];
+#ifdef NOT_YET
+ sta = NULL;
+
+ /* Use station specific key to override default keys if the
+ * receiver address is a unicast address ("individual RA"). If
+ * bcrx_sta_key parameter is set, station specific key is used
+ * even with broad/multicast targets (this is against IEEE
+ * 802.11, but makes it easier to use different keys with
+ * stations that do not support WEP key mapping). */
+
+ if (!(hdr->addr1[0] & 0x01) || local->bcrx_sta_key)
+ (void) hostap_handle_sta_crypto(local, hdr, &crypt,
+ &sta);
+#endif
+
+ /* allow NULL decrypt to indicate a station-specific override
+ * of the default encryption keys */
+ if (crypt && (crypt->ops == NULL ||
+ crypt->ops->decrypt_mpdu == NULL))
+ crypt = NULL;
+
+ if (!crypt && (fc & IEEE80211_FCTL_WEP)) {
+#if 0
+ /* This seems to be triggered by some (multicast?)
+ * frames from other than current BSS, so just drop the
+ * frames silently instead of filling system log with
+ * these reports. */
+ printk(KERN_DEBUG "%s: WEP decryption failed (not set)"
+ " (SA=" MAC_FMT ")\n",
+ ieee->dev->name, MAC_ARG(hdr->addr2));
+#endif
+ ieee->ieee_stats.rx_discards_wep_undecryptable++;
+ goto rx_dropped;
+ }
+ }
+#endif /* CONFIG_IEEE80211_CRYPT */
+
+#ifdef NOT_YET
+ if (type != WLAN_FC_TYPE_DATA) {
+ if (type == WLAN_FC_TYPE_MGMT && stype == WLAN_FC_STYPE_AUTH &&
+ fc & IEEE80211_FCTL_WEP && ieee->host_decrypt &&
+ (keyidx = hostap_rx_frame_decrypt(ieee, skb, crypt)) < 0)
+ {
+ printk(KERN_DEBUG "%s: failed to decrypt mgmt::auth "
+ "from " MAC_FMT "\n", dev->name,
+ MAC_ARG(hdr->addr2));
+ /* TODO: could inform hostapd about this so that it
+ * could send auth failure report */
+ goto rx_dropped;
+ }
+
+ if (ieee80211_rx_frame_mgmt(ieee, skb, rx_stats, type, stype))
+ goto rx_dropped;
+ else
+ goto rx_exit;
+ }
+#endif
+
+ /* Data frame - extract src/dst addresses */
+ if (skb->len < IEEE80211_DATA_HDR3_LEN)
+ goto rx_dropped;
+
+ switch (fc & (IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS)) {
+ case IEEE80211_FCTL_FROMDS:
+ memcpy(dst, hdr->addr1, ETH_ALEN);
+ memcpy(src, hdr->addr3, ETH_ALEN);
+ break;
+ case IEEE80211_FCTL_TODS:
+ memcpy(dst, hdr->addr3, ETH_ALEN);
+ memcpy(src, hdr->addr2, ETH_ALEN);
+ break;
+ case IEEE80211_FCTL_FROMDS | IEEE80211_FCTL_TODS:
+ if (skb->len < IEEE80211_DATA_HDR4_LEN)
+ goto rx_dropped;
+ memcpy(dst, hdr->addr3, ETH_ALEN);
+ memcpy(src, hdr->addr4, ETH_ALEN);
+ break;
+ case 0:
+ memcpy(dst, hdr->addr1, ETH_ALEN);
+ memcpy(src, hdr->addr2, ETH_ALEN);
+ break;
+ }
+
+#ifdef NOT_YET
+ if (hostap_rx_frame_wds(ieee, hdr, fc, &wds))
+ goto rx_dropped;
+ if (wds) {
+ skb->dev = dev = wds;
+ stats = hostap_get_stats(dev);
+ }
+
+ if (ieee->iw_mode == IW_MODE_MASTER && !wds &&
+ (fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) == IEEE80211_FCTL_FROMDS &&
+ ieee->stadev &&
+ memcmp(hdr->addr2, ieee->assoc_ap_addr, ETH_ALEN) == 0) {
+ /* Frame from BSSID of the AP for which we are a client */
+ skb->dev = dev = ieee->stadev;
+ stats = hostap_get_stats(dev);
+ from_assoc_ap = 1;
+ }
+#endif
+
+ dev->last_rx = jiffies;
+
+#ifdef NOT_YET
+ if ((ieee->iw_mode == IW_MODE_MASTER ||
+ ieee->iw_mode == IW_MODE_REPEAT) &&
+ !from_assoc_ap) {
+ switch (hostap_handle_sta_rx(ieee, dev, skb, rx_stats,
+ wds != NULL)) {
+ case AP_RX_CONTINUE_NOT_AUTHORIZED:
+ frame_authorized = 0;
+ break;
+ case AP_RX_CONTINUE:
+ frame_authorized = 1;
+ break;
+ case AP_RX_DROP:
+ goto rx_dropped;
+ case AP_RX_EXIT:
+ goto rx_exit;
+ }
+ }
+#endif
+
+ /* Nullfunc frames may have PS-bit set, so they must be passed to
+ * hostap_handle_sta_rx() before being dropped here. */
+ if (stype != IEEE80211_STYPE_DATA &&
+ stype != IEEE80211_STYPE_DATA_CFACK &&
+ stype != IEEE80211_STYPE_DATA_CFPOLL &&
+ stype != IEEE80211_STYPE_DATA_CFACKPOLL) {
+ if (stype != IEEE80211_STYPE_NULLFUNC)
+ printk(KERN_DEBUG "%s: RX: dropped data frame "
+ "with no data (type=0x%02x, subtype=0x%02x, len=%d)\n",
+ dev->name, type, stype, skb->len);
+ goto rx_dropped;
+ }
+
+ /* skb: hdr + (possibly fragmented, possibly encrypted) payload */
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ if (ieee->host_decrypt && (fc & IEEE80211_FCTL_WEP) &&
+ (keyidx = ieee80211_rx_frame_decrypt(ieee, skb, crypt)) < 0)
+ goto rx_dropped;
+#endif
+ hdr = (struct ieee80211_hdr *) skb->data;
+
+ /* skb: hdr + (possibly fragmented) plaintext payload */
+ // PR: FIXME: hostap has additional conditions in the "if" below:
+ // ieee->host_decrypt && (fc & IEEE80211_FCTL_WEP) &&
+ if ((frag != 0 || (fc & IEEE80211_FCTL_MOREFRAGS))) {
+ int flen;
+ struct sk_buff *frag_skb = ieee80211_frag_cache_get(ieee, hdr);
+ IEEE80211_DEBUG_FRAG("Rx Fragment received (%u)\n", frag);
+
+ if (!frag_skb) {
+ printk(KERN_DEBUG "%s: Rx cannot get skb from "
+ "fragment cache (morefrag=%d seq=%u frag=%u)\n",
+ dev->name,
+ (fc & IEEE80211_FCTL_MOREFRAGS) != 0,
+ WLAN_GET_SEQ_SEQ(sc), frag);
+ goto rx_dropped;
+ }
+
+ flen = skb->len;
+ if (frag != 0)
+ flen -= hdrlen;
+
+ if (frag_skb->tail + flen > frag_skb->end) {
+ printk(KERN_WARNING "%s: host decrypted and "
+ "reassembled frame did not fit skb\n",
+ dev->name);
+ ieee80211_frag_cache_invalidate(ieee, hdr);
+ goto rx_dropped;
+ }
+
+ if (frag == 0) {
+ /* copy first fragment (including full headers) into
+ * beginning of the fragment cache skb */
+ memcpy(skb_put(frag_skb, flen), skb->data, flen);
+ } else {
+ /* append frame payload to the end of the fragment
+ * cache skb */
+ memcpy(skb_put(frag_skb, flen), skb->data + hdrlen,
+ flen);
+ }
+ dev_kfree_skb_any(skb);
+ skb = NULL;
+
+ if (fc & IEEE80211_FCTL_MOREFRAGS) {
+ /* more fragments expected - leave the skb in fragment
+ * cache for now; it will be delivered to upper layers
+ * after all fragments have been received */
+ goto rx_exit;
+ }
+
+ /* this was the last fragment and the frame will be
+ * delivered, so remove skb from fragment cache */
+ skb = frag_skb;
+ hdr = (struct ieee80211_hdr *) skb->data;
+ ieee80211_frag_cache_invalidate(ieee, hdr);
+ }
+
+ /* skb: hdr + (possibly reassembled) full MSDU payload; possibly still
+ * encrypted/authenticated */
+#ifdef CONFIG_IEEE80211_CRYPT
+ if (ieee->host_decrypt && (fc & IEEE80211_FCTL_WEP) &&
+ ieee80211_rx_frame_decrypt_msdu(ieee, skb, keyidx, crypt))
+ goto rx_dropped;
+
+ hdr = (struct ieee80211_hdr *) skb->data;
+ if (crypt && !(fc & IEEE80211_FCTL_WEP) && !ieee->open_wep) {
+ if (/*ieee->ieee_802_1x &&*/
+ ieee80211_is_eapol_frame(ieee, skb)) {
+ /* pass unencrypted EAPOL frames even if encryption is
+ * configured */
+ IEEE80211_DEBUG_EAP("RX: IEEE 802.1X - passing "
+ "unencrypted EAPOL frame\n");
+ } else {
+ printk(KERN_DEBUG "%s: encryption configured, but RX "
+ "frame not encrypted (SA=" MAC_FMT ")\n",
+ ieee->dev->name, MAC_ARG(hdr->addr2));
+ goto rx_dropped;
+ }
+ }
+
+#ifdef CONFIG_IEEE80211_DEBUG
+ if (crypt && !(fc & IEEE80211_FCTL_WEP) &&
+ ieee80211_is_eapol_frame(ieee, skb)) {
+ IEEE80211_DEBUG_EAP("RX: IEEE 802.1X - passing "
+ "unencrypted EAPOL frame\n");
+ }
+#endif
+
+ if (/*ieee->drop_unencrypted*/ crypt && !(fc & IEEE80211_FCTL_WEP) && !ieee->open_wep &&
+ !ieee80211_is_eapol_frame(ieee, skb)) {
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "%s: dropped unencrypted RX data "
+ "frame from " MAC_FMT " (drop_unencrypted=1)\n",
+ dev->name, MAC_ARG(hdr->addr2));
+ }
+ goto rx_dropped;
+ }
+#endif /* CONFIG_IEEE80211_CRYPT */
+
+ /* skb: hdr + (possibly reassembled) full plaintext payload */
+
+ payload = skb->data + hdrlen;
+ ethertype = (payload[6] << 8) | payload[7];
+
+#ifdef NOT_YET
+ /* If IEEE 802.1X is used, check whether the port is authorized to send
+ * the received frame. */
+ if (ieee->ieee_802_1x && ieee->iw_mode == IW_MODE_MASTER) {
+ if (ethertype == ETH_P_PAE) {
+ printk(KERN_DEBUG "%s: RX: IEEE 802.1X frame\n",
+ dev->name);
+ if (ieee->hostapd && ieee->apdev) {
+ /* Send IEEE 802.1X frames to the user
+ * space daemon for processing */
+ prism2_rx_80211(ieee->apdev, skb, rx_stats,
+ PRISM2_RX_MGMT);
+ ieee->apdevstats.rx_packets++;
+ ieee->apdevstats.rx_bytes += skb->len;
+ goto rx_exit;
+ }
+ } else if (!frame_authorized) {
+ printk(KERN_DEBUG "%s: dropped frame from "
+ "unauthorized port (IEEE 802.1X): "
+ "ethertype=0x%04x\n",
+ dev->name, ethertype);
+ goto rx_dropped;
+ }
+ }
+#endif
+
+ /* convert hdr + possible LLC headers into Ethernet header */
+ if (skb->len - hdrlen >= 8 &&
+ ((memcmp(payload, rfc1042_header, SNAP_SIZE) == 0 &&
+ ethertype != ETH_P_AARP && ethertype != ETH_P_IPX) ||
+ memcmp(payload, bridge_tunnel_header, SNAP_SIZE) == 0)) {
+ /* remove RFC1042 or Bridge-Tunnel encapsulation and
+ * replace EtherType */
+ skb_pull(skb, hdrlen + SNAP_SIZE);
+ memcpy(skb_push(skb, ETH_ALEN), src, ETH_ALEN);
+ memcpy(skb_push(skb, ETH_ALEN), dst, ETH_ALEN);
+ } else {
+ u16 len;
+ /* Leave Ethernet header part of hdr and full payload */
+ skb_pull(skb, hdrlen);
+ len = htons(skb->len);
+ memcpy(skb_push(skb, 2), &len, 2);
+ memcpy(skb_push(skb, ETH_ALEN), src, ETH_ALEN);
+ memcpy(skb_push(skb, ETH_ALEN), dst, ETH_ALEN);
+ }
+
+#ifdef NOT_YET
+ if (wds && ((fc & (IEEE80211_FCTL_TODS | IEEE80211_FCTL_FROMDS)) ==
+ IEEE80211_FCTL_TODS) &&
+ skb->len >= ETH_HLEN + ETH_ALEN) {
+ /* Non-standard frame: get addr4 from its bogus location after
+ * the payload */
+ memcpy(skb->data + ETH_ALEN,
+ skb->data + skb->len - ETH_ALEN, ETH_ALEN);
+ skb_trim(skb, skb->len - ETH_ALEN);
+ }
+#endif
+
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+
+#ifdef NOT_YET
+ if (ieee->iw_mode == IW_MODE_MASTER && !wds &&
+ ieee->ap->bridge_packets) {
+ if (dst[0] & 0x01) {
+ /* copy multicast frame both to the higher layers and
+ * to the wireless media */
+ ieee->ap->bridged_multicast++;
+ skb2 = skb_clone(skb, GFP_ATOMIC);
+ if (skb2 == NULL)
+ printk(KERN_DEBUG "%s: skb_clone failed for "
+ "multicast frame\n", dev->name);
+ } else if (hostap_is_sta_assoc(ieee->ap, dst)) {
+ /* send frame directly to the associated STA using
+ * wireless media and not passing to higher layers */
+ ieee->ap->bridged_unicast++;
+ skb2 = skb;
+ skb = NULL;
+ }
+ }
+
+ if (skb2 != NULL) {
+ /* send to wireless media */
+ skb2->protocol = __constant_htons(ETH_P_802_3);
+ skb2->mac.raw = skb2->nh.raw = skb2->data;
+ /* skb2->nh.raw = skb2->data + ETH_HLEN; */
+ skb2->dev = dev;
+ dev_queue_xmit(skb2);
+ }
+
+#endif
+
+ if (skb) {
+ skb->protocol = eth_type_trans(skb, dev);
+ memset(skb->cb, 0, sizeof(skb->cb));
+ skb->dev = dev;
+ skb->ip_summed = CHECKSUM_NONE; /* 802.11 crc not sufficient */
+ netif_rx(skb);
+ }
+
+ rx_exit:
+#ifdef NOT_YET
+ if (sta)
+ hostap_handle_sta_release(sta);
+#endif
+ return 1;
+
+ rx_dropped:
+ stats->rx_dropped++;
+
+ /* Returning 0 indicates to caller that we have not handled the SKB--
+ * so it is still allocated and can be used again by underlying
+ * hardware as a DMA target */
+ return 0;
+}
+
+#define MGMT_FRAME_FIXED_PART_LENGTH 0x24
+
+static int ieee80211_filter_network(
+ struct ieee80211_device *ieee,
+ struct ieee80211_network *network,
+ struct ieee80211_rx_stats *stats)
+{
+ // TODO check valid channel
+ if (ieee->abg_ture == 1)
+ return 0;
+
+ switch (stats->freq) {
+ case IEEE80211_52GHZ_BAND:
+ if (ieee->freq_band == IEEE80211_24GHZ_BAND)
+ return 1;
+ break;
+ case IEEE80211_24GHZ_BAND:
+ default:
+ if (ieee->freq_band == IEEE80211_52GHZ_BAND)
+ return 1;
+ if (ieee->modulation == IEEE80211_CCK_MODULATION) {
+ if (network->flags & NETWORK_HAS_OFDM)
+ return 1;
+ } else if (ieee->modulation == IEEE80211_OFDM_MODULATION) {
+ if (!(network->flags & NETWORK_HAS_OFDM))
+ return 1;
+ }
+ break;
+ }
+
+ return 0;
+}
+
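+/* Rate octets in the (extended) supported rates elements carry the
+ * rate in 500 kb/s units in the low seven bits; the high bit flags a
+ * basic rate and is masked off before comparison below. */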
+static inline int ieee80211_is_ofdm_rate(u8 rate)
+{
+ switch (rate & ~IEEE80211_BASIC_RATE_MASK) {
+ case IEEE80211_OFDM_RATE_6MB:
+ case IEEE80211_OFDM_RATE_9MB:
+ case IEEE80211_OFDM_RATE_12MB:
+ case IEEE80211_OFDM_RATE_18MB:
+ case IEEE80211_OFDM_RATE_24MB:
+ case IEEE80211_OFDM_RATE_36MB:
+ case IEEE80211_OFDM_RATE_48MB:
+ case IEEE80211_OFDM_RATE_54MB:
+ return 1;
+ }
+ return 0;
+}
+
+
+static inline int ieee80211_network_init(
+ struct ieee80211_device *ieee,
+ struct ieee80211_probe_response *beacon,
+ struct ieee80211_network *network,
+ struct ieee80211_rx_stats *stats)
+{
+ struct ieee80211_info_element *info_element;
+ u16 left;
+ u8 i;
+ int probe_response = WLAN_FC_GET_STYPE(
+ le16_to_cpu(beacon->header.frame_ctl)) == IEEE80211_STYPE_PROBE_RESP;
+
+ if (stats->freq == IEEE80211_52GHZ_BAND) {
+ /* for A band (No DS info) */
+ network->channel = stats->received_channel;
+ }
+
+ /* Pull out fixed field data */
+ memcpy(network->bssid, beacon->header.addr3, ETH_ALEN);
+ network->capability = beacon->capability;
+ network->last_scanned = jiffies;
+ network->time_stamp[0] = beacon->time_stamp[0];
+ network->time_stamp[1] = beacon->time_stamp[1];
+ network->beacon_interval = beacon->beacon_interval;
+ network->listen_interval = 0x0A; /* Where to pull this? beacon->listen_interval;*/
+ network->flags = 0;
+ network->rates_len = network->rates_ex_len = 0;
+ network->last_associate = 0;
+
+#ifdef CONFIG_IEEE80211_WPA
+ network->wpa_ie_len = 0;
+ network->rsn_ie_len = 0;
+#endif /* CONFIG_IEEE80211_WPA */
+
+ info_element = &beacon->info_element;
+ left = stats->len - ((void *)info_element - (void *)beacon);
+ while (left >= sizeof(struct ieee80211_info_element_hdr)) {
+ if (sizeof(struct ieee80211_info_element_hdr) + info_element->len > left) {
+ IEEE80211_DEBUG_SCAN("SCAN: parse failed: info_element->len + 2 > left : info_element->len+2=%d left=%d.\n",
+ info_element->len + 2,
+ left);
+ return 1;
+ }
+
+ switch (info_element->id) {
+ case MFIE_TYPE_SSID:
+ if (!probe_response) {
+ IEEE80211_DEBUG_SCAN(
+ "MFIE_TYPE_SSID: "
+ "Ignored from BEACON FRAME.\n");
+ break;
+ }
+
+ network->ssid_len = min(info_element->len,
+ (u8)IW_ESSID_MAX_SIZE);
+ memcpy(network->ssid, info_element->data, network->ssid_len);
+ if (network->ssid_len < IW_ESSID_MAX_SIZE)
+ memset(network->ssid + network->ssid_len, 0,
+ IW_ESSID_MAX_SIZE - network->ssid_len);
+
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_SSID: '%s' len=%d.\n",
+ network->ssid, network->ssid_len);
+ break;
+
+ case MFIE_TYPE_RATES:
+ network->rates_len = min(info_element->len, MAX_RATES_LENGTH);
+ for (i = 0; i < network->rates_len; i++) {
+ network->rates[i] = info_element->data[i];
+ if (!(network->flags & NETWORK_HAS_OFDM))
+ network->flags |= ieee80211_is_ofdm_rate(info_element->data[i]) ? NETWORK_HAS_OFDM : 0;
+ }
+ break;
+
+ case MFIE_TYPE_RATES_EX:
+ network->rates_ex_len = min(info_element->len, MAX_RATES_EX_LENGTH);
+ for (i = 0; i < network->rates_ex_len; i++) {
+ network->rates_ex[i] = info_element->data[i];
+ if (!(network->flags & NETWORK_HAS_OFDM))
+ network->flags |= ieee80211_is_ofdm_rate(info_element->data[i]) ? NETWORK_HAS_OFDM : 0;
+ }
+ break;
+
+ case MFIE_TYPE_DS_SET:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_DS_SET: %d\n",
+ info_element->data[0]);
+ if (stats->freq == IEEE80211_24GHZ_BAND)
+ network->channel = info_element->data[0];
+ break;
+
+ case MFIE_TYPE_FH_SET:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_FH_SET: ignored\n");
+ break;
+
+ case MFIE_TYPE_CF_SET:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_CF_SET: ignored\n");
+ break;
+
+ case MFIE_TYPE_TIM:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_TIM: ignored\n");
+ break;
+
+ case MFIE_TYPE_IBSS_SET:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_IBSS_SET: ignored\n");
+ break;
+
+ case MFIE_TYPE_CHALLENGE:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_CHALLENGE: ignored\n");
+ break;
+
+#ifdef CONFIG_IEEE80211_WPA
+ case MFIE_TYPE_GENERIC:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_GENERIC: %d bytes\n",
+ info_element->len);
+ if (info_element->len >= 4 &&
+ info_element->data[0] == 0x00 &&
+ info_element->data[1] == 0x50 &&
+ info_element->data[2] == 0xf2 &&
+ info_element->data[3] == 0x01) {
+ network->wpa_ie_len = min(info_element->len + 2,
+ MAX_WPA_IE_LEN);
+ memcpy(network->wpa_ie, info_element,
+ network->wpa_ie_len);
+ }
+ break;
+
+ case MFIE_TYPE_RSN:
+ IEEE80211_DEBUG_SCAN("MFIE_TYPE_RSN: %d bytes\n",
+ info_element->len);
+ network->rsn_ie_len = min(info_element->len + 2,
+ MAX_WPA_IE_LEN);
+ memcpy(network->rsn_ie, info_element,
+ network->rsn_ie_len);
+ break;
+#endif
+
+ default:
+ IEEE80211_DEBUG_SCAN("unsupported IE %d\n",
+ info_element->id);
+ break;
+ }
+
+ left -= sizeof(struct ieee80211_info_element_hdr) +
+ info_element->len;
+ info_element = (struct ieee80211_info_element *)
+ &info_element->data[info_element->len];
+ }
+
+ if (stats->freq == IEEE80211_52GHZ_BAND)
+ network->mode = IEEE_A;
+ else {
+ if (network->flags & NETWORK_HAS_OFDM)
+ network->mode = IEEE_G;
+ else
+ network->mode = IEEE_B;
+ }
+
+ if (ieee80211_is_empty_essid(network->ssid, network->ssid_len))
+ network->flags |= NETWORK_EMPTY_ESSID;
+
+ if (ieee80211_filter_network(ieee, network, stats)) {
+ IEEE80211_DEBUG_SCAN("Filtered out '%s (" MAC_FMT ")' "
+ "network.\n",
+ escape_essid(network->ssid,
+ network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 1;
+ }
+
+ memcpy(&network->stats, stats, sizeof(network->stats));
+
+ return 0;
+}
+
+static inline void ieee80211_process_probe_response(
+ struct ieee80211_device *ieee,
+ struct ieee80211_probe_response *beacon,
+ struct ieee80211_rx_stats *stats)
+{
+ struct ieee80211_network *network;
+ struct ieee80211_network *oldest_network = NULL;
+#ifdef CONFIG_IEEE80211_DEBUG
+ struct ieee80211_info_element *ssid_ie;
+ u8 ssid_len = sizeof("<hidden>");
+ u8 ssid[IW_ESSID_MAX_SIZE];
+ u8 empty_ssid;
+#endif
+
+ IEEE80211_DEBUG_SCAN(
+ "\n"
+ "Time Stamp : %08X %08X\n"
+ "Beacon Interval : %04X\n"
+ "Capabilities : %c%c%c%c-%c%c%c%c\n",
+ beacon->time_stamp[0],
+ beacon->time_stamp[1],
+ beacon->beacon_interval,
+ (beacon->capability & BIT(7)) ? '1' : '0',
+ (beacon->capability & BIT(6)) ? '1' : '0',
+ (beacon->capability & BIT(5)) ? '1' : '0',
+ (beacon->capability & BIT(4)) ? '1' : '0',
+ (beacon->capability & BIT(3)) ? '1' : '0',
+ (beacon->capability & BIT(2)) ? '1' : '0',
+ (beacon->capability & BIT(1)) ? '1' : '0',
+ (beacon->capability & BIT(0)) ? '1' : '0');
+
+ /* Search for this entry in the list and nuke it if it is
+ * already there.
+ */
+ list_for_each_entry(network, &ieee->network_list, list) {
+ if (!memcmp(network->bssid, beacon->header.addr3,
+ ETH_ALEN))
+ break;
+ if ((oldest_network == NULL) ||
+ (network->last_scanned < oldest_network->last_scanned))
+ oldest_network = network;
+ }
+
+ /* If we didn't find a match, then get a new network slot to initialize
+ * with this beacon's information */
+ if (&network->list == &ieee->network_list) {
+ if (list_empty(&ieee->network_free_list)) {
+ /* If there are no more slots, expire the oldest */
+ list_del(&oldest_network->list);
+ network = oldest_network;
+ IEEE80211_DEBUG_SCAN("Expired '%s (" MAC_FMT ")' from "
+ "network list.\n",
+ escape_essid(network->ssid,
+ network->ssid_len),
+ MAC_ARG(network->bssid));
+ } else {
+ /* Otherwise just pull from the free list */
+ network = list_entry(ieee->network_free_list.next,
+ struct ieee80211_network, list);
+ list_del(ieee->network_free_list.next);
+ }
+
+#ifdef CONFIG_IEEE80211_DEBUG
+ ssid_ie = &beacon->info_element;
+ if (ssid_ie->id == MFIE_TYPE_SSID) {
+ ssid_len = min(ssid_ie->len, (u8)IW_ESSID_MAX_SIZE);
+ empty_ssid = ieee80211_is_empty_essid(ssid_ie->data,
+ ssid_len);
+ } else {
+ empty_ssid = 1;
+ }
+
+ if (empty_ssid)
+ memcpy(ssid, "<hidden>", sizeof("<hidden>"));
+ else
+ memcpy(ssid, ssid_ie->data, ssid_len);
+#endif
+
+ IEEE80211_DEBUG_SCAN("Adding '%s (" MAC_FMT ")' to network "
+ "list.\n",
+ escape_essid(ssid, ssid_len),
+ MAC_ARG(beacon->header.addr3));
+ list_add_tail(&network->list, &ieee->network_list);
+ } else {
+ IEEE80211_DEBUG_SCAN("Updating '%s (" MAC_FMT ")' to network "
+ "list.\n",
+ escape_essid(network->ssid,
+ network->ssid_len),
+ MAC_ARG(network->bssid));
+ }
+
+ if (ieee80211_network_init(ieee, beacon, network, stats)) {
+ /* If parsing of the beacon probe was not successful then
+ * nuke this network from the list and stick it on the free
+ * list for future use */
+ IEEE80211_DEBUG_SCAN("Dropped '%s (" MAC_FMT ")' from network "
+ "list.\n",
+ escape_essid(ssid, ssid_len),
+ MAC_ARG(beacon->header.addr3));
+ list_del(&network->list);
+ list_add_tail(&network->list, &ieee->network_free_list);
+ }
+}
+
+void ieee80211_rx_mgt(struct ieee80211_device *ieee,
+ struct ieee80211_hdr *header,
+ struct ieee80211_rx_stats *stats)
+{
+ u16 stype = WLAN_FC_GET_STYPE(le16_to_cpu(header->frame_ctl));
+
+ switch (stype) {
+ case IEEE80211_STYPE_ASSOC_RESP:
+ IEEE80211_DEBUG_MGMT("received ASSOCIATION RESPONSE (%d)\n", stype);
+ break;
+
+ case IEEE80211_STYPE_REASSOC_RESP:
+ IEEE80211_DEBUG_MGMT("received REASSOCIATION RESPONSE (%d)\n", stype);
+ break;
+
+ case IEEE80211_STYPE_PROBE_RESP:
+ IEEE80211_DEBUG_MGMT("received PROBE RESPONSE (%d)\n", stype);
+ IEEE80211_DEBUG_SCAN("Probe response\n");
+ ieee80211_process_probe_response(
+ ieee, (struct ieee80211_probe_response *)header, stats);
+ break;
+
+ case IEEE80211_STYPE_BEACON:
+ IEEE80211_DEBUG_MGMT("received BEACON (%d)\n", stype);
+ IEEE80211_DEBUG_SCAN("Beacon\n");
+ ieee80211_process_probe_response(
+ ieee, (struct ieee80211_probe_response *)header, stats);
+ break;
+
+ default:
+ IEEE80211_DEBUG_MGMT("received UNKNOWN (%d)\n", stype);
+ IEEE80211_WARNING("%s: Unknown management packet: %d\n",
+ ieee->dev->name, stype);
+ break;
+ }
+}
+
+
+EXPORT_SYMBOL(ieee80211_rx_mgt);
+EXPORT_SYMBOL(ieee80211_rx);
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+******************************************************************************/
+#include <linux/compiler.h>
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/if_arp.h>
+#include <linux/in6.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/pci.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <linux/slab.h>
+#include <linux/tcp.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/wireless.h>
+#include <linux/etherdevice.h>
+#include <asm/uaccess.h>
+
+#include "ieee80211.h"
+
+
+/*
+
+
+802.11 Data Frame
+
+ ,-------------------------------------------------------------------.
+Bytes | 2 | 2 | 6 | 6 | 6 | 2 | 0..2312 | 4 |
+ |------|------|---------|---------|---------|------|---------|------|
+Desc. | ctrl | dura | DA/RA | TA | SA | Sequ | Frame | fcs |
+ | | tion | (BSSID) | | | ence | data | |
+ `--------------------------------------------------| |------'
+Total: 28 non-data bytes `----.----'
+ |
+ .- 'Frame data' expands to <---------------------------'
+ |
+ V
+ ,---------------------------------------------------.
+Bytes | 1 | 1 | 1 | 3 | 2 | 0-2304 |
+ |------|------|---------|----------|------|---------|
+Desc. | SNAP | SNAP | Control |Eth Tunnel| Type | IP |
+ | DSAP | SSAP | | | | Packet |
+ | 0xAA | 0xAA |0x03 (UI)|0x00-00-F8| | |
+ `-----------------------------------------| |
+Total: 8 non-data bytes `----.----'
+ |
+ .- 'IP Packet' expands, if WEP enabled, to <--'
+ |
+ V
+ ,-----------------------.
+Bytes | 4 | 0-2296 | 4 |
+ |-----|-----------|-----|
+Desc. | IV | Encrypted | ICV |
+ | | IP Packet | |
+ `-----------------------'
+Total: 8 non-data bytes
+
+
+802.3 Ethernet Data Frame
+
+ ,-----------------------------------------.
+Bytes | 6 | 6 | 2 | Variable | 4 |
+ |-------|-------|------|-----------|------|
+Desc. | Dest. | Source| Type | IP Packet | fcs |
+ | MAC | MAC | | | |
+ `-----------------------------------------'
+Total: 18 non-data bytes
+
+In the event that fragmentation is required, the incoming payload is split into
+N parts of size ieee->fts. The first fragment contains the SNAP header and the
+remaining packets are just data.
+
+If encryption is enabled, each fragment payload size is reduced by enough space
+to add the prefix and postfix (IV and ICV totalling 8 bytes in the case of WEP)
+So if you have 1500 bytes of payload with ieee->fts set to 500 without
+encryption it will take 3 frames. With WEP it will take 4 frames as the
+payload of each frame is reduced to 492 bytes.
+
+* SKB visualization
+*
+* ,- skb->data
+* |
+* | ETHERNET HEADER ,-<-- PAYLOAD
+* | | 14 bytes from skb->data
+* | 2 bytes for Type --> ,T. | (sizeof ethhdr)
+* | | | |
+* |,-Dest.--. ,--Src.---. | | |
+* | 6 bytes| | 6 bytes | | | |
+* v | | | | | |
+* 0 | v 1 | v | v 2
+* 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+* ^ | ^ | ^ |
+* | | | | | |
+* | | | | `T' <---- 2 bytes for Type
+* | | | |
+* | | '---SNAP--' <-------- 6 bytes for SNAP
+* | |
+* `-IV--' <-------------------- 4 bytes for IV (WEP)
+*
+* SNAP HEADER
+*
+*/
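+
+#if 0
+/* Illustrative sketch only (not compiled): the simplified arithmetic
+ * from the comment above. With payload = 1500 and ieee->fts = 500
+ * this returns 3 without encryption and 4 with WEP, since the 8 bytes
+ * of IV + ICV shrink each fragment's payload to 492 bytes. The real
+ * calculation in ieee80211_skb_to_txb() below also accounts for the
+ * 802.11 header and SNAP overhead. */
+static int example_nr_fragments(int payload, int fts, int wep)
+{
+ int per_frag = wep ? fts - 8 : fts; /* 8 = IV + ICV (WEP) */
+
+ return (payload + per_frag - 1) / per_frag; /* round up */
+}
+#endif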
+
+static u8 P802_1H_OUI[P80211_OUI_LEN] = { 0x00, 0x00, 0xf8 };
+static u8 RFC1042_OUI[P80211_OUI_LEN] = { 0x00, 0x00, 0x00 };
+
+static inline int ieee80211_put_snap(u8 *data, u16 h_proto)
+{
+ struct ieee80211_snap_hdr *snap;
+ u8 *oui;
+
+ snap = (struct ieee80211_snap_hdr *)data;
+ snap->dsap = 0xaa;
+ snap->ssap = 0xaa;
+ snap->ctrl = 0x03;
+
+ if (h_proto == 0x8137 || h_proto == 0x80f3)
+ oui = P802_1H_OUI;
+ else
+ oui = RFC1042_OUI;
+ snap->oui[0] = oui[0];
+ snap->oui[1] = oui[1];
+ snap->oui[2] = oui[2];
+
+ *(u16 *)(data + SNAP_SIZE) = htons(h_proto);
+
+ return SNAP_SIZE + sizeof(u16);
+}
+
+#ifdef CONFIG_IEEE80211_CRYPT
+static inline int ieee80211_encrypt_fragment(
+ struct ieee80211_device *ieee,
+ struct sk_buff *frag,
+ int hdr_len)
+{
+ struct ieee80211_crypt_data* crypt = ieee->crypt[ieee->tx_keyidx];
+ int res;
+#ifdef CONFIG_IEEE80211_WPA
+ struct ieee80211_hdr *header;
+
+ if (ieee->tkip_countermeasures &&
+ crypt && crypt->ops && strcmp(crypt->ops->name, "TKIP") == 0) {
+ header = (struct ieee80211_hdr *) frag->data;
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "%s: TKIP countermeasures: dropped "
+ "TX packet to " MAC_FMT "\n",
+ ieee->dev->name, MAC_ARG(header->addr1));
+ }
+ return -1;
+ }
+#endif
+ /* To encrypt, frame format is:
+ * IV (4 bytes), clear payload (including SNAP), ICV (4 bytes) */
+
+ // PR: FIXME: Copied from hostap. Check fragmentation/MSDU/MPDU encryption.
+ /* Host-based IEEE 802.11 fragmentation for TX is not yet supported, so
+ * call both MSDU and MPDU encryption functions from here. */
+ atomic_inc(&crypt->refcnt);
+ res = 0;
+ if (crypt->ops->encrypt_msdu)
+ res = crypt->ops->encrypt_msdu(frag, hdr_len, crypt->priv);
+ if (res == 0 && crypt->ops->encrypt_mpdu)
+ res = crypt->ops->encrypt_mpdu(frag, hdr_len, crypt->priv);
+
+ atomic_dec(&crypt->refcnt);
+ if (res < 0) {
+ printk(KERN_INFO "%s: Encryption failed: len=%d.\n",
+ ieee->dev->name, frag->len);
+ ieee->ieee_stats.tx_discards++;
+ return -1;
+ }
+
+ return 0;
+}
+#endif
+
+
+void ieee80211_txb_free(struct ieee80211_txb *txb)
+{
+ int i;
+ if (unlikely(!txb))
+ return;
+ for (i = 0; i < txb->nr_frags; i++)
+ if (txb->fragments[i])
+ dev_kfree_skb_any(txb->fragments[i]);
+ kfree(txb);
+}
+
+struct ieee80211_txb *ieee80211_alloc_txb(int nr_frags, int txb_size,
+ int gfp_mask)
+{
+ struct ieee80211_txb *txb;
+ int i;
+ txb = kmalloc(
+ sizeof(struct ieee80211_txb) + (sizeof(u8*) * nr_frags),
+ gfp_mask);
+ if (!txb)
+ return NULL;
+
+ memset(txb, 0, sizeof(struct ieee80211_txb));
+ txb->nr_frags = nr_frags;
+ txb->frag_size = txb_size;
+
+ for (i = 0; i < nr_frags; i++) {
+ txb->fragments[i] = dev_alloc_skb(txb_size);
+ if (unlikely(!txb->fragments[i])) {
+ i--;
+ break;
+ }
+ }
+ if (unlikely(i != nr_frags)) {
+ while (i >= 0)
+ dev_kfree_skb_any(txb->fragments[i--]);
+ kfree(txb);
+ return NULL;
+ }
+ return txb;
+}
+
+/* SKBs are added to the ieee->tx_queue. */
+struct ieee80211_txb *ieee80211_skb_to_txb(struct ieee80211_device *ieee,
+ struct sk_buff *skb)
+{
+ struct ieee80211_txb *txb;
+ int i, bytes_per_frag, nr_frags, bytes_last_frag, frag_size;
+ unsigned long flags;
+ struct net_device_stats *stats = &ieee->stats;
+ int ether_type, encrypt;
+ int bytes, fc, hdr_len;
+ struct sk_buff *skb_frag;
+ struct ieee80211_hdr header;
+ u8 dest[ETH_ALEN], src[ETH_ALEN];
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ struct ieee80211_crypt_data* crypt;
+#endif
+
+ spin_lock_irqsave(&ieee->lock, flags);
+
+ if (unlikely(skb->len < SNAP_SIZE + sizeof(u16))) {
+ printk(KERN_WARNING "%s: skb too small (%d).\n",
+ ieee->dev->name, skb->len);
+ goto failed;
+ }
+
+ ether_type = ntohs(((struct ethhdr *)skb->data)->h_proto);
+
+#ifndef CONFIG_IEEE80211_CRYPT
+ encrypt = 0;
+#else /* CONFIG_IEEE80211_CRYPT */
+ crypt = ieee->crypt[ieee->tx_keyidx];
+
+#ifndef CONFIG_IEEE80211_WPA
+ encrypt = (ether_type != ETH_P_PAE) &&
+ ieee->host_encrypt && crypt && crypt->ops;
+
+#else /* CONFIG_IEEE80211_WPA */
+ encrypt = !(ether_type == ETH_P_PAE && ieee->ieee_802_1x) &&
+ ieee->host_encrypt && crypt && crypt->ops;
+
+ if (!encrypt && ieee->ieee_802_1x &&
+ ieee->drop_unencrypted && ether_type != ETH_P_PAE){
+ stats->tx_dropped++;
+ /* FIXME: Allocate an empty txb and return it; this
+ * isn't the best code path since an alloc/free is
+ * required for no real reason except to return a
+ * special case success code... */
+ txb = ieee80211_alloc_txb(0, ieee->fts, GFP_ATOMIC);
+ if (unlikely(!txb)) {
+ printk(KERN_WARNING
+ "%s: Could not allocate TXB\n",
+ ieee->dev->name);
+ goto failed;
+ }
+
+ if (net_ratelimit()) {
+ printk(KERN_DEBUG "%s: dropped unencrypted TX data "
+ "frame (drop_unencrypted=1)\n",
+ ieee->dev->name);
+ }
+
+ goto success;
+ }
+
+#endif /* CONFIG_IEEE80211_WPA */
+ if (crypt && !encrypt && ether_type == ETH_P_PAE)
+ IEEE80211_DEBUG_EAP("TX: IEEE 802.11 - sending EAPOL frame\n");
+#endif /* CONFIG_IEEE80211_CRYPT */
+
+ /* Save source and destination addresses; they are needed below for
+ * header construction and fragment sizing whether or not the frame
+ * is encrypted */
+ memcpy(dest, skb->data, ETH_ALEN);
+ memcpy(src, skb->data + ETH_ALEN, ETH_ALEN);
+
+ /* Advance the SKB to the start of the payload */
+ skb_pull(skb, sizeof(struct ethhdr));
+
+ /* Determine total amount of storage required for TXB packets */
+ bytes = skb->len + SNAP_SIZE + sizeof(u16);
+
+ if (!ieee->tx_payload_only) {
+ if (encrypt)
+ fc = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA |
+ IEEE80211_FCTL_WEP;
+ else
+ fc = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA;
+
+ if (ieee->iw_mode == IW_MODE_INFRA) {
+ fc |= IEEE80211_FCTL_TODS;
+ /* To DS: Addr1 = BSSID, Addr2 = SA,
+ Addr3 = DA */
+ memcpy(&header.addr1, ieee->bssid, ETH_ALEN);
+ memcpy(&header.addr2, &src, ETH_ALEN);
+ memcpy(&header.addr3, &dest, ETH_ALEN);
+ } else if (ieee->iw_mode == IW_MODE_ADHOC) {
+ /* not From/To DS: Addr1 = DA, Addr2 = SA,
+ Addr3 = BSSID */
+ memcpy(&header.addr1, dest, ETH_ALEN);
+ memcpy(&header.addr2, src, ETH_ALEN);
+ memcpy(&header.addr3, ieee->bssid, ETH_ALEN);
+ }
+ header.frame_ctl = cpu_to_le16(fc);
+ hdr_len = IEEE80211_3ADDR_SIZE;
+ } else
+ hdr_len = 0;
+
+ /* Determine amount of payload per fragment. Regardless of if
+ * this stack is providing the full 802.11 header, one will
+ * eventually be affixed to this fragment -- so we must account for
+ * it when determining the amount of payload space. */
+ if (is_multicast_ether_addr(dest) ||
+ is_broadcast_ether_addr(dest))
+ frag_size = MAX_FRAG_THRESHOLD - IEEE80211_3ADDR_SIZE;
+ else
+ frag_size = ieee->fts - IEEE80211_3ADDR_SIZE;
+
+ bytes_per_frag = frag_size;
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ /* Each fragment may need to have room for the encryption prefix/postfix */
+ if (encrypt)
+ bytes_per_frag -= crypt->ops->extra_prefix_len +
+ crypt->ops->extra_postfix_len;
+#endif
+
+ /* Number of fragments is the total bytes_per_frag /
+ * payload_per_fragment */
+ nr_frags = bytes / bytes_per_frag;
+ bytes_last_frag = bytes % bytes_per_frag;
+ if (bytes_last_frag)
+ nr_frags++;
+ else
+ bytes_last_frag = bytes_per_frag;
+
+ /* When we allocate the TXB we allocate enough space for the reserve
+ * and full fragment bytes (bytes_per_frag doesn't include prefix and
+ * postfix) */
+ txb = ieee80211_alloc_txb(nr_frags, frag_size, GFP_ATOMIC);
+ if (unlikely(!txb)) {
+ printk(KERN_WARNING "%s: Could not allocate TXB\n",
+ ieee->dev->name);
+ goto failed;
+ }
+ txb->encrypted = encrypt;
+ txb->payload_size = bytes;
+
+ for (i = 0; i < nr_frags; i++) {
+ skb_frag = txb->fragments[i];
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ if (encrypt)
+ skb_reserve(skb_frag, crypt->ops->extra_prefix_len);
+#endif
+
+ if (hdr_len)
+ memcpy(skb_put(skb_frag, hdr_len), &header, hdr_len);
+
+ bytes = (i == nr_frags - 1) ? bytes_last_frag : bytes_per_frag;
+
+ /* Put a SNAP header on the first fragment */
+ if (i == 0) {
+ ieee80211_put_snap(
+ skb_put(skb_frag, SNAP_SIZE + sizeof(u16)),
+ ether_type);
+ bytes -= SNAP_SIZE + sizeof(u16);
+ }
+
+ memcpy(skb_put(skb_frag, bytes), skb->data, bytes);
+
+ /* Advance the SKB... */
+ skb_pull(skb, bytes);
+
+#ifdef CONFIG_IEEE80211_CRYPT
+ /* Encryption routine will move the header forward in order
+ * to insert the IV between the header and the payload */
+ if (encrypt) {
+ ieee80211_encrypt_fragment(ieee, skb_frag, hdr_len);
+ skb_pull(skb_frag, hdr_len);
+ }
+#endif
+ }
+
+ stats->tx_packets++;
+ stats->tx_bytes += txb->payload_size;
+
+#ifdef CONFIG_IEEE80211_WPA
+ success:
+#endif
+ /* We are now done with the SKB provided to us */
+ dev_kfree_skb_any(skb);
+
+ spin_unlock_irqrestore(&ieee->lock, flags);
+
+ return txb;
+
+ failed:
+ spin_unlock_irqrestore(&ieee->lock, flags);
+ stats->tx_errors++;
+
+ return NULL;
+
+}
+
+EXPORT_SYMBOL(ieee80211_skb_to_txb);
+EXPORT_SYMBOL(ieee80211_txb_free);
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2004 Intel Corporation. All rights reserved.
+
+ Portions of this file are based on the WEP enablement code provided by the
+ Host AP project hostap-drivers v0.1.3
+ Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ <jkmaline@cc.hut.fi>
+ Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+******************************************************************************/
+#include <linux/wireless.h>
+#include <linux/version.h>
+#include <linux/kmod.h>
+#include <linux/module.h>
+
+#include "ieee80211.h"
+static const char ieee80211_modes[] = {
+ 'a', 'b', 'g', '?'
+};
+
+#if 0
+static u32 ieee80211_frequency(u8 channel, u8 mode)
+{
+ if (mode == IEEE_A) {
+ if (channel >= 8 && channel <= 161)
+ return 5000000 + 5000 * channel;
+
+ if (channel >= 240 && channel <= 252)
+ return 4960000 + 5000 * (channel - 240);
+ }
+
+ if (channel == 14)
+ return 2484000;
+
+ if (channel >= 1 && channel <= 13)
+ return 2407000 + 5000 * channel;
+
+ return 0;
+}
+#endif
+
+#define MAX_CUSTOM_LEN 64
+static inline char *ipw2100_translate_scan(struct ieee80211_device *ieee,
+ char *start, char *stop,
+ struct ieee80211_network *network)
+{
+ char custom[MAX_CUSTOM_LEN];
+ char *p;
+ struct iw_event iwe;
+ int i, j;
+ u8 max_rate, rate;
+
+ /* First entry *MUST* be the AP MAC address */
+ iwe.cmd = SIOCGIWAP;
+ iwe.u.ap_addr.sa_family = ARPHRD_ETHER;
+ memcpy(iwe.u.ap_addr.sa_data, network->bssid, ETH_ALEN);
+ start = iwe_stream_add_event(start, stop, &iwe, IW_EV_ADDR_LEN);
+
+ /* Remaining entries will be displayed in the order we provide them */
+
+ /* Add the ESSID */
+ iwe.cmd = SIOCGIWESSID;
+ iwe.u.data.flags = 1;
+ if (network->flags & NETWORK_EMPTY_ESSID) {
+ iwe.u.data.length = sizeof("<hidden>");
+ start = iwe_stream_add_point(start, stop, &iwe, "<hidden>");
+ } else {
+ iwe.u.data.length = min(network->ssid_len, (u8)32);
+ start = iwe_stream_add_point(start, stop, &iwe, network->ssid);
+ }
+
+ /* Add the protocol name */
+ iwe.cmd = SIOCGIWNAME;
+ snprintf(iwe.u.name, IFNAMSIZ, "IEEE 802.11%c", ieee80211_modes[network->mode]);
+ start = iwe_stream_add_event(start, stop, &iwe, IW_EV_CHAR_LEN);
+
+ /* Add mode */
+ iwe.cmd = SIOCGIWMODE;
+ if (network->capability &
+ (WLAN_CAPABILITY_BSS | WLAN_CAPABILITY_IBSS)) {
+ if (network->capability & WLAN_CAPABILITY_BSS)
+ iwe.u.mode = IW_MODE_MASTER;
+ else
+ iwe.u.mode = IW_MODE_ADHOC;
+
+ start = iwe_stream_add_event(start, stop, &iwe,
+ IW_EV_UINT_LEN);
+ }
+
+ /* Add frequency/channel */
+ iwe.cmd = SIOCGIWFREQ;
+/* iwe.u.freq.m = ieee80211_frequency(network->channel, network->mode);
+ iwe.u.freq.e = 3; */
+ iwe.u.freq.m = network->channel;
+ iwe.u.freq.e = 0;
+ iwe.u.freq.i = 0;
+ start = iwe_stream_add_event(start, stop, &iwe, IW_EV_FREQ_LEN);
+
+ /* Add encryption capability */
+ iwe.cmd = SIOCGIWENCODE;
+ if (network->capability & WLAN_CAPABILITY_PRIVACY)
+ iwe.u.data.flags = IW_ENCODE_ENABLED | IW_ENCODE_NOKEY;
+ else
+ iwe.u.data.flags = IW_ENCODE_DISABLED;
+ iwe.u.data.length = 0;
+ start = iwe_stream_add_point(start, stop, &iwe, network->ssid);
+
+ /* Add basic and extended rates */
+ max_rate = 0;
+ p = custom;
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom), " Rates (Mb/s): ");
+ for (i = 0, j = 0; i < network->rates_len; ) {
+ if (j < network->rates_ex_len &&
+ ((network->rates_ex[j] & 0x7F) <
+ (network->rates[i] & 0x7F)))
+ rate = network->rates_ex[j++] & 0x7F;
+ else
+ rate = network->rates[i++] & 0x7F;
+ if (rate > max_rate)
+ max_rate = rate;
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+ "%d%s ", rate >> 1, (rate & 1) ? ".5" : "");
+ }
+ for (; j < network->rates_ex_len; j++) {
+ rate = network->rates_ex[j] & 0x7F;
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+ "%d%s ", rate >> 1, (rate & 1) ? ".5" : "");
+ if (rate > max_rate)
+ max_rate = rate;
+ }
+
+ iwe.cmd = SIOCGIWRATE;
+ iwe.u.bitrate.fixed = iwe.u.bitrate.disabled = 0;
+ iwe.u.bitrate.value = max_rate * 500000;
+ start = iwe_stream_add_event(start, stop, &iwe,
+ IW_EV_PARAM_LEN);
+
+ iwe.cmd = IWEVCUSTOM;
+ iwe.u.data.length = p - custom;
+ if (iwe.u.data.length)
+ start = iwe_stream_add_point(start, stop, &iwe, custom);
+
+#if 0
+ /* Add quality statistics */
+ /* TODO: Fix these values... */
+ iwe.cmd = IWEVQUAL;
+ iwe.u.qual.qual = network->stats.signal;
+ iwe.u.qual.level = network->stats.rssi;
+ iwe.u.qual.noise = network->stats.noise;
+ iwe.u.qual.updated = network->stats.mask & IEEE80211_STATMASK_WEMASK;
+
+ start = iwe_stream_add_event(start, stop, &iwe, IW_EV_QUAL_LEN);
+#endif
+ iwe.cmd = IWEVCUSTOM;
+ p = custom;
+
+ if (network->stats.mask & IEEE80211_STATMASK_RSSI)
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+ " RSSI: %-4d dBm ",
+ (s8)network->stats.rssi);
+
+ if (network->stats.mask & IEEE80211_STATMASK_NOISE)
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+ " Noise: %-4d dBm ",
+ (s8)network->stats.noise);
+
+ if (network->stats.mask & IEEE80211_STATMASK_SIGNAL)
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+ " Signal: %-4d dBm ",
+ (s8)network->stats.signal);
+
+ iwe.u.data.length = p - custom;
+ if (iwe.u.data.length)
+ start = iwe_stream_add_point(start, stop, &iwe, custom);
+
+#ifdef CONFIG_IEEE80211_WPA
+ if (ieee->wpa_enabled && network->wpa_ie_len){
+ char buf[MAX_WPA_IE_LEN * 2 + 30];
+
+ u8 *p = buf;
+ p += sprintf(p, "wpa_ie=");
+ for (i = 0; i < network->wpa_ie_len; i++) {
+ p += sprintf(p, "%02x", network->wpa_ie[i]);
+ }
+
+ memset(&iwe, 0, sizeof(iwe));
+ iwe.cmd = IWEVCUSTOM;
+ iwe.u.data.length = strlen(buf);
+ start = iwe_stream_add_point(start, stop, &iwe, buf);
+ }
+
+	if (ieee->wpa_enabled && network->rsn_ie_len) {
+ char buf[MAX_WPA_IE_LEN * 2 + 30];
+
+ u8 *p = buf;
+ p += sprintf(p, "rsn_ie=");
+ for (i = 0; i < network->rsn_ie_len; i++) {
+ p += sprintf(p, "%02x", network->rsn_ie[i]);
+ }
+
+ memset(&iwe, 0, sizeof(iwe));
+ iwe.cmd = IWEVCUSTOM;
+ iwe.u.data.length = strlen(buf);
+ start = iwe_stream_add_point(start, stop, &iwe, buf);
+ }
+
+#endif /* CONFIG_IEEE80211_WPA */
+
+	/* Add EXTRA: Age to display how long ago the last beacon/probe
+	 * response was received for the given network. */
+ iwe.cmd = IWEVCUSTOM;
+ p = custom;
+ p += snprintf(p, MAX_CUSTOM_LEN - (p - custom),
+		      " Last beacon: %lums ago", (jiffies - network->last_scanned) * 1000 / HZ);
+ iwe.u.data.length = p - custom;
+ if (iwe.u.data.length)
+ start = iwe_stream_add_point(start, stop, &iwe, custom);
+
+
+ return start;
+}
+
+int ieee80211_wx_get_scan(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ieee80211_network *network;
+ unsigned long flags;
+
+ char *ev = extra;
+ char *stop = ev + IW_SCAN_MAX_DATA;
+ int i = 0;
+
+ IEEE80211_DEBUG_WX("Getting scan\n");
+
+ spin_lock_irqsave(&ieee->lock, flags);
+
+ list_for_each_entry(network, &ieee->network_list, list) {
+ i++;
+ if (ieee->scan_age == 0 ||
+ (jiffies - network->last_scanned) < ieee->scan_age)
+ ev = ipw2100_translate_scan(ieee, ev, stop, network);
+ else
+ IEEE80211_DEBUG_SCAN(
+ "Not showing network '%s ("
+ MAC_FMT ")' due to age (%lums).\n",
+ escape_essid(network->ssid,
+ network->ssid_len),
+ MAC_ARG(network->bssid),
+				(jiffies - network->last_scanned) * 1000 / HZ);
+ }
+
+ spin_unlock_irqrestore(&ieee->lock, flags);
+
+ wrqu->data.length = ev - extra;
+ wrqu->data.flags = 0;
+
+ IEEE80211_DEBUG_WX("exit: %d networks returned.\n", i);
+
+ return 0;
+}
+
+int ieee80211_wx_set_encode(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *keybuf)
+{
+ struct iw_point *erq = &(wrqu->encoding);
+#ifndef CONFIG_IEEE80211_CRYPT
+ if (erq->flags & IW_ENCODE_DISABLED)
+ return 0;
+ return -EOPNOTSUPP;
+#else
+ struct net_device *dev = ieee->dev;
+ struct ieee80211_security sec = {
+ .flags = 0
+ };
+ int i, key, key_provided, len;
+ struct ieee80211_crypt_data **crypt;
+
+ IEEE80211_DEBUG_WX("SET_ENCODE\n");
+
+ key = erq->flags & IW_ENCODE_INDEX;
+ if (key) {
+ if (key > WEP_KEYS)
+ return -EINVAL;
+ key--;
+ key_provided = 1;
+ } else {
+ key_provided = 0;
+ key = ieee->tx_keyidx;
+ }
+
+ IEEE80211_DEBUG_WX("Key: %d [%s]\n", key, key_provided ?
+ "provided" : "default");
+
+ crypt = &ieee->crypt[key];
+
+ if (erq->flags & IW_ENCODE_DISABLED) {
+ if (key_provided && *crypt) {
+ IEEE80211_DEBUG_WX("Disabling encryption on key %d.\n",
+ key);
+ ieee80211_crypt_delayed_deinit(ieee, crypt);
+ } else
+ IEEE80211_DEBUG_WX("Disabling encryption.\n");
+
+ /* Check all the keys to see if any are still configured,
+ * and if no key index was provided, de-init them all */
+ for (i = 0; i < WEP_KEYS; i++) {
+ if (ieee->crypt[i] != NULL) {
+ if (key_provided)
+ break;
+ ieee80211_crypt_delayed_deinit(
+ ieee, &ieee->crypt[i]);
+ }
+ }
+
+ if (i == WEP_KEYS) {
+ sec.enabled = 0;
+ sec.level = SEC_LEVEL_0;
+ sec.flags |= SEC_ENABLED | SEC_LEVEL;
+ }
+
+ goto done;
+ }
+
+ sec.enabled = 1;
+ sec.flags |= SEC_ENABLED;
+
+ if (*crypt != NULL && (*crypt)->ops != NULL &&
+ strcmp((*crypt)->ops->name, "WEP") != 0) {
+ /* changing to use WEP; deinit previously used algorithm
+ * on this key */
+ ieee80211_crypt_delayed_deinit(ieee, crypt);
+ }
+
+ if (*crypt == NULL) {
+ struct ieee80211_crypt_data *new_crypt;
+
+ /* take WEP into use */
+ new_crypt = kmalloc(sizeof(struct ieee80211_crypt_data),
+ GFP_KERNEL);
+ if (new_crypt == NULL)
+ return -ENOMEM;
+ memset(new_crypt, 0, sizeof(struct ieee80211_crypt_data));
+ new_crypt->ops = ieee80211_get_crypto_ops("WEP");
+ if (!new_crypt->ops) {
+ request_module("ieee80211_crypt_wep");
+ new_crypt->ops = ieee80211_get_crypto_ops("WEP");
+ }
+
+ if (new_crypt->ops && try_module_get(new_crypt->ops->owner))
+ new_crypt->priv = new_crypt->ops->init(key);
+
+ if (!new_crypt->ops || !new_crypt->priv) {
+ kfree(new_crypt);
+ new_crypt = NULL;
+
+ printk(KERN_WARNING "%s: could not initialize WEP: "
+ "load module ieee80211_crypt_wep.o\n",
+ dev->name);
+ return -EOPNOTSUPP;
+ }
+ *crypt = new_crypt;
+ }
+
+ /* If a new key was provided, set it up */
+ if (erq->length > 0) {
+ len = erq->length <= 5 ? 5 : 13;
+ memcpy(sec.keys[key], keybuf, erq->length);
+ if (len > erq->length)
+ memset(sec.keys[key] + erq->length, 0,
+ len - erq->length);
+ IEEE80211_DEBUG_WX("Setting key %d to '%s' (%d:%d bytes)\n",
+ key, escape_essid(sec.keys[key], len),
+ erq->length, len);
+ sec.key_sizes[key] = len;
+ (*crypt)->ops->set_key(sec.keys[key], len, NULL,
+ (*crypt)->priv);
+ sec.flags |= (1 << key);
+ /* This ensures a key will be activated if no key is
+	 * explicitly set */
+ if (key == sec.active_key)
+ sec.flags |= SEC_ACTIVE_KEY;
+ } else {
+ len = (*crypt)->ops->get_key(sec.keys[key], WEP_KEY_LEN,
+ NULL, (*crypt)->priv);
+ if (len == 0) {
+ /* Set a default key of all 0 */
+ IEEE80211_DEBUG_WX("Setting key %d to all zero.\n",
+ key);
+ memset(sec.keys[key], 0, 13);
+ (*crypt)->ops->set_key(sec.keys[key], 13, NULL,
+ (*crypt)->priv);
+ sec.key_sizes[key] = 13;
+ sec.flags |= (1 << key);
+ }
+
+ /* No key data - just set the default TX key index */
+ if (key_provided) {
+ ieee->tx_keyidx = key;
+ sec.active_key = key;
+ sec.flags |= SEC_ACTIVE_KEY;
+ }
+ }
+
+ done:
+ ieee->open_wep = !(erq->flags & IW_ENCODE_RESTRICTED);
+ sec.auth_mode = ieee->open_wep ? WLAN_AUTH_OPEN : WLAN_AUTH_SHARED_KEY;
+ sec.flags |= SEC_AUTH_MODE;
+ IEEE80211_DEBUG_WX("Auth: %s\n", sec.auth_mode == WLAN_AUTH_OPEN ?
+ "OPEN" : "SHARED KEY");
+
+ /* For now we just support WEP, so only set that security level...
+ * TODO: When WPA is added this is one place that needs to change */
+ sec.flags |= SEC_LEVEL;
+ sec.level = SEC_LEVEL_1; /* 40 and 104 bit WEP */
+
+ if (ieee->func && ieee->func->set_security)
+ ieee->func->set_security(ieee, &sec);
+
+ /* Do not reset port if card is in Managed mode since resetting will
+ * generate new IEEE 802.11 authentication which may end up in looping
+ * with IEEE 802.1X. If your hardware requires a reset after WEP
+ * configuration (for example... Prism2), implement the reset_port in
+ * the callbacks structures used to initialize the 802.11 stack. */
+ if (ieee->reset_on_keychange &&
+ ieee->iw_mode != IW_MODE_INFRA &&
+	    ieee->func && ieee->func->reset_port &&
+ ieee->func->reset_port(dev)) {
+ printk(KERN_DEBUG "%s: reset_port failed\n", dev->name);
+ return -EINVAL;
+ }
+ return 0;
+#endif
+}
+
+int ieee80211_wx_get_encode(struct ieee80211_device *ieee,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *keybuf)
+{
+ struct iw_point *erq = &(wrqu->encoding);
+#ifndef CONFIG_IEEE80211_CRYPT
+ printk(KERN_WARNING "%s: Encryption requested but not enabled in "
+ "build.\n", ieee->dev->name);
+ erq->length = 0;
+ erq->flags = IW_ENCODE_DISABLED;
+ return 0;
+#else
+ int len, key;
+ struct ieee80211_crypt_data *crypt;
+
+ IEEE80211_DEBUG_WX("GET_ENCODE\n");
+
+ key = erq->flags & IW_ENCODE_INDEX;
+ if (key) {
+ if (key > WEP_KEYS)
+ return -EINVAL;
+ key--;
+ } else
+ key = ieee->tx_keyidx;
+
+ crypt = ieee->crypt[key];
+ erq->flags = key + 1;
+
+ if (crypt == NULL || crypt->ops == NULL) {
+ erq->length = 0;
+ erq->flags |= IW_ENCODE_DISABLED;
+ return 0;
+ }
+
+ if (strcmp(crypt->ops->name, "WEP") != 0) {
+ /* only WEP is supported with wireless extensions, so just
+ * report that encryption is used */
+ erq->length = 0;
+ erq->flags |= IW_ENCODE_ENABLED;
+ return 0;
+ }
+
+ len = crypt->ops->get_key(keybuf, WEP_KEY_LEN, NULL, crypt->priv);
+ erq->length = (len >= 0 ? len : 0);
+
+ erq->flags |= IW_ENCODE_ENABLED;
+
+ if (ieee->open_wep)
+ erq->flags |= IW_ENCODE_OPEN;
+ else
+ erq->flags |= IW_ENCODE_RESTRICTED;
+
+ return 0;
+#endif
+}
+
+EXPORT_SYMBOL(ieee80211_wx_get_scan);
+EXPORT_SYMBOL(ieee80211_wx_set_encode);
+EXPORT_SYMBOL(ieee80211_wx_get_encode);
--- /dev/null
+
+"This software program is licensed subject to the GNU General Public License
+(GPL). Version 2, June 1991, available at
+<http://www.fsf.org/copyleft/gpl.html>"
+
+GNU General Public License
+
+Version 2, June 1991
+
+Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
+
+Everyone is permitted to copy and distribute verbatim copies of this license
+document, but changing it is not allowed.
+
+Preamble
+
+The licenses for most software are designed to take away your freedom to
+share and change it. By contrast, the GNU General Public License is intended
+to guarantee your freedom to share and change free software--to make sure
+the software is free for all its users. This General Public License applies
+to most of the Free Software Foundation's software and to any other program
+whose authors commit to using it. (Some other Free Software Foundation
+software is covered by the GNU Library General Public License instead.) You
+can apply it to your programs, too.
+
+When we speak of free software, we are referring to freedom, not price. Our
+General Public Licenses are designed to make sure that you have the freedom
+to distribute copies of free software (and charge for this service if you
+wish), that you receive source code or can get it if you want it, that you
+can change the software or use pieces of it in new free programs; and that
+you know you can do these things.
+
+To protect your rights, we need to make restrictions that forbid anyone to
+deny you these rights or to ask you to surrender the rights. These
+restrictions translate to certain responsibilities for you if you distribute
+copies of the software, or if you modify it.
+
+For example, if you distribute copies of such a program, whether gratis or
+for a fee, you must give the recipients all the rights that you have. You
+must make sure that they, too, receive or can get the source code. And you
+must show them these terms so they know their rights.
+
+We protect your rights with two steps: (1) copyright the software, and (2)
+offer you this license which gives you legal permission to copy, distribute
+and/or modify the software.
+
+Also, for each author's protection and ours, we want to make certain that
+everyone understands that there is no warranty for this free software. If
+the software is modified by someone else and passed on, we want its
+recipients to know that what they have is not the original, so that any
+problems introduced by others will not reflect on the original authors'
+reputations.
+
+Finally, any free program is threatened constantly by software patents. We
+wish to avoid the danger that redistributors of a free program will
+individually obtain patent licenses, in effect making the program
+proprietary. To prevent this, we have made it clear that any patent must be
+licensed for everyone's free use or not licensed at all.
+
+The precise terms and conditions for copying, distribution and modification
+follow.
+
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+0. This License applies to any program or other work which contains a notice
+ placed by the copyright holder saying it may be distributed under the
+ terms of this General Public License. The "Program", below, refers to any
+ such program or work, and a "work based on the Program" means either the
+ Program or any derivative work under copyright law: that is to say, a
+ work containing the Program or a portion of it, either verbatim or with
+ modifications and/or translated into another language. (Hereinafter,
+ translation is included without limitation in the term "modification".)
+ Each licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are not
+ covered by this License; they are outside its scope. The act of running
+ the Program is not restricted, and the output from the Program is covered
+ only if its contents constitute a work based on the Program (independent
+ of having been made by running the Program). Whether that is true depends
+ on what the Program does.
+
+1. You may copy and distribute verbatim copies of the Program's source code
+ as you receive it, in any medium, provided that you conspicuously and
+ appropriately publish on each copy an appropriate copyright notice and
+ disclaimer of warranty; keep intact all the notices that refer to this
+ License and to the absence of any warranty; and give any other recipients
+ of the Program a copy of this License along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy, and you
+ may at your option offer warranty protection in exchange for a fee.
+
+2. You may modify your copy or copies of the Program or any portion of it,
+ thus forming a work based on the Program, and copy and distribute such
+ modifications or work under the terms of Section 1 above, provided that
+ you also meet all of these conditions:
+
+ * a) You must cause the modified files to carry prominent notices stating
+ that you changed the files and the date of any change.
+
+ * b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any part
+ thereof, to be licensed as a whole at no charge to all third parties
+ under the terms of this License.
+
+ * c) If the modified program normally reads commands interactively when
+ run, you must cause it, when started running for such interactive
+ use in the most ordinary way, to print or display an announcement
+ including an appropriate copyright notice and a notice that there is
+ no warranty (or else, saying that you provide a warranty) and that
+ users may redistribute the program under these conditions, and
+ telling the user how to view a copy of this License. (Exception: if
+ the Program itself is interactive but does not normally print such
+ an announcement, your work based on the Program is not required to
+ print an announcement.)
+
+ These requirements apply to the modified work as a whole. If identifiable
+ sections of that work are not derived from the Program, and can be
+ reasonably considered independent and separate works in themselves, then
+ this License, and its terms, do not apply to those sections when you
+ distribute them as separate works. But when you distribute the same
+ sections as part of a whole which is a work based on the Program, the
+ distribution of the whole must be on the terms of this License, whose
+ permissions for other licensees extend to the entire whole, and thus to
+ each and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or contest
+ your rights to work written entirely by you; rather, the intent is to
+ exercise the right to control the distribution of derivative or
+ collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the Program
+ with the Program (or with a work based on the Program) on a volume of a
+ storage or distribution medium does not bring the other work under the
+ scope of this License.
+
+3. You may copy and distribute the Program (or a work based on it, under
+ Section 2) in object code or executable form under the terms of Sections
+ 1 and 2 above provided that you also do one of the following:
+
+ * a) Accompany it with the complete corresponding machine-readable source
+ code, which must be distributed under the terms of Sections 1 and 2
+ above on a medium customarily used for software interchange; or,
+
+ * b) Accompany it with a written offer, valid for at least three years,
+ to give any third party, for a charge no more than your cost of
+ physically performing source distribution, a complete machine-
+ readable copy of the corresponding source code, to be distributed
+ under the terms of Sections 1 and 2 above on a medium customarily
+ used for software interchange; or,
+
+ * c) Accompany it with the information you received as to the offer to
+ distribute corresponding source code. (This alternative is allowed
+ only for noncommercial distribution and only if you received the
+ program in object code or executable form with such an offer, in
+ accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete source code
+ means all the source code for all modules it contains, plus any
+ associated interface definition files, plus the scripts used to control
+ compilation and installation of the executable. However, as a special
+ exception, the source code distributed need not include anything that is
+ normally distributed (in either source or binary form) with the major
+ components (compiler, kernel, and so on) of the operating system on which
+ the executable runs, unless that component itself accompanies the
+ executable.
+
+ If distribution of executable or object code is made by offering access
+ to copy from a designated place, then offering equivalent access to copy
+ the source code from the same place counts as distribution of the source
+ code, even though third parties are not compelled to copy the source
+ along with the object code.
+
+4. You may not copy, modify, sublicense, or distribute the Program except as
+ expressly provided under this License. Any attempt otherwise to copy,
+ modify, sublicense or distribute the Program is void, and will
+ automatically terminate your rights under this License. However, parties
+ who have received copies, or rights, from you under this License will not
+ have their licenses terminated so long as such parties remain in full
+ compliance.
+
+5. You are not required to accept this License, since you have not signed
+ it. However, nothing else grants you permission to modify or distribute
+ the Program or its derivative works. These actions are prohibited by law
+ if you do not accept this License. Therefore, by modifying or
+ distributing the Program (or any work based on the Program), you
+ indicate your acceptance of this License to do so, and all its terms and
+ conditions for copying, distributing or modifying the Program or works
+ based on it.
+
+6. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program subject to
+ these terms and conditions. You may not impose any further restrictions
+ on the recipients' exercise of the rights granted herein. You are not
+ responsible for enforcing compliance by third parties to this License.
+
+7. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent issues),
+ conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot distribute
+ so as to satisfy simultaneously your obligations under this License and
+ any other pertinent obligations, then as a consequence you may not
+ distribute the Program at all. For example, if a patent license would
+ not permit royalty-free redistribution of the Program by all those who
+ receive copies directly or indirectly through you, then the only way you
+ could satisfy both it and this License would be to refrain entirely from
+ distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable under any
+ particular circumstance, the balance of the section is intended to apply
+ and the section as a whole is intended to apply in other circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of any
+ such claims; this section has the sole purpose of protecting the
+ integrity of the free software distribution system, which is implemented
+ by public license practices. Many people have made generous contributions
+ to the wide range of software distributed through that system in
+ reliance on consistent application of that system; it is up to the
+ author/donor to decide if he or she is willing to distribute software
+ through any other system and a licensee cannot impose that choice.
+
+ This section is intended to make thoroughly clear what is believed to be
+ a consequence of the rest of this License.
+
+8. If the distribution and/or use of the Program is restricted in certain
+ countries either by patents or by copyrighted interfaces, the original
+ copyright holder who places the Program under this License may add an
+ explicit geographical distribution limitation excluding those countries,
+ so that distribution is permitted only in or among countries not thus
+ excluded. In such case, this License incorporates the limitation as if
+ written in the body of this License.
+
+9. The Free Software Foundation may publish revised and/or new versions of
+ the General Public License from time to time. Such new versions will be
+ similar in spirit to the present version, but may differ in detail to
+ address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the Program
+ specifies a version number of this License which applies to it and "any
+ later version", you have the option of following the terms and
+ conditions either of that version or of any later version published by
+ the Free Software Foundation. If the Program does not specify a version
+ number of this License, you may choose any version ever published by the
+ Free Software Foundation.
+
+10. If you wish to incorporate parts of the Program into other free programs
+ whose distribution conditions are different, write to the author to ask
+ for permission. For software which is copyrighted by the Free Software
+ Foundation, write to the Free Software Foundation; we sometimes make
+ exceptions for this. Our decision will be guided by the two goals of
+ preserving the free status of all derivatives of our free software and
+ of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+ FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+ OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+ PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
+ EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
+ ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH
+ YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
+ NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+ REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
+ DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
+ DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM
+ (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
+ INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF
+ THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR
+ OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+END OF TERMS AND CONDITIONS
+
+How to Apply These Terms to Your New Programs
+
+If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it free
+software which everyone can redistribute and change under these terms.
+
+To do so, attach the following notices to the program. It is safest to
+attach them to the start of each source file to most effectively convey the
+exclusion of warranty; and each file should have at least the "copyright"
+line and a pointer to where the full notice is found.
+
+one line to give the program's name and an idea of what it does.
+Copyright (C) yyyy name of author
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this when
+it starts in an interactive mode:
+
+Gnomovision version 69, Copyright (C) year name of author Gnomovision comes
+with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free
+software, and you are welcome to redistribute it under certain conditions;
+type 'show c' for details.
+
+The hypothetical commands 'show w' and 'show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may be
+called something other than 'show w' and 'show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+'Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+signature of Ty Coon, 1 April 1989
+Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Library General Public
+License instead of this License.
--- /dev/null
+#
+# Makefile for the Linux Wireless network device drivers.
+#
+# Original makefile by Peter Johanson
+
+EXTRA_CFLAGS += -I$(TOPDIR)/drivers/net/wireless/ieee80211
+
+ifeq ($(CONFIG_IPW_DEBUG),y)
+ EXTRA_CFLAGS += -g -Wa,-adhlms=$@.lst
+endif
+
+list-m :=
+list-$(CONFIG_IPW2100) += ipw2100
+
+obj-$(CONFIG_IPW2100) += ipw2100.o
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+ Portions of this file are based on the sample_* files provided by Wireless
+ Extensions 0.26 package and copyright (c) 1997-2003 Jean Tourrilhes
+ <jt@hpl.hp.com>
+
+ Portions of this file are based on the Host AP project,
+ Copyright (c) 2001-2002, SSH Communications Security Corp and Jouni Malinen
+ <jkmaline@cc.hut.fi>
+ Copyright (c) 2002-2003, Jouni Malinen <jkmaline@cc.hut.fi>
+
+ Portions of ipw2100_mod_firmware_load, ipw2100_do_mod_firmware_load, and
+ ipw2100_fw_load are loosely based on drivers/sound/sound_firmware.c
+ available in the 2.4.25 kernel sources, and are copyright (c) Alan Cox
+
+******************************************************************************/
+/*
+
+ Initial driver on which this is based was developed by Janusz Gorycki,
+ Maciej Urbaniak, and Maciej Sosnowski.
+
+ Promiscuous mode support added by Jacek Wysoczynski and Maciej Urbaniak.
+
+Theory of Operation
+
+Tx - Commands and Data
+
+Firmware and host share a circular queue of Transmit Buffer Descriptors (TBDs).
+Each TBD contains a pointer to the physical (dma_addr_t) address of data being
+sent to the firmware as well as the length of the data.
+
+The host writes to the TBD queue at the WRITE index. The WRITE index points
+to the _next_ packet to be written and is advanced after the TBD has been
+filled.
+
+The firmware pulls from the TBD queue at the READ index. The READ index
+points to the entry currently being read, and is advanced once the firmware
+is done with a packet.
+
+When data is sent to the firmware, the first TBD is used to indicate to the
+firmware whether a Command or Data is being sent. If it is a Command, all of
+the command information is contained within the physical address referred to
+by the TBD. If it is Data, the first TBD indicates the type of data packet,
+number of fragments, etc. The next TBD then refers to the actual packet
+location.
+
+The Tx flow cycle is as follows:
+
+1) ipw2100_tx() is called by the kernel with an SKB to transmit.
+2) The packet is moved from the tx_free_list and appended to the transmit
+   pending list (tx_pend_list).
+3) Work is scheduled to move pending packets into the shared circular queue.
+4) When placing a packet in the circular queue, the incoming SKB is DMA
+   mapped to a physical address. That address is entered into a TBD. Two
+   TBDs are filled out: the first indicates a data packet, the second refers
+   to the actual payload data.
+5) The packet is removed from tx_pend_list and placed on the end of the
+   firmware pending list (fw_pend_list).
+6) The firmware is notified that the WRITE index has been updated.
+7) Once the firmware has processed the TBD, INTA is triggered.
+8) For each Tx interrupt received from the firmware, the READ index is
+   checked to see which TBDs are done being processed.
+9) For each TBD that has been processed, the ISR pulls the oldest packet
+   from the fw_pend_list.
+10) The packet structure contained in the fw_pend_list is then used to unmap
+    the DMA address and to free the SKB originally passed to the driver from
+    the kernel.
+11) The packet structure is placed onto the tx_free_list.
+
+The above steps are the same for commands, except that the
+msg_free_list/msg_pend_list pair is used instead of tx_free_list/tx_pend_list.
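+
+A sketch of the WRITE-index bookkeeping (the names txq, drv, entries and
+next are assumed for illustration; the driver's real structures differ):
+
+	tbd = &txq->drv[txq->next];		// next free TBD in the ring
+	tbd->host_addr = dma_addr;		// physical address of the data
+	tbd->buf_length = len;			// length of the data
+	txq->next = (txq->next + 1) % txq->entries;	// wrap around the ring
+	// then publish txq->next to the shared WRITE-index register so the
+	// firmware sees the new TBD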
+
+...
+
+Critical Sections / Locking:
+
+There are two locks utilized. The first is the low-level lock
+(priv->low_lock) that protects the following:
+
+- Access to the Tx/Rx queue lists via priv->low_lock. The lists are as follows:
+
+ tx_free_list : Holds pre-allocated Tx buffers.
+ TAIL modified in __ipw2100_tx_process()
+ HEAD modified in ipw2100_tx()
+
+ tx_pend_list : Holds used Tx buffers waiting to go into the TBD ring
+    TAIL modified in ipw2100_tx()
+    HEAD modified in X__ipw2100_tx_send_data()
+
+ msg_free_list : Holds pre-allocated Msg (Command) buffers
+ TAIL modified in __ipw2100_tx_process()
+ HEAD modified in ipw2100_hw_send_command()
+
+ msg_pend_list : Holds used Msg buffers waiting to go into the TBD ring
+ TAIL modified in ipw2100_hw_send_command()
+ HEAD modified in X__ipw2100_tx_send_commands()
+
+ The flow of data on the TX side is as follows:
+
+ MSG_FREE_LIST + COMMAND => MSG_PEND_LIST => TBD => MSG_FREE_LIST
+ TX_FREE_LIST + DATA => TX_PEND_LIST => TBD => TX_FREE_LIST
+
+ The methods that work on the TBD ring are protected via priv->low_lock.
+
+- The internal data state of the device itself
+- Access to the firmware read/write indexes for the BD queues
+ and associated logic
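+
+  A sketch of the list-handling pattern low_lock protects (illustrative
+  only; the real paths also maintain the INC_STAT/DEC_STAT counters):
+
+	spin_lock_irqsave(&priv->low_lock, flags);
+	element = priv->tx_free_list.next;	// take a pre-allocated buffer
+	list_del(element);
+	list_add_tail(element, &priv->tx_pend_list);	// queue it for the ring
+	spin_unlock_irqrestore(&priv->low_lock, flags);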
+
+*/
+
+#include <linux/compiler.h>
+#include <linux/config.h>
+#include <linux/errno.h>
+#include <linux/if_arp.h>
+#include <linux/in6.h>
+#include <linux/in.h>
+#include <linux/ip.h>
+#include <linux/kernel.h>
+#include <linux/kmod.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/pci.h>
+#include <linux/proc_fs.h>
+#include <linux/skbuff.h>
+#include <asm/uaccess.h>
+#include <asm/io.h>
+#define __KERNEL_SYSCALLS__
+#include <linux/fs.h>
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <linux/unistd.h>
+#include <linux/stringify.h>
+#include <linux/tcp.h>
+#include <linux/types.h>
+#include <linux/version.h>
+#include <linux/time.h>
+#ifndef CONFIG_IPW2100_LEGACY_FW_LOAD
+#include <linux/firmware.h>
+#endif
+#include <linux/acpi.h>
+#include <linux/ctype.h>
+
+#include "ipw2100.h"
+
+#define IPW2100_VERSION "1.0.0"
+
+#define DRV_NAME "ipw2100"
+#define DRV_VERSION IPW2100_VERSION
+#define DRV_DESCRIPTION "Intel(R) PRO/Wireless 2100 Network Driver"
+#define DRV_COPYRIGHT "Copyright(c) 2003-2004 Intel Corporation"
+
+
+/* Debugging stuff */
+#ifdef CONFIG_IPW_DEBUG
+#define CONFIG_IPW2100_RX_DEBUG /* Reception debugging */
+#endif
+
+MODULE_DESCRIPTION(DRV_DESCRIPTION);
+MODULE_AUTHOR(DRV_COPYRIGHT);
+MODULE_LICENSE("GPL");
+
+static int debug = 0;
+static char *ifname = NULL;
+static int mode = 0;
+static int channel = 0;
+static int associate = 1;
+static int disable = 0;
+#ifdef CONFIG_PM
+static struct ipw2100_fw ipw2100_firmware;
+#endif
+
+#include <linux/moduleparam.h>
+module_param(debug, int, 0444);
+module_param(ifname, charp, 0444);
+module_param(mode, int, 0444);
+module_param(channel, int, 0444);
+module_param(associate, int, 0444);
+module_param(disable, int, 0444);
+
+MODULE_PARM_DESC(debug, "debug level");
+MODULE_PARM_DESC(ifname, "interface name (default 'eth%d')");
+MODULE_PARM_DESC(mode, "network mode (0=BSS,1=IBSS,2=Monitor)");
+MODULE_PARM_DESC(channel, "channel");
+MODULE_PARM_DESC(associate, "auto associate when scanning (default on)");
+MODULE_PARM_DESC(disable, "manually disable the radio (default 0 [radio on])");
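+
+/*
+ * Example invocation (hypothetical values): loading the driver in ad-hoc
+ * mode on channel 6 with auto-association turned off:
+ *
+ *	modprobe ipw2100 mode=1 channel=6 associate=0
+ */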
+
+u32 ipw2100_debug_level = IPW_DL_NONE;
+
+#ifdef CONFIG_IPW_DEBUG
+static const char *command_types[] = {
+ "undefined",
+ "unused", /* HOST_ATTENTION */
+ "HOST_COMPLETE",
+ "unused", /* SLEEP */
+ "unused", /* HOST_POWER_DOWN */
+ "unused",
+ "SYSTEM_CONFIG",
+ "unused", /* SET_IMR */
+ "SSID",
+ "MANDATORY_BSSID",
+ "AUTHENTICATION_TYPE",
+ "ADAPTER_ADDRESS",
+ "PORT_TYPE",
+ "INTERNATIONAL_MODE",
+ "CHANNEL",
+ "RTS_THRESHOLD",
+ "FRAG_THRESHOLD",
+ "POWER_MODE",
+ "TX_RATES",
+ "BASIC_TX_RATES",
+ "WEP_KEY_INFO",
+ "unused",
+ "unused",
+ "unused",
+ "unused",
+ "WEP_KEY_INDEX",
+ "WEP_FLAGS",
+ "ADD_MULTICAST",
+ "CLEAR_ALL_MULTICAST",
+ "BEACON_INTERVAL",
+ "ATIM_WINDOW",
+ "CLEAR_STATISTICS",
+ "undefined",
+ "undefined",
+ "undefined",
+ "undefined",
+ "TX_POWER_INDEX",
+ "undefined",
+ "undefined",
+ "undefined",
+ "undefined",
+ "undefined",
+ "undefined",
+ "BROADCAST_SCAN",
+ "CARD_DISABLE",
+ "PREFERRED_BSSID",
+ "SET_SCAN_OPTIONS",
+ "SCAN_DWELL_TIME",
+ "SWEEP_TABLE",
+ "AP_OR_STATION_TABLE",
+ "GROUP_ORDINALS",
+ "SHORT_RETRY_LIMIT",
+ "LONG_RETRY_LIMIT",
+ "unused", /* SAVE_CALIBRATION */
+ "unused", /* RESTORE_CALIBRATION */
+ "undefined",
+ "undefined",
+ "undefined",
+ "HOST_PRE_POWER_DOWN",
+ "unused", /* HOST_INTERRUPT_COALESCING */
+ "undefined",
+ "CARD_DISABLE_PHY_OFF",
+	"MSDU_TX_RATES",
+ "undefined",
+ "undefined",
+ "SET_STATION_STAT_BITS",
+ "CLEAR_STATIONS_STAT_BITS",
+ "LEAP_ROGUE_MODE",
+ "SET_SECURITY_INFORMATION",
+ "DISASSOCIATION_BSSID",
+ "SET_WPA_ASS_IE"
+};
+#endif
+
+
+/* Pre-decl until we get the code solid and then we can clean it up */
+static void X__ipw2100_tx_send_commands(struct ipw2100_priv *priv);
+static void X__ipw2100_tx_send_data(struct ipw2100_priv *priv);
+static int ipw2100_adapter_setup(struct ipw2100_priv *priv);
+
+static void ipw2100_queues_initialize(struct ipw2100_priv *priv);
+static void ipw2100_queues_free(struct ipw2100_priv *priv);
+static int ipw2100_queues_allocate(struct ipw2100_priv *priv);
+
+
+static inline void read_register(struct net_device *dev, u32 reg, u32 *val)
+{
+ *val = readl((void *)(dev->base_addr + reg));
+ IPW_DEBUG_IO("r: 0x%08X => 0x%08X\n", reg, *val);
+}
+
+static inline void write_register(struct net_device *dev, u32 reg, u32 val)
+{
+ writel(val, (void *)(dev->base_addr + reg));
+ IPW_DEBUG_IO("w: 0x%08X <= 0x%08X\n", reg, val);
+}
+
+static inline void read_register_word(struct net_device *dev, u32 reg, u16 *val)
+{
+ *val = readw((void *)(dev->base_addr + reg));
+ IPW_DEBUG_IO("r: 0x%08X => %04X\n", reg, *val);
+}
+
+static inline void read_register_byte(struct net_device *dev, u32 reg, u8 *val)
+{
+ *val = readb((void *)(dev->base_addr + reg));
+ IPW_DEBUG_IO("r: 0x%08X => %02X\n", reg, *val);
+}
+
+static inline void write_register_word(struct net_device *dev, u32 reg, u16 val)
+{
+ writew(val, (void *)(dev->base_addr + reg));
+ IPW_DEBUG_IO("w: 0x%08X <= %04X\n", reg, val);
+}
+
+
+static inline void write_register_byte(struct net_device *dev, u32 reg, u8 val)
+{
+ writeb(val, (void *)(dev->base_addr + reg));
+	IPW_DEBUG_IO("w: 0x%08X <= %02X\n", reg, val);
+}
+
+static inline void read_nic_dword(struct net_device *dev, u32 addr, u32 *val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ read_register(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void write_nic_dword(struct net_device *dev, u32 addr, u32 val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void read_nic_word(struct net_device *dev, u32 addr, u16 *val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ read_register_word(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void write_nic_word(struct net_device *dev, u32 addr, u16 val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ write_register_word(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void read_nic_byte(struct net_device *dev, u32 addr, u8 *val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ read_register_byte(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void write_nic_byte(struct net_device *dev, u32 addr, u8 val)
+{
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+ write_register_byte(dev, IPW_REG_INDIRECT_ACCESS_DATA, val);
+}
+
+static inline void write_nic_auto_inc_address(struct net_device *dev, u32 addr)
+{
+ write_register(dev, IPW_REG_AUTOINCREMENT_ADDRESS,
+ addr & IPW_REG_INDIRECT_ADDR_MASK);
+}
+
+static inline void write_nic_dword_auto_inc(struct net_device *dev, u32 val)
+{
+ write_register(dev, IPW_REG_AUTOINCREMENT_DATA, val);
+}
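+
+/*
+ * A note on the pattern above: writing a target SRAM address to
+ * IPW_REG_INDIRECT_ACCESS_ADDRESS selects it, and subsequent accesses to
+ * IPW_REG_INDIRECT_ACCESS_DATA then hit that address. The AUTOINCREMENT
+ * register pair works the same way, except that the address advances after
+ * each data access, which is what keeps the bulk-copy loops below cheap.
+ */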
+
+static inline void write_nic_memory(struct net_device *dev, u32 addr, u32 len,
+ const u8 *buf)
+{
+ u32 aligned_addr;
+ u32 aligned_len;
+ u32 dif_len;
+ u32 i;
+
+	/* write the leading unaligned bytes one at a time */
+ aligned_addr = addr & (~0x3);
+ dif_len = addr - aligned_addr;
+ if (dif_len) {
+		/* Start writing at aligned_addr + dif_len */
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ aligned_addr);
+ for (i = dif_len; i < 4; i++, buf++)
+ write_register_byte(
+ dev, IPW_REG_INDIRECT_ACCESS_DATA + i,
+ *buf);
+
+		len -= 4 - dif_len;	/* bytes consumed by the loop above */
+ aligned_addr += 4;
+ }
+
+	/* write aligned dwords through the autoincrement registers */
+ write_register(dev, IPW_REG_AUTOINCREMENT_ADDRESS,
+ aligned_addr);
+ aligned_len = len & (~0x3);
+ for (i = 0; i < aligned_len; i += 4, buf += 4, aligned_addr += 4)
+ write_register(
+ dev, IPW_REG_AUTOINCREMENT_DATA, *(u32 *)buf);
+
+	/* write the trailing unaligned bytes */
+ dif_len = len - aligned_len;
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS, aligned_addr);
+ for (i = 0; i < dif_len; i++, buf++)
+ write_register_byte(
+ dev, IPW_REG_INDIRECT_ACCESS_DATA + i, *buf);
+}
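+
+/*
+ * Worked example of the head/body/tail split above (numbers only, not
+ * driver code): for addr = 0x1003 and len = 10, aligned_addr = 0x1000 and
+ * dif_len = 3, so the single unaligned byte at 0x1003 is written
+ * individually and len drops to 9; the two aligned dwords covering
+ * 0x1004-0x100B go through the autoincrement registers; and the one
+ * trailing byte at 0x100C is again written individually.
+ */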
+
+static inline void read_nic_memory(struct net_device *dev, u32 addr, u32 len,
+ u8 *buf)
+{
+ u32 aligned_addr;
+ u32 aligned_len;
+ u32 dif_len;
+ u32 i;
+
+	/* read the leading unaligned bytes one at a time */
+ aligned_addr = addr & (~0x3);
+ dif_len = addr - aligned_addr;
+ if (dif_len) {
+ /* Start reading at aligned_addr + dif_len */
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ aligned_addr);
+ for (i = dif_len; i < 4; i++, buf++)
+ read_register_byte(
+ dev, IPW_REG_INDIRECT_ACCESS_DATA + i, buf);
+
+		len -= 4 - dif_len;	/* bytes consumed by the loop above */
+ aligned_addr += 4;
+ }
+
+	/* read aligned dwords through the autoincrement registers */
+ write_register(dev, IPW_REG_AUTOINCREMENT_ADDRESS,
+ aligned_addr);
+ aligned_len = len & (~0x3);
+ for (i = 0; i < aligned_len; i += 4, buf += 4, aligned_addr += 4)
+ read_register(dev, IPW_REG_AUTOINCREMENT_DATA,
+ (u32 *)buf);
+
+	/* read the trailing unaligned bytes */
+ dif_len = len - aligned_len;
+ write_register(dev, IPW_REG_INDIRECT_ACCESS_ADDRESS,
+ aligned_addr);
+ for (i = 0; i < dif_len; i++, buf++)
+ read_register_byte(dev, IPW_REG_INDIRECT_ACCESS_DATA +
+ i, buf);
+}
+
+static inline int ipw2100_hw_is_adapter_in_system(struct net_device *dev)
+{
+ return (dev->base_addr &&
+ (readl((void *)(dev->base_addr + IPW_REG_DOA_DEBUG_AREA_START))
+ == IPW_DATA_DOA_DEBUG_VALUE));
+}
+
+int ipw2100_get_ordinal(struct ipw2100_priv *priv, u32 ord,
+ void *val, u32 *len)
+{
+ struct ipw2100_ordinals *ordinals = &priv->ordinals;
+ u32 addr;
+ u32 field_info;
+ u16 field_len;
+ u16 field_count;
+ u32 total_length;
+
+ if (ordinals->table1_addr == 0) {
+ IPW_DEBUG_WARNING(DRV_NAME ": attempt to use fw ordinals "
+ "before they have been loaded.\n");
+ return -EINVAL;
+ }
+
+ if (IS_ORDINAL_TABLE_ONE(ordinals, ord)) {
+ if (*len < IPW_ORD_TAB_1_ENTRY_SIZE) {
+ *len = IPW_ORD_TAB_1_ENTRY_SIZE;
+
+ IPW_DEBUG_WARNING(DRV_NAME
+ ": ordinal buffer length too small, need %d\n",
+ IPW_ORD_TAB_1_ENTRY_SIZE);
+
+ return -EINVAL;
+ }
+
+ read_nic_dword(priv->net_dev, ordinals->table1_addr + (ord << 2),
+ &addr);
+ read_nic_dword(priv->net_dev, addr, val);
+
+ *len = IPW_ORD_TAB_1_ENTRY_SIZE;
+
+ return 0;
+ }
+
+ if (IS_ORDINAL_TABLE_TWO(ordinals, ord)) {
+
+ ord -= IPW_START_ORD_TAB_2;
+
+ /* get the address of statistic */
+ read_nic_dword(priv->net_dev, ordinals->table2_addr + (ord << 3),
+ &addr);
+
+		/* get the second DW of statistics;
+		 * two 16-bit words - first is length, second is count */
+ read_nic_dword(priv->net_dev,
+ ordinals->table2_addr + (ord << 3) + sizeof(u32),
+ &field_info);
+
+ /* get each entry length */
+ field_len = *((u16 *)&field_info);
+
+ /* get number of entries */
+ field_count = *(((u16 *)&field_info) + 1);
+
+		/* abort if not enough memory */
+ total_length = field_len * field_count;
+ if (total_length > *len) {
+ *len = total_length;
+ return -EINVAL;
+ }
+
+ *len = total_length;
+ if (!total_length)
+ return 0;
+
+ /* read the ordinal data from the SRAM */
+ read_nic_memory(priv->net_dev, addr, total_length, val);
+
+ return 0;
+ }
+
+ IPW_DEBUG_WARNING(DRV_NAME ": ordinal %d neither in table 1 nor "
+ "in table 2\n", ord);
+
+ return -EINVAL;
+}
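+
+/*
+ * Layout summary, as used above: table 1 maps an ordinal to the address of
+ * a single u32 value (one 4-byte entry per ordinal); table 2 maps an
+ * ordinal to an 8-byte record, namely the address of a variable-length
+ * block in SRAM plus a length and a count describing it.
+ */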
+
+static int ipw2100_set_ordinal(struct ipw2100_priv *priv, u32 ord, u32 *val,
+ u32 *len)
+{
+ struct ipw2100_ordinals *ordinals = &priv->ordinals;
+ u32 addr;
+
+ if (IS_ORDINAL_TABLE_ONE(ordinals, ord)) {
+ if (*len != IPW_ORD_TAB_1_ENTRY_SIZE) {
+ *len = IPW_ORD_TAB_1_ENTRY_SIZE;
+ IPW_DEBUG_INFO("wrong size\n");
+ return -EINVAL;
+ }
+
+ read_nic_dword(priv->net_dev, ordinals->table1_addr + (ord << 2),
+ &addr);
+
+ write_nic_dword(priv->net_dev, addr, *val);
+
+ *len = IPW_ORD_TAB_1_ENTRY_SIZE;
+
+ return 0;
+ }
+
+	/* only table 1 ordinals are writable */
+	IPW_DEBUG_INFO("wrong table\n");
+	return -EINVAL;
+}
+
+static char *snprint_line(char *buf, size_t count,
+ const u8 *data, u32 len, u32 ofs)
+{
+ int out, i, j, l;
+ char c;
+
+ out = snprintf(buf, count, "%08X", ofs);
+
+ for (l = 0, i = 0; i < 2; i++) {
+ out += snprintf(buf + out, count - out, " ");
+ for (j = 0; j < 8 && l < len; j++, l++)
+ out += snprintf(buf + out, count - out, "%02X ",
+ data[(i * 8 + j)]);
+ for (; j < 8; j++)
+ out += snprintf(buf + out, count - out, " ");
+ }
+
+ out += snprintf(buf + out, count - out, " ");
+ for (l = 0, i = 0; i < 2; i++) {
+ out += snprintf(buf + out, count - out, " ");
+ for (j = 0; j < 8 && l < len; j++, l++) {
+ c = data[(i * 8 + j)];
+ if (!isascii(c) || !isprint(c))
+ c = '.';
+
+ out += snprintf(buf + out, count - out, "%c", c);
+ }
+
+ for (; j < 8; j++)
+ out += snprintf(buf + out, count - out, " ");
+ }
+
+ return buf;
+}
+
+static void printk_buf(int level, const u8 *data, u32 len)
+{
+ char line[81];
+ u32 ofs = 0;
+ if (!(ipw2100_debug_level & level))
+ return;
+
+ while (len) {
+ printk(KERN_DEBUG "%s\n",
+ snprint_line(line, sizeof(line), &data[ofs],
+ min(len, 16U), ofs));
+ ofs += 16;
+ len -= min(len, 16U);
+ }
+}
+
+
+
+#define MAX_RESET_BACKOFF 10
+
+static inline void schedule_reset(struct ipw2100_priv *priv)
+{
+ unsigned long now = get_seconds();
+
+ /* If we haven't received a reset request within the backoff period,
+ * then we can reset the backoff interval so this reset occurs
+ * immediately */
+ if (priv->reset_backoff &&
+ (now - priv->last_reset > priv->reset_backoff))
+ priv->reset_backoff = 0;
+
+ priv->last_reset = get_seconds();
+
+ if (!(priv->status & STATUS_RESET_PENDING)) {
+ IPW_DEBUG_INFO("%s: Scheduling firmware restart (%ds).\n",
+ priv->net_dev->name, priv->reset_backoff);
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+ priv->status |= STATUS_RESET_PENDING;
+ if (priv->reset_backoff)
+ queue_delayed_work(priv->workqueue, &priv->reset_work,
+ priv->reset_backoff * HZ);
+ else
+ queue_work(priv->workqueue, &priv->reset_work);
+
+ if (priv->reset_backoff < MAX_RESET_BACKOFF)
+ priv->reset_backoff++;
+
+ wake_up_interruptible(&priv->wait_command_queue);
+ } else
+ IPW_DEBUG_INFO("%s: Firmware restart already in progress.\n",
+ priv->net_dev->name);
+
+}
+
+#define HOST_COMPLETE_TIMEOUT (2 * HZ)
+static int ipw2100_hw_send_command(struct ipw2100_priv *priv,
+ struct host_command * cmd)
+{
+ struct list_head *element;
+ struct ipw2100_tx_packet *packet;
+ unsigned long flags;
+ int err = 0;
+
+ IPW_DEBUG_HC("Sending %s command (#%d), %d bytes\n",
+ command_types[cmd->host_command], cmd->host_command,
+ cmd->host_command_length);
+ printk_buf(IPW_DL_HC, (u8*)cmd->host_command_parameters,
+ cmd->host_command_length);
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+
+ if (priv->fatal_error) {
+		IPW_DEBUG_INFO("Attempt to send command while hardware is in a fatal error condition.\n");
+ err = -EIO;
+ goto fail_unlock;
+ }
+
+ if (!(priv->status & STATUS_RUNNING)) {
+ IPW_DEBUG_INFO("Attempt to send command while hardware is not running.\n");
+ err = -EIO;
+ goto fail_unlock;
+ }
+
+ if (priv->status & STATUS_CMD_ACTIVE) {
+		IPW_DEBUG_INFO("Attempt to send command while another command is pending.\n");
+ err = -EBUSY;
+ goto fail_unlock;
+ }
+
+	if (list_empty(&priv->msg_free_list)) {
+		IPW_DEBUG_INFO("no available msg buffers\n");
+		err = -ENOMEM;	/* don't report success if nothing was queued */
+		goto fail_unlock;
+	}
+
+ priv->status |= STATUS_CMD_ACTIVE;
+ priv->messages_sent++;
+
+ element = priv->msg_free_list.next;
+
+ packet = list_entry(element, struct ipw2100_tx_packet, list);
+ packet->jiffy_start = jiffies;
+
+ /* initialize the firmware command packet */
+ packet->info.c_struct.cmd->host_command_reg = cmd->host_command;
+ packet->info.c_struct.cmd->host_command_reg1 = cmd->host_command1;
+ packet->info.c_struct.cmd->host_command_len_reg = cmd->host_command_length;
+ packet->info.c_struct.cmd->sequence = cmd->host_command_sequence;
+
+ memcpy(packet->info.c_struct.cmd->host_command_params_reg,
+ cmd->host_command_parameters,
+ sizeof(packet->info.c_struct.cmd->host_command_params_reg));
+
+ list_del(element);
+ DEC_STAT(&priv->msg_free_stat);
+
+ list_add_tail(element, &priv->msg_pend_list);
+ INC_STAT(&priv->msg_pend_stat);
+
+ X__ipw2100_tx_send_commands(priv);
+ X__ipw2100_tx_send_data(priv);
+
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ /*
+ * We must wait for this command to complete before another
+	 * command can be sent... but if we wait more than
+	 * HOST_COMPLETE_TIMEOUT (two seconds here), then there is a problem.
+ */
+
+ err = wait_event_interruptible_timeout(
+ priv->wait_command_queue, !(priv->status & STATUS_CMD_ACTIVE),
+ HOST_COMPLETE_TIMEOUT);
+
+ if (err == 0) {
+		IPW_DEBUG_INFO("Command completion timed out after %dms.\n",
+			       HOST_COMPLETE_TIMEOUT * 1000 / HZ);
+ priv->fatal_error = IPW2100_ERR_MSG_TIMEOUT;
+ priv->status &= ~STATUS_CMD_ACTIVE;
+ schedule_reset(priv);
+ return -EIO;
+ }
+
+ if (priv->fatal_error) {
+ IPW_DEBUG_WARNING("%s: firmware fatal error\n",
+ priv->net_dev->name);
+ return -EIO;
+ }
+
+ /* !!!!! HACK TEST !!!!!
+ * When lots of debug trace statements are enabled, the driver
+ * doesn't seem to have as many firmware restart cycles...
+ *
+ * As a test, we're sticking in a 1/100s delay here */
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HZ / 100);
+
+ return 0;
+
+ fail_unlock:
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ return err;
+}
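+
+/*
+ * Typical usage (a sketch; the parameter layout varies per command and the
+ * command names come from the command_types[] table above): callers fill a
+ * struct host_command and hand it to ipw2100_hw_send_command(), which
+ * queues it on msg_pend_list and blocks until HOST_COMPLETE fires:
+ *
+ *	struct host_command cmd = {
+ *		.host_command = CHANNEL,
+ *		.host_command_length = sizeof(u32),
+ *	};
+ *
+ *	cmd.host_command_parameters[0] = channel;
+ *	err = ipw2100_hw_send_command(priv, &cmd);
+ */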
+
+
+/*
+ * Verify the values and data access of the hardware
+ * No locks needed or used. No functions called.
+ */
+static int ipw2100_verify(struct ipw2100_priv *priv)
+{
+ u32 data1, data2;
+ u32 address;
+
+ u32 val1 = 0x76543210;
+ u32 val2 = 0xFEDCBA98;
+
+ /* Domain 0 check - all values should be DOA_DEBUG */
+ for (address = IPW_REG_DOA_DEBUG_AREA_START;
+ address < IPW_REG_DOA_DEBUG_AREA_END;
+ address += sizeof(u32)) {
+ read_register(priv->net_dev, address, &data1);
+ if (data1 != IPW_DATA_DOA_DEBUG_VALUE)
+ return -EIO;
+ }
+
+ /* Domain 1 check - use arbitrary read/write compare */
+ for (address = 0; address < 5; address++) {
+ /* The memory area is not used now */
+ write_register(priv->net_dev, IPW_REG_DOMAIN_1_OFFSET + 0x32,
+ val1);
+ write_register(priv->net_dev, IPW_REG_DOMAIN_1_OFFSET + 0x36,
+ val2);
+ read_register(priv->net_dev, IPW_REG_DOMAIN_1_OFFSET + 0x32,
+ &data1);
+ read_register(priv->net_dev, IPW_REG_DOMAIN_1_OFFSET + 0x36,
+ &data2);
+ if (val1 == data1 && val2 == data2)
+ return 0;
+ }
+
+ return -EIO;
+}
+
+/*
+ *
+ * Loop until the CARD_DISABLED bit is the same value as the
+ * supplied parameter
+ *
+ * TODO: See if it would be more efficient to do a wait/wake
+ * cycle and have the completion event trigger the wakeup
+ *
+ */
+#define IPW_CARD_DISABLE_COMPLETE_WAIT		100	/* milliseconds */
+static int ipw2100_wait_for_card_state(struct ipw2100_priv *priv, int state)
+{
+ int i;
+ u32 card_state;
+ u32 len = sizeof(card_state);
+ int err;
+
+ for (i = 0; i <= IPW_CARD_DISABLE_COMPLETE_WAIT * 1000; i += 50) {
+ err = ipw2100_get_ordinal(priv, IPW_ORD_CARD_DISABLED,
+ &card_state, &len);
+ if (err) {
+ IPW_DEBUG_INFO("Query of CARD_DISABLED ordinal "
+ "failed.\n");
+ return 0;
+ }
+
+ /* We'll break out if either the HW state says it is
+ * in the state we want, or if HOST_COMPLETE command
+ * finishes */
+ if ((card_state == state) ||
+ ((priv->status & STATUS_ENABLED) ?
+ IPW_HW_STATE_ENABLED : IPW_HW_STATE_DISABLED) == state) {
+ if (state == IPW_HW_STATE_ENABLED)
+ priv->status |= STATUS_ENABLED;
+ else
+ priv->status &= ~STATUS_ENABLED;
+
+ return 0;
+ }
+
+ udelay(50);
+ }
+
+ IPW_DEBUG_INFO("ipw2100_wait_for_card_state to %s state timed out\n",
+ state ? "DISABLED" : "ENABLED");
+ return -EIO;
+}
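+
+/*
+ * The wait/wake cycle suggested in the TODO above would look roughly like
+ * this sketch (assuming the interrupt path updates STATUS_ENABLED and wakes
+ * wait_command_queue when the card changes state):
+ *
+ *	err = wait_event_interruptible_timeout(priv->wait_command_queue,
+ *			!!(priv->status & STATUS_ENABLED) ==
+ *			(state == IPW_HW_STATE_ENABLED),
+ *			IPW_CARD_DISABLE_COMPLETE_WAIT * HZ / 1000);
+ */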
+
+
+/*********************************************************************
+ Procedure : sw_reset_and_clock
+ Purpose : Asserts s/w reset, asserts clock initialization
+ and waits for clock stabilization
+ ********************************************************************/
+static int sw_reset_and_clock(struct ipw2100_priv *priv)
+{
+ int i;
+ u32 r;
+
+ // assert s/w reset
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_SW_RESET);
+
+ // wait for clock stabilization
+ for (i = 0; i < 1000; i++) {
+ udelay(IPW_WAIT_RESET_ARC_COMPLETE_DELAY);
+
+ // check clock ready bit
+ read_register(priv->net_dev, IPW_REG_RESET_REG, &r);
+ if (r & IPW_AUX_HOST_RESET_REG_PRINCETON_RESET)
+ break;
+ }
+
+ if (i == 1000)
+ return -EIO; // TODO: better error value
+
+ /* set "initialization complete" bit to move adapter to
+ * D0 state */
+ write_register(priv->net_dev, IPW_REG_GP_CNTRL,
+ IPW_AUX_HOST_GP_CNTRL_BIT_INIT_DONE);
+
+ /* wait for clock stabilization */
+ for (i = 0; i < 10000; i++) {
+ udelay(IPW_WAIT_CLOCK_STABILIZATION_DELAY * 4);
+
+ /* check clock ready bit */
+ read_register(priv->net_dev, IPW_REG_GP_CNTRL, &r);
+ if (r & IPW_AUX_HOST_GP_CNTRL_BIT_CLOCK_READY)
+ break;
+ }
+
+ if (i == 10000)
+ return -EIO; /* TODO: better error value */
+
+//#if CONFIG_IPW2100_D0ENABLED
+ /* set D0 standby bit */
+ read_register(priv->net_dev, IPW_REG_GP_CNTRL, &r);
+ write_register(priv->net_dev, IPW_REG_GP_CNTRL,
+ r | IPW_AUX_HOST_GP_CNTRL_BIT_HOST_ALLOWS_STANDBY);
+//#endif
+
+ return 0;
+}
+
+/*********************************************************************
+ Procedure : ipw2100_download_firmware
+ Purpose : Initialize adapter after power on.
+ The sequence is:
+ 1. assert s/w reset first!
+ 2. awake clocks & wait for clock stabilization
+ 3. hold ARC (don't ask me why...)
+ 4. load Dino ucode and reset/clock init again
+ 5. zero-out shared mem
+ 6. download f/w
+ *******************************************************************/
+static int ipw2100_download_firmware(struct ipw2100_priv *priv)
+{
+ u32 address;
+ int err;
+
+#ifndef CONFIG_PM
+ /* Fetch the firmware and microcode */
+ struct ipw2100_fw ipw2100_firmware;
+#endif
+
+ if (priv->fatal_error) {
+ IPW_DEBUG_ERROR("%s: ipw2100_download_firmware called after "
+ "fatal error %d. Interface must be brought down.\n",
+ priv->net_dev->name, priv->fatal_error);
+ return -EINVAL;
+ }
+
+#ifdef CONFIG_PM
+ if (!ipw2100_firmware.version) {
+ err = ipw2100_get_firmware(priv, &ipw2100_firmware);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: ipw2100_get_firmware failed: %d\n",
+ priv->net_dev->name, err);
+ priv->fatal_error = IPW2100_ERR_FW_LOAD;
+ goto fail;
+ }
+ }
+#else
+ err = ipw2100_get_firmware(priv, &ipw2100_firmware);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: ipw2100_get_firmware failed: %d\n",
+ priv->net_dev->name, err);
+ priv->fatal_error = IPW2100_ERR_FW_LOAD;
+ goto fail;
+ }
+#endif
+ priv->firmware_version = ipw2100_firmware.version;
+
+ /* s/w reset and clock stabilization */
+ err = sw_reset_and_clock(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: sw_reset_and_clock failed: %d\n",
+ priv->net_dev->name, err);
+ goto fail;
+ }
+
+ err = ipw2100_verify(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: ipw2100_verify failed: %d\n",
+ priv->net_dev->name, err);
+ goto fail;
+ }
+
+ /* Hold ARC */
+ write_nic_dword(priv->net_dev,
+ IPW_INTERNAL_REGISTER_HALT_AND_RESET,
+ 0x80000000);
+
+ /* allow ARC to run */
+ write_register(priv->net_dev, IPW_REG_RESET_REG, 0);
+
+ /* load microcode */
+ err = ipw2100_ucode_download(priv, &ipw2100_firmware);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Error loading microcode: %d\n",
+ priv->net_dev->name, err);
+ goto fail;
+ }
+
+ /* release ARC */
+ write_nic_dword(priv->net_dev,
+ IPW_INTERNAL_REGISTER_HALT_AND_RESET,
+ 0x00000000);
+
+ /* s/w reset and clock stabilization (again!!!) */
+ err = sw_reset_and_clock(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: sw_reset_and_clock failed: %d\n",
+ priv->net_dev->name, err);
+ goto fail;
+ }
+
+ /* load f/w */
+ err = ipw2100_fw_download(priv, &ipw2100_firmware);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Error loading firmware: %d\n",
+ priv->net_dev->name, err);
+ goto fail;
+ }
+
+#ifndef CONFIG_PM
+	/*
+	 * When the .resume method of the driver is called, other parts of
+	 * the system, e.g. the IDE driver, could still be suspended; that
+	 * would prevent us from loading the firmware from disk. --YZ
+	 */
+
+ /* free any storage allocated for firmware image */
+ ipw2100_release_firmware(priv, &ipw2100_firmware);
+#endif
+
+ /* zero out Domain 1 area indirectly (Si requirement) */
+ for (address = IPW_HOST_FW_SHARED_AREA0;
+ address < IPW_HOST_FW_SHARED_AREA0_END; address += 4)
+ write_nic_dword(priv->net_dev, address, 0);
+ for (address = IPW_HOST_FW_SHARED_AREA1;
+ address < IPW_HOST_FW_SHARED_AREA1_END; address += 4)
+ write_nic_dword(priv->net_dev, address, 0);
+ for (address = IPW_HOST_FW_SHARED_AREA2;
+ address < IPW_HOST_FW_SHARED_AREA2_END; address += 4)
+ write_nic_dword(priv->net_dev, address, 0);
+ for (address = IPW_HOST_FW_SHARED_AREA3;
+ address < IPW_HOST_FW_SHARED_AREA3_END; address += 4)
+ write_nic_dword(priv->net_dev, address, 0);
+ for (address = IPW_HOST_FW_INTERRUPT_AREA;
+ address < IPW_HOST_FW_INTERRUPT_AREA_END; address += 4)
+ write_nic_dword(priv->net_dev, address, 0);
+
+ return 0;
+
+ fail:
+ ipw2100_release_firmware(priv, &ipw2100_firmware);
+ return err;
+}
+
+static inline void ipw2100_enable_interrupts(struct ipw2100_priv *priv)
+{
+ if (priv->status & STATUS_INT_ENABLED)
+ return;
+ priv->status |= STATUS_INT_ENABLED;
+ write_register(priv->net_dev, IPW_REG_INTA_MASK, IPW_INTERRUPT_MASK);
+}
+
+static inline void ipw2100_disable_interrupts(struct ipw2100_priv *priv)
+{
+ if (!(priv->status & STATUS_INT_ENABLED))
+ return;
+ priv->status &= ~STATUS_INT_ENABLED;
+ write_register(priv->net_dev, IPW_REG_INTA_MASK, 0x0);
+}
+
+
+static void ipw2100_initialize_ordinals(struct ipw2100_priv *priv)
+{
+ struct ipw2100_ordinals *ord = &priv->ordinals;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_ORDINALS_TABLE_1,
+ &ord->table1_addr);
+
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_ORDINALS_TABLE_2,
+ &ord->table2_addr);
+
+ read_nic_dword(priv->net_dev, ord->table1_addr, &ord->table1_size);
+ read_nic_dword(priv->net_dev, ord->table2_addr, &ord->table2_size);
+
+ ord->table2_size &= 0x0000FFFF;
+
+ IPW_DEBUG_INFO("table 1 size: %d\n", ord->table1_size);
+ IPW_DEBUG_INFO("table 2 size: %d\n", ord->table2_size);
+ IPW_DEBUG_INFO("exit\n");
+}
+
+static inline void ipw2100_hw_set_gpio(struct ipw2100_priv *priv)
+{
+ u32 reg = 0;
+ /*
+ * Set GPIO 3 writable by FW; GPIO 1 writable
+ * by driver and enable clock
+ */
+ reg = (IPW_BIT_GPIO_GPIO3_MASK | IPW_BIT_GPIO_GPIO1_ENABLE |
+ IPW_BIT_GPIO_LED_OFF);
+ write_register(priv->net_dev, IPW_REG_GPIO, reg);
+}
+
+static inline int rf_kill_active(struct ipw2100_priv *priv)
+{
+#define MAX_RF_KILL_CHECKS 5
+#define RF_KILL_CHECK_DELAY 40
+#define RF_KILL_CHECK_THRESHOLD 3
+
+ unsigned short value = 0;
+ u32 reg = 0;
+ int i;
+
+ if (!(priv->hw_features & HW_FEATURE_RFKILL)) {
+ priv->status &= ~STATUS_RF_KILL_HW;
+ return 0;
+ }
+
+ for (i = 0; i < MAX_RF_KILL_CHECKS; i++) {
+ udelay(RF_KILL_CHECK_DELAY);
+ read_register(priv->net_dev, IPW_REG_GPIO, &reg);
+ value = (value << 1) | ((reg & IPW_BIT_GPIO_RF_KILL) ? 0 : 1);
+ }
+
+ if (value == 0)
+ priv->status |= STATUS_RF_KILL_HW;
+ else
+ priv->status &= ~STATUS_RF_KILL_HW;
+
+ return (value == 0);
+}
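+
+/*
+ * Note on the sampling above (a reading of the code, not Intel
+ * documentation): the GPIO register is sampled MAX_RF_KILL_CHECKS (5)
+ * times, RF_KILL_CHECK_DELAY (40) usec apart, shifting one bit into
+ * 'value' per sample.  The switch is only reported active when all
+ * five samples agree (value == 0), which debounces the input.
+ * RF_KILL_CHECK_THRESHOLD is defined but currently unused.
+ */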
+
+static int ipw2100_get_hw_features(struct ipw2100_priv *priv)
+{
+ u32 addr, len;
+ u32 val;
+
+ /*
+ * EEPROM_SRAM_DB_START_ADDRESS using ordinal in ordinal table 1
+ */
+ len = sizeof(addr);
+ if (ipw2100_get_ordinal(
+ priv, IPW_ORD_EEPROM_SRAM_DB_BLOCK_START_ADDRESS,
+ &addr, &len)) {
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+ return -EIO;
+ }
+
+ IPW_DEBUG_INFO("EEPROM address: %08X\n", addr);
+
+ /*
+ * EEPROM version is the byte at offset 0xfd in firmware
+ * We read 4 bytes, then shift out the byte we actually want */
+ read_nic_dword(priv->net_dev, addr + 0xFC, &val);
+ priv->eeprom_version = (val >> 24) & 0xFF;
+ IPW_DEBUG_INFO("EEPROM version: %d\n", priv->eeprom_version);
+
+ /*
+ * HW RF Kill enable is bit 0 in byte at offset 0x21 in firmware
+ *
+ * notice that the EEPROM bit is reverse polarity, i.e.
+ * bit = 0 signifies HW RF kill switch is supported
+ * bit = 1 signifies HW RF kill switch is NOT supported
+ */
+ read_nic_dword(priv->net_dev, addr + 0x20, &val);
+ if (!((val >> 24) & 0x01))
+ priv->hw_features |= HW_FEATURE_RFKILL;
+
+ IPW_DEBUG_INFO("HW RF Kill: %ssupported.\n",
+ (priv->hw_features & HW_FEATURE_RFKILL) ?
+ "" : "not ");
+
+ return 0;
+}
+
+/*
+ * Start firmware execution after power on and initialization
+ * The sequence is:
+ * 1. Release ARC
+ * 2. Wait for f/w initialization to complete
+ */
+static int ipw2100_start_adapter(struct ipw2100_priv *priv)
+{
+#define IPW_WAIT_FW_INIT_COMPLETE_DELAY (40 * HZ / 1000)
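+	/* 40 * HZ / 1000 is 40 milliseconds expressed in jiffies */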
+ int i;
+ u32 inta, inta_mask, gpio;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ if (priv->status & STATUS_RUNNING)
+ return 0;
+
+ /*
+ * Initialize the hw - drive the adapter to the D0 state by setting
+ * the init_done bit, wait for the clk_ready bit, then download the
+ * f/w and Dino ucode
+ */
+ if (ipw2100_download_firmware(priv)) {
+ IPW_DEBUG_ERROR("%s: Failed to power on the adapter.\n",
+ priv->net_dev->name);
+ return -EIO;
+ }
+
+ /* Clear the Tx, Rx and Msg queues and the r/w indexes
+ * in the firmware RBD and TBD ring queue */
+ ipw2100_queues_initialize(priv);
+
+ ipw2100_hw_set_gpio(priv);
+
+ /* TODO -- Look at disabling interrupts here to make sure none
+ * get fired during FW initialization */
+
+ /* Release ARC - clear reset bit */
+ write_register(priv->net_dev, IPW_REG_RESET_REG, 0);
+
+ /* wait for f/w initialization to complete */
+ IPW_DEBUG_FW("Waiting for f/w initialization to complete...\n");
+ i = 5000;
+ do {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(IPW_WAIT_FW_INIT_COMPLETE_DELAY);
+ /* Todo... wait for sync command ... */
+
+ read_register(priv->net_dev, IPW_REG_INTA, &inta);
+
+ /* check "init done" bit */
+ if (inta & IPW2100_INTA_FW_INIT_DONE) {
+ /* reset "init done" bit */
+ write_register(priv->net_dev, IPW_REG_INTA,
+ IPW2100_INTA_FW_INIT_DONE);
+ break;
+ }
+
+ /* check error conditions : we check these after the firmware
+ * check so that if there is an error, the interrupt handler
+ * will see it and the adapter will be reset */
+ if (inta &
+ (IPW2100_INTA_FATAL_ERROR | IPW2100_INTA_PARITY_ERROR)) {
+ /* clear error conditions */
+ write_register(priv->net_dev, IPW_REG_INTA,
+ IPW2100_INTA_FATAL_ERROR |
+ IPW2100_INTA_PARITY_ERROR);
+ }
+ } while (i--);
+
+ /* Clear out any pending INTAs since we aren't supposed to have
+ * interrupts enabled at this point... */
+ read_register(priv->net_dev, IPW_REG_INTA, &inta);
+ read_register(priv->net_dev, IPW_REG_INTA_MASK, &inta_mask);
+ inta &= IPW_INTERRUPT_MASK;
+ /* Clear out any pending interrupts */
+ if (inta & inta_mask)
+ write_register(priv->net_dev, IPW_REG_INTA, inta);
+
+ IPW_DEBUG_FW("f/w initialization complete: %s\n",
+ i ? "SUCCESS" : "FAILED");
+
+ if (!i) {
+ IPW_DEBUG_WARNING("%s: Firmware did not initialize.\n",
+ priv->net_dev->name);
+ return -EIO;
+ }
+
+ /* allow firmware to write to GPIO1 & GPIO3 */
+ read_register(priv->net_dev, IPW_REG_GPIO, &gpio);
+
+ gpio |= (IPW_BIT_GPIO_GPIO1_MASK | IPW_BIT_GPIO_GPIO3_MASK);
+
+ write_register(priv->net_dev, IPW_REG_GPIO, gpio);
+
+ /* Ready to receive commands */
+ priv->status |= STATUS_RUNNING;
+
+ /* The adapter has been reset; we are not associated */
+ priv->status &= ~STATUS_ASSOCIATED;
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+static inline void ipw2100_reset_fatalerror(struct ipw2100_priv *priv)
+{
+ if (!priv->fatal_error)
+ return;
+
+ priv->fatal_errors[priv->fatal_index++] = priv->fatal_error;
+ priv->fatal_index %= IPW2100_ERROR_QUEUE;
+ priv->fatal_error = 0;
+}
+
+
+/* NOTE: Our interrupt is disabled when this method is called */
+static int ipw2100_power_cycle_adapter(struct ipw2100_priv *priv)
+{
+ u32 reg;
+ int i;
+
+ IPW_DEBUG_INFO("Power cycling the hardware.\n");
+
+ ipw2100_hw_set_gpio(priv);
+
+ /* Step 1. Stop Master Assert */
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_STOP_MASTER);
+
+ /* Step 2. Wait for stop Master Assert
+ * (not more than 50 usec, otherwise return an error) */
+ i = 5;
+ do {
+ udelay(IPW_WAIT_RESET_MASTER_ASSERT_COMPLETE_DELAY);
+ read_register(priv->net_dev, IPW_REG_RESET_REG, &reg);
+
+ if (reg & IPW_AUX_HOST_RESET_REG_MASTER_DISABLED)
+ break;
+ } while(i--);
+
+ priv->status &= ~STATUS_RESET_PENDING;
+
+ if (!i) {
+ IPW_DEBUG_INFO("exit - waited too long for master assert stop\n");
+ return -EIO;
+ }
+
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_SW_RESET);
+
+
+ /* Reset any fatal_error conditions */
+ ipw2100_reset_fatalerror(priv);
+
+ /* At this point, the adapter is now stopped and disabled */
+ priv->status &= ~(STATUS_RUNNING | STATUS_ASSOCIATED | STATUS_ENABLED);
+
+ return 0;
+}
+
+/*
+ * Send the CARD_DISABLE_PHY_OFF command to the card to disable it
+ *
+ * After disabling, if the card was associated, a STATUS_ASSN_LOST will be sent.
+ *
+ * STATUS_CARD_DISABLE_NOTIFICATION will be sent regardless of
+ * whether STATUS_ASSN_LOST is sent.
+ */
+static int ipw2100_hw_phy_off(struct ipw2100_priv *priv)
+{
+
+#define HW_PHY_OFF_LOOP_DELAY (HZ / 5000)
+
+ struct host_command cmd = {
+ .host_command = CARD_DISABLE_PHY_OFF,
+ .host_command_sequence = 0,
+ .host_command_length = 0,
+ };
+ int err, i;
+ u32 val1, val2;
+
+ IPW_DEBUG_HC("CARD_DISABLE_PHY_OFF\n");
+
+ /* Turn off the radio */
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+ for (i = 0; i < 2500; i++) {
+ read_nic_dword(priv->net_dev, IPW2100_CONTROL_REG, &val1);
+ read_nic_dword(priv->net_dev, IPW2100_COMMAND, &val2);
+
+ if ((val1 & IPW2100_CONTROL_PHY_OFF) &&
+ (val2 & IPW2100_COMMAND_PHY_OFF))
+ return 0;
+
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HW_PHY_OFF_LOOP_DELAY);
+ }
+
+ return -EIO;
+}
+
+
+static int ipw2100_enable_adapter(struct ipw2100_priv *priv)
+{
+ struct host_command cmd = {
+ .host_command = HOST_COMPLETE,
+ .host_command_sequence = 0,
+ .host_command_length = 0
+ };
+ int err;
+
+ IPW_DEBUG_HC("HOST_COMPLETE\n");
+
+ if (priv->status & STATUS_ENABLED)
+ return 0;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err) {
+ IPW_DEBUG_INFO("Failed to send HOST_COMPLETE command\n");
+ return err;
+ }
+
+ err = ipw2100_wait_for_card_state(priv, IPW_HW_STATE_ENABLED);
+ if (err) {
+ IPW_DEBUG_INFO(
+ "%s: card not responding to init command.\n",
+ priv->net_dev->name);
+ return err;
+ }
+
+ if (priv->stop_hang_check) {
+ priv->stop_hang_check = 0;
+ queue_delayed_work(priv->workqueue, &priv->hang_check, 2 * HZ);
+ }
+
+ return 0;
+}
+
+static int ipw2100_hw_stop_adapter(struct ipw2100_priv *priv)
+{
+#define HW_POWER_DOWN_DELAY (HZ / 10)
+
+ struct host_command cmd = {
+ .host_command = HOST_PRE_POWER_DOWN,
+ .host_command_sequence = 0,
+ .host_command_length = 0,
+ };
+ int err, i;
+ u32 reg;
+
+ if (!(priv->status & STATUS_RUNNING))
+ return 0;
+
+ priv->status |= STATUS_STOPPING;
+
+ /* We can only shut down the card if the firmware is operational. So,
+ * if we haven't reset since a fatal_error, then we can not send the
+ * shutdown commands. */
+ if (!priv->fatal_error) {
+ /* First, make sure the adapter is enabled so that the PHY_OFF
+ * command can shut it down */
+ ipw2100_enable_adapter(priv);
+
+ err = ipw2100_hw_phy_off(priv);
+ if (err)
+ IPW_DEBUG_WARNING("Error disabling radio %d\n", err);
+
+ /*
+ * If in D0-standby mode going directly to D3 may cause a
+ * PCI bus violation. Therefore we must change out of the D0
+ * state.
+ *
+ * Sending the PREPARE_FOR_POWER_DOWN will restrict the
+ * hardware from going into standby mode and will transition
+ * out of D0-standby if it is already in that state.
+ *
+ * STATUS_PREPARE_POWER_DOWN_COMPLETE will be sent by the
+ * driver upon completion. Once received, the driver can
+ * proceed to the D3 state.
+ *
+ * Prepare for power down command to fw. This command would
+ * take HW out of D0-standby and prepare it for D3 state.
+ *
+ * Currently FW does not support event notification for this
+ * event. Therefore, skip waiting for it. Just wait a fixed
+ * 100ms
+ */
+ IPW_DEBUG_HC("HOST_PRE_POWER_DOWN\n");
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ IPW_DEBUG_WARNING(
+ "%s: Power down command failed: Error %d\n",
+ priv->net_dev->name, err);
+ else {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ schedule_timeout(HW_POWER_DOWN_DELAY);
+ }
+ }
+
+ priv->status &= ~STATUS_ENABLED;
+
+ /*
+ * Set GPIO 3 writable by FW; GPIO 1 writable
+ * by driver and enable clock
+ */
+ ipw2100_hw_set_gpio(priv);
+
+ /*
+ * Power down adapter. Sequence:
+ * 1. Stop master assert (RESET_REG[9]=1)
+ * 2. Wait for stop master (RESET_REG[8]==1)
+ * 3. S/w reset assert (RESET_REG[7] = 1)
+ */
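+
+	/*
+	 * The mapping of the IPW_AUX_HOST_RESET_REG_* constants onto the
+	 * RESET_REG bits numbered above is inferred from this sequence
+	 * (not from a datasheet):
+	 *   IPW_AUX_HOST_RESET_REG_STOP_MASTER     -- bit 9
+	 *   IPW_AUX_HOST_RESET_REG_MASTER_DISABLED -- bit 8
+	 *   IPW_AUX_HOST_RESET_REG_SW_RESET        -- bit 7
+	 */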
+
+ /* Stop master assert */
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_STOP_MASTER);
+
+ /* wait for the master to stop, not more than 50 usec;
+ * otherwise report an error. */
+ for (i = 5; i > 0; i--) {
+ udelay(10);
+
+ /* Check master stop bit */
+ read_register(priv->net_dev, IPW_REG_RESET_REG, &reg);
+
+ if (reg & IPW_AUX_HOST_RESET_REG_MASTER_DISABLED)
+ break;
+ }
+
+ if (i == 0)
+ IPW_DEBUG_WARNING(DRV_NAME
+ ": %s: Could now power down adapter.\n",
+ priv->net_dev->name);
+
+ /* assert s/w reset */
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_SW_RESET);
+
+ priv->status &= ~(STATUS_RUNNING | STATUS_STOPPING);
+
+ return 0;
+}
+
+
+static int ipw2100_disable_adapter(struct ipw2100_priv *priv)
+{
+ struct host_command cmd = {
+ .host_command = CARD_DISABLE,
+ .host_command_sequence = 0,
+ .host_command_length = 0
+ };
+ int err;
+
+ IPW_DEBUG_HC("CARD_DISABLE\n");
+
+ if (!(priv->status & STATUS_ENABLED))
+ return 0;
+
+ /* Make sure we clear the associated state */
+ priv->status &= ~STATUS_ASSOCIATED;
+
+ if (!priv->stop_hang_check) {
+ priv->stop_hang_check = 1;
+ cancel_delayed_work(&priv->hang_check);
+ }
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err) {
+ IPW_DEBUG_WARNING("exit - failed to send CARD_DISABLE command\n");
+ return err;
+ }
+
+ err = ipw2100_wait_for_card_state(priv, IPW_HW_STATE_DISABLED);
+ if (err) {
+ IPW_DEBUG_WARNING("exit - card failed to change to DISABLED\n");
+ return err;
+ }
+
+ IPW_DEBUG_INFO("TODO: implement scan state machine\n");
+
+
+ return 0;
+}
+
+int ipw2100_set_scan_options(struct ipw2100_priv *priv)
+{
+ struct host_command cmd = {
+ .host_command = SET_SCAN_OPTIONS,
+ .host_command_sequence = 0,
+ .host_command_length = 8
+ };
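+	/* host_command_length == 8: two u32 parameters follow -- the
+	 * scan options word and the channel mask set below. */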
+ int err;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ IPW_DEBUG_SCAN("setting scan options\n");
+
+ cmd.host_command_parameters[0] = 0;
+
+ if (!(priv->config & CFG_ASSOCIATE))
+ cmd.host_command_parameters[0] |= IPW_SCAN_NOASSOCIATE;
+ if ((priv->sec.flags & SEC_ENABLED) && priv->sec.enabled)
+ cmd.host_command_parameters[0] |= IPW_SCAN_MIXED_CELL;
+ if (priv->config & CFG_PASSIVE_SCAN)
+ cmd.host_command_parameters[0] |= IPW_SCAN_PASSIVE;
+
+ cmd.host_command_parameters[1] = priv->channel_mask;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ IPW_DEBUG_HC("SET_SCAN_OPTIONS 0x%04X\n",
+ cmd.host_command_parameters[0]);
+
+ return err;
+}
+
+int ipw2100_start_scan(struct ipw2100_priv *priv)
+{
+ struct host_command cmd = {
+ .host_command = BROADCAST_SCAN,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ IPW_DEBUG_HC("START_SCAN\n");
+
+ cmd.host_command_parameters[0] = 0;
+
+ /* No scanning if in monitor mode */
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR)
+ return 1;
+
+ if (priv->status & STATUS_SCANNING) {
+ IPW_DEBUG_SCAN("Scan requested while already in scan...\n");
+ return 0;
+ }
+
+ IPW_DEBUG_INFO("enter\n");
+
+ /* Not clearing here; doing so makes iwlist always return nothing...
+ *
+ * We should modify the table logic to use aging tables vs. clearing
+ * the table on each scan start.
+ */
+ IPW_DEBUG_SCAN("starting scan\n");
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ priv->status |= STATUS_SCANNING;
+ IPW_DEBUG_INFO("exit\n");
+
+ return err;
+}
+
+static int ipw2100_up(struct ipw2100_priv *priv, int deferred)
+{
+ unsigned long flags;
+ int rc = 0;
+ u32 lock;
+ u32 ord_len = sizeof(lock);
+
+ /* Quit if manually disabled. */
+ if (priv->status & STATUS_RF_KILL_SW) {
+ IPW_DEBUG_INFO("%s: Radio is disabled by Manual Disable "
+ "switch\n", priv->net_dev->name);
+ return 0;
+ }
+
+ /* If the interrupt is enabled, turn it off... */
+ spin_lock_irqsave(&priv->low_lock, flags);
+ ipw2100_disable_interrupts(priv);
+
+ /* Reset any fatal_error conditions */
+ ipw2100_reset_fatalerror(priv);
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ if (priv->status & STATUS_POWERED ||
+ (priv->status & STATUS_RESET_PENDING)) {
+ /* Power cycle the card ... */
+ if (ipw2100_power_cycle_adapter(priv)) {
+ IPW_DEBUG_WARNING("%s: Could not cycle adapter.\n",
+ priv->net_dev->name);
+ rc = 1;
+ goto exit;
+ }
+ } else
+ priv->status |= STATUS_POWERED;
+
+ /* Load the firmware, start the clocks, etc. */
+ if (ipw2100_start_adapter(priv)) {
+ IPW_DEBUG_ERROR("%s: Failed to start the firmware.\n",
+ priv->net_dev->name);
+ rc = 1;
+ goto exit;
+ }
+
+ ipw2100_initialize_ordinals(priv);
+
+ /* Determine capabilities of this particular HW configuration */
+ if (ipw2100_get_hw_features(priv)) {
+ IPW_DEBUG_ERROR("%s: Failed to determine HW features.\n",
+ priv->net_dev->name);
+ rc = 1;
+ goto exit;
+ }
+
+ lock = LOCK_NONE;
+ if (ipw2100_set_ordinal(priv, IPW_ORD_PERS_DB_LOCK, &lock, &ord_len)) {
+ IPW_DEBUG_ERROR("%s: Failed to clear ordinal lock.\n",
+ priv->net_dev->name);
+ rc = 1;
+ goto exit;
+ }
+
+ priv->status &= ~STATUS_SCANNING;
+
+ if (rf_kill_active(priv)) {
+ printk(KERN_INFO "%s: Radio is disabled by RF switch.\n",
+ priv->net_dev->name);
+
+ if (priv->stop_rf_kill) {
+ priv->stop_rf_kill = 0;
+ queue_delayed_work(priv->workqueue, &priv->rf_kill, HZ);
+ }
+
+ deferred = 1;
+ }
+
+ /* Turn on the interrupt so that commands can be processed */
+ ipw2100_enable_interrupts(priv);
+
+ /* Send all of the commands that must be sent prior to
+ * HOST_COMPLETE */
+ if (ipw2100_adapter_setup(priv)) {
+ IPW_DEBUG_ERROR("%s: Failed to start the card.\n",
+ priv->net_dev->name);
+ rc = 1;
+ goto exit;
+ }
+
+ if (!deferred) {
+ /* Enable the adapter - sends HOST_COMPLETE */
+ if (ipw2100_enable_adapter(priv)) {
+ IPW_DEBUG_ERROR(
+ "%s: failed in call to enable adapter.\n",
+ priv->net_dev->name);
+ ipw2100_hw_stop_adapter(priv);
+ rc = 1;
+ goto exit;
+ }
+
+
+ /* Start a scan . . . */
+ ipw2100_set_scan_options(priv);
+ ipw2100_start_scan(priv);
+ }
+
+ exit:
+ return rc;
+}
+
+/* Called by register_netdev() */
+static int ipw2100_net_init(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ return ipw2100_up(priv, 1);
+}
+
+static void ipw2100_down(struct ipw2100_priv *priv)
+{
+ unsigned long flags;
+
+ /* Kill the RF switch timer */
+ if (!priv->stop_rf_kill) {
+ priv->stop_rf_kill = 1;
+ cancel_delayed_work(&priv->rf_kill);
+ }
+
+ /* Kill the firmware hang check timer */
+ if (!priv->stop_hang_check) {
+ priv->stop_hang_check = 1;
+ cancel_delayed_work(&priv->hang_check);
+ }
+
+ /* Kill any pending resets */
+ if (priv->status & STATUS_RESET_PENDING)
+ cancel_delayed_work(&priv->reset_work);
+
+ /* Make sure the interrupt is on so that FW commands will be
+ * processed correctly */
+ spin_lock_irqsave(&priv->low_lock, flags);
+ ipw2100_enable_interrupts(priv);
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ if (ipw2100_hw_stop_adapter(priv))
+ IPW_DEBUG_ERROR("%s: Error stopping adapter.\n",
+ priv->net_dev->name);
+
+ /* Do not disable the interrupt until _after_ we disable
+ * the adaptor. Otherwise the CARD_DISABLE command will never
+ * be ack'd by the firmware */
+ spin_lock_irqsave(&priv->low_lock, flags);
+ ipw2100_disable_interrupts(priv);
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+#ifdef ACPI_CSTATE_LIMIT_DEFINED
+ if (priv->config & CFG_C3_DISABLED) {
+ IPW_DEBUG_INFO(DRV_NAME ": Resetting C3 transitions.\n");
+ acpi_set_cstate_limit(priv->cstate_limit);
+ priv->config &= ~CFG_C3_DISABLED;
+ }
+#endif
+
+ priv->status &= ~STATUS_ASSOCIATED;
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+}
+
+void ipw2100_reset_adapter(struct ipw2100_priv *priv)
+{
+ unsigned long flags;
+ spin_lock_irqsave(&priv->low_lock, flags);
+ IPW_DEBUG_INFO(DRV_NAME ": %s: Restarting adapter.\n",
+ priv->net_dev->name);
+ priv->resets++;
+ priv->status &= ~STATUS_ASSOCIATED;
+ priv->status |= STATUS_SECURITY_UPDATED;
+
+ /* Force a power cycle even if interface hasn't been opened
+ * yet */
+ cancel_delayed_work(&priv->reset_work);
+ priv->status |= STATUS_RESET_PENDING;
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ /* stop timed checks so that they don't interfere with reset */
+ priv->stop_hang_check = 1;
+ cancel_delayed_work(&priv->hang_check);
+
+ ipw2100_up(priv, 0);
+
+}
+
+
+static void isr_indicate_associated(struct ipw2100_priv *priv, u32 status)
+{
+
+#define MAC_ASSOCIATION_READ_DELAY (HZ)
+ int ret, len, essid_len;
+ char essid[IW_ESSID_MAX_SIZE];
+ u32 txrate;
+ u32 chan;
+ char *txratename;
+ u8 bssid[ETH_ALEN];
+
+ /*
+ * TBD: BSSID is usually 00:00:00:00:00:00 here and not
+ * an actual MAC of the AP. Seems like FW sets this
+ * address too late. Read it later and expose through
+ * /proc or schedule a later task to query and update
+ */
+
+ essid_len = IW_ESSID_MAX_SIZE;
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_SSID,
+ essid, &essid_len);
+ if (ret) {
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+ return;
+ }
+
+ len = sizeof(u32);
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_CURRENT_TX_RATE,
+ &txrate, &len);
+ if (ret) {
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+ return;
+ }
+
+ len = sizeof(u32);
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_OUR_FREQ, &chan, &len);
+ if (ret) {
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+ return;
+ }
+ len = ETH_ALEN;
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_AP_BSSID, &bssid, &len);
+ if (ret) {
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+ return;
+ }
+ memcpy(priv->ieee->bssid, bssid, ETH_ALEN);
+
+
+ switch (txrate) {
+ case TX_RATE_1_MBIT:
+ txratename = "1Mbps";
+ break;
+ case TX_RATE_2_MBIT:
+ txratename = "2Mbsp";
+ break;
+ case TX_RATE_5_5_MBIT:
+ txratename = "5.5Mbps";
+ break;
+ case TX_RATE_11_MBIT:
+ txratename = "11Mbps";
+ break;
+ default:
+ IPW_DEBUG_INFO("Unknown rate: %d\n", txrate);
+ txratename = "unknown rate";
+ break;
+ }
+
+ IPW_DEBUG_INFO("%s: Associated with '%s' at %s, channel %d (BSSID="
+ MAC_FMT ")\n",
+ priv->net_dev->name, escape_essid(essid, essid_len),
+ txratename, chan, MAC_ARG(bssid));
+
+ /* now copy the SSID we read into the driver state */
+ if (!(priv->config & CFG_STATIC_ESSID)) {
+ priv->essid_len = min((u8)essid_len, (u8)IW_ESSID_MAX_SIZE);
+ memcpy(priv->essid, essid, priv->essid_len);
+ }
+ priv->channel = chan;
+ memcpy(priv->bssid, bssid, ETH_ALEN);
+
+ priv->status |= STATUS_ASSOCIATED;
+ priv->connect_start = get_seconds();
+
+ netif_carrier_on(priv->net_dev);
+ if (netif_queue_stopped(priv->net_dev)) {
+ IPW_DEBUG_INFO("Waking net queue.\n");
+ netif_wake_queue(priv->net_dev);
+ } else {
+ IPW_DEBUG_INFO("Starting net queue.\n");
+ netif_start_queue(priv->net_dev);
+ }
+
+ queue_delayed_work(priv->workqueue, &priv->wx_event_work, HZ / 10);
+}
+
+
+int ipw2100_set_essid(struct ipw2100_priv *priv, char *essid,
+ int length, int batch_mode)
+{
+ int ssid_len = min(length, IW_ESSID_MAX_SIZE);
+ struct host_command cmd = {
+ .host_command = SSID,
+ .host_command_sequence = 0,
+ .host_command_length = ssid_len
+ };
+ int err;
+
+ IPW_DEBUG_HC("SSID: '%s'\n", escape_essid(essid, ssid_len));
+
+ if (ssid_len)
+ memcpy((char*)cmd.host_command_parameters,
+ essid, ssid_len);
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ /* A FW bug means bit 0 in SET_SCAN_OPTIONS (disable auto
+ * association) is currently not honored -- so we cheat by setting
+ * a bogus SSID */
+ if (!ssid_len && !(priv->config & CFG_ASSOCIATE)) {
+ int i;
+ u8 *bogus = (u8*)cmd.host_command_parameters;
+ for (i = 0; i < IW_ESSID_MAX_SIZE; i++)
+ bogus[i] = 0x18 + i;
+ cmd.host_command_length = IW_ESSID_MAX_SIZE;
+ }
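+	/* Assuming IW_ESSID_MAX_SIZE == 32, the bogus SSID is the byte
+	 * pattern 0x18, 0x19, ..., 0x37 -- unlikely to match any real
+	 * network. */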
+
+ /* NOTE: We always send the SSID command even if the provided ESSID is
+ * the same as what we currently think is set. */
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (!err) {
+ memset(priv->essid + ssid_len, 0,
+ IW_ESSID_MAX_SIZE - ssid_len);
+ memcpy(priv->essid, essid, ssid_len);
+ priv->essid_len = ssid_len;
+ }
+
+ if (!batch_mode) {
+ if (ipw2100_enable_adapter(priv))
+ err = -EIO;
+ }
+
+ return err;
+}
+
+static void isr_indicate_association_lost(struct ipw2100_priv *priv, u32 status)
+{
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "disassociated: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+
+ if (priv->status & STATUS_STOPPING) {
+ IPW_DEBUG_INFO("Card is stopping itself, discard ASSN_LOST.\n");
+ return;
+ }
+ priv->status &= ~STATUS_ASSOCIATED;
+
+ memset(priv->bssid, 0, ETH_ALEN);
+ memset(priv->ieee->bssid, 0, ETH_ALEN);
+
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+
+ if (!(priv->status & STATUS_RUNNING))
+ return;
+
+ if (priv->status & STATUS_SECURITY_UPDATED)
+ queue_work(priv->workqueue, &priv->security_work);
+
+ queue_work(priv->workqueue, &priv->wx_event_work);
+}
+
+static void isr_indicate_rf_kill(struct ipw2100_priv *priv, u32 status)
+{
+ IPW_DEBUG_INFO("%s: RF Kill state changed to radio OFF.\n",
+ priv->net_dev->name);
+
+ /* RF_KILL is now enabled (else we wouldn't be here) */
+ priv->status |= STATUS_RF_KILL_HW;
+
+#ifdef ACPI_CSTATE_LIMIT_DEFINED
+ if (priv->config & CFG_C3_DISABLED) {
+ IPW_DEBUG_INFO(DRV_NAME ": Resetting C3 transitions.\n");
+ acpi_set_cstate_limit(priv->cstate_limit);
+ priv->config &= ~CFG_C3_DISABLED;
+ }
+#endif
+
+ /* If not already running, we now fire up a timer that will poll
+ * the state of the RF switch on the hardware so we can re-enable
+ * the firmware if the switch is enabled */
+ if (priv->stop_rf_kill) {
+ priv->stop_rf_kill = 0;
+ queue_delayed_work(priv->workqueue, &priv->rf_kill, HZ);
+ }
+}
+
+static void isr_scan_complete(struct ipw2100_priv *priv, u32 status)
+{
+ IPW_DEBUG_SCAN("scan complete\n");
+ /* Age the scan results... */
+ priv->ieee->scans++;
+ priv->status &= ~STATUS_SCANNING;
+}
+
+#ifdef CONFIG_IPW_DEBUG
+#define IPW2100_HANDLER(v, f) { v, f, # v }
+struct ipw2100_status_indicator {
+ int status;
+ void (*cb)(struct ipw2100_priv *priv, u32 status);
+ char *name;
+};
+#else
+#define IPW2100_HANDLER(v, f) { v, f }
+struct ipw2100_status_indicator {
+ int status;
+ void (*cb)(struct ipw2100_priv *priv, u32 status);
+};
+#endif /* CONFIG_IPW_DEBUG */
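+
+/*
+ * Example: with CONFIG_IPW_DEBUG set,
+ *
+ *   IPW2100_HANDLER(IPW_STATE_RF_KILL, isr_indicate_rf_kill)
+ *
+ * expands to
+ *
+ *   { IPW_STATE_RF_KILL, isr_indicate_rf_kill, "IPW_STATE_RF_KILL" }
+ *
+ * so the debug build gets the handler name for free via stringification.
+ */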
+
+static void isr_indicate_scanning(struct ipw2100_priv *priv, u32 status)
+{
+ priv->status |= STATUS_SCANNING;
+}
+
+const struct ipw2100_status_indicator status_handlers[] = {
+ IPW2100_HANDLER(IPW_STATE_INITIALIZED, 0),
+ IPW2100_HANDLER(IPW_STATE_COUNTRY_FOUND, 0),
+ IPW2100_HANDLER(IPW_STATE_ASSOCIATED, isr_indicate_associated),
+ IPW2100_HANDLER(IPW_STATE_ASSN_LOST, isr_indicate_association_lost),
+ IPW2100_HANDLER(IPW_STATE_ASSN_CHANGED, 0),
+ IPW2100_HANDLER(IPW_STATE_SCAN_COMPLETE, isr_scan_complete),
+ IPW2100_HANDLER(IPW_STATE_ENTERED_PSP, 0),
+ IPW2100_HANDLER(IPW_STATE_LEFT_PSP, 0),
+ IPW2100_HANDLER(IPW_STATE_RF_KILL, isr_indicate_rf_kill),
+ IPW2100_HANDLER(IPW_STATE_DISABLED, 0),
+ IPW2100_HANDLER(IPW_STATE_POWER_DOWN, 0),
+ IPW2100_HANDLER(IPW_STATE_SCANNING, isr_indicate_scanning),
+ IPW2100_HANDLER(-1, 0)
+};
+
+
+static void isr_status_change(struct ipw2100_priv *priv, int status)
+{
+ int i;
+
+ for (i = 0; status_handlers[i].status != -1; i++) {
+ if (status == status_handlers[i].status) {
+ IPW_DEBUG_NOTIF("Status change: %s\n",
+ status_handlers[i].name);
+ if (status_handlers[i].cb)
+ status_handlers[i].cb(priv, status);
+ priv->wstats.status = status;
+ return;
+ }
+ }
+
+ IPW_DEBUG_NOTIF("unknown status received: %04x\n", status);
+}
+
+static void isr_rx_complete_command(
+ struct ipw2100_priv *priv,
+ struct ipw2100_cmd_header *cmd)
+{
+#ifdef CONFIG_IPW_DEBUG
+ if (cmd->host_command_reg < ARRAY_SIZE(command_types)) {
+ IPW_DEBUG_HC("Command completed '%s (%d)'\n",
+ command_types[cmd->host_command_reg],
+ cmd->host_command_reg);
+ }
+#endif
+ if (cmd->host_command_reg == HOST_COMPLETE)
+ priv->status |= STATUS_ENABLED;
+
+ if (cmd->host_command_reg == CARD_DISABLE)
+ priv->status &= ~STATUS_ENABLED;
+
+ priv->status &= ~STATUS_CMD_ACTIVE;
+
+ wake_up_interruptible(&priv->wait_command_queue);
+}
+
+#ifdef CONFIG_IPW_DEBUG
+const char *frame_types[] = {
+ "COMMAND_STATUS_VAL",
+ "STATUS_CHANGE_VAL",
+ "P80211_DATA_VAL",
+ "P8023_DATA_VAL",
+ "HOST_NOTIFICATION_VAL"
+};
+#endif
+
+
+static inline int ipw2100_alloc_skb(
+ struct ipw2100_priv *priv,
+ struct ipw2100_rx_packet *packet)
+{
+ packet->skb = dev_alloc_skb(sizeof(struct ipw2100_rx));
+ if (!packet->skb)
+ return -ENOMEM;
+
+ packet->rxp = (struct ipw2100_rx *)packet->skb->data;
+ packet->dma_addr = pci_map_single(priv->pci_dev, packet->skb->data,
+ sizeof(struct ipw2100_rx),
+ PCI_DMA_FROMDEVICE);
+ /* NOTE: pci_map_single does not return an error code, and 0 is a valid
+ * dma_addr */
+
+ return 0;
+}
+
+
+#define SEARCH_ERROR 0xffffffff
+#define SEARCH_FAIL 0xfffffffe
+#define SEARCH_SUCCESS 0xfffffff0
+#define SEARCH_DISCARD 0
+#define SEARCH_SNAPSHOT 1
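+
+/* ipw2100_match_buf() returns the byte offset of a match, or SEARCH_FAIL.
+ * Real offsets are always below SEARCH_SUCCESS (0xfffffff0), so callers
+ * test 'match < SEARCH_SUCCESS' to detect a hit. */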
+
+#define SNAPSHOT_ADDR(ofs) (priv->snapshot[((ofs) >> 12) & 0xff] + ((ofs) & 0xfff))
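+/*
+ * Example: SNAPSHOT_ADDR(0x12345) resolves to priv->snapshot[0x12] + 0x345;
+ * the snapshot is kept as 0x30 separate 4KB pages covering the 0x30000
+ * bytes of device memory walked by ipw2100_match_buf().
+ */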
+static inline int ipw2100_snapshot_alloc(struct ipw2100_priv *priv)
+{
+ int i;
+ if (priv->snapshot[0])
+ return 1;
+ for (i = 0; i < 0x30; i++) {
+ priv->snapshot[i] = (u8*)kmalloc(0x1000, GFP_ATOMIC);
+ if (!priv->snapshot[i]) {
+ IPW_DEBUG_INFO("%s: Error allocating snapshot "
+ "buffer %d\n", priv->net_dev->name, i);
+ while (i > 0)
+ kfree(priv->snapshot[--i]);
+ priv->snapshot[0] = NULL;
+ return 0;
+ }
+ }
+
+ return 1;
+}
+
+static inline void ipw2100_snapshot_free(struct ipw2100_priv *priv)
+{
+ int i;
+ if (!priv->snapshot[0])
+ return;
+ for (i = 0; i < 0x30; i++)
+ kfree(priv->snapshot[i]);
+ priv->snapshot[0] = NULL;
+}
+
+static inline u32 ipw2100_match_buf(struct ipw2100_priv *priv, u8 *in_buf,
+ size_t len, int mode)
+{
+ u32 i, j;
+ u32 tmp;
+ u8 *s, *d;
+ u32 ret;
+
+ s = in_buf;
+ if (mode == SEARCH_SNAPSHOT) {
+ if (!ipw2100_snapshot_alloc(priv))
+ mode = SEARCH_DISCARD;
+ }
+
+ for (ret = SEARCH_FAIL, i = 0; i < 0x30000; i += 4) {
+ read_nic_dword(priv->net_dev, i, &tmp);
+ if (mode == SEARCH_SNAPSHOT)
+ *(u32 *)SNAPSHOT_ADDR(i) = tmp;
+ if (ret == SEARCH_FAIL) {
+ d = (u8*)&tmp;
+ for (j = 0; j < 4; j++) {
+ if (*s != *d) {
+ s = in_buf;
+ continue;
+ }
+
+ s++;
+ d++;
+
+ if ((s - in_buf) == len)
+ ret = (i + j) - len + 1;
+ }
+ } else if (mode == SEARCH_DISCARD)
+ return ret;
+ }
+
+ return ret;
+}
+
+/*
+ *
+ * 0) Disconnect the SKB from the firmware (just unmap)
+ * 1) Pack the ETH header into the SKB
+ * 2) Pass the SKB to the network stack
+ *
+ * When a packet is provided by the firmware, it contains the following:
+ *
+ * . ieee80211_hdr
+ * . ieee80211_snap_hdr
+ *
+ * The size of the constructed ethernet frame therefore differs from
+ * that of the buffer handed to us by the firmware.
+ *
+ */
+#ifdef CONFIG_IPW2100_RX_DEBUG
+u8 packet_data[IPW_RX_NIC_BUFFER_LENGTH];
+#endif
+
+static inline void ipw2100_corruption_detected(struct ipw2100_priv *priv,
+ int i)
+{
+#ifdef CONFIG_IPW_DEBUG_C3
+ struct ipw2100_status *status = &priv->status_queue.drv[i];
+ u32 match, reg;
+ int j;
+#endif
+#ifdef ACPI_CSTATE_LIMIT_DEFINED
+ int limit;
+#endif
+
+ IPW_DEBUG_INFO(DRV_NAME ": PCI latency error detected at "
+ "0x%04X.\n", i * sizeof(struct ipw2100_status));
+
+#ifdef ACPI_CSTATE_LIMIT_DEFINED
+ IPW_DEBUG_INFO(DRV_NAME ": Disabling C3 transitions.\n");
+ limit = acpi_get_cstate_limit();
+ if (limit > 2) {
+ priv->cstate_limit = limit;
+ acpi_set_cstate_limit(2);
+ priv->config |= CFG_C3_DISABLED;
+ }
+#endif
+
+#ifdef CONFIG_IPW_DEBUG_C3
+ /* Halt the firmware so we can get a good image */
+ write_register(priv->net_dev, IPW_REG_RESET_REG,
+ IPW_AUX_HOST_RESET_REG_STOP_MASTER);
+ j = 5;
+ do {
+ udelay(IPW_WAIT_RESET_MASTER_ASSERT_COMPLETE_DELAY);
+ read_register(priv->net_dev, IPW_REG_RESET_REG, &reg);
+
+ if (reg & IPW_AUX_HOST_RESET_REG_MASTER_DISABLED)
+ break;
+ } while (j--);
+
+ match = ipw2100_match_buf(priv, (u8*)status,
+ sizeof(struct ipw2100_status),
+ SEARCH_SNAPSHOT);
+ if (match < SEARCH_SUCCESS)
+ IPW_DEBUG_INFO("%s: DMA status match in Firmware at "
+ "offset 0x%06X, length %d:\n",
+ priv->net_dev->name, match,
+ sizeof(struct ipw2100_status));
+ else
+ IPW_DEBUG_INFO("%s: No DMA status match in "
+ "Firmware.\n", priv->net_dev->name);
+
+ printk_buf((u8*)priv->status_queue.drv,
+ sizeof(struct ipw2100_status) * RX_QUEUE_LENGTH);
+#endif
+
+ priv->fatal_error = IPW2100_ERR_C3_CORRUPTION;
+ priv->ieee->stats.rx_errors++;
+ schedule_reset(priv);
+}
+
+static inline void isr_rx(struct ipw2100_priv *priv, int i,
+ struct ieee80211_rx_stats *stats)
+{
+ struct ipw2100_status *status = &priv->status_queue.drv[i];
+ struct ipw2100_rx_packet *packet = &priv->rx_buffers[i];
+
+ IPW_DEBUG_RX("Handler...\n");
+
+ if (unlikely(status->frame_size > skb_tailroom(packet->skb))) {
+ IPW_DEBUG_INFO("%s: frame_size (%u) > skb_tailroom (%u)!"
+ " Dropping.\n",
+ priv->net_dev->name,
+ status->frame_size, skb_tailroom(packet->skb));
+ priv->ieee->stats.rx_errors++;
+ return;
+ }
+
+ if (unlikely(!netif_running(priv->net_dev))) {
+ priv->ieee->stats.rx_errors++;
+ priv->wstats.discard.misc++;
+ IPW_DEBUG_DROP("Dropping packet while interface is not up.\n");
+ }
+
+ if (unlikely(priv->ieee->iw_mode == IW_MODE_MONITOR &&
+ status->flags & IPW_STATUS_FLAG_CRC_ERROR)) {
+ IPW_DEBUG_RX("CRC error in packet. Dropping.\n");
+ priv->ieee->stats.rx_errors++;
+ return;
+ }
+
+ pci_unmap_single(priv->pci_dev,
+ packet->dma_addr,
+ sizeof(struct ipw2100_rx),
+ PCI_DMA_FROMDEVICE);
+
+ skb_put(packet->skb, status->frame_size);
+
+#ifdef CONFIG_IPW2100_RX_DEBUG
+ /* Make a copy of the frame so we can dump it to the logs if
+ * ieee80211_rx fails */
+ memcpy(packet_data, packet->skb->data,
+ min(status->frame_size, IPW_RX_NIC_BUFFER_LENGTH));
+#endif
+
+ if (!ieee80211_rx(priv->ieee, packet->skb, stats)) {
+#ifdef CONFIG_IPW2100_RX_DEBUG
+ IPW_DEBUG_DROP("%s: Non consumed packet:\n",
+ priv->net_dev->name);
+ printk_buf(IPW_DL_DROP, packet_data, status->frame_size);
+#endif
+ priv->ieee->stats.rx_errors++;
+
+ /* ieee80211_rx failed, so it didn't free the SKB */
+ dev_kfree_skb_any(packet->skb);
+ packet->skb = NULL;
+ }
+
+ /* We need to allocate a new SKB and attach it to the RDB. */
+ if (unlikely(ipw2100_alloc_skb(priv, packet))) {
+ IPW_DEBUG_WARNING(
+ "%s: Unable to allocate SKB onto RBD ring - disabling "
+ "adapter.\n", priv->net_dev->name);
+ /* TODO: schedule adapter shutdown */
+ IPW_DEBUG_INFO("TODO: Shutdown adapter...\n");
+ }
+
+ /* Update the RDB entry */
+ priv->rx_queue.drv[i].host_addr = packet->dma_addr;
+}
+
+static inline int ipw2100_corruption_check(struct ipw2100_priv *priv, int i)
+{
+ struct ipw2100_status *status = &priv->status_queue.drv[i];
+ struct ipw2100_rx *u = priv->rx_buffers[i].rxp;
+ u16 frame_type = status->status_fields & STATUS_TYPE_MASK;
+
+ switch (frame_type) {
+ case COMMAND_STATUS_VAL:
+ return (status->frame_size != sizeof(u->rx_data.command));
+ case STATUS_CHANGE_VAL:
+ return (status->frame_size != sizeof(u->rx_data.status));
+ case HOST_NOTIFICATION_VAL:
+ return (status->frame_size < sizeof(u->rx_data.notification));
+ case P80211_DATA_VAL:
+ case P8023_DATA_VAL:
+#ifdef CONFIG_IPW2100_PROMISC
+ return 0;
+#else
+ switch (WLAN_FC_GET_TYPE(u->rx_data.header.frame_ctl)) {
+ case IEEE80211_FTYPE_MGMT:
+ case IEEE80211_FTYPE_CTL:
+ return 0;
+ case IEEE80211_FTYPE_DATA:
+ return (status->frame_size >
+ IPW_MAX_802_11_PAYLOAD_LENGTH);
+ }
+#endif
+ }
+
+ return 1;
+}
+
+/*
+ * ipw2100 interrupts are disabled at this point, and the ISR
+ * is the only code that calls this method. So, we do not need
+ * to play with any locks.
+ *
+ * RX Queue works as follows:
+ *
+ * Read index - firmware places packet in entry identified by the
+ * Read index and advances Read index. In this manner,
+ * Read index will always point to the next packet to
+ * be filled--but not yet valid.
+ *
+ * Write index - driver fills this entry with an unused RBD entry.
+ * This entry has not been filled by the firmware yet.
+ *
+ * In between the W and R indexes are the RBDs that have been received
+ * but not yet processed.
+ *
+ * The process of handling packets will start at WRITE + 1 and advance
+ * until it reaches the READ index.
+ *
+ * The WRITE index is cached in the variable 'priv->rx_queue.next'.
+ *
+ */
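+/*
+ * Worked example (illustrative numbers): with entries == 256,
+ * rx_queue.next == 5 and a firmware read index r == 9, the loop below
+ * processes entries 6, 7 and 8, sets rx_queue.next to 8, and writes 8
+ * back as the new WRITE index.
+ */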
+static inline void __ipw2100_rx_process(struct ipw2100_priv *priv)
+{
+ struct ipw2100_bd_queue *rxq = &priv->rx_queue;
+ struct ipw2100_status_queue *sq = &priv->status_queue;
+ struct ipw2100_rx_packet *packet;
+ u16 frame_type;
+ u32 r, w, i, s;
+ struct ipw2100_rx *u;
+ struct ieee80211_rx_stats stats = {
+ .mac_time = jiffies,
+ };
+
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_RX_READ_INDEX, &r);
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_RX_WRITE_INDEX, &w);
+
+ if (r >= rxq->entries) {
+ IPW_DEBUG_RX("exit - bad read index\n");
+ return;
+ }
+
+
+ i = (rxq->next + 1) % rxq->entries;
+ s = i;
+ while (i != r) {
+ /* IPW_DEBUG_RX("r = %d : w = %d : processing = %d\n",
+ r, rxq->next, i); */
+
+ packet = &priv->rx_buffers[i];
+
+ /* Sync the DMA for the STATUS buffer so CPU is sure to get
+ * the correct values */
+ pci_dma_sync_single_for_cpu(
+ priv->pci_dev,
+ sq->nic + sizeof(struct ipw2100_status) * i,
+ sizeof(struct ipw2100_status),
+ PCI_DMA_FROMDEVICE);
+
+ /* Sync the DMA for the RX buffer so CPU is sure to get
+ * the correct values */
+ pci_dma_sync_single_for_cpu(priv->pci_dev, packet->dma_addr,
+ sizeof(struct ipw2100_rx),
+ PCI_DMA_FROMDEVICE);
+
+ if (unlikely(ipw2100_corruption_check(priv, i))) {
+ ipw2100_corruption_detected(priv, i);
+ goto increment;
+ }
+
+ u = packet->rxp;
+ frame_type = sq->drv[i].status_fields &
+ STATUS_TYPE_MASK;
+ stats.signal = sq->drv[i].rssi + IPW2100_RSSI_TO_DBM;
+ stats.len = sq->drv[i].frame_size;
+
+ stats.mask = 0;
+ if (stats.noise != 0)
+ stats.mask |= IEEE80211_STATMASK_NOISE;
+ if (stats.rssi != 0)
+ stats.mask |= IEEE80211_STATMASK_RSSI;
+ if (stats.signal != 0)
+ stats.mask |= IEEE80211_STATMASK_SIGNAL;
+ if (stats.rate != 0)
+ stats.mask |= IEEE80211_STATMASK_RATE;
+ stats.freq = IEEE80211_24GHZ_BAND;
+
+ IPW_DEBUG_RX(
+ "%s: '%s' frame type received (%d).\n",
+ priv->net_dev->name, frame_types[frame_type],
+ stats.len);
+
+ switch (frame_type) {
+ case COMMAND_STATUS_VAL:
+ isr_rx_complete_command(
+ priv, &u->rx_data.command);
+ break;
+
+ case STATUS_CHANGE_VAL:
+ isr_status_change(priv, u->rx_data.status);
+ break;
+
+ case P80211_DATA_VAL:
+ case P8023_DATA_VAL:
+#ifdef CONFIG_IPW2100_PROMISC
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR) {
+ isr_rx(priv, i, &stats);
+ break;
+ }
+#endif
+ if (stats.len < sizeof(u->rx_data.header))
+ break;
+ switch (WLAN_FC_GET_TYPE(u->rx_data.header.
+ frame_ctl)) {
+ case IEEE80211_FTYPE_MGMT:
+ ieee80211_rx_mgt(priv->ieee,
+ &u->rx_data.header,
+ &stats);
+ break;
+
+ case IEEE80211_FTYPE_CTL:
+ break;
+
+ case IEEE80211_FTYPE_DATA:
+ isr_rx(priv, i, &stats);
+ break;
+
+ }
+ break;
+ }
+
+ increment:
+ /* clear status field associated with this RBD */
+ rxq->drv[i].status.info.field = 0;
+
+ i = (i + 1) % rxq->entries;
+ }
+
+ if (i != s) {
+ /* backtrack one entry, wrapping to end if at 0 */
+ rxq->next = (i ? i : rxq->entries) - 1;
+
+ write_register(priv->net_dev,
+ IPW_MEM_HOST_SHARED_RX_WRITE_INDEX,
+ rxq->next);
+ }
+}
+
+
+/*
+ * __ipw2100_tx_process
+ *
+ * This routine will determine whether the next packet on
+ * the fw_pend_list has been processed by the firmware yet.
+ *
+ * If not, then it does nothing and returns.
+ *
+ * If so, then it removes the item from the fw_pend_list, frees
+ * any associated storage, and places the item back on the
+ * free list of its source (either msg_free_list or tx_free_list)
+ *
+ * TX Queue works as follows:
+ *
+ * Read index - points to the next TBD that the firmware will
+ * process. The firmware will read the data, and once
+ * done processing, it will advance the Read index.
+ *
+ * Write index - driver fills this entry with a constructed TBD
+ * entry. The Write index is not advanced until the
+ * packet has been configured.
+ *
+ * In between the W and R indexes are the TBDs that have NOT been
+ * processed. Lagging behind the R index are packets that have
+ * been processed but have not been freed by the driver.
+ *
+ * In order to free old storage, an internal index will be maintained
+ * that points to the next packet to be freed. When all used
+ * packets have been freed, the oldest index will be the same as the
+ * firmware's read index.
+ *
+ * The OLDEST index is cached in the variable 'priv->tx_queue.oldest'
+ *
+ * Because the TBD structure can not contain arbitrary data, the
+ * driver must keep an internal queue of cached allocations such that
+ * it can put that data back into the tx_free_list and msg_free_list
+ * for use by future command and data packets.
+ *
+ */
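+/*
+ * Worked example (illustrative numbers) for the release test below:
+ * with txq->oldest == 2, a two-fragment DATA packet ends at e == 3.
+ * If the firmware read index r == 4 and the driver write index w == 7,
+ * then (r <= w && e < r) holds and the packet can be released.  With
+ * e == 5 instead (firmware not yet past the last TBD), neither clause
+ * holds and the routine returns without freeing anything.
+ */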
+static inline int __ipw2100_tx_process(struct ipw2100_priv *priv)
+{
+ struct ipw2100_bd_queue *txq = &priv->tx_queue;
+ struct ipw2100_bd *tbd;
+ struct list_head *element;
+ struct ipw2100_tx_packet *packet;
+ int descriptors_used;
+ int e, i;
+ u32 r, w, frag_num = 0;
+
+ if (list_empty(&priv->fw_pend_list))
+ return 0;
+
+ element = priv->fw_pend_list.next;
+
+ packet = list_entry(element, struct ipw2100_tx_packet, list);
+ tbd = &txq->drv[packet->index];
+
+ /* Determine how many TBD entries must be finished... */
+ switch (packet->type) {
+ case COMMAND:
+ /* COMMAND uses only one slot; don't advance */
+ descriptors_used = 1;
+ e = txq->oldest;
+ break;
+
+ case DATA:
+ /* DATA uses one slot for the header TBD plus one per fragment;
+ * advance to the last of them, wrapping around the ring. */
+ descriptors_used = tbd->num_fragments;
+ frag_num = tbd->num_fragments - 1;
+ e = txq->oldest + frag_num;
+ e %= txq->entries;
+ break;
+
+ default:
+ IPW_DEBUG_WARNING("%s: Bad fw_pend_list entry!\n",
+ priv->net_dev->name);
+ return 0;
+ }
+
+ /* if the last TBD is not done by NIC yet, then packet is
+ * not ready to be released.
+ *
+ */
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_TX_QUEUE_READ_INDEX,
+ &r);
+ read_register(priv->net_dev, IPW_MEM_HOST_SHARED_TX_QUEUE_WRITE_INDEX,
+ &w);
+ if (w != txq->next)
+ IPW_DEBUG_WARNING("%s: write index mismatch\n",
+ priv->net_dev->name);
+
+ /*
+ * txq->next is the index one past the last TBD the driver wrote;
+ * txq->oldest is the index of the oldest entry not yet freed;
+ * r is the index of the next packet to be read by the firmware
+ */
+
+
+ /*
+ * Quick graphic to help you visualize the following
+ * if / else statement
+ *
+ * ===>| s---->|===============
+ * e>|
+ * | a | b | c | d | e | f | g | h | i | j | k | l
+ * r---->|
+ * w
+ *
+ * w - updated by driver
+ * r - updated by firmware
+ * s - start of oldest BD entry (txq->oldest)
+ * e - end of oldest BD entry
+ *
+ */
+ if (!((r <= w && (e < r || e >= w)) || (e < r && e >= w))) {
+ IPW_DEBUG_TX("exit - no processed packets ready to release.\n");
+ return 0;
+ }
+
+ list_del(element);
+ DEC_STAT(&priv->fw_pend_stat);
+
+#ifdef CONFIG_IPW_DEBUG
+ {
+ int i = txq->oldest;
+ IPW_DEBUG_TX(
+ "TX%d V=%p P=%p T=%p L=%d\n", i,
+ &txq->drv[i],
+ (void*)txq->nic + i * sizeof(struct ipw2100_bd),
+ (void*)txq->drv[i].host_addr,
+ txq->drv[i].buf_length);
+
+ if (packet->type == DATA) {
+ i = (i + 1) % txq->entries;
+
+ IPW_DEBUG_TX(
+ "TX%d V=%p P=%p T=%p L=%d\n", i,
+ &txq->drv[i],
+ (void*)txq->nic + i *
+ sizeof(struct ipw2100_bd),
+ (void*)txq->drv[i].host_addr,
+ txq->drv[i].buf_length);
+ }
+ }
+#endif
+
+ switch (packet->type) {
+ case DATA:
+ if (txq->drv[txq->oldest].status.info.fields.txType != 0)
+ IPW_DEBUG_WARNING("%s: Queue mismatch. "
+ "Expecting DATA TBD but pulled "
+ "something else: ids %d=%d.\n",
+ priv->net_dev->name, txq->oldest, packet->index);
+
+ /* DATA packet; we have to unmap and free the SKB */
+ priv->ieee->stats.tx_packets++;
+ for (i = 0; i < frag_num; i++) {
+ tbd = &txq->drv[(packet->index + 1 + i) %
+ txq->entries];
+
+ IPW_DEBUG_TX(
+ "TX%d P=%08x L=%d\n",
+ (packet->index + 1 + i) % txq->entries,
+ tbd->host_addr, tbd->buf_length);
+
+ pci_unmap_single(priv->pci_dev,
+ tbd->host_addr,
+ tbd->buf_length,
+ PCI_DMA_TODEVICE);
+ }
+
+ priv->ieee->stats.tx_bytes += packet->info.d_struct.txb->payload_size;
+ ieee80211_txb_free(packet->info.d_struct.txb);
+ packet->info.d_struct.txb = NULL;
+
+ list_add_tail(element, &priv->tx_free_list);
+ INC_STAT(&priv->tx_free_stat);
+
+ /* We have a free slot in the Tx queue, so wake up the
+ * transmit layer if it is stopped. */
+ if (priv->status & STATUS_ASSOCIATED &&
+ netif_queue_stopped(priv->net_dev)) {
+ IPW_DEBUG_INFO("%s: Waking net queue.\n",
+ priv->net_dev->name);
+ netif_wake_queue(priv->net_dev);
+ }
+
+ /* A packet was processed by the hardware, so update the
+ * watchdog */
+ priv->net_dev->trans_start = jiffies;
+
+ break;
+
+ case COMMAND:
+ if (txq->drv[txq->oldest].status.info.fields.txType != 1)
+ IPW_DEBUG_WARNING("%s: Queue mismatch. "
+ "Expecting COMMAND TBD but pulled "
+ "something else: ids %d=%d.\n",
+ priv->net_dev->name, txq->oldest, packet->index);
+
+#ifdef CONFIG_IPW_DEBUG
+ if (packet->info.c_struct.cmd->host_command_reg <
+ sizeof(command_types) / sizeof(*command_types))
+ IPW_DEBUG_TX(
+ "Command '%s (%d)' processed: %d.\n",
+ command_types[packet->info.c_struct.cmd->host_command_reg],
+ packet->info.c_struct.cmd->host_command_reg,
+ packet->info.c_struct.cmd->cmd_status_reg);
+#endif
+
+ list_add_tail(element, &priv->msg_free_list);
+ INC_STAT(&priv->msg_free_stat);
+ break;
+ }
+
+ /* advance oldest used TBD pointer to start of next entry */
+ txq->oldest = (e + 1) % txq->entries;
+ /* increase available TBDs number */
+ txq->available += descriptors_used;
+ SET_STAT(&priv->txq_stat, txq->available);
+
+ IPW_DEBUG_TX("packet latency (send to process) %ld jiffies\n",
+ jiffies - packet->jiffy_start);
+
+ return (!list_empty(&priv->fw_pend_list));
+}
+
+
+static inline void __ipw2100_tx_complete(struct ipw2100_priv *priv)
+{
+ int i = 0;
+
+ while (__ipw2100_tx_process(priv) && i < 200) i++;
+
+ if (i == 200) {
+ IPW_DEBUG_WARNING(
+ "%s: Driver is running slow (%d iters).\n",
+ priv->net_dev->name, i);
+ }
+}
+
+
+static void X__ipw2100_tx_send_commands(struct ipw2100_priv *priv)
+{
+ struct list_head *element;
+ struct ipw2100_tx_packet *packet;
+ struct ipw2100_bd_queue *txq = &priv->tx_queue;
+ struct ipw2100_bd *tbd;
+ int next = txq->next;
+
+ while (!list_empty(&priv->msg_pend_list)) {
+ /* if there isn't enough space in TBD queue, then
+ * don't stuff a new one in.
+ * NOTE: 3 are needed as a command will take one,
+ * and there is a minimum of 2 that must be
+ * maintained between the r and w indexes
+ */
+ if (txq->available <= 3) {
+ IPW_DEBUG_TX("no room in tx_queue\n");
+ break;
+ }
+
+ element = priv->msg_pend_list.next;
+ list_del(element);
+ DEC_STAT(&priv->msg_pend_stat);
+
+ packet = list_entry(element,
+ struct ipw2100_tx_packet, list);
+
+ IPW_DEBUG_TX("using TBD at virt=%p, phys=%p\n",
+ &txq->drv[txq->next],
+ (void*)(txq->nic + txq->next *
+ sizeof(struct ipw2100_bd)));
+
+ packet->index = txq->next;
+
+ tbd = &txq->drv[txq->next];
+
+ /* initialize TBD */
+ tbd->host_addr = packet->info.c_struct.cmd_phys;
+ tbd->buf_length = sizeof(struct ipw2100_cmd_header);
+ /* not marking number of fragments causes problems
+ * with f/w debug version */
+ tbd->num_fragments = 1;
+ tbd->status.info.field =
+ IPW_BD_STATUS_TX_FRAME_COMMAND |
+ IPW_BD_STATUS_TX_INTERRUPT_ENABLE;
+
+ /* update TBD queue counters */
+ txq->next++;
+ txq->next %= txq->entries;
+ txq->available--;
+ DEC_STAT(&priv->txq_stat);
+
+ list_add_tail(element, &priv->fw_pend_list);
+ INC_STAT(&priv->fw_pend_stat);
+ }
+
+ if (txq->next != next) {
+ /* kick off the DMA by notifying firmware the
+ * write index has moved; make sure TBD stores are sync'd */
+ wmb();
+ write_register(priv->net_dev,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_WRITE_INDEX,
+ txq->next);
+ }
+}
+
+
+/*
+ * X__ipw2100_tx_send_data
+ *
+ */
+static void X__ipw2100_tx_send_data(struct ipw2100_priv *priv)
+{
+ struct list_head *element;
+ struct ipw2100_tx_packet *packet;
+ struct ipw2100_bd_queue *txq = &priv->tx_queue;
+ struct ipw2100_bd *tbd;
+ int next = txq->next;
+ int i = 0;
+
+ while (!list_empty(&priv->tx_pend_list)) {
+ /* if there isn't enough space in the TBD queue, then
+ * don't stuff a new one in.
+ * NOTE: 4 are needed as a data packet will take a minimum
+ * of two, and there is a minimum of 2 that must be
+ * maintained between the r and w indexes
+ */
+ element = priv->tx_pend_list.next;
+ packet = list_entry(element, struct ipw2100_tx_packet, list);
+
+ if (unlikely(1 + packet->info.d_struct.txb->nr_frags > IPW_MAX_BDS)) {
+ /* TODO: Support merging buffers if more than
+ * IPW_MAX_BDS are used */
+ IPW_DEBUG_INFO(
+ "%s: Maximum BD theshold exceeded. "
+ "Increase fragmentation level.\n",
+ priv->net_dev->name);
+ }
+
+ if (txq->available <= 3 + packet->info.d_struct.txb->nr_frags) {
+ IPW_DEBUG_TX("no room in tx_queue\n");
+ break;
+ }
+
+ list_del(element);
+ DEC_STAT(&priv->tx_pend_stat);
+
+ tbd = &txq->drv[txq->next];
+
+ packet->index = txq->next;
+
+ tbd->host_addr = packet->info.d_struct.data_phys;
+ tbd->buf_length = sizeof(struct ipw2100_data_header);
+ tbd->num_fragments = 1 + packet->info.d_struct.txb->nr_frags;
+ tbd->status.info.field =
+ IPW_BD_STATUS_TX_FRAME_802_3 |
+ IPW_BD_STATUS_TX_FRAME_NOT_LAST_FRAGMENT;
+ txq->next++;
+ txq->next %= txq->entries;
+
+ IPW_DEBUG_TX(
+ "data header tbd TX%d P=%08x L=%d\n",
+ packet->index, tbd->host_addr,
+ tbd->buf_length);
+#ifdef CONFIG_IPW_DEBUG
+ if (packet->info.d_struct.txb->nr_frags > 1)
+ IPW_DEBUG_FRAG("fragment Tx: %d frames\n",
+ packet->info.d_struct.txb->nr_frags);
+#endif
+
+ for (i = 0; i < packet->info.d_struct.txb->nr_frags; i++) {
+ tbd = &txq->drv[txq->next];
+ if (i == packet->info.d_struct.txb->nr_frags - 1)
+ tbd->status.info.field =
+ IPW_BD_STATUS_TX_FRAME_802_3 |
+ IPW_BD_STATUS_TX_INTERRUPT_ENABLE;
+ else
+ tbd->status.info.field =
+ IPW_BD_STATUS_TX_FRAME_802_3 |
+ IPW_BD_STATUS_TX_FRAME_NOT_LAST_FRAGMENT;
+
+ tbd->buf_length = packet->info.d_struct.txb->fragments[i]->len;
+
+ tbd->host_addr = pci_map_single(
+ priv->pci_dev,
+ packet->info.d_struct.txb->fragments[i]->data,
+ tbd->buf_length,
+ PCI_DMA_TODEVICE);
+
+ IPW_DEBUG_TX(
+ "data frag tbd TX%d P=%08x L=%d\n",
+ txq->next, tbd->host_addr, tbd->buf_length);
+
+ pci_dma_sync_single_for_device(
+ priv->pci_dev, tbd->host_addr,
+ tbd->buf_length,
+ PCI_DMA_TODEVICE);
+
+ txq->next++;
+ txq->next %= txq->entries;
+ }
+
+ txq->available -= 1 + packet->info.d_struct.txb->nr_frags;
+ SET_STAT(&priv->txq_stat, txq->available);
+
+ list_add_tail(element, &priv->fw_pend_list);
+ INC_STAT(&priv->fw_pend_stat);
+ }
+
+ if (txq->next != next) {
+ /* kick off the DMA by notifying firmware the
+ * write index has moved; make sure TBD stores are sync'd */
+ wmb();
+ write_register(priv->net_dev,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_WRITE_INDEX,
+ txq->next);
+ }
+}
+
+static void ipw2100_irq_tasklet(struct ipw2100_priv *priv)
+{
+ struct net_device *dev = priv->net_dev;
+ unsigned long flags;
+ u32 inta, tmp;
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+ ipw2100_disable_interrupts(priv);
+
+ read_register(dev, IPW_REG_INTA, &inta);
+
+ IPW_DEBUG_ISR("enter - INTA: 0x%08lX\n",
+ (unsigned long)inta & IPW_INTERRUPT_MASK);
+
+ priv->in_isr++;
+ priv->interrupts++;
+
+ /* We do not loop and keep polling for more interrupts as this
+ * is frowned upon and doesn't play nicely with other potentially
+ * chained IRQs */
+ IPW_DEBUG_ISR("INTA: 0x%08lX\n",
+ (unsigned long)inta & IPW_INTERRUPT_MASK);
+
+ if (inta & IPW2100_INTA_FATAL_ERROR) {
+ IPW_DEBUG_WARNING(DRV_NAME
+ ": Fatal interrupt. Scheduling firmware restart.\n");
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_FATAL_ERROR);
+
+ read_nic_dword(dev, IPW_NIC_FATAL_ERROR, &priv->fatal_error);
+ IPW_DEBUG_INFO("%s: Fatal error value: 0x%08X\n",
+ priv->net_dev->name, priv->fatal_error);
+
+ read_nic_dword(dev, IPW_ERROR_ADDR(priv->fatal_error), &tmp);
+ IPW_DEBUG_INFO("%s: Fatal error address value: 0x%08X\n",
+ priv->net_dev->name, tmp);
+
+ /* Wake up any sleeping jobs */
+ schedule_reset(priv);
+ }
+
+ if (inta & IPW2100_INTA_PARITY_ERROR) {
+ IPW_DEBUG_ERROR("***** PARITY ERROR INTERRUPT !!!! \n");
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_PARITY_ERROR);
+ }
+
+ if (inta & IPW2100_INTA_RX_TRANSFER) {
+ IPW_DEBUG_ISR("RX interrupt\n");
+
+ priv->rx_interrupts++;
+
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_RX_TRANSFER);
+
+ __ipw2100_rx_process(priv);
+ __ipw2100_tx_complete(priv);
+ }
+
+ if (inta & IPW2100_INTA_TX_TRANSFER) {
+ IPW_DEBUG_ISR("TX interrupt\n");
+
+ priv->tx_interrupts++;
+
+ write_register(dev, IPW_REG_INTA,
+ IPW2100_INTA_TX_TRANSFER);
+
+ __ipw2100_tx_complete(priv);
+ X__ipw2100_tx_send_commands(priv);
+ X__ipw2100_tx_send_data(priv);
+ }
+
+ if (inta & IPW2100_INTA_TX_COMPLETE) {
+ IPW_DEBUG_ISR("TX complete\n");
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_TX_COMPLETE);
+
+ __ipw2100_tx_complete(priv);
+ }
+
+ if (inta & IPW2100_INTA_EVENT_INTERRUPT) {
+ /* ipw2100_handle_event(dev); */
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_EVENT_INTERRUPT);
+ }
+
+ if (inta & IPW2100_INTA_FW_INIT_DONE) {
+ IPW_DEBUG_ISR("FW init done interrupt\n");
+ priv->inta_other++;
+
+ read_register(dev, IPW_REG_INTA, &tmp);
+ if (tmp & (IPW2100_INTA_FATAL_ERROR |
+ IPW2100_INTA_PARITY_ERROR)) {
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_FATAL_ERROR |
+ IPW2100_INTA_PARITY_ERROR);
+ }
+
+ write_register(dev, IPW_REG_INTA,
+ IPW2100_INTA_FW_INIT_DONE);
+ }
+
+ if (inta & IPW2100_INTA_STATUS_CHANGE) {
+ IPW_DEBUG_ISR("Status change interrupt\n");
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_STATUS_CHANGE);
+ }
+
+ if (inta & IPW2100_INTA_SLAVE_MODE_HOST_COMMAND_DONE) {
+ IPW_DEBUG_ISR("slave host mode interrupt\n");
+ priv->inta_other++;
+ write_register(
+ dev, IPW_REG_INTA,
+ IPW2100_INTA_SLAVE_MODE_HOST_COMMAND_DONE);
+ }
+
+ priv->in_isr--;
+ ipw2100_enable_interrupts(priv);
+
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ IPW_DEBUG_ISR("exit\n");
+}
+
+
+static irqreturn_t ipw2100_interrupt(int irq, void *data,
+ struct pt_regs *regs)
+{
+ struct ipw2100_priv *priv = data;
+ u32 inta, inta_mask;
+
+ if (!data)
+ return IRQ_NONE;
+
+ spin_lock(&priv->low_lock);
+
+ /* We check to see if we should be ignoring interrupts before
+ * we touch the hardware. During ucode load if we try and handle
+ * an interrupt we can cause keyboard problems as well as cause
+ * the ucode to fail to initialize */
+ if (!(priv->status & STATUS_INT_ENABLED)) {
+ /* Shared IRQ */
+ goto none;
+ }
+
+ read_register(priv->net_dev, IPW_REG_INTA_MASK, &inta_mask);
+ read_register(priv->net_dev, IPW_REG_INTA, &inta);
+
+ if (inta == 0xFFFFFFFF) {
+ /* Hardware disappeared */
+ IPW_DEBUG_WARNING("IRQ INTA == 0xFFFFFFFF\n");
+ goto none;
+ }
+
+ inta &= IPW_INTERRUPT_MASK;
+
+ if (!(inta & inta_mask)) {
+ /* Shared interrupt */
+ goto none;
+ }
+
+ /* We disable the hardware interrupt here just to prevent unneeded
+ * calls to be made. We disable this again within the actual
+ * work tasklet, so if another part of the code re-enables the
+ * interrupt, that is fine */
+ ipw2100_disable_interrupts(priv);
+
+ tasklet_schedule(&priv->irq_tasklet);
+ spin_unlock(&priv->low_lock);
+
+ return IRQ_HANDLED;
+ none:
+ spin_unlock(&priv->low_lock);
+ return IRQ_NONE;
+}
+
+static int ipw2100_tx(struct sk_buff *skb, struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct list_head *element;
+ struct ipw2100_data_header *header = NULL;
+ struct ipw2100_tx_packet *packet;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ IPW_DEBUG_INFO("Can not transmit when not connected.\n");
+ priv->ieee->stats.tx_carrier_errors++;
+ netif_stop_queue(dev);
+ goto fail_unlock;
+ }
+
+ if (list_empty(&priv->tx_free_list))
+ goto fail_unlock;
+
+ element = priv->tx_free_list.next;
+ packet = list_entry(element, struct ipw2100_tx_packet, list);
+
+ header = packet->info.d_struct.data;
+ memcpy(header->dst_addr, skb->data, ETH_ALEN);
+ memcpy(header->src_addr, skb->data + ETH_ALEN, ETH_ALEN);
+
+ packet->info.d_struct.txb = ieee80211_skb_to_txb(priv->ieee, skb);
+ if (packet->info.d_struct.txb == NULL) {
+ IPW_DEBUG_DROP("Failed to Tx packet\n");
+ priv->ieee->stats.tx_errors++;
+ goto fail_unlock;
+ }
+
+ if (packet->info.d_struct.txb->nr_frags == 0) {
+ /* No fragments provided; packet transform was successful,
+ * and then dropped */
+
+ /* This packet was processed successfully, even though it was
+ * never passed to the HW, so we reset the NIC watchdog
+ * timer to keep it from kicking in */
+ priv->net_dev->trans_start = jiffies;
+ ieee80211_txb_free(packet->info.d_struct.txb);
+ packet->info.d_struct.txb = NULL;
+ goto success;
+ }
+
+ header->host_command_reg = SEND;
+ header->host_command_reg1 = 0;
+
+ /* For now we only support host based encryption */
+ header->needs_encryption = 0;
+ header->encrypted = packet->info.d_struct.txb->encrypted;
+ if (packet->info.d_struct.txb->nr_frags > 1)
+ header->fragment_size = packet->info.d_struct.txb->frag_size;
+ else
+ header->fragment_size = 0;
+
+ packet->jiffy_start = jiffies;
+
+ list_del(element);
+ DEC_STAT(&priv->tx_free_stat);
+
+ list_add_tail(element, &priv->tx_pend_list);
+ INC_STAT(&priv->tx_pend_stat);
+
+ X__ipw2100_tx_send_data(priv);
+
+ success:
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+ return 0;
+
+ fail_unlock:
+ netif_stop_queue(dev);
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+ return 1;
+}
+
+
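+/* Allocate the host command ("message") pool: an array of tx packet
+ * descriptors, each backed by a DMA-consistent ipw2100_cmd_header. On a
+ * partial failure, everything allocated so far is unwound. */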
+static int ipw2100_msg_allocate(struct ipw2100_priv *priv)
+{
+ int i, j, err = -EINVAL;
+ void *v;
+ dma_addr_t p;
+
+ priv->msg_buffers = (struct ipw2100_tx_packet *)kmalloc(
+ IPW_COMMAND_POOL_SIZE * sizeof(struct ipw2100_tx_packet),
+ GFP_KERNEL);
+ if (!priv->msg_buffers) {
+		IPW_DEBUG_ERROR("%s: kmalloc failed for msg "
+				"buffers.\n", priv->net_dev->name);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < IPW_COMMAND_POOL_SIZE; i++) {
+ v = pci_alloc_consistent(
+ priv->pci_dev,
+ sizeof(struct ipw2100_cmd_header),
+ &p);
+ if (!v) {
+ IPW_DEBUG_ERROR(
+ "%s: PCI alloc failed for msg "
+ "buffers.\n",
+ priv->net_dev->name);
+ err = -ENOMEM;
+ break;
+ }
+
+ memset(v, 0, sizeof(struct ipw2100_cmd_header));
+
+ priv->msg_buffers[i].type = COMMAND;
+ priv->msg_buffers[i].info.c_struct.cmd =
+ (struct ipw2100_cmd_header*)v;
+ priv->msg_buffers[i].info.c_struct.cmd_phys = p;
+ }
+
+ if (i == IPW_COMMAND_POOL_SIZE)
+ return 0;
+
+ for (j = 0; j < i; j++) {
+ pci_free_consistent(
+ priv->pci_dev,
+ sizeof(struct ipw2100_cmd_header),
+ priv->msg_buffers[j].info.c_struct.cmd,
+ priv->msg_buffers[j].info.c_struct.cmd_phys);
+ }
+
+ kfree(priv->msg_buffers);
+ priv->msg_buffers = NULL;
+
+ return err;
+}
+
+static int ipw2100_msg_initialize(struct ipw2100_priv *priv)
+{
+ int i;
+
+ INIT_LIST_HEAD(&priv->msg_free_list);
+ INIT_LIST_HEAD(&priv->msg_pend_list);
+
+ for (i = 0; i < IPW_COMMAND_POOL_SIZE; i++)
+ list_add_tail(&priv->msg_buffers[i].list, &priv->msg_free_list);
+ SET_STAT(&priv->msg_free_stat, i);
+
+ return 0;
+}
+
+static void ipw2100_msg_free(struct ipw2100_priv *priv)
+{
+ int i;
+
+ if (!priv->msg_buffers)
+ return;
+
+ for (i = 0; i < IPW_COMMAND_POOL_SIZE; i++) {
+ pci_free_consistent(priv->pci_dev,
+ sizeof(struct ipw2100_cmd_header),
+ priv->msg_buffers[i].info.c_struct.cmd,
+ priv->msg_buffers[i].info.c_struct.cmd_phys);
+ }
+
+ kfree(priv->msg_buffers);
+ priv->msg_buffers = NULL;
+}
+
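+/* sysfs 'pci' attribute: hex dump of the device's 256-byte PCI
+ * configuration space, 16 bytes per line. */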
+static ssize_t show_pci(struct device *d, char *buf)
+{
+ struct pci_dev *pci_dev = container_of(d, struct pci_dev, dev);
+ char *out = buf;
+ int i, j;
+ u32 val;
+
+ for (i = 0; i < 16; i++) {
+ out += sprintf(out, "[%08X] ", i * 16);
+ for (j = 0; j < 16; j += 4) {
+ pci_read_config_dword(pci_dev, i * 16 + j, &val);
+ out += sprintf(out, "%08X ", val);
+ }
+ out += sprintf(out, "\n");
+ }
+
+ return out - buf;
+}
+static DEVICE_ATTR(pci, S_IRUGO, show_pci, NULL);
+
+static ssize_t show_cfg(struct device *d, char *buf)
+{
+ struct ipw2100_priv *p = (struct ipw2100_priv *)d->driver_data;
+ return sprintf(buf, "0x%08x\n", (int)p->config);
+}
+static DEVICE_ATTR(cfg, S_IRUGO, show_cfg, NULL);
+
+static ssize_t show_status(struct device *d, char *buf)
+{
+ struct ipw2100_priv *p = (struct ipw2100_priv *)d->driver_data;
+ return sprintf(buf, "0x%08x\n", (int)p->status);
+}
+static DEVICE_ATTR(status, S_IRUGO, show_status, NULL);
+
+static ssize_t show_capability(struct device *d, char *buf)
+{
+ struct ipw2100_priv *p = (struct ipw2100_priv *)d->driver_data;
+ return sprintf(buf, "0x%08x\n", (int)p->capability);
+}
+static DEVICE_ATTR(capability, S_IRUGO, show_capability, NULL);
+
+
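+/* Tables backing the debug sysfs attributes below: each entry pairs a
+ * register address, NIC memory location, or firmware ordinal with a
+ * printable name (and, for ordinals, a description). */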
+#define IPW2100_REG(x) { IPW_ ##x, #x }
+const struct {
+ u32 addr;
+ const char *name;
+} hw_data[] = {
+ IPW2100_REG(REG_GP_CNTRL),
+ IPW2100_REG(REG_GPIO),
+ IPW2100_REG(REG_INTA),
+ IPW2100_REG(REG_INTA_MASK),
+ IPW2100_REG(REG_RESET_REG),
+};
+#define IPW2100_NIC(x, s) { x, #x, s }
+const struct {
+ u32 addr;
+ const char *name;
+ size_t size;
+} nic_data[] = {
+ IPW2100_NIC(IPW2100_CONTROL_REG, 2),
+ IPW2100_NIC(0x210014, 1),
+ IPW2100_NIC(0x210000, 1),
+};
+#define IPW2100_ORD(x, d) { IPW_ORD_ ##x, #x, d }
+const struct {
+ u8 index;
+ const char *name;
+ const char *desc;
+} ord_data[] = {
+ IPW2100_ORD(STAT_TX_HOST_REQUESTS, "requested Host Tx's (MSDU)"),
+ IPW2100_ORD(STAT_TX_HOST_COMPLETE, "successful Host Tx's (MSDU)"),
+ IPW2100_ORD(STAT_TX_DIR_DATA, "successful Directed Tx's (MSDU)"),
+ IPW2100_ORD(STAT_TX_DIR_DATA1, "successful Directed Tx's (MSDU) @ 1MB"),
+ IPW2100_ORD(STAT_TX_DIR_DATA2, "successful Directed Tx's (MSDU) @ 2MB"),
+	IPW2100_ORD(STAT_TX_DIR_DATA5_5, "successful Directed Tx's (MSDU) @ 5.5MB"),
+ IPW2100_ORD(STAT_TX_DIR_DATA11, "successful Directed Tx's (MSDU) @ 11MB"),
+ IPW2100_ORD(STAT_TX_NODIR_DATA1, "successful Non_Directed Tx's (MSDU) @ 1MB"),
+ IPW2100_ORD(STAT_TX_NODIR_DATA2, "successful Non_Directed Tx's (MSDU) @ 2MB"),
+ IPW2100_ORD(STAT_TX_NODIR_DATA5_5, "successful Non_Directed Tx's (MSDU) @ 5.5MB"),
+ IPW2100_ORD(STAT_TX_NODIR_DATA11, "successful Non_Directed Tx's (MSDU) @ 11MB"),
+ IPW2100_ORD(STAT_NULL_DATA, "successful NULL data Tx's"),
+ IPW2100_ORD(STAT_TX_RTS, "successful Tx RTS"),
+ IPW2100_ORD(STAT_TX_CTS, "successful Tx CTS"),
+ IPW2100_ORD(STAT_TX_ACK, "successful Tx ACK"),
+ IPW2100_ORD(STAT_TX_ASSN, "successful Association Tx's"),
+ IPW2100_ORD(STAT_TX_ASSN_RESP, "successful Association response Tx's"),
+ IPW2100_ORD(STAT_TX_REASSN, "successful Reassociation Tx's"),
+ IPW2100_ORD(STAT_TX_REASSN_RESP, "successful Reassociation response Tx's"),
+ IPW2100_ORD(STAT_TX_PROBE, "probes successfully transmitted"),
+ IPW2100_ORD(STAT_TX_PROBE_RESP, "probe responses successfully transmitted"),
+ IPW2100_ORD(STAT_TX_BEACON, "tx beacon"),
+ IPW2100_ORD(STAT_TX_ATIM, "Tx ATIM"),
+ IPW2100_ORD(STAT_TX_DISASSN, "successful Disassociation TX"),
+ IPW2100_ORD(STAT_TX_AUTH, "successful Authentication Tx"),
+ IPW2100_ORD(STAT_TX_DEAUTH, "successful Deauthentication TX"),
+ IPW2100_ORD(STAT_TX_TOTAL_BYTES, "Total successful Tx data bytes"),
+ IPW2100_ORD(STAT_TX_RETRIES, "Tx retries"),
+ IPW2100_ORD(STAT_TX_RETRY1, "Tx retries at 1MBPS"),
+ IPW2100_ORD(STAT_TX_RETRY2, "Tx retries at 2MBPS"),
+ IPW2100_ORD(STAT_TX_RETRY5_5, "Tx retries at 5.5MBPS"),
+ IPW2100_ORD(STAT_TX_RETRY11, "Tx retries at 11MBPS"),
+ IPW2100_ORD(STAT_TX_FAILURES, "Tx Failures"),
+ IPW2100_ORD(STAT_TX_MAX_TRIES_IN_HOP,"times max tries in a hop failed"),
+ IPW2100_ORD(STAT_TX_DISASSN_FAIL, "times disassociation failed"),
+ IPW2100_ORD(STAT_TX_ERR_CTS, "missed/bad CTS frames"),
+ IPW2100_ORD(STAT_TX_ERR_ACK, "tx err due to acks"),
+ IPW2100_ORD(STAT_RX_HOST, "packets passed to host"),
+ IPW2100_ORD(STAT_RX_DIR_DATA, "directed packets"),
+ IPW2100_ORD(STAT_RX_DIR_DATA1, "directed packets at 1MB"),
+ IPW2100_ORD(STAT_RX_DIR_DATA2, "directed packets at 2MB"),
+ IPW2100_ORD(STAT_RX_DIR_DATA5_5, "directed packets at 5.5MB"),
+ IPW2100_ORD(STAT_RX_DIR_DATA11, "directed packets at 11MB"),
+ IPW2100_ORD(STAT_RX_NODIR_DATA,"nondirected packets"),
+ IPW2100_ORD(STAT_RX_NODIR_DATA1, "nondirected packets at 1MB"),
+ IPW2100_ORD(STAT_RX_NODIR_DATA2, "nondirected packets at 2MB"),
+ IPW2100_ORD(STAT_RX_NODIR_DATA5_5, "nondirected packets at 5.5MB"),
+ IPW2100_ORD(STAT_RX_NODIR_DATA11, "nondirected packets at 11MB"),
+ IPW2100_ORD(STAT_RX_NULL_DATA, "null data rx's"),
+ IPW2100_ORD(STAT_RX_RTS, "Rx RTS"),
+ IPW2100_ORD(STAT_RX_CTS, "Rx CTS"),
+ IPW2100_ORD(STAT_RX_ACK, "Rx ACK"),
+ IPW2100_ORD(STAT_RX_CFEND, "Rx CF End"),
+ IPW2100_ORD(STAT_RX_CFEND_ACK, "Rx CF End + CF Ack"),
+ IPW2100_ORD(STAT_RX_ASSN, "Association Rx's"),
+ IPW2100_ORD(STAT_RX_ASSN_RESP, "Association response Rx's"),
+ IPW2100_ORD(STAT_RX_REASSN, "Reassociation Rx's"),
+ IPW2100_ORD(STAT_RX_REASSN_RESP, "Reassociation response Rx's"),
+ IPW2100_ORD(STAT_RX_PROBE, "probe Rx's"),
+ IPW2100_ORD(STAT_RX_PROBE_RESP, "probe response Rx's"),
+ IPW2100_ORD(STAT_RX_BEACON, "Rx beacon"),
+ IPW2100_ORD(STAT_RX_ATIM, "Rx ATIM"),
+ IPW2100_ORD(STAT_RX_DISASSN, "disassociation Rx"),
+ IPW2100_ORD(STAT_RX_AUTH, "authentication Rx"),
+ IPW2100_ORD(STAT_RX_DEAUTH, "deauthentication Rx"),
+ IPW2100_ORD(STAT_RX_TOTAL_BYTES,"Total rx data bytes received"),
+ IPW2100_ORD(STAT_RX_ERR_CRC, "packets with Rx CRC error"),
+ IPW2100_ORD(STAT_RX_ERR_CRC1, "Rx CRC errors at 1MB"),
+ IPW2100_ORD(STAT_RX_ERR_CRC2, "Rx CRC errors at 2MB"),
+ IPW2100_ORD(STAT_RX_ERR_CRC5_5, "Rx CRC errors at 5.5MB"),
+ IPW2100_ORD(STAT_RX_ERR_CRC11, "Rx CRC errors at 11MB"),
+ IPW2100_ORD(STAT_RX_DUPLICATE1, "duplicate rx packets at 1MB"),
+ IPW2100_ORD(STAT_RX_DUPLICATE2, "duplicate rx packets at 2MB"),
+ IPW2100_ORD(STAT_RX_DUPLICATE5_5, "duplicate rx packets at 5.5MB"),
+ IPW2100_ORD(STAT_RX_DUPLICATE11, "duplicate rx packets at 11MB"),
+ IPW2100_ORD(STAT_RX_DUPLICATE, "duplicate rx packets"),
+ IPW2100_ORD(PERS_DB_LOCK, "locking fw permanent db"),
+ IPW2100_ORD(PERS_DB_SIZE, "size of fw permanent db"),
+ IPW2100_ORD(PERS_DB_ADDR, "address of fw permanent db"),
+ IPW2100_ORD(STAT_RX_INVALID_PROTOCOL, "rx frames with invalid protocol"),
+ IPW2100_ORD(SYS_BOOT_TIME, "Boot time"),
+ IPW2100_ORD(STAT_RX_NO_BUFFER, "rx frames rejected due to no buffer"),
+ IPW2100_ORD(STAT_RX_MISSING_FRAG, "rx frames dropped due to missing fragment"),
+ IPW2100_ORD(STAT_RX_ORPHAN_FRAG, "rx frames dropped due to non-sequential fragment"),
+ IPW2100_ORD(STAT_RX_ORPHAN_FRAME, "rx frames dropped due to unmatched 1st frame"),
+ IPW2100_ORD(STAT_RX_FRAG_AGEOUT, "rx frames dropped due to uncompleted frame"),
+ IPW2100_ORD(STAT_RX_ICV_ERRORS, "ICV errors during decryption"),
+ IPW2100_ORD(STAT_PSP_SUSPENSION,"times adapter suspended"),
+ IPW2100_ORD(STAT_PSP_BCN_TIMEOUT, "beacon timeout"),
+ IPW2100_ORD(STAT_PSP_POLL_TIMEOUT, "poll response timeouts"),
+ IPW2100_ORD(STAT_PSP_NONDIR_TIMEOUT, "timeouts waiting for last {broad,multi}cast pkt"),
+ IPW2100_ORD(STAT_PSP_RX_DTIMS, "PSP DTIMs received"),
+ IPW2100_ORD(STAT_PSP_RX_TIMS, "PSP TIMs received"),
+ IPW2100_ORD(STAT_PSP_STATION_ID,"PSP Station ID"),
+ IPW2100_ORD(LAST_ASSN_TIME, "RTC time of last association"),
+ IPW2100_ORD(STAT_PERCENT_MISSED_BCNS,"current calculation of % missed beacons"),
+ IPW2100_ORD(STAT_PERCENT_RETRIES,"current calculation of % missed tx retries"),
+ IPW2100_ORD(ASSOCIATED_AP_PTR, "0 if not associated, else pointer to AP table entry"),
+	IPW2100_ORD(AVAILABLE_AP_CNT, "AP's described in the AP table"),
+ IPW2100_ORD(AP_LIST_PTR, "Ptr to list of available APs"),
+ IPW2100_ORD(STAT_AP_ASSNS, "associations"),
+ IPW2100_ORD(STAT_ASSN_FAIL, "association failures"),
+ IPW2100_ORD(STAT_ASSN_RESP_FAIL,"failures due to response fail"),
+ IPW2100_ORD(STAT_FULL_SCANS, "full scans"),
+ IPW2100_ORD(CARD_DISABLED, "Card Disabled"),
+ IPW2100_ORD(STAT_ROAM_INHIBIT, "times roaming was inhibited due to activity"),
+ IPW2100_ORD(RSSI_AT_ASSN, "RSSI of associated AP at time of association"),
+ IPW2100_ORD(STAT_ASSN_CAUSE1, "reassociation: no probe response or TX on hop"),
+ IPW2100_ORD(STAT_ASSN_CAUSE2, "reassociation: poor tx/rx quality"),
+	IPW2100_ORD(STAT_ASSN_CAUSE3, "reassociation: tx/rx quality (excessive AP load)"),
+ IPW2100_ORD(STAT_ASSN_CAUSE4, "reassociation: AP RSSI level"),
+ IPW2100_ORD(STAT_ASSN_CAUSE5, "reassociations due to load leveling"),
+ IPW2100_ORD(STAT_AUTH_FAIL, "times authentication failed"),
+ IPW2100_ORD(STAT_AUTH_RESP_FAIL,"times authentication response failed"),
+ IPW2100_ORD(STATION_TABLE_CNT, "entries in association table"),
+ IPW2100_ORD(RSSI_AVG_CURR, "Current avg RSSI"),
+ IPW2100_ORD(POWER_MGMT_MODE, "Power mode - 0=CAM, 1=PSP"),
+ IPW2100_ORD(COUNTRY_CODE, "IEEE country code as recv'd from beacon"),
+	IPW2100_ORD(COUNTRY_CHANNELS, "channels supported by country"),
+ IPW2100_ORD(RESET_CNT, "adapter resets (warm)"),
+ IPW2100_ORD(BEACON_INTERVAL, "Beacon interval"),
+ IPW2100_ORD(ANTENNA_DIVERSITY, "TRUE if antenna diversity is disabled"),
+ IPW2100_ORD(DTIM_PERIOD, "beacon intervals between DTIMs"),
+ IPW2100_ORD(OUR_FREQ, "current radio freq lower digits - channel ID"),
+ IPW2100_ORD(RTC_TIME, "current RTC time"),
+ IPW2100_ORD(PORT_TYPE, "operating mode"),
+ IPW2100_ORD(CURRENT_TX_RATE, "current tx rate"),
+ IPW2100_ORD(SUPPORTED_RATES, "supported tx rates"),
+ IPW2100_ORD(ATIM_WINDOW, "current ATIM Window"),
+ IPW2100_ORD(BASIC_RATES, "basic tx rates"),
+ IPW2100_ORD(NIC_HIGHEST_RATE, "NIC highest tx rate"),
+ IPW2100_ORD(AP_HIGHEST_RATE, "AP highest tx rate"),
+ IPW2100_ORD(CAPABILITIES, "Management frame capability field"),
+ IPW2100_ORD(AUTH_TYPE, "Type of authentication"),
+ IPW2100_ORD(RADIO_TYPE, "Adapter card platform type"),
+ IPW2100_ORD(RTS_THRESHOLD, "Min packet length for RTS handshaking"),
+ IPW2100_ORD(INT_MODE, "International mode"),
+ IPW2100_ORD(FRAGMENTATION_THRESHOLD, "protocol frag threshold"),
+ IPW2100_ORD(EEPROM_SRAM_DB_BLOCK_START_ADDRESS, "EEPROM offset in SRAM"),
+ IPW2100_ORD(EEPROM_SRAM_DB_BLOCK_SIZE, "EEPROM size in SRAM"),
+ IPW2100_ORD(EEPROM_SKU_CAPABILITY, "EEPROM SKU Capability"),
+ IPW2100_ORD(EEPROM_IBSS_11B_CHANNELS, "EEPROM IBSS 11b channel set"),
+ IPW2100_ORD(MAC_VERSION, "MAC Version"),
+ IPW2100_ORD(MAC_REVISION, "MAC Revision"),
+ IPW2100_ORD(RADIO_VERSION, "Radio Version"),
+ IPW2100_ORD(NIC_MANF_DATE_TIME, "MANF Date/Time STAMP"),
+ IPW2100_ORD(UCODE_VERSION, "Ucode Version"),
+};
+
+
+static ssize_t show_registers(struct device *d, char *buf)
+{
+ int i;
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ struct net_device *dev = priv->net_dev;
+ char * out = buf;
+ u32 val = 0;
+
+ out += sprintf(out, "%30s [Address ] : Hex\n", "Register");
+
+ for (i = 0; i < (sizeof(hw_data) / sizeof(*hw_data)); i++) {
+ read_register(dev, hw_data[i].addr, &val);
+ out += sprintf(out, "%30s [%08X] : %08X\n",
+ hw_data[i].name, hw_data[i].addr, val);
+ }
+
+ return out - buf;
+}
+static DEVICE_ATTR(registers, S_IRUGO, show_registers, NULL);
+
+
+static ssize_t show_hardware(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ struct net_device *dev = priv->net_dev;
+ char * out = buf;
+ int i;
+
+ out += sprintf(out, "%30s [Address ] : Hex\n", "NIC entry");
+
+ for (i = 0; i < (sizeof(nic_data) / sizeof(*nic_data)); i++) {
+ u8 tmp8;
+ u16 tmp16;
+ u32 tmp32;
+
+ switch (nic_data[i].size) {
+ case 1:
+ read_nic_byte(dev, nic_data[i].addr, &tmp8);
+ out += sprintf(out, "%30s [%08X] : %02X\n",
+ nic_data[i].name, nic_data[i].addr,
+ tmp8);
+ break;
+ case 2:
+ read_nic_word(dev, nic_data[i].addr, &tmp16);
+ out += sprintf(out, "%30s [%08X] : %04X\n",
+ nic_data[i].name, nic_data[i].addr,
+ tmp16);
+ break;
+ case 4:
+ read_nic_dword(dev, nic_data[i].addr, &tmp32);
+ out += sprintf(out, "%30s [%08X] : %08X\n",
+ nic_data[i].name, nic_data[i].addr,
+ tmp32);
+ break;
+ }
+ }
+ return out - buf;
+}
+static DEVICE_ATTR(hardware, S_IRUGO, show_hardware, NULL);
+
+
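+/* sysfs 'memory' attribute: dump firmware/NIC memory (or a saved
+ * snapshot) 16 bytes at a time. 'loop' is static, so consecutive reads
+ * continue where the previous one left off until 0x30000 is reached. */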
+static ssize_t show_memory(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ struct net_device *dev = priv->net_dev;
+ static unsigned long loop = 0;
+ int len = 0;
+ u32 buffer[4];
+ int i;
+ char line[81];
+
+ if (loop >= 0x30000)
+ loop = 0;
+
+ /* sysfs provides us PAGE_SIZE buffer */
+ while (len < PAGE_SIZE - 128 && loop < 0x30000) {
+
+		if (priv->snapshot[0])
+			for (i = 0; i < 4; i++)
+				buffer[i] = *(u32 *)SNAPSHOT_ADDR(loop + i * 4);
+		else
+			for (i = 0; i < 4; i++)
+				read_nic_dword(dev, loop + i * 4, &buffer[i]);
+
+ if (priv->dump_raw)
+ len += sprintf(buf + len,
+ "%c%c%c%c"
+ "%c%c%c%c"
+ "%c%c%c%c"
+ "%c%c%c%c",
+ ((u8*)buffer)[0x0],
+ ((u8*)buffer)[0x1],
+ ((u8*)buffer)[0x2],
+ ((u8*)buffer)[0x3],
+ ((u8*)buffer)[0x4],
+ ((u8*)buffer)[0x5],
+ ((u8*)buffer)[0x6],
+ ((u8*)buffer)[0x7],
+ ((u8*)buffer)[0x8],
+ ((u8*)buffer)[0x9],
+ ((u8*)buffer)[0xa],
+ ((u8*)buffer)[0xb],
+ ((u8*)buffer)[0xc],
+ ((u8*)buffer)[0xd],
+ ((u8*)buffer)[0xe],
+ ((u8*)buffer)[0xf]);
+ else
+ len += sprintf(buf + len, "%s\n",
+ snprint_line(line, sizeof(line),
+ (u8*)buffer, 16, loop));
+ loop += 16;
+ }
+
+ return len;
+}
+
+static ssize_t store_memory(struct device *d, const char *buf, size_t count)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ struct net_device *dev = priv->net_dev;
+ const char *p = buf;
+
+ if (count < 1)
+ return count;
+
+ if (p[0] == '1' ||
+ (count >= 2 && tolower(p[0]) == 'o' && tolower(p[1]) == 'n')) {
+ IPW_DEBUG_INFO("%s: Setting memory dump to RAW mode.\n",
+ dev->name);
+ priv->dump_raw = 1;
+
+ } else if (p[0] == '0' || (count >= 2 && tolower(p[0]) == 'o' &&
+ tolower(p[1]) == 'f')) {
+ IPW_DEBUG_INFO("%s: Setting memory dump to HEX mode.\n",
+ dev->name);
+ priv->dump_raw = 0;
+
+ } else if (tolower(p[0]) == 'r') {
+ IPW_DEBUG_INFO("%s: Resetting firmware snapshot.\n",
+ dev->name);
+ ipw2100_snapshot_free(priv);
+
+ } else
+		IPW_DEBUG_INFO("%s: Usage: 1|on = RAW, 0|off = HEX, "
+			       "reset = clear memory snapshot\n",
+ dev->name);
+
+ return count;
+}
+static DEVICE_ATTR(memory, S_IWUSR|S_IRUGO, show_memory, store_memory);
+
+
+static ssize_t show_ordinals(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ u32 val = 0;
+ int len = 0;
+ u32 val_len;
+ static int loop = 0;
+
+ if (loop >= sizeof(ord_data) / sizeof(*ord_data))
+ loop = 0;
+
+ /* sysfs provides us PAGE_SIZE buffer */
+ while (len < PAGE_SIZE - 128 &&
+ loop < (sizeof(ord_data) / sizeof(*ord_data))) {
+
+ val_len = sizeof(u32);
+
+ if (ipw2100_get_ordinal(priv, ord_data[loop].index, &val,
+ &val_len))
+ len += sprintf(buf + len, "[0x%02X] = ERROR %s\n",
+ ord_data[loop].index,
+ ord_data[loop].desc);
+ else
+ len += sprintf(buf + len, "[0x%02X] = 0x%08X %s\n",
+ ord_data[loop].index, val,
+ ord_data[loop].desc);
+ loop++;
+ }
+
+ return len;
+}
+static DEVICE_ATTR(ordinals, S_IRUGO, show_ordinals, NULL);
+
+
+static ssize_t show_stats(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ char * out = buf;
+
+ out += sprintf(out, "interrupts: %d {tx: %d, rx: %d, other: %d}\n",
+ priv->interrupts, priv->tx_interrupts,
+ priv->rx_interrupts, priv->inta_other);
+ out += sprintf(out, "firmware resets: %d\n", priv->resets);
+ out += sprintf(out, "firmware hangs: %d\n", priv->hangs);
+#ifdef CONFIG_IPW_DEBUG
+ out += sprintf(out, "packet mismatch image: %s\n",
+ priv->snapshot[0] ? "YES" : "NO");
+#endif
+
+ return out - buf;
+}
+static DEVICE_ATTR(stats, S_IRUGO, show_stats, NULL);
+
+
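+/* Switch between infrastructure, ad-hoc and (if configured) monitor
+ * mode: the adapter must be disabled first, then the net_device type is
+ * updated and a full adapter reset is scheduled to apply the change. */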
+int ipw2100_switch_mode(struct ipw2100_priv *priv, u32 mode)
+{
+ int err;
+
+ if (mode == priv->ieee->iw_mode)
+ return 0;
+
+ err = ipw2100_disable_adapter(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Could not disable adapter %d\n",
+ priv->net_dev->name, err);
+ return err;
+ }
+
+ switch (mode) {
+ case IW_MODE_INFRA:
+ priv->net_dev->type = ARPHRD_ETHER;
+ break;
+ case IW_MODE_ADHOC:
+ priv->net_dev->type = ARPHRD_ETHER;
+ break;
+#ifdef CONFIG_IPW2100_PROMISC
+ case IW_MODE_MONITOR:
+ priv->last_mode = priv->ieee->iw_mode;
+ priv->net_dev->type = ARPHRD_IEEE80211;
+ break;
+#endif /* CONFIG_IPW2100_PROMISC */
+ }
+
+ priv->ieee->iw_mode = mode;
+
+#ifdef CONFIG_PM
+	/* Indicate that ipw2100_download_firmware should load the
+	 * firmware from disk instead of from memory. */
+ ipw2100_firmware.version = 0;
+#endif
+
+ priv->reset_backoff = 0;
+ ipw2100_reset_adapter(priv);
+
+ return 0;
+}
+
+static ssize_t show_internals(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ int len = 0;
+
+#define DUMP_VAR(x,y) len += sprintf(buf + len, # x ": %" # y "\n", priv-> x)
+
+ if (priv->status & STATUS_ASSOCIATED)
+ len += sprintf(buf + len, "connected: %lu\n",
+ get_seconds() - priv->connect_start);
+ else
+ len += sprintf(buf + len, "not connected\n");
+
+ DUMP_VAR(ieee->crypt[priv->ieee->tx_keyidx], p);
+ DUMP_VAR(status, 08lx);
+ DUMP_VAR(config, 08lx);
+ DUMP_VAR(capability, 08lx);
+
+ DUMP_VAR(fatal_error, d);
+ DUMP_VAR(stop_hang_check, d);
+ DUMP_VAR(stop_rf_kill, d);
+ DUMP_VAR(messages_sent, d);
+
+ DUMP_VAR(tx_pend_stat.value, d);
+ DUMP_VAR(tx_pend_stat.hi, d);
+
+ DUMP_VAR(tx_free_stat.value, d);
+ DUMP_VAR(tx_free_stat.lo, d);
+
+ DUMP_VAR(msg_free_stat.value, d);
+ DUMP_VAR(msg_free_stat.lo, d);
+
+ DUMP_VAR(msg_pend_stat.value, d);
+ DUMP_VAR(msg_pend_stat.hi, d);
+
+ DUMP_VAR(fw_pend_stat.value, d);
+ DUMP_VAR(fw_pend_stat.hi, d);
+
+ DUMP_VAR(txq_stat.value, d);
+ DUMP_VAR(txq_stat.lo, d);
+
+ DUMP_VAR(ieee->scans, d);
+ DUMP_VAR(reset_backoff, d);
+
+ return len;
+}
+static DEVICE_ATTR(internals, S_IRUGO, show_internals, NULL);
+
+
+static ssize_t show_bssinfo(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ char essid[IW_ESSID_MAX_SIZE + 1];
+ u8 bssid[ETH_ALEN];
+ u32 chan = 0;
+ char * out = buf;
+ int length;
+ int ret;
+
+ memset(essid, 0, sizeof(essid));
+ memset(bssid, 0, sizeof(bssid));
+
+ length = IW_ESSID_MAX_SIZE;
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_SSID, essid, &length);
+ if (ret)
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+
+ length = sizeof(bssid);
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_AP_BSSID,
+ bssid, &length);
+ if (ret)
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+
+ length = sizeof(u32);
+ ret = ipw2100_get_ordinal(priv, IPW_ORD_OUR_FREQ, &chan, &length);
+ if (ret)
+ IPW_DEBUG_INFO("failed querying ordinals at line %d\n",
+ __LINE__);
+
+ out += sprintf(out, "ESSID: %s\n", essid);
+ out += sprintf(out, "BSSID: %02x:%02x:%02x:%02x:%02x:%02x\n",
+ bssid[0], bssid[1], bssid[2],
+ bssid[3], bssid[4], bssid[5]);
+ out += sprintf(out, "Channel: %d\n", chan);
+
+ return out - buf;
+}
+static DEVICE_ATTR(bssinfo, S_IRUGO, show_bssinfo, NULL);
+
+
+
+
+#ifdef CONFIG_IPW_DEBUG
+static ssize_t show_debug_level(struct device_driver *d, char *buf)
+{
+ return sprintf(buf, "0x%08X\n", ipw2100_debug_level);
+}
+
+static ssize_t store_debug_level(struct device_driver *d, const char *buf,
+ size_t count)
+{
+ char *p = (char *)buf;
+ u32 val;
+
+ if (p[1] == 'x' || p[1] == 'X' || p[0] == 'x' || p[0] == 'X') {
+ p++;
+ if (p[0] == 'x' || p[0] == 'X')
+ p++;
+ val = simple_strtoul(p, &p, 16);
+ } else
+ val = simple_strtoul(p, &p, 10);
+ if (p == buf)
+ IPW_DEBUG_INFO(DRV_NAME
+ ": %s is not in hex or decimal form.\n", buf);
+ else
+ ipw2100_debug_level = val;
+
+ return strnlen(buf, count);
+}
+static DRIVER_ATTR(debug_level, S_IWUSR | S_IRUGO, show_debug_level,
+ store_debug_level);
+#endif /* CONFIG_IPW_DEBUG */
+
+
+static ssize_t show_fatal_error(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ char *out = buf;
+ int i;
+
+ if (priv->fatal_error)
+ out += sprintf(out, "0x%08X\n",
+ priv->fatal_error);
+ else
+ out += sprintf(out, "0\n");
+
+ for (i = 1; i <= IPW2100_ERROR_QUEUE; i++) {
+ if (!priv->fatal_errors[(priv->fatal_index - i) %
+ IPW2100_ERROR_QUEUE])
+ continue;
+
+ out += sprintf(out, "%d. 0x%08X\n", i,
+ priv->fatal_errors[(priv->fatal_index - i) %
+ IPW2100_ERROR_QUEUE]);
+ }
+
+ return out - buf;
+}
+
+static ssize_t store_fatal_error(struct device *d, const char *buf,
+ size_t count)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ schedule_reset(priv);
+ return count;
+}
+static DEVICE_ATTR(fatal_error, S_IWUSR|S_IRUGO, show_fatal_error, store_fatal_error);
+
+
+static ssize_t show_scan_age(struct device *d, char *buf)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ return sprintf(buf, "%d\n", priv->ieee->scan_age);
+}
+
+static ssize_t store_scan_age(struct device *d, const char *buf, size_t count)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ struct net_device *dev = priv->net_dev;
+ char buffer[] = "00000000";
+ unsigned long len =
+ (sizeof(buffer) - 1) > count ? count : sizeof(buffer) - 1;
+ unsigned long val;
+ char *p = buffer;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ strncpy(buffer, buf, len);
+ buffer[len] = 0;
+
+ if (p[1] == 'x' || p[1] == 'X' || p[0] == 'x' || p[0] == 'X') {
+ p++;
+ if (p[0] == 'x' || p[0] == 'X')
+ p++;
+ val = simple_strtoul(p, &p, 16);
+ } else
+ val = simple_strtoul(p, &p, 10);
+ if (p == buffer) {
+ IPW_DEBUG_INFO("%s: user supplied invalid value.\n",
+ dev->name);
+ } else {
+ priv->ieee->scan_age = val;
+ IPW_DEBUG_INFO("set scan_age = %u\n", priv->ieee->scan_age);
+ }
+
+ IPW_DEBUG_INFO("exit\n");
+ return len;
+}
+static DEVICE_ATTR(scan_age, S_IWUSR | S_IRUGO, show_scan_age, store_scan_age);
+
+
+static ssize_t show_rf_kill(struct device *d, char *buf)
+{
+ /* 0 - RF kill not enabled
+ 1 - SW based RF kill active (sysfs)
+ 2 - HW based RF kill active
+	   3 - Both HW and SW based RF kill active */
+ struct ipw2100_priv *priv = (struct ipw2100_priv *)d->driver_data;
+ int val = ((priv->status & STATUS_RF_KILL_SW) ? 0x1 : 0x0) |
+ (rf_kill_active(priv) ? 0x2 : 0x0);
+ return sprintf(buf, "%i\n", val);
+}
+
+static int ipw_radio_kill_sw(struct ipw2100_priv *priv, int disable_radio)
+{
+ if ((disable_radio ? 1 : 0) ==
+ (priv->status & STATUS_RF_KILL_SW ? 1 : 0))
+		return 0;
+
+ IPW_DEBUG_RF_KILL("Manual SW RF Kill set to: RADIO %s\n",
+ disable_radio ? "OFF" : "ON");
+
+ if (disable_radio) {
+ priv->status |= STATUS_RF_KILL_SW;
+ ipw2100_down(priv);
+ } else {
+ priv->status &= ~STATUS_RF_KILL_SW;
+ if (rf_kill_active(priv)) {
+			IPW_DEBUG_RF_KILL("Cannot turn radio back on - "
+					  "disabled by HW switch\n");
+ /* Make sure the RF_KILL check timer is running */
+ cancel_delayed_work(&priv->rf_kill);
+ queue_delayed_work(priv->workqueue, &priv->rf_kill,
+ HZ);
+ } else
+ schedule_reset(priv);
+ }
+
+ return 1;
+}
+
+static ssize_t store_rf_kill(struct device *d, const char *buf, size_t count)
+{
+ struct ipw2100_priv *priv = dev_get_drvdata(d);
+ ipw_radio_kill_sw(priv, buf[0] == '1');
+ return count;
+}
+static DEVICE_ATTR(rf_kill, S_IWUSR|S_IRUGO, show_rf_kill, store_rf_kill);
+
+
+static struct attribute *ipw2100_sysfs_entries[] = {
+ &dev_attr_hardware.attr,
+ &dev_attr_registers.attr,
+ &dev_attr_ordinals.attr,
+ &dev_attr_pci.attr,
+ &dev_attr_stats.attr,
+ &dev_attr_internals.attr,
+ &dev_attr_bssinfo.attr,
+ &dev_attr_memory.attr,
+ &dev_attr_scan_age.attr,
+ &dev_attr_fatal_error.attr,
+ &dev_attr_rf_kill.attr,
+ &dev_attr_cfg.attr,
+ &dev_attr_status.attr,
+ &dev_attr_capability.attr,
+ NULL,
+};
+
+static struct attribute_group ipw2100_attribute_group = {
+ .attrs = ipw2100_sysfs_entries,
+};
+
+
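+/* The status queue is a DMA-consistent array of ipw2100_status entries,
+ * one per RX buffer descriptor, shared directly with the firmware. */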
+static int status_queue_allocate(struct ipw2100_priv *priv, int entries)
+{
+ struct ipw2100_status_queue *q = &priv->status_queue;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ q->size = entries * sizeof(struct ipw2100_status);
+ q->drv = (struct ipw2100_status *)pci_alloc_consistent(
+ priv->pci_dev, q->size, &q->nic);
+ if (!q->drv) {
+ IPW_DEBUG_WARNING(
+ "Can not allocate status queue.\n");
+ return -ENOMEM;
+ }
+
+ memset(q->drv, 0, q->size);
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+static void status_queue_free(struct ipw2100_priv *priv)
+{
+ IPW_DEBUG_INFO("enter\n");
+
+ if (priv->status_queue.drv) {
+ pci_free_consistent(
+ priv->pci_dev, priv->status_queue.size,
+ priv->status_queue.drv, priv->status_queue.nic);
+ priv->status_queue.drv = NULL;
+ }
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+static int bd_queue_allocate(struct ipw2100_priv *priv,
+ struct ipw2100_bd_queue *q, int entries)
+{
+ IPW_DEBUG_INFO("enter\n");
+
+ memset(q, 0, sizeof(struct ipw2100_bd_queue));
+
+ q->entries = entries;
+ q->size = entries * sizeof(struct ipw2100_bd);
+ q->drv = pci_alloc_consistent(priv->pci_dev, q->size, &q->nic);
+ if (!q->drv) {
+ IPW_DEBUG_INFO("can't allocate shared memory for buffer descriptors\n");
+ return -ENOMEM;
+ }
+ memset(q->drv, 0, q->size);
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+static void bd_queue_free(struct ipw2100_priv *priv,
+ struct ipw2100_bd_queue *q)
+{
+ IPW_DEBUG_INFO("enter\n");
+
+ if (!q)
+ return;
+
+ if (q->drv) {
+ pci_free_consistent(priv->pci_dev,
+ q->size, q->drv, q->nic);
+ q->drv = NULL;
+ }
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
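+/* Tell the firmware where a buffer descriptor queue lives: program its
+ * physical base address, entry count, and initial read (oldest) and
+ * write (next) indices into the given device registers. */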
+static void bd_queue_initialize(
+ struct ipw2100_priv *priv, struct ipw2100_bd_queue * q,
+ u32 base, u32 size, u32 r, u32 w)
+{
+ IPW_DEBUG_INFO("enter\n");
+
+ IPW_DEBUG_INFO("initializing bd queue at virt=%p, phys=%08x\n", q->drv, q->nic);
+
+ write_register(priv->net_dev, base, q->nic);
+ write_register(priv->net_dev, size, q->entries);
+ write_register(priv->net_dev, r, q->oldest);
+ write_register(priv->net_dev, w, q->next);
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+static void ipw2100_kill_workqueue(struct ipw2100_priv *priv)
+{
+ if (priv->workqueue) {
+ priv->stop_rf_kill = 1;
+ priv->stop_hang_check = 1;
+ cancel_delayed_work(&priv->reset_work);
+ cancel_delayed_work(&priv->security_work);
+ cancel_delayed_work(&priv->wx_event_work);
+ cancel_delayed_work(&priv->hang_check);
+ cancel_delayed_work(&priv->rf_kill);
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
+ }
+}
+
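+/* Allocate the TX machinery: the shared buffer descriptor queue plus a
+ * pool of packet descriptors, each with a DMA-consistent data header.
+ * As with the message pool, a partial failure unwinds all allocations. */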
+static int ipw2100_tx_allocate(struct ipw2100_priv *priv)
+{
+ int i, j, err = -EINVAL;
+ void *v;
+ dma_addr_t p;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ err = bd_queue_allocate(priv, &priv->tx_queue, TX_QUEUE_LENGTH);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: failed bd_queue_allocate\n",
+ priv->net_dev->name);
+ return err;
+ }
+
+ priv->tx_buffers = (struct ipw2100_tx_packet *)kmalloc(
+ TX_PENDED_QUEUE_LENGTH * sizeof(struct ipw2100_tx_packet),
+ GFP_ATOMIC);
+ if (!priv->tx_buffers) {
+		IPW_DEBUG_ERROR("%s: alloc failed for tx buffers.\n",
+ priv->net_dev->name);
+ bd_queue_free(priv, &priv->tx_queue);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < TX_PENDED_QUEUE_LENGTH; i++) {
+ v = pci_alloc_consistent(
+ priv->pci_dev, sizeof(struct ipw2100_data_header), &p);
+ if (!v) {
+ IPW_DEBUG_ERROR("%s: PCI alloc failed for tx "
+ "buffers.\n", priv->net_dev->name);
+ err = -ENOMEM;
+ break;
+ }
+
+ priv->tx_buffers[i].type = DATA;
+ priv->tx_buffers[i].info.d_struct.data = (struct ipw2100_data_header*)v;
+ priv->tx_buffers[i].info.d_struct.data_phys = p;
+ priv->tx_buffers[i].info.d_struct.txb = NULL;
+ }
+
+ if (i == TX_PENDED_QUEUE_LENGTH)
+ return 0;
+
+ for (j = 0; j < i; j++) {
+ pci_free_consistent(
+ priv->pci_dev,
+ sizeof(struct ipw2100_data_header),
+ priv->tx_buffers[j].info.d_struct.data,
+ priv->tx_buffers[j].info.d_struct.data_phys);
+ }
+
+ kfree(priv->tx_buffers);
+ priv->tx_buffers = NULL;
+
+ return err;
+}
+
+static void ipw2100_tx_initialize(struct ipw2100_priv *priv)
+{
+ int i;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ /*
+ * reinitialize packet info lists
+ */
+ INIT_LIST_HEAD(&priv->fw_pend_list);
+ INIT_STAT(&priv->fw_pend_stat);
+
+ /*
+ * reinitialize lists
+ */
+ INIT_LIST_HEAD(&priv->tx_pend_list);
+ INIT_LIST_HEAD(&priv->tx_free_list);
+ INIT_STAT(&priv->tx_pend_stat);
+ INIT_STAT(&priv->tx_free_stat);
+
+ for (i = 0; i < TX_PENDED_QUEUE_LENGTH; i++) {
+ /* We simply drop any SKBs that have been queued for
+ * transmit */
+ if (priv->tx_buffers[i].info.d_struct.txb) {
+ ieee80211_txb_free(priv->tx_buffers[i].info.d_struct.txb);
+ priv->tx_buffers[i].info.d_struct.txb = NULL;
+ }
+
+ list_add_tail(&priv->tx_buffers[i].list, &priv->tx_free_list);
+ }
+
+ SET_STAT(&priv->tx_free_stat, i);
+
+ priv->tx_queue.oldest = 0;
+ priv->tx_queue.available = priv->tx_queue.entries;
+ priv->tx_queue.next = 0;
+ INIT_STAT(&priv->txq_stat);
+ SET_STAT(&priv->txq_stat, priv->tx_queue.available);
+
+ bd_queue_initialize(priv, &priv->tx_queue,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_BD_BASE,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_BD_SIZE,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_READ_INDEX,
+ IPW_MEM_HOST_SHARED_TX_QUEUE_WRITE_INDEX);
+
+ IPW_DEBUG_INFO("exit\n");
+
+}
+
+static void ipw2100_tx_free(struct ipw2100_priv *priv)
+{
+ int i;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ bd_queue_free(priv, &priv->tx_queue);
+
+ if (!priv->tx_buffers)
+ return;
+
+ for (i = 0; i < TX_PENDED_QUEUE_LENGTH; i++) {
+ if (priv->tx_buffers[i].info.d_struct.txb) {
+ ieee80211_txb_free(priv->tx_buffers[i].info.d_struct.txb);
+ priv->tx_buffers[i].info.d_struct.txb = NULL;
+ }
+ if (priv->tx_buffers[i].info.d_struct.data)
+ pci_free_consistent(
+ priv->pci_dev,
+ sizeof(struct ipw2100_data_header),
+ priv->tx_buffers[i].info.d_struct.data,
+ priv->tx_buffers[i].info.d_struct.data_phys);
+ }
+
+ kfree(priv->tx_buffers);
+ priv->tx_buffers = NULL;
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+
+
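+/* Allocate the RX machinery: the buffer descriptor queue, the matching
+ * status queue, and one pre-mapped SKB per queue entry so the firmware
+ * always has somewhere to DMA incoming frames. */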
+static int ipw2100_rx_allocate(struct ipw2100_priv *priv)
+{
+ int i, j, err = -EINVAL;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ err = bd_queue_allocate(priv, &priv->rx_queue, RX_QUEUE_LENGTH);
+ if (err) {
+ IPW_DEBUG_INFO("failed bd_queue_allocate\n");
+ return err;
+ }
+
+ err = status_queue_allocate(priv, RX_QUEUE_LENGTH);
+ if (err) {
+ IPW_DEBUG_INFO("failed status_queue_allocate\n");
+ bd_queue_free(priv, &priv->rx_queue);
+ return err;
+ }
+
+ /*
+ * allocate packets
+ */
+ priv->rx_buffers = (struct ipw2100_rx_packet *)
+ kmalloc(RX_QUEUE_LENGTH * sizeof(struct ipw2100_rx_packet),
+ GFP_KERNEL);
+ if (!priv->rx_buffers) {
+ IPW_DEBUG_INFO("can't allocate rx packet buffer table\n");
+
+ bd_queue_free(priv, &priv->rx_queue);
+
+ status_queue_free(priv);
+
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < RX_QUEUE_LENGTH; i++) {
+ struct ipw2100_rx_packet *packet = &priv->rx_buffers[i];
+
+ err = ipw2100_alloc_skb(priv, packet);
+ if (unlikely(err)) {
+ err = -ENOMEM;
+ break;
+ }
+
+ /* The BD holds the cache aligned address */
+ priv->rx_queue.drv[i].host_addr = packet->dma_addr;
+ priv->rx_queue.drv[i].buf_length = IPW_RX_NIC_BUFFER_LENGTH;
+ priv->status_queue.drv[i].status_fields = 0;
+ }
+
+ if (i == RX_QUEUE_LENGTH)
+ return 0;
+
+ for (j = 0; j < i; j++) {
+ pci_unmap_single(priv->pci_dev, priv->rx_buffers[j].dma_addr,
+				 sizeof(struct ipw2100_rx),
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(priv->rx_buffers[j].skb);
+ }
+
+ kfree(priv->rx_buffers);
+ priv->rx_buffers = NULL;
+
+ bd_queue_free(priv, &priv->rx_queue);
+
+ status_queue_free(priv);
+
+ return err;
+}
+
+static void ipw2100_rx_initialize(struct ipw2100_priv *priv)
+{
+ IPW_DEBUG_INFO("enter\n");
+
+ priv->rx_queue.oldest = 0;
+ priv->rx_queue.available = priv->rx_queue.entries - 1;
+ priv->rx_queue.next = priv->rx_queue.entries - 1;
+
+ INIT_STAT(&priv->rxq_stat);
+ SET_STAT(&priv->rxq_stat, priv->rx_queue.available);
+
+ bd_queue_initialize(priv, &priv->rx_queue,
+ IPW_MEM_HOST_SHARED_RX_BD_BASE,
+ IPW_MEM_HOST_SHARED_RX_BD_SIZE,
+ IPW_MEM_HOST_SHARED_RX_READ_INDEX,
+ IPW_MEM_HOST_SHARED_RX_WRITE_INDEX);
+
+ /* set up the status queue */
+ write_register(priv->net_dev, IPW_MEM_HOST_SHARED_RX_STATUS_BASE,
+ priv->status_queue.nic);
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+static void ipw2100_rx_free(struct ipw2100_priv *priv)
+{
+ int i;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ bd_queue_free(priv, &priv->rx_queue);
+ status_queue_free(priv);
+
+ if (!priv->rx_buffers)
+ return;
+
+ for (i = 0; i < RX_QUEUE_LENGTH; i++) {
+ if (priv->rx_buffers[i].rxp) {
+ pci_unmap_single(priv->pci_dev,
+ priv->rx_buffers[i].dma_addr,
+ sizeof(struct ipw2100_rx),
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(priv->rx_buffers[i].skb);
+ }
+ }
+
+ kfree(priv->rx_buffers);
+ priv->rx_buffers = NULL;
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+static int ipw2100_read_mac_address(struct ipw2100_priv *priv)
+{
+ u32 length = ETH_ALEN;
+ u8 mac[ETH_ALEN];
+
+ int err;
+
+ err = ipw2100_get_ordinal(priv, IPW_ORD_STAT_ADAPTER_MAC,
+ mac, &length);
+ if (err) {
+ IPW_DEBUG_INFO("MAC address read failed\n");
+ return -EIO;
+ }
+ IPW_DEBUG_INFO("card MAC is %02X:%02X:%02X:%02X:%02X:%02X\n",
+ mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
+
+ memcpy(priv->net_dev->dev_addr, mac, ETH_ALEN);
+
+ return 0;
+}
+
+/********************************************************************
+ *
+ * Firmware Commands
+ *
+ ********************************************************************/
+
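+/* Most of the helpers below take a batch_mode flag: when it is zero the
+ * helper disables the adapter around the host command and re-enables it
+ * afterwards; when non-zero the caller has already done so, allowing
+ * several commands to be batched inside one disable/enable cycle. */
+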
+int ipw2100_set_mac_address(struct ipw2100_priv *priv, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = ADAPTER_ADDRESS,
+ .host_command_sequence = 0,
+ .host_command_length = ETH_ALEN
+ };
+ int err;
+
+ IPW_DEBUG_HC("SET_MAC_ADDRESS\n");
+
+ IPW_DEBUG_INFO("enter\n");
+
+ if (priv->config & CFG_CUSTOM_MAC) {
+ memcpy(cmd.host_command_parameters, priv->mac_addr,
+ ETH_ALEN);
+ memcpy(priv->net_dev->dev_addr, priv->mac_addr, ETH_ALEN);
+ } else
+ memcpy(cmd.host_command_parameters, priv->net_dev->dev_addr,
+ ETH_ALEN);
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ IPW_DEBUG_INFO("exit\n");
+ return err;
+}
+
+int ipw2100_set_port_type(struct ipw2100_priv *priv, u32 port_type,
+ int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = PORT_TYPE,
+ .host_command_sequence = 0,
+ .host_command_length = sizeof(u32)
+ };
+ int err;
+
+ switch (port_type) {
+ case IW_MODE_INFRA:
+ cmd.host_command_parameters[0] = IPW_BSS;
+ break;
+ case IW_MODE_ADHOC:
+ cmd.host_command_parameters[0] = IPW_IBSS;
+ break;
+ }
+
+ IPW_DEBUG_HC("PORT_TYPE: %s\n",
+ port_type == IPW_IBSS ? "Ad-Hoc" : "Managed");
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Could not disable adapter %d\n",
+ priv->net_dev->name, err);
+ return err;
+ }
+ }
+
+ /* send cmd to firmware */
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+
+
+int ipw2100_set_channel(struct ipw2100_priv *priv, u32 channel, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = CHANNEL,
+ .host_command_sequence = 0,
+ .host_command_length = sizeof(u32)
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = channel;
+
+ IPW_DEBUG_HC("CHANNEL: %d\n", channel);
+
+ /* If BSS then we don't support channel selection */
+ if (priv->ieee->iw_mode == IW_MODE_INFRA)
+ return 0;
+
+ if ((channel != 0) &&
+ ((channel < REG_MIN_CHANNEL) || (channel > REG_MAX_CHANNEL)))
+ return -EINVAL;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err) {
+		IPW_DEBUG_INFO("Failed to set channel to %d\n",
+ channel);
+ return err;
+ }
+
+ if (channel)
+ priv->config |= CFG_STATIC_CHANNEL;
+ else
+ priv->config &= ~CFG_STATIC_CHANNEL;
+
+ priv->channel = channel;
+
+ if (!batch_mode) {
+ err = ipw2100_enable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+int ipw2100_system_config(struct ipw2100_priv *priv, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = SYSTEM_CONFIG,
+ .host_command_sequence = 0,
+ .host_command_length = 12,
+ };
+ u32 ibss_mask, len = sizeof(u32);
+ int err;
+
+ /* Set system configuration */
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC)
+ cmd.host_command_parameters[0] |= IPW_CFG_IBSS_AUTO_START;
+
+ cmd.host_command_parameters[0] |= IPW_CFG_IBSS_MASK |
+ IPW_CFG_BSS_MASK |
+ IPW_CFG_802_1x_ENABLE;
+
+ if (!(priv->config & CFG_LONG_PREAMBLE))
+ cmd.host_command_parameters[0] |= IPW_CFG_PREAMBLE_AUTO;
+
+ err = ipw2100_get_ordinal(priv,
+ IPW_ORD_EEPROM_IBSS_11B_CHANNELS,
+ &ibss_mask, &len);
+ if (err)
+ ibss_mask = IPW_IBSS_11B_DEFAULT_MASK;
+
+ cmd.host_command_parameters[1] = REG_CHANNEL_MASK;
+ cmd.host_command_parameters[2] = REG_CHANNEL_MASK & ibss_mask;
+
+ /* 11b only */
+ /*cmd.host_command_parameters[0] |= DIVERSITY_ANTENNA_A;*/
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+/* If IPv6 is configured in the kernel then we don't want to filter out all
+ * of the multicast packets as IPv6 needs some. */
+#if !defined(CONFIG_IPV6) && !defined(CONFIG_IPV6_MODULE)
+ cmd.host_command = ADD_MULTICAST;
+ cmd.host_command_sequence = 0;
+ cmd.host_command_length = 0;
+
+ ipw2100_hw_send_command(priv, &cmd);
+#endif
+ if (!batch_mode) {
+ err = ipw2100_enable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+int ipw2100_set_tx_rates(struct ipw2100_priv *priv, u32 rate, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = BASIC_TX_RATES,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = rate & TX_RATE_MASK;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ /* Set BASIC TX Rate first */
+ ipw2100_hw_send_command(priv, &cmd);
+
+ /* Set TX Rate */
+ cmd.host_command = TX_RATES;
+ ipw2100_hw_send_command(priv, &cmd);
+
+ /* Set MSDU TX Rate */
+ cmd.host_command = MSDU_TX_RATES;
+ ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode) {
+ err = ipw2100_enable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ priv->tx_rates = rate;
+
+ return 0;
+}
+
+int ipw2100_set_power_mode(struct ipw2100_priv *priv,
+ int power_level)
+{
+ struct host_command cmd = {
+ .host_command = POWER_MODE,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = power_level;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+ if (power_level == IPW_POWER_MODE_CAM)
+ priv->power_mode = IPW_POWER_LEVEL(priv->power_mode);
+ else
+ priv->power_mode = IPW_POWER_ENABLED | power_level;
+
+#ifdef CONFIG_IPW2100_TX_POWER
+ if (priv->port_type == IBSS &&
+ priv->adhoc_power != DFTL_IBSS_TX_POWER) {
+		/* Set the tx power index for the ad-hoc network */
+ cmd.host_command = TX_POWER_INDEX;
+ cmd.host_command_parameters[0] = (u32)priv->adhoc_power;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+ }
+#endif
+
+ return 0;
+}
+
+
+int ipw2100_set_rts_threshold(struct ipw2100_priv *priv, u32 threshold)
+{
+ struct host_command cmd = {
+ .host_command = RTS_THRESHOLD,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ if (threshold & RTS_DISABLED)
+ cmd.host_command_parameters[0] = MAX_RTS_THRESHOLD;
+ else
+ cmd.host_command_parameters[0] = threshold & ~RTS_DISABLED;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+ priv->rts_threshold = threshold;
+
+ return 0;
+}
+
+#if 0
+int ipw2100_set_fragmentation_threshold(struct ipw2100_priv *priv,
+ u32 threshold, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = FRAG_THRESHOLD,
+ .host_command_sequence = 0,
+ .host_command_length = 4,
+ .host_command_parameters[0] = 0,
+ };
+ int err;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ if (threshold == 0)
+ threshold = DEFAULT_FRAG_THRESHOLD;
+ else {
+ threshold = max(threshold, MIN_FRAG_THRESHOLD);
+ threshold = min(threshold, MAX_FRAG_THRESHOLD);
+ }
+
+ cmd.host_command_parameters[0] = threshold;
+
+ IPW_DEBUG_HC("FRAG_THRESHOLD: %u\n", threshold);
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ if (!err)
+ priv->frag_threshold = threshold;
+
+ return err;
+}
+#endif
+
+int ipw2100_set_short_retry(struct ipw2100_priv *priv, u32 retry)
+{
+ struct host_command cmd = {
+ .host_command = SHORT_RETRY_LIMIT,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = retry;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+ priv->short_retry_limit = retry;
+
+ return 0;
+}
+
+int ipw2100_set_long_retry(struct ipw2100_priv *priv, u32 retry)
+{
+ struct host_command cmd = {
+ .host_command = LONG_RETRY_LIMIT,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = retry;
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (err)
+ return err;
+
+ priv->long_retry_limit = retry;
+
+ return 0;
+}
+
+
+int ipw2100_set_mandatory_bssid(struct ipw2100_priv *priv, u8 *bssid,
+ int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = MANDATORY_BSSID,
+ .host_command_sequence = 0,
+ .host_command_length = (bssid == NULL) ? 0 : ETH_ALEN
+ };
+ int err;
+
+#ifdef CONFIG_IPW_DEBUG
+ if (bssid != NULL)
+ IPW_DEBUG_HC(
+ "MANDATORY_BSSID: %02X:%02X:%02X:%02X:%02X:%02X\n",
+ bssid[0], bssid[1], bssid[2], bssid[3], bssid[4],
+ bssid[5]);
+ else
+ IPW_DEBUG_HC("MANDATORY_BSSID: <clear>\n");
+#endif
+ /* if BSSID is empty then we disable mandatory bssid mode */
+ if (bssid != NULL)
+ memcpy((u8 *)cmd.host_command_parameters, bssid, ETH_ALEN);
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+
+#ifdef CONFIG_IEEE80211_WPA
+static int ipw2100_disassociate_bssid(struct ipw2100_priv *priv)
+{
+ struct host_command cmd = {
+ .host_command = DISASSOCIATION_BSSID,
+ .host_command_sequence = 0,
+ .host_command_length = ETH_ALEN
+ };
+ int err;
+ int len;
+
+ IPW_DEBUG_HC("DISASSOCIATION_BSSID\n");
+
+ len = ETH_ALEN;
+	/* The firmware currently ignores the BSSID and just disassociates
+	 * from the currently associated AP -- but on the off chance that a
+	 * future firmware does use the BSSID provided here, we go ahead and
+	 * try to set it to the currently associated AP's BSSID. */
+ memcpy(cmd.host_command_parameters, priv->bssid, ETH_ALEN);
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ return err;
+}
+#endif
+
+/*
+ * Pseudo code for setting up wpa_frame:
+ */
+#if 0
+void x(struct ieee80211_assoc_frame *wpa_assoc)
+{
+	struct ipw2100_wpa_assoc_frame frame;
+	frame.fixed_ie_mask = IPW_WPA_CAPABILTIES |
+		IPW_WPA_LISTENINTERVAL |
+		IPW_WPA_AP_ADDRESS;
+	frame.capab_info = wpa_assoc->capab_info;
+	frame.listen_interval = wpa_assoc->listen_interval;
+	memcpy(frame.current_ap, wpa_assoc->current_ap, ETH_ALEN);
+
+	/* UNKNOWN -- I'm not positive about this part; don't have any WPA
+	 * setup here to test it with.
+	 *
+	 * Walk the IEs in the wpa_assoc and figure out the total size of all
+	 * that data. Stick that into frame.var_ie_len. Then memcpy() all of
+	 * the IEs from wpa_frame into frame.
+	 */
+	frame.var_ie_len = calculate_ie_len(wpa_assoc);
+	memcpy(frame.var_ie, wpa_assoc->variable, frame.var_ie_len);
+
+	ipw2100_set_wpa_ie(priv, &frame, 0);
+}
+#endif
+
+
+
+
+static int ipw2100_set_wpa_ie(struct ipw2100_priv *,
+ struct ipw2100_wpa_assoc_frame *, int)
+__attribute__ ((unused));
+
+static int ipw2100_set_wpa_ie(struct ipw2100_priv *priv,
+ struct ipw2100_wpa_assoc_frame *wpa_frame,
+ int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = SET_WPA_IE,
+ .host_command_sequence = 0,
+ .host_command_length = sizeof(struct ipw2100_wpa_assoc_frame),
+ };
+ int err;
+
+ IPW_DEBUG_HC("SET_WPA_IE\n");
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ memcpy(cmd.host_command_parameters, wpa_frame,
+ sizeof(struct ipw2100_wpa_assoc_frame));
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode) {
+ if (ipw2100_enable_adapter(priv))
+ err = -EIO;
+ }
+
+ return err;
+}
+
+struct security_info_params {
+ u32 allowed_ciphers;
+ u16 version;
+ u8 auth_mode;
+ u8 replay_counters_number;
+ u8 unicast_using_group;
+} __attribute__ ((packed));
+
+int ipw2100_set_security_information(struct ipw2100_priv *priv,
+ int auth_mode,
+ int security_level,
+ int unicast_using_group,
+ int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = SET_SECURITY_INFORMATION,
+ .host_command_sequence = 0,
+ .host_command_length = sizeof(struct security_info_params)
+ };
+ struct security_info_params *security =
+ (struct security_info_params *)&cmd.host_command_parameters;
+ int err;
+ memset(security, 0, sizeof(*security));
+
+ /* If shared key AP authentication is turned on, then we need to
+ * configure the firmware to try and use it.
+ *
+ * Actual data encryption/decryption is handled by the host. */
+ security->auth_mode = auth_mode;
+ security->unicast_using_group = unicast_using_group;
+
+ switch (security_level) {
+ case SEC_LEVEL_0:
+ security->allowed_ciphers = IPW_NONE_CIPHER;
+ break;
+ case SEC_LEVEL_1:
+ security->allowed_ciphers = IPW_WEP40_CIPHER |
+ IPW_WEP104_CIPHER;
+ break;
+ case SEC_LEVEL_2:
+ security->allowed_ciphers = IPW_WEP40_CIPHER |
+ IPW_WEP104_CIPHER | IPW_TKIP_CIPHER;
+ break;
+ case SEC_LEVEL_2_CKIP:
+ security->allowed_ciphers = IPW_WEP40_CIPHER |
+ IPW_WEP104_CIPHER | IPW_CKIP_CIPHER;
+ break;
+ case SEC_LEVEL_3:
+ security->allowed_ciphers = IPW_WEP40_CIPHER |
+ IPW_WEP104_CIPHER | IPW_TKIP_CIPHER | IPW_CCMP_CIPHER;
+ break;
+ }
+
+ IPW_DEBUG_HC(
+ "SET_SECURITY_INFORMATION: auth:%d cipher:0x%02X\n",
+ security->auth_mode, security->allowed_ciphers);
+
+ security->replay_counters_number = 0;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+
+int ipw2100_set_tx_power(struct ipw2100_priv *priv,
+ u32 tx_power)
+{
+ struct host_command cmd = {
+ .host_command = TX_POWER_INDEX,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err = 0;
+
+ cmd.host_command_parameters[0] = tx_power;
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC)
+ err = ipw2100_hw_send_command(priv, &cmd);
+ if (!err)
+ priv->tx_power = tx_power;
+
+	return err;
+}
+
+int ipw2100_set_ibss_beacon_interval(struct ipw2100_priv *priv,
+ u32 interval, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = BEACON_INTERVAL,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = interval;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode) {
+ err = ipw2100_enable_adapter(priv);
+ if (err)
+ return err;
+ }
+ }
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+
+void ipw2100_queues_initialize(struct ipw2100_priv *priv)
+{
+ ipw2100_tx_initialize(priv);
+ ipw2100_rx_initialize(priv);
+ ipw2100_msg_initialize(priv);
+}
+
+void ipw2100_queues_free(struct ipw2100_priv *priv)
+{
+ ipw2100_tx_free(priv);
+ ipw2100_rx_free(priv);
+ ipw2100_msg_free(priv);
+}
+
+int ipw2100_queues_allocate(struct ipw2100_priv *priv)
+{
+ if (ipw2100_tx_allocate(priv) ||
+ ipw2100_rx_allocate(priv) ||
+ ipw2100_msg_allocate(priv))
+ goto fail;
+
+ return 0;
+
+ fail:
+ ipw2100_tx_free(priv);
+ ipw2100_rx_free(priv);
+ ipw2100_msg_free(priv);
+ return -ENOMEM;
+}
+
+#define IPW_PRIVACY_CAPABLE 0x0008
+
+static int ipw2100_set_wep_flags(struct ipw2100_priv *priv, u32 flags,
+ int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = WEP_FLAGS,
+ .host_command_sequence = 0,
+ .host_command_length = 4
+ };
+ int err;
+
+ cmd.host_command_parameters[0] = flags;
+
+ IPW_DEBUG_HC("WEP_FLAGS: flags = 0x%08X\n", flags);
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Could not disable adapter %d\n",
+ priv->net_dev->name, err);
+ return err;
+ }
+ }
+
+ /* send cmd to firmware */
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+
+struct ipw2100_wep_key {
+ u8 idx;
+ u8 len;
+ u8 key[13];
+};
+
+/* Macros to ease printing WEP keys */
+#define WEP_FMT_64 "%02X%02X%02X%02X-%02X"
+#define WEP_FMT_128 "%02X%02X%02X%02X-%02X%02X%02X%02X-%02X%02X%02X"
+#define WEP_STR_64(x) x[0],x[1],x[2],x[3],x[4]
+#define WEP_STR_128(x) x[0],x[1],x[2],x[3],x[4],x[5],x[6],x[7],x[8],x[9],x[10]
+
+
+/**
+ * Set the WEP key
+ *
+ * @priv: struct to work on
+ * @idx: index of the key we want to set
+ * @key: ptr to the key data to set
+ * @len: length of the buffer at @key
+ * @batch_mode: when set, send the command without disabling and
+ *	re-enabling the adapter around it (the caller batches that).
+ *
+ * @returns 0 if OK, < 0 errno code on error.
+ *
+ * Fill out a command structure with the new WEP key, its length and
+ * index, and send it down the wire. A non-empty key is padded out to
+ * the 5 byte (WEP40) or 13 byte (WEP104) size the firmware expects.
+ */
+static int ipw2100_set_key(struct ipw2100_priv *priv,
+ int idx, char *key, int len, int batch_mode)
+{
+ int keylen = len ? (len <= 5 ? 5 : 13) : 0;
+ struct host_command cmd = {
+ .host_command = WEP_KEY_INFO,
+ .host_command_sequence = 0,
+ .host_command_length = sizeof(struct ipw2100_wep_key),
+ };
+ struct ipw2100_wep_key *wep_key = (void*)cmd.host_command_parameters;
+ int err;
+
+ IPW_DEBUG_HC("WEP_KEY_INFO: index = %d, len = %d/%d\n",
+ idx, keylen, len);
+
+ /* NOTE: We don't check cached values in case the firmware was reset
+	 * or some other problem is occurring. If the user is setting the key,
+ * then we push the change */
+
+ wep_key->idx = idx;
+ wep_key->len = keylen;
+
+ if (keylen) {
+ memcpy(wep_key->key, key, len);
+ memset(wep_key->key + len, 0, keylen - len);
+ }
+
+ /* Will be optimized out on debug not being configured in */
+ if (keylen == 0)
+ IPW_DEBUG_WEP("%s: Clearing key %d\n",
+ priv->net_dev->name, wep_key->idx);
+ else if (keylen == 5)
+ IPW_DEBUG_WEP("%s: idx: %d, len: %d key: " WEP_FMT_64 "\n",
+ priv->net_dev->name, wep_key->idx, wep_key->len,
+ WEP_STR_64(wep_key->key));
+ else
+ IPW_DEBUG_WEP("%s: idx: %d, len: %d key: " WEP_FMT_128
+ "\n",
+ priv->net_dev->name, wep_key->idx, wep_key->len,
+ WEP_STR_128(wep_key->key));
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+		/* FIXME: IPG: shouldn't this printk be in _disable_adapter()? */
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Could not disable adapter %d\n",
+ priv->net_dev->name, err);
+ return err;
+ }
+ }
+
+ /* send cmd to firmware */
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode) {
+ int err2 = ipw2100_enable_adapter(priv);
+ if (err == 0)
+ err = err2;
+ }
+ return err;
+}
+
+#if 0
+static int ipw2100_set_key_index(struct ipw2100_priv *priv,
+ int idx, int batch_mode)
+{
+ struct host_command cmd = {
+ .host_command = WEP_KEY_INDEX,
+ .host_command_sequence = 0,
+ .host_command_length = 4,
+ .host_command_parameters[0] = idx,
+ };
+ int err;
+
+ IPW_DEBUG_HC("WEP_KEY_INDEX: index = %d\n", idx);
+
+ if (idx < 0 || idx > 3)
+ return -EINVAL;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Could not disable adapter %d\n",
+ priv->net_dev->name, err);
+ return err;
+ }
+ }
+
+ /* send cmd to firmware */
+ err = ipw2100_hw_send_command(priv, &cmd);
+
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+#endif
+
+
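+/* Push the cached security state (authentication mode, security level,
+ * WEP keys and the privacy flag) to the firmware in one batch. Called
+ * from the security work handler and whenever the host's security
+ * settings change. */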
+static int ipw2100_configure_security(struct ipw2100_priv *priv,
+ int batch_mode)
+{
+ int i, err, auth_mode, sec_level, use_group;
+
+ if (!(priv->status & STATUS_RUNNING))
+ return 0;
+
+ if (!batch_mode) {
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
+ }
+
+ if (!priv->sec.enabled) {
+ err = ipw2100_set_security_information(
+ priv, IPW_AUTH_OPEN, SEC_LEVEL_0, 0, 1);
+ } else {
+ auth_mode = IPW_AUTH_OPEN;
+ if ((priv->sec.flags & SEC_AUTH_MODE) &&
+ (priv->sec.auth_mode == WLAN_AUTH_SHARED_KEY))
+ auth_mode = IPW_AUTH_SHARED;
+
+ sec_level = SEC_LEVEL_0;
+ if (priv->sec.flags & SEC_LEVEL)
+ sec_level = priv->sec.level;
+
+ use_group = 0;
+ if (priv->sec.flags & SEC_UNICAST_GROUP)
+ use_group = priv->sec.unicast_uses_group;
+
+ err = ipw2100_set_security_information(
+ priv, auth_mode, sec_level, use_group, 1);
+ }
+
+ if (err)
+ goto exit;
+
+ if (priv->sec.enabled) {
+ for (i = 0; i < 4; i++) {
+ if (!(priv->sec.flags & (1 << i))) {
+ memset(priv->sec.keys[i], 0, WEP_KEY_LEN);
+ priv->sec.key_sizes[i] = 0;
+ } else {
+ err = ipw2100_set_key(priv, i,
+ priv->sec.keys[i],
+ priv->sec.key_sizes[i],
+ 1);
+ if (err)
+ goto exit;
+ }
+ }
+ }
+
+ /* Always enable privacy so the Host can filter WEP packets if
+ * encrypted data is sent up */
+ err = ipw2100_set_wep_flags(
+ priv, priv->sec.enabled ? IPW_PRIVACY_CAPABLE : 0, 1);
+ if (err)
+ goto exit;
+
+ priv->status &= ~STATUS_SECURITY_UPDATED;
+
+ exit:
+ if (!batch_mode)
+ ipw2100_enable_adapter(priv);
+
+ return err;
+}
+
+static void ipw2100_security_work(struct ipw2100_priv *priv)
+{
+ /* If we happen to have reconnected before we get a chance to
+ * process this, then update the security settings--which causes
+ * a disassociation to occur */
+ if (!(priv->status & STATUS_ASSOCIATED) &&
+ priv->status & STATUS_SECURITY_UPDATED)
+ ipw2100_configure_security(priv, 0);
+}
+
+static void shim__set_security(struct ieee80211_device *ieee,
+ struct ieee80211_security *sec)
+{
+ struct ipw2100_priv *priv = ieee->priv;
+ int i, force_update = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED))
+ return;
+
+ for (i = 0; i < 4; i++) {
+ if (sec->flags & (1 << i)) {
+ priv->sec.key_sizes[i] = sec->key_sizes[i];
+ if (sec->key_sizes[i] == 0)
+ priv->sec.flags &= ~(1 << i);
+ else
+ memcpy(priv->sec.keys[i], sec->keys[i],
+ sec->key_sizes[i]);
+ priv->sec.flags |= (1 << i);
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+ }
+
+ if ((sec->flags & SEC_ACTIVE_KEY) &&
+ priv->sec.active_key != sec->active_key) {
+ if (sec->active_key <= 3) {
+ priv->sec.active_key = sec->active_key;
+ priv->sec.flags |= SEC_ACTIVE_KEY;
+ } else
+ priv->sec.flags &= ~SEC_ACTIVE_KEY;
+
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ if ((sec->flags & SEC_AUTH_MODE) &&
+ (priv->sec.auth_mode != sec->auth_mode)) {
+ priv->sec.auth_mode = sec->auth_mode;
+ priv->sec.flags |= SEC_AUTH_MODE;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ if (sec->flags & SEC_ENABLED &&
+ priv->sec.enabled != sec->enabled) {
+ priv->sec.flags |= SEC_ENABLED;
+ priv->sec.enabled = sec->enabled;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ force_update = 1;
+ }
+
+ if (sec->flags & SEC_LEVEL &&
+ priv->sec.level != sec->level) {
+ priv->sec.level = sec->level;
+ priv->sec.flags |= SEC_LEVEL;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ IPW_DEBUG_WEP("Security flags: %c %c%c%c%c %c%c%c%c\n",
+ priv->sec.flags & BIT(8) ? '1' : '0',
+ priv->sec.flags & BIT(7) ? '1' : '0',
+ priv->sec.flags & BIT(6) ? '1' : '0',
+ priv->sec.flags & BIT(5) ? '1' : '0',
+ priv->sec.flags & BIT(4) ? '1' : '0',
+ priv->sec.flags & BIT(3) ? '1' : '0',
+ priv->sec.flags & BIT(2) ? '1' : '0',
+ priv->sec.flags & BIT(1) ? '1' : '0',
+ priv->sec.flags & BIT(0) ? '1' : '0');
+
+/* As a temporary workaround to enable WPA until we figure out why
+ * wpa_supplicant toggles the security capability of the driver, which
+ * forces a disassociation with force_update...
+ *
+ * if (force_update || !(priv->status & STATUS_ASSOCIATED))*/
+ if (!(priv->status & STATUS_ASSOCIATED))
+ ipw2100_configure_security(priv, 0);
+}
+
+static struct ieee80211_helper_functions ipw2100_ieee_callbacks = {
+ .set_security = shim__set_security,
+};
+
+static int ipw2100_adapter_setup(struct ipw2100_priv *priv)
+{
+ int err;
+ int batch_mode = 1;
+ u8 *bssid;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ err = ipw2100_disable_adapter(priv);
+ if (err)
+ return err;
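+ /* Every ipw2100_set_*() helper below is called with batch_mode == 1,
+ * so each one skips its own disable/enable cycle; the adapter stays
+ * disabled for the whole sequence and the new configuration takes
+ * effect when the caller re-enables it. */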
+#ifdef CONFIG_IPW2100_PROMISC
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR) {
+ err = ipw2100_set_channel(priv, priv->channel, batch_mode);
+ if (err)
+ return err;
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+ }
+#endif /* CONFIG_IPW2100_PROMISC */
+
+ err = ipw2100_read_mac_address(priv);
+ if (err)
+ return -EIO;
+
+ err = ipw2100_set_mac_address(priv, batch_mode);
+ if (err)
+ return err;
+
+ err = ipw2100_set_port_type(priv, priv->ieee->iw_mode, batch_mode);
+ if (err)
+ return err;
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ err = ipw2100_set_channel(priv, priv->channel, batch_mode);
+ if (err)
+ return err;
+ }
+
+ err = ipw2100_system_config(priv, batch_mode);
+ if (err)
+ return err;
+
+ err = ipw2100_set_tx_rates(priv, priv->tx_rates, batch_mode);
+ if (err)
+ return err;
+
+ /* Default to power mode OFF */
+ err = ipw2100_set_power_mode(priv, IPW_POWER_MODE_CAM);
+ if (err)
+ return err;
+
+ err = ipw2100_set_rts_threshold(priv, priv->rts_threshold);
+ if (err)
+ return err;
+
+ if (priv->config & CFG_STATIC_BSSID)
+ bssid = priv->bssid;
+ else
+ bssid = NULL;
+ err = ipw2100_set_mandatory_bssid(priv, bssid, batch_mode);
+ if (err)
+ return err;
+
+ if (priv->config & CFG_STATIC_ESSID)
+ err = ipw2100_set_essid(priv, priv->essid, priv->essid_len,
+ batch_mode);
+ else
+ err = ipw2100_set_essid(priv, NULL, 0, batch_mode);
+ if (err)
+ return err;
+
+ err = ipw2100_configure_security(priv, batch_mode);
+ if (err)
+ return err;
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ err = ipw2100_set_ibss_beacon_interval(
+ priv, priv->beacon_interval, batch_mode);
+ if (err)
+ return err;
+
+ err = ipw2100_set_tx_power(priv, priv->tx_power);
+ if (err)
+ return err;
+ }
+
+ /*
+ err = ipw2100_set_fragmentation_threshold(
+ priv, priv->frag_threshold, batch_mode);
+ if (err)
+ return err;
+ */
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+
+/*************************************************************************
+ *
+ * EXTERNALLY CALLED METHODS
+ *
+ *************************************************************************/
+
+/* This method is called by the network layer -- not to be confused with
+ * ipw2100_set_mac_address() above, which this driver (including this
+ * method) calls to push the address to the firmware */
+static int ipw2100_set_address(struct net_device *dev, void *p)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct sockaddr *addr = p;
+ int err = 0;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ priv->config |= CFG_CUSTOM_MAC;
+ memcpy(priv->mac_addr, addr->sa_data, ETH_ALEN);
+
+ err = ipw2100_set_mac_address(priv, 0);
+ if (err)
+ goto done;
+
+ priv->reset_backoff = 0;
+ ipw2100_reset_adapter(priv);
+ done:
+ return err;
+}
+
+static int ipw2100_open(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ unsigned long flags;
+ IPW_DEBUG_INFO("dev->open\n");
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+ if (priv->status & STATUS_ASSOCIATED)
+ netif_start_queue(dev);
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ return 0;
+}
+
+static int ipw2100_close(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ unsigned long flags;
+ struct list_head *element;
+ struct ipw2100_tx_packet *packet;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+
+ if (priv->status & STATUS_ASSOCIATED)
+ netif_carrier_off(dev);
+ netif_stop_queue(dev);
+
+ /* Flush the TX queue ... */
+ while (!list_empty(&priv->tx_pend_list)) {
+ element = priv->tx_pend_list.next;
+ packet = list_entry(element, struct ipw2100_tx_packet, list);
+
+ list_del(element);
+ DEC_STAT(&priv->tx_pend_stat);
+
+ ieee80211_txb_free(packet->info.d_struct.txb);
+ packet->info.d_struct.txb = NULL;
+
+ list_add_tail(element, &priv->tx_free_list);
+ INC_STAT(&priv->tx_free_stat);
+ }
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+
+ IPW_DEBUG_INFO("exit\n");
+
+ return 0;
+}
+
+
+
+/*
+ * TODO: Fix this function... it's just wrong
+ */
+static void ipw2100_tx_timeout(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ priv->ieee->stats.tx_errors++;
+
+#ifdef CONFIG_IPW2100_PROMISC
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR)
+ return;
+#endif
+
+ IPW_DEBUG_INFO("%s: TX timed out. Scheduling firmware restart.\n",
+ dev->name);
+ schedule_reset(priv);
+}
+
+
+/*
+ * TODO: reimplement it so that it reads statistics
+ * from the adapter using ordinal tables
+ * instead of/in addition to collecting them
+ * in the driver
+ */
+static struct net_device_stats *ipw2100_stats(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ return &priv->ieee->stats;
+}
+
+/* Support for wpa_supplicant. Will be replaced with WEXT once
+ * they get WPA support. */
+#ifdef CONFIG_IEEE80211_WPA
+
+/* The following definitions must match those in wpa_supplicant's driver_ipw2100.c */
+
+#define IPW2100_IOCTL_WPA_SUPPLICANT (SIOCIWFIRSTPRIV+30)
+
+#define IPW2100_CMD_SET_WPA_PARAM 1
+#define IPW2100_CMD_SET_WPA_IE 2
+#define IPW2100_CMD_SET_ENCRYPTION 3
+#define IPW2100_CMD_MLME 4
+
+#define IPW2100_PARAM_WPA_ENABLED 1
+#define IPW2100_PARAM_TKIP_COUNTERMEASURES 2
+#define IPW2100_PARAM_DROP_UNENCRYPTED 3
+#define IPW2100_PARAM_PRIVACY_INVOKED 4
+#define IPW2100_PARAM_AUTH_ALGS 5
+#define IPW2100_PARAM_IEEE_802_1X 6
+
+#define IPW2100_MLME_STA_DEAUTH 1
+#define IPW2100_MLME_STA_DISASSOC 2
+
+#define IPW2100_CRYPT_ERR_UNKNOWN_ALG 2
+#define IPW2100_CRYPT_ERR_UNKNOWN_ADDR 3
+#define IPW2100_CRYPT_ERR_CRYPT_INIT_FAILED 4
+#define IPW2100_CRYPT_ERR_KEY_SET_FAILED 5
+#define IPW2100_CRYPT_ERR_TX_KEY_SET_FAILED 6
+#define IPW2100_CRYPT_ERR_CARD_CONF_FAILED 7
+
+#define IPW2100_CRYPT_ALG_NAME_LEN 16
+
+struct ipw2100_param {
+ u32 cmd;
+ u8 sta_addr[ETH_ALEN];
+ union {
+ struct {
+ u8 name;
+ u32 value;
+ } wpa_param;
+ struct {
+ u32 len;
+ u8 *data;
+ } wpa_ie;
+ struct{
+ int command;
+ int reason_code;
+ } mlme;
+ struct {
+ u8 alg[IPW2100_CRYPT_ALG_NAME_LEN];
+ u8 set_tx;
+ u32 err;
+ u8 idx;
+ u8 seq[8]; /* sequence counter (set: RX, get: TX) */
+ u16 key_len;
+ u8 key[0];
+ } crypt;
+
+ } u;
+};
+
+/* end of driver_ipw2100.c code */
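+/* For illustration only: a hypothetical sketch (not part of this driver)
+ * of how a user-space caller such as wpa_supplicant's driver_ipw2100.c
+ * might marshal one of these commands through the private ioctl; the
+ * socket and interface name here are made up:
+ *
+ *   struct ipw2100_param param;
+ *   struct iwreq iwr;
+ *
+ *   memset(&param, 0, sizeof(param));
+ *   param.cmd = IPW2100_CMD_SET_WPA_PARAM;
+ *   param.u.wpa_param.name = IPW2100_PARAM_WPA_ENABLED;
+ *   param.u.wpa_param.value = 1;
+ *
+ *   memset(&iwr, 0, sizeof(iwr));
+ *   strncpy(iwr.ifr_name, "eth1", IFNAMSIZ);
+ *   iwr.u.data.pointer = &param;
+ *   iwr.u.data.length = sizeof(param);
+ *   if (ioctl(sock, IPW2100_IOCTL_WPA_SUPPLICANT, &iwr) < 0)
+ *           perror("IPW2100_IOCTL_WPA_SUPPLICANT");
+ */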
+
+static int ipw2100_wpa_enable(struct ipw2100_priv *priv, int value)
+{
+ struct ieee80211_device *ieee = priv->ieee;
+ struct ieee80211_security sec = {
+ .flags = SEC_LEVEL | SEC_ENABLED,
+ };
+ int ret = 0;
+
+ ieee->wpa_enabled = value;
+
+ if (value){
+ sec.level = SEC_LEVEL_3;
+ sec.enabled = 1;
+ } else {
+ sec.level = SEC_LEVEL_0;
+ sec.enabled = 0;
+ }
+
+ if (ieee->func && ieee->func->set_security)
+ ieee->func->set_security(ieee, &sec);
+ else
+ ret = -EOPNOTSUPP;
+
+ return ret;
+}
+
+#define AUTH_ALG_OPEN_SYSTEM 0x1
+#define AUTH_ALG_SHARED_KEY 0x2
+
+static int ipw2100_wpa_set_auth_algs(struct ipw2100_priv *priv, int value)
+{
+ struct ieee80211_device *ieee = priv->ieee;
+ struct ieee80211_security sec = {
+ .flags = SEC_AUTH_MODE,
+ };
+ int ret = 0;
+
+ if (value & AUTH_ALG_SHARED_KEY){
+ sec.auth_mode = WLAN_AUTH_SHARED_KEY;
+ ieee->open_wep = 0;
+ } else {
+ sec.auth_mode = WLAN_AUTH_OPEN;
+ ieee->open_wep = 1;
+ }
+
+ if (ieee->func && ieee->func->set_security)
+ ieee->func->set_security(ieee, &sec);
+ else
+ ret = -EOPNOTSUPP;
+
+ return ret;
+}
+
+
+static int ipw2100_wpa_set_param(struct net_device *dev, u8 name, u32 value)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int ret = 0;
+
+ switch(name){
+ case IPW2100_PARAM_WPA_ENABLED:
+ ret = ipw2100_wpa_enable(priv, value);
+ break;
+
+ case IPW2100_PARAM_TKIP_COUNTERMEASURES:
+ priv->ieee->tkip_countermeasures=value;
+ break;
+
+ case IPW2100_PARAM_DROP_UNENCRYPTED:
+ priv->ieee->drop_unencrypted=value;
+ break;
+
+ case IPW2100_PARAM_PRIVACY_INVOKED:
+ priv->ieee->privacy_invoked=value;
+ break;
+
+ case IPW2100_PARAM_AUTH_ALGS:
+ ret = ipw2100_wpa_set_auth_algs(priv, value);
+ break;
+
+ case IPW2100_PARAM_IEEE_802_1X:
+ priv->ieee->ieee_802_1x=value;
+ break;
+
+ default:
+ IPW_DEBUG_ERROR("%s: Unknown WPA param: %d\n",
+ dev->name, name);
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+static int ipw2100_wpa_mlme(struct net_device *dev, int command, int reason)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int ret = 0;
+
+ switch(command){
+ case IPW2100_MLME_STA_DEAUTH:
+ /* silently ignore */
+ break;
+
+ case IPW2100_MLME_STA_DISASSOC:
+ ipw2100_disassociate_bssid(priv);
+ break;
+
+ default:
+ IPW_DEBUG_ERROR("%s: Unknown MLME request: %d\n",
+ dev->name, command);
+ ret = -EOPNOTSUPP;
+ }
+
+ return ret;
+}
+
+
+void ipw2100_wpa_assoc_frame(struct ipw2100_priv *priv,
+ char *wpa_ie, int wpa_ie_len)
+{
+ struct ipw2100_wpa_assoc_frame frame;
+
+ frame.fixed_ie_mask = 0;
+
+ /* copy WPA IE */
+ memcpy(frame.var_ie, wpa_ie, wpa_ie_len);
+ frame.var_ie_len = wpa_ie_len;
+
+ /* make sure WPA is enabled */
+ ipw2100_wpa_enable(priv, 1);
+ ipw2100_set_wpa_ie(priv, &frame, 0);
+}
+
+
+static int ipw2100_wpa_set_wpa_ie(struct net_device *dev,
+ struct ipw2100_param *param, int plen)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct ieee80211_device *ieee = priv->ieee;
+ u8 *buf;
+
+ if (!ieee->wpa_enabled)
+ return -EOPNOTSUPP;
+
+ if (param->u.wpa_ie.len > MAX_WPA_IE_LEN ||
+ (param->u.wpa_ie.len &&
+ param->u.wpa_ie.data==NULL))
+ return -EINVAL;
+
+ if (param->u.wpa_ie.len){
+ buf = kmalloc(param->u.wpa_ie.len, GFP_KERNEL);
+ if (buf == NULL)
+ return -ENOMEM;
+
+ memcpy(buf, param->u.wpa_ie.data, param->u.wpa_ie.len);
+
+ kfree(ieee->wpa_ie);
+ ieee->wpa_ie = buf;
+ ieee->wpa_ie_len = param->u.wpa_ie.len;
+
+ } else {
+ kfree(ieee->wpa_ie);
+ ieee->wpa_ie = NULL;
+ ieee->wpa_ie_len = 0;
+ }
+
+ ipw2100_wpa_assoc_frame(priv, ieee->wpa_ie, ieee->wpa_ie_len);
+
+ return 0;
+}
+
+/* implementation borrowed from hostap driver */
+
+static int ipw2100_wpa_set_encryption(struct net_device *dev,
+ struct ipw2100_param *param, int param_len)
+{
+ int ret = 0;
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct ieee80211_device *ieee = priv->ieee;
+ struct ieee80211_crypto_ops *ops;
+ struct ieee80211_crypt_data **crypt;
+
+ struct ieee80211_security sec = {
+ .flags = 0,
+ };
+
+ param->u.crypt.err = 0;
+ param->u.crypt.alg[IPW2100_CRYPT_ALG_NAME_LEN - 1] = '\0';
+
+ if (param_len !=
+ (int) ((char *) param->u.crypt.key - (char *) param) +
+ param->u.crypt.key_len){
+ IPW_DEBUG_INFO("Len mismatch %d, %d\n", param_len, param->u.crypt.key_len);
+ return -EINVAL;
+ }
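+ /* The expression above is effectively
+ * offsetof(struct ipw2100_param, u.crypt.key) + key_len: param_len
+ * must cover the fixed header plus exactly the declared key bytes. */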
+ if (param->sta_addr[0] == 0xff && param->sta_addr[1] == 0xff &&
+ param->sta_addr[2] == 0xff && param->sta_addr[3] == 0xff &&
+ param->sta_addr[4] == 0xff && param->sta_addr[5] == 0xff) {
+ if (param->u.crypt.idx >= WEP_KEYS)
+ return -EINVAL;
+ crypt = &ieee->crypt[param->u.crypt.idx];
+ } else {
+ return -EINVAL;
+ }
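+ /* Only the all-ones broadcast address is accepted: this driver
+ * configures the default (group) key slots only; per-station keys,
+ * as supported by the Host AP driver this code is borrowed from,
+ * are rejected. */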
+
+ if (strcmp(param->u.crypt.alg, "none") == 0) {
+ if (crypt){
+ sec.enabled = 0;
+ sec.level = SEC_LEVEL_0;
+ sec.flags |= SEC_ENABLED | SEC_LEVEL;
+ ieee80211_crypt_delayed_deinit(ieee, crypt);
+ }
+ goto done;
+ }
+ sec.enabled = 1;
+ sec.flags |= SEC_ENABLED;
+
+ ops = ieee80211_get_crypto_ops(param->u.crypt.alg);
+ if (ops == NULL && strcmp(param->u.crypt.alg, "WEP") == 0) {
+ request_module("ieee80211_crypt_wep");
+ ops = ieee80211_get_crypto_ops(param->u.crypt.alg);
+ } else if (ops == NULL && strcmp(param->u.crypt.alg, "TKIP") == 0) {
+ request_module("ieee80211_crypt_tkip");
+ ops = ieee80211_get_crypto_ops(param->u.crypt.alg);
+ } else if (ops == NULL && strcmp(param->u.crypt.alg, "CCMP") == 0) {
+ request_module("ieee80211_crypt_ccmp");
+ ops = ieee80211_get_crypto_ops(param->u.crypt.alg);
+ }
+ if (ops == NULL) {
+ IPW_DEBUG_INFO("%s: unknown crypto alg '%s'\n",
+ dev->name, param->u.crypt.alg);
+ param->u.crypt.err = IPW2100_CRYPT_ERR_UNKNOWN_ALG;
+ ret = -EINVAL;
+ goto done;
+ }
+
+ if (*crypt == NULL || (*crypt)->ops != ops) {
+ struct ieee80211_crypt_data *new_crypt;
+
+ ieee80211_crypt_delayed_deinit(ieee, crypt);
+
+ new_crypt = (struct ieee80211_crypt_data *)
+ kmalloc(sizeof(struct ieee80211_crypt_data), GFP_KERNEL);
+ if (new_crypt == NULL) {
+ ret = -ENOMEM;
+ goto done;
+ }
+ memset(new_crypt, 0, sizeof(struct ieee80211_crypt_data));
+ new_crypt->ops = ops;
+ if (new_crypt->ops && try_module_get(new_crypt->ops->owner))
+ new_crypt->priv = new_crypt->ops->init(param->u.crypt.idx);
+
+ if (new_crypt->priv == NULL) {
+ kfree(new_crypt);
+ param->u.crypt.err =
+ IPW2100_CRYPT_ERR_CRYPT_INIT_FAILED;
+ ret = -EINVAL;
+ goto done;
+ }
+
+ *crypt = new_crypt;
+ }
+
+ if (param->u.crypt.key_len > 0 && (*crypt)->ops->set_key &&
+ (*crypt)->ops->set_key(param->u.crypt.key,
+ param->u.crypt.key_len, param->u.crypt.seq,
+ (*crypt)->priv) < 0) {
+ IPW_DEBUG_INFO("%s: key setting failed\n",
+ dev->name);
+ param->u.crypt.err = IPW2100_CRYPT_ERR_KEY_SET_FAILED;
+ ret = -EINVAL;
+ goto done;
+ }
+
+ if (param->u.crypt.set_tx){
+ ieee->tx_keyidx = param->u.crypt.idx;
+ sec.active_key = param->u.crypt.idx;
+ sec.flags |= SEC_ACTIVE_KEY;
+ }
+
+ if (ops->name != NULL){
+
+ if (strcmp(ops->name, "WEP") == 0) {
+ memcpy(sec.keys[param->u.crypt.idx], param->u.crypt.key, param->u.crypt.key_len);
+ sec.key_sizes[param->u.crypt.idx] = param->u.crypt.key_len;
+ sec.flags |= (1 << param->u.crypt.idx);
+ sec.flags |= SEC_LEVEL;
+ sec.level = SEC_LEVEL_1;
+ } else if (strcmp(ops->name, "TKIP") == 0) {
+ sec.flags |= SEC_LEVEL;
+ sec.level = SEC_LEVEL_2;
+ } else if (strcmp(ops->name, "CCMP") == 0) {
+ sec.flags |= SEC_LEVEL;
+ sec.level = SEC_LEVEL_3;
+ }
+ }
+ done:
+ if (ieee->func && ieee->func->set_security)
+ ieee->func->set_security(ieee, &sec);
+
+ /* Do not reset port if card is in Managed mode since resetting will
+ * generate new IEEE 802.11 authentication which may end up in looping
+ * with IEEE 802.1X. If your hardware requires a reset after WEP
+ * configuration (for example... Prism2), implement the reset_port in
+ * the callbacks structures used to initialize the 802.11 stack. */
+ if (ieee->reset_on_keychange &&
+ ieee->iw_mode != IW_MODE_INFRA &&
+ ieee->func->reset_port &&
+ ieee->func->reset_port(dev)) {
+ IPW_DEBUG_INFO("%s: reset_port failed\n", dev->name);
+ param->u.crypt.err = IPW2100_CRYPT_ERR_CARD_CONF_FAILED;
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+
+static int ipw2100_wpa_supplicant(struct net_device *dev, struct iw_point *p)
+{
+ struct ipw2100_param *param;
+ int ret = 0;
+
+ IPW_DEBUG_IOCTL("wpa_supplicant: len=%d\n", p->length);
+
+ if (p->length < sizeof(struct ipw2100_param) || !p->pointer)
+ return -EINVAL;
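+ /* p->length must cover at least the fixed-size structure; commands
+ * carrying a trailing payload (e.g. key material following u.crypt)
+ * pass a larger buffer, which is copied wholesale below and
+ * re-validated by the individual command handlers. */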
+
+ param = (struct ipw2100_param *)kmalloc(p->length, GFP_KERNEL);
+ if (param == NULL)
+ return -ENOMEM;
+
+ if (copy_from_user(param, p->pointer, p->length)){
+ kfree(param);
+ return -EFAULT;
+ }
+
+ switch (param->cmd){
+
+ case IPW2100_CMD_SET_WPA_PARAM:
+ ret = ipw2100_wpa_set_param(dev, param->u.wpa_param.name,
+ param->u.wpa_param.value);
+ break;
+
+ case IPW2100_CMD_SET_WPA_IE:
+ ret = ipw2100_wpa_set_wpa_ie(dev, param, p->length);
+ break;
+
+ case IPW2100_CMD_SET_ENCRYPTION:
+ ret = ipw2100_wpa_set_encryption(dev, param, p->length);
+ break;
+
+ case IPW2100_CMD_MLME:
+ ret = ipw2100_wpa_mlme(dev, param->u.mlme.command,
+ param->u.mlme.reason_code);
+ break;
+
+ default:
+ IPW_DEBUG_ERROR("%s: Unknown WPA supplicant request: %d\n",
+ dev->name, param->cmd);
+ ret = -EOPNOTSUPP;
+
+ }
+
+ if (ret == 0 && copy_to_user(p->pointer, param, p->length))
+ ret = -EFAULT;
+
+ kfree(param);
+ return ret;
+}
+#endif /* CONFIG_IEEE80211_WPA */
+
+static int ipw2100_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+#ifdef CONFIG_IEEE80211_WPA
+ struct iwreq *wrq = (struct iwreq *) rq;
+ int ret=-1;
+ switch (cmd){
+ case IPW2100_IOCTL_WPA_SUPPLICANT:
+ ret = ipw2100_wpa_supplicant(dev, &wrq->u.data);
+ return ret;
+
+ default:
+ return -EOPNOTSUPP;
+ }
+
+#endif /* CONFIG_IEEE80211_WPA */
+
+ return -EOPNOTSUPP;
+}
+
+
+static void ipw_ethtool_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ char fw_ver[64], ucode_ver[64];
+
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+
+ ipw2100_get_fwversion(priv, fw_ver, sizeof(fw_ver));
+ ipw2100_get_ucodeversion(priv, ucode_ver, sizeof(ucode_ver));
+
+ snprintf(info->fw_version, sizeof(info->fw_version), "%s:%d:%s",
+ fw_ver, priv->eeprom_version, ucode_ver);
+
+ strcpy(info->bus_info, pci_name(priv->pci_dev));
+}
+
+static u32 ipw2100_ethtool_get_link(struct net_device *dev)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ return (priv->status & STATUS_ASSOCIATED) ? 1 : 0;
+}
+
+
+static struct ethtool_ops ipw2100_ethtool_ops = {
+ .get_link = ipw2100_ethtool_get_link,
+ .get_drvinfo = ipw_ethtool_get_drvinfo,
+};
+
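+/* Periodic hang watchdog, run from priv->workqueue. The firmware RTC
+ * ordinal should advance between invocations; if it reads back unchanged
+ * since the last pass (or the read itself fails), the firmware is assumed
+ * to be hung and a full adapter reset is scheduled. */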
+static void ipw2100_hang_check(void *adapter)
+{
+ struct ipw2100_priv *priv = adapter;
+ unsigned long flags;
+ u32 rtc = 0xa5a5a5a5;
+ u32 len = sizeof(rtc);
+ int restart = 0;
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+
+ if (priv->fatal_error != 0) {
+ /* If fatal_error is set then we need to restart */
+ IPW_DEBUG_INFO("%s: Hardware fatal error detected.\n",
+ priv->net_dev->name);
+
+ restart = 1;
+ } else if (ipw2100_get_ordinal(priv, IPW_ORD_RTC_TIME, &rtc, &len) ||
+ (rtc == priv->last_rtc)) {
+ /* Check if firmware is hung */
+ IPW_DEBUG_INFO("%s: Firmware RTC stalled.\n",
+ priv->net_dev->name);
+
+ restart = 1;
+ } /*else {
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+ if (ipw2100_set_rts_threshold(priv, priv->rts_threshold)) {
+ IPW_DEBUG_INFO("%s: Firmware command queue "
+ "stalled.\n",
+ priv->net_dev->name);
+
+ restart = 1;
+ }
+ spin_lock_irqsave(&priv->low_lock, flags);
+ }*/
+
+ if (restart) {
+ /* Kill timer */
+ priv->stop_hang_check = 1;
+ priv->hangs++;
+
+ /* Restart the NIC */
+ schedule_reset(priv);
+ }
+
+ priv->last_rtc = rtc;
+
+ /* Check again in a second (HZ jiffies) */
+ if (!priv->stop_hang_check)
+ queue_delayed_work(priv->workqueue, &priv->hang_check, HZ);
+
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+}
+
+
+static void ipw2100_rf_kill(void *adapter)
+{
+ struct ipw2100_priv *priv = adapter;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->low_lock, flags);
+
+ if (rf_kill_active(priv)) {
+ IPW_DEBUG_RF_KILL("RF Kill active, rescheduling GPIO check\n");
+ if (!priv->stop_rf_kill)
+ queue_delayed_work(priv->workqueue, &priv->rf_kill, HZ);
+ goto exit_unlock;
+ }
+
+ /* RF Kill is now disabled, so bring the device back up */
+
+ if (!(priv->status & STATUS_RF_KILL_MASK)) {
+ IPW_DEBUG_RF_KILL("HW RF Kill no longer active, restarting "
+ "device\n");
+ schedule_reset(priv);
+ } else
+ IPW_DEBUG_RF_KILL("HW RF Kill deactivated. SW RF Kill still "
+ "enabled\n");
+
+ exit_unlock:
+ spin_unlock_irqrestore(&priv->low_lock, flags);
+}
+
+static void ipw2100_irq_tasklet(struct ipw2100_priv *priv);
+
+static struct net_device *ipw2100_alloc_device(
+ struct pci_dev *pci_dev,
+ char *base_addr,
+ unsigned long mem_start,
+ unsigned long mem_len)
+{
+ struct ipw2100_priv *priv;
+ struct net_device *dev;
+
+ dev = alloc_etherdev(sizeof(struct ipw2100_priv));
+ if (!dev)
+ return NULL;
+
+ dev->type = ARPHRD_ETHER;
+ dev->open = ipw2100_open;
+ dev->stop = ipw2100_close;
+ dev->init = ipw2100_net_init;
+ dev->hard_start_xmit = ipw2100_tx;
+ dev->do_ioctl = ipw2100_ioctl;
+ dev->get_stats = ipw2100_stats;
+ dev->ethtool_ops = &ipw2100_ethtool_ops;
+ dev->tx_timeout = ipw2100_tx_timeout;
+ dev->wireless_handlers = &ipw2100_wx_handler_def;
+ dev->get_wireless_stats = ipw2100_wx_wireless_stats;
+ dev->set_mac_address = ipw2100_set_address;
+ dev->watchdog_timeo = 3*HZ;
+ dev->irq = 0;
+
+ dev->base_addr = (unsigned long)base_addr;
+ dev->mem_start = mem_start;
+ dev->mem_end = dev->mem_start + mem_len - 1;
+
+ if (ifname && ifname[0] != '\0')
+ strncpy(dev->name, ifname, IFNAMSIZ - 1);
+
+ /* NOTE: wireless_handlers is set above, but the system can start
+ * throwing WX requests at us before we're actually initialized,
+ * so every handler must check STATUS_INITIALIZED before touching
+ * the hardware */
+
+ priv = netdev_priv(dev);
+
+ priv->pci_dev = pci_dev;
+ priv->net_dev = dev;
+
+ /* alloc_etherdev() zeroes the private area, so we only explicitly
+ * set those values that need to be something else */
+
+ /* If power management is turned on, default to AUTO mode */
+ priv->power_mode = IPW_POWER_AUTO;
+
+
+
+ /* Initialize IEEE80211 stack */
+ priv->ieee = ieee80211_alloc(dev, priv);
+ if (!priv->ieee) {
+ IPW_DEBUG_WARNING(DRV_NAME
+ ": Unable to allocate IEEE stack\n");
+ free_netdev(dev);
+ return NULL;
+ }
+
+ priv->ieee->func = &ipw2100_ieee_callbacks;
+ priv->ieee->tx_payload_only = 1;
+
+#ifdef CONFIG_IEEE80211_WPA
+ priv->ieee->wpa_enabled = 0;
+ priv->ieee->tkip_countermeasures = 0;
+ priv->ieee->drop_unencrypted = 0;
+ priv->ieee->privacy_invoked = 0;
+ priv->ieee->ieee_802_1x = 1;
+#endif /* CONFIG_IEEE80211_WPA */
+
+ /* Set module parameters */
+ switch (mode) {
+ case 1:
+ priv->ieee->iw_mode = IW_MODE_ADHOC;
+ break;
+#ifdef CONFIG_IPW2100_PROMISC
+ case 2:
+ priv->ieee->iw_mode = IW_MODE_MONITOR;
+ break;
+#endif
+ default:
+ case 0:
+ priv->ieee->iw_mode = IW_MODE_INFRA;
+ break;
+ }
+
+ if (disable == 1)
+ priv->status |= STATUS_RF_KILL_SW;
+
+ if (channel != 0 &&
+ ((channel >= REG_MIN_CHANNEL) &&
+ (channel <= REG_MAX_CHANNEL))) {
+ priv->config |= CFG_STATIC_CHANNEL;
+ priv->channel = channel;
+ }
+
+ if (associate)
+ priv->config |= CFG_ASSOCIATE;
+
+ priv->beacon_interval = DEFAULT_BEACON_INTERVAL;
+ priv->short_retry_limit = DEFAULT_SHORT_RETRY_LIMIT;
+ priv->long_retry_limit = DEFAULT_LONG_RETRY_LIMIT;
+ priv->rts_threshold = DEFAULT_RTS_THRESHOLD | RTS_DISABLED;
+ priv->frag_threshold = DEFAULT_FTS | FRAG_DISABLED;
+ priv->tx_power = IPW_TX_POWER_DEFAULT;
+ priv->tx_rates = DEFAULT_TX_RATES;
+
+ strcpy(priv->nick, "ipw2100");
+
+ spin_lock_init(&priv->low_lock);
+
+ init_waitqueue_head(&priv->wait_command_queue);
+
+ netif_carrier_off(dev);
+
+ INIT_LIST_HEAD(&priv->msg_free_list);
+ INIT_LIST_HEAD(&priv->msg_pend_list);
+ INIT_STAT(&priv->msg_free_stat);
+ INIT_STAT(&priv->msg_pend_stat);
+
+ INIT_LIST_HEAD(&priv->tx_free_list);
+ INIT_LIST_HEAD(&priv->tx_pend_list);
+ INIT_STAT(&priv->tx_free_stat);
+ INIT_STAT(&priv->tx_pend_stat);
+
+ INIT_LIST_HEAD(&priv->fw_pend_list);
+ INIT_STAT(&priv->fw_pend_stat);
+
+
+#ifdef CONFIG_SOFTWARE_SUSPEND2
+ priv->workqueue = create_workqueue(DRV_NAME, 0);
+#else
+ priv->workqueue = create_workqueue(DRV_NAME);
+#endif
+ INIT_WORK(&priv->reset_work,
+ (void (*)(void *))ipw2100_reset_adapter, priv);
+ INIT_WORK(&priv->security_work,
+ (void (*)(void *))ipw2100_security_work, priv);
+ INIT_WORK(&priv->wx_event_work,
+ (void (*)(void *))ipw2100_wx_event_work, priv);
+ INIT_WORK(&priv->hang_check, ipw2100_hang_check, priv);
+ INIT_WORK(&priv->rf_kill, ipw2100_rf_kill, priv);
+
+ tasklet_init(&priv->irq_tasklet, (void (*)(unsigned long))
+ ipw2100_irq_tasklet, (unsigned long)priv);
+
+ /* NOTE: We do not start the deferred work for status checks yet */
+ priv->stop_rf_kill = 1;
+ priv->stop_hang_check = 1;
+
+ return dev;
+}
+
+
+
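+/* DMA mask handed to pci_set_dma_mask() below: the IPW2100 can only
+ * generate 32-bit bus addresses, so all DMA buffers must live in the
+ * low 4GB. */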
+#define PCI_DMA_32BIT 0x00000000ffffffffULL
+
+static int ipw2100_pci_init_one(struct pci_dev *pci_dev,
+ const struct pci_device_id *ent)
+{
+ unsigned long mem_start, mem_len, mem_flags;
+ char *base_addr = NULL;
+ struct net_device *dev = NULL;
+ struct ipw2100_priv *priv = NULL;
+ int err = 0;
+ int registered = 0;
+ u32 val;
+
+ IPW_DEBUG_INFO("enter\n");
+
+ mem_start = pci_resource_start(pci_dev, 0);
+ mem_len = pci_resource_len(pci_dev, 0);
+ mem_flags = pci_resource_flags(pci_dev, 0);
+
+ if ((mem_flags & IORESOURCE_MEM) != IORESOURCE_MEM) {
+ IPW_DEBUG_INFO("weird - resource type is not memory\n");
+ err = -ENODEV;
+ goto fail;
+ }
+
+ base_addr = ioremap_nocache(mem_start, mem_len);
+ if (!base_addr) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling ioremap_nocache.\n");
+ err = -EIO;
+ goto fail;
+ }
+
+ /* allocate and initialize our net_device */
+ dev = ipw2100_alloc_device(pci_dev, base_addr, mem_start, mem_len);
+ if (!dev) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling ipw2100_alloc_device.\n");
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ /* set up PCI mappings for device */
+ err = pci_enable_device(pci_dev);
+ if (err) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling pci_enable_device.\n");
+ return err;
+ }
+
+ priv = netdev_priv(dev);
+
+ pci_set_master(pci_dev);
+ pci_set_drvdata(pci_dev, priv);
+
+ err = pci_set_dma_mask(pci_dev, PCI_DMA_32BIT);
+ if (err) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling pci_set_dma_mask.\n");
+ pci_disable_device(pci_dev);
+ return err;
+ }
+
+ err = pci_request_regions(pci_dev, DRV_NAME);
+ if (err) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling pci_request_regions.\n");
+ pci_disable_device(pci_dev);
+ return err;
+ }
+
+ /* We disable the RETRY_TIMEOUT register (0x41) to keep
+ * PCI Tx retries from interfering with C3 CPU state */
+ pci_read_config_dword(pci_dev, 0x40, &val);
+ if ((val & 0x0000ff00) != 0)
+ pci_write_config_dword(pci_dev, 0x40, val & 0xffff00ff);
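+ /* (The RETRY_TIMEOUT register at 0x41 is the second byte of the
+ * dword read at 0x40, hence the 0x0000ff00 mask in the
+ * read-modify-write above.) */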
+
+ pci_set_power_state(pci_dev, 0);
+
+ if (!ipw2100_hw_is_adapter_in_system(dev)) {
+ printk(KERN_WARNING DRV_NAME
+ "Device not found via register read.\n");
+ err = -ENODEV;
+ goto fail;
+ }
+
+ SET_NETDEV_DEV(dev, &pci_dev->dev);
+
+ /* Force interrupts to be shut off on the device */
+ priv->status |= STATUS_INT_ENABLED;
+ ipw2100_disable_interrupts(priv);
+
+ /* Allocate and initialize the Tx/Rx queues and lists */
+ if (ipw2100_queues_allocate(priv)) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calilng ipw2100_queues_allocate.\n");
+ err = -ENOMEM;
+ goto fail;
+ }
+ ipw2100_queues_initialize(priv);
+
+ err = request_irq(pci_dev->irq,
+ ipw2100_interrupt, SA_SHIRQ,
+ dev->name, priv);
+ if (err) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling request_irq: %d.\n",
+ pci_dev->irq);
+ goto fail;
+ }
+ dev->irq = pci_dev->irq;
+
+ IPW_DEBUG_INFO("Attempting to register device...\n");
+
+ SET_MODULE_OWNER(dev);
+
+ printk(KERN_INFO DRV_NAME
+ ": Detected Intel PRO/Wireless 2100 Network Connection\n");
+
+ /* Bring up the interface. Pre 0.46, after we registered the
+ * network device we would call ipw2100_up. This introduced a race
+ * condition with newer hotplug configurations (network was coming
+ * up and making calls before the device was initialized).
+ *
+ * If we called ipw2100_up before we registered the device, then the
+ * device name wasn't registered. So, we instead use the net_dev->init
+ * member to call a function that simply turns around and calls ipw2100_up.
+ * net_dev->init is called after name allocation but before the
+ * notifier chain is called */
+ err = register_netdev(dev);
+ if (err) {
+ printk(KERN_WARNING DRV_NAME
+ "Error calling regiser_netdev.\n");
+ goto fail_unlock;
+ }
+ registered = 1;
+
+ IPW_DEBUG_INFO("%s: Bound to %s\n", dev->name, pci_name(pci_dev));
+
+ /* perform this after register_netdev so that dev->name is set */
+ sysfs_create_group(&pci_dev->dev.kobj, &ipw2100_attribute_group);
+ netif_carrier_off(dev);
+
+ /* If the RF Kill switch is disabled, go ahead and complete the
+ * startup sequence */
+ if (!(priv->status & STATUS_RF_KILL_MASK)) {
+ /* Enable the adapter - sends HOST_COMPLETE */
+ if (ipw2100_enable_adapter(priv)) {
+ printk(KERN_WARNING DRV_NAME
+ ": %s: failed in call to enable adapter.\n",
+ priv->net_dev->name);
+ ipw2100_hw_stop_adapter(priv);
+ err = -EIO;
+ goto fail_unlock;
+ }
+
+ /* Start a scan . . . */
+ ipw2100_set_scan_options(priv);
+ ipw2100_start_scan(priv);
+ }
+
+ IPW_DEBUG_INFO("exit\n");
+
+ priv->status |= STATUS_INITIALIZED;
+
+ return 0;
+
+ fail_unlock:
+
+ fail:
+ if (dev) {
+ if (registered)
+ unregister_netdev(dev);
+
+ ipw2100_hw_stop_adapter(priv);
+
+ ipw2100_disable_interrupts(priv);
+
+ if (dev->irq)
+ free_irq(dev->irq, priv);
+
+ ipw2100_kill_workqueue(priv);
+
+ if (priv->ieee) {
+ ieee80211_free(priv->ieee);
+ priv->ieee = NULL;
+ }
+
+ /* These are safe to call even if they weren't allocated */
+ ipw2100_queues_free(priv);
+ sysfs_remove_group(&pci_dev->dev.kobj, &ipw2100_attribute_group);
+
+ free_netdev(dev);
+ pci_set_drvdata(pci_dev, NULL);
+ }
+
+ if (base_addr)
+ iounmap((char*)base_addr);
+
+ pci_release_regions(pci_dev);
+ pci_disable_device(pci_dev);
+
+ return err;
+}
+
+static void __devexit ipw2100_pci_remove_one(struct pci_dev *pci_dev)
+{
+ struct ipw2100_priv *priv = pci_get_drvdata(pci_dev);
+ struct net_device *dev;
+
+ if (priv) {
+
+ priv->status &= ~STATUS_INITIALIZED;
+
+ dev = priv->net_dev;
+ sysfs_remove_group(&pci_dev->dev.kobj, &ipw2100_attribute_group);
+
+#ifdef CONFIG_PM
+ if (ipw2100_firmware.version)
+ ipw2100_release_firmware(priv, &ipw2100_firmware);
+#endif
+ /* Take down the hardware */
+ ipw2100_down(priv);
+
+ /* Unregister the device first - this results in close()
+ * being called if the device is open. If we free storage
+ * first, then close() will crash. */
+ unregister_netdev(dev);
+
+ /* ipw2100_down will ensure that there is no more pending work
+ * in the workqueue's, so we can safely remove them now. */
+ ipw2100_kill_workqueue(priv);
+
+ ieee80211_free(priv->ieee);
+ priv->ieee = NULL;
+
+ ipw2100_queues_free(priv);
+
+ /* Free potential debugging firmware snapshot */
+ ipw2100_snapshot_free(priv);
+
+ if (dev->irq)
+ free_irq(dev->irq, priv);
+
+ if (dev->base_addr)
+ iounmap((unsigned char *)dev->base_addr);
+
+ free_netdev(dev);
+ }
+
+ pci_release_regions(pci_dev);
+ pci_disable_device(pci_dev);
+
+ IPW_DEBUG_INFO("exit\n");
+}
+
+
+#ifdef CONFIG_PM
+static int ipw2100_suspend(struct pci_dev *pci_dev, u32 state)
+{
+ struct ipw2100_priv *priv = pci_get_drvdata(pci_dev);
+ struct net_device *dev = priv->net_dev;
+
+ IPW_DEBUG_INFO("%s: Going into suspend...\n",
+ dev->name);
+
+ if (priv->status & STATUS_INITIALIZED) {
+ /* Take down the device; powers it off, etc. */
+ ipw2100_down(priv);
+ }
+
+ /* Remove the PRESENT state of the device */
+ netif_device_detach(dev);
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10)
+ pci_save_state(pci_dev, priv->pm_state);
+#endif
+ pci_set_power_state(pci_dev, state);
+
+ return 0;
+}
+
+static int ipw2100_resume(struct pci_dev *pci_dev)
+{
+ struct ipw2100_priv *priv = pci_get_drvdata(pci_dev);
+ struct net_device *dev = priv->net_dev;
+ u32 val;
+
+ if (IPW2100_PM_DISABLED)
+ return 0;
+
+ IPW_DEBUG_INFO("%s: Coming out of suspend...\n",
+ dev->name);
+
+ pci_set_power_state(pci_dev, 0);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10)
+ pci_restore_state(pci_dev, priv->pm_state);
+#else
+ pci_restore_state(pci_dev);
+#endif
+
+ /*
+ * Suspend/Resume resets the PCI configuration space, so we have to
+ * re-disable the RETRY_TIMEOUT register (0x41) to keep PCI Tx retries
+ * from interfering with C3 CPU state. pci_restore_state won't help
+ * here since it only restores the first 64 bytes of the PCI config header.
+ */
+ pci_read_config_dword(pci_dev, 0x40, &val);
+ if ((val & 0x0000ff00) != 0)
+ pci_write_config_dword(pci_dev, 0x40, val & 0xffff00ff);
+
+ /* Set the device back into the PRESENT state; this will also wake
+ * the queue if needed */
+ netif_device_attach(dev);
+
+ /* Bring the device back up */
+ if (!(priv->status & STATUS_RF_KILL_SW))
+ ipw2100_up(priv, 0);
+
+ return 0;
+}
+#endif
+
+
+#define IPW2100_DEV_ID(x) { PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, x }
+
+static struct pci_device_id ipw2100_pci_id_table[] __devinitdata = {
+ IPW2100_DEV_ID(0x2520), /* IN 2100A mPCI 3A */
+ IPW2100_DEV_ID(0x2521), /* IN 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2524), /* IN 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2525), /* IN 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2526), /* IN 2100A mPCI Gen A3 */
+ IPW2100_DEV_ID(0x2522), /* IN 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2523), /* IN 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x2527), /* IN 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2528), /* IN 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2529), /* IN 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x252B), /* IN 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x252C), /* IN 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x252D), /* IN 2100 mPCI 3A */
+
+ IPW2100_DEV_ID(0x2550), /* IB 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2551), /* IB 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2553), /* IB 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2554), /* IB 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2555), /* IB 2100 mPCI 3B */
+
+ IPW2100_DEV_ID(0x2560), /* DE 2100A mPCI 3A */
+ IPW2100_DEV_ID(0x2562), /* DE 2100A mPCI 3A */
+ IPW2100_DEV_ID(0x2563), /* DE 2100A mPCI 3A */
+ IPW2100_DEV_ID(0x2561), /* DE 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x2565), /* DE 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x2566), /* DE 2100 mPCI 3A */
+ IPW2100_DEV_ID(0x2567), /* DE 2100 mPCI 3A */
+
+ IPW2100_DEV_ID(0x2570), /* GA 2100 mPCI 3B */
+
+ IPW2100_DEV_ID(0x2580), /* TO 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2582), /* TO 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2583), /* TO 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2581), /* TO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2585), /* TO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2586), /* TO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2587), /* TO 2100 mPCI 3B */
+
+ IPW2100_DEV_ID(0x2590), /* SO 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2592), /* SO 2100A mPCI 3B */
+ IPW2100_DEV_ID(0x2591), /* SO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2593), /* SO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2596), /* SO 2100 mPCI 3B */
+ IPW2100_DEV_ID(0x2598), /* SO 2100 mPCI 3B */
+
+ IPW2100_DEV_ID(0x25A0), /* HP 2100 mPCI 3B */
+ {0,},
+};
+
+MODULE_DEVICE_TABLE(pci, ipw2100_pci_id_table);
+
+static struct pci_driver ipw2100_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = ipw2100_pci_id_table,
+ .probe = ipw2100_pci_init_one,
+ .remove = __devexit_p(ipw2100_pci_remove_one),
+#ifdef CONFIG_PM
+ .suspend = ipw2100_suspend,
+ .resume = ipw2100_resume,
+#endif
+};
+
+
+/**
+ * Initialize the ipw2100 driver/module
+ *
+ * @returns 0 if OK, < 0 errno on error.
+ *
+ * Note: we cannot init the /proc stuff until the PCI driver is there,
+ * or we risk an unlikely race condition on someone accessing
+ * uninitialized data in the PCI dev struct through /proc.
+ */
+static int __init ipw2100_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO DRV_NAME ": %s, %s\n", DRV_DESCRIPTION, DRV_VERSION);
+ printk(KERN_INFO DRV_NAME ": %s\n", DRV_COPYRIGHT);
+
+#ifdef CONFIG_IEEE80211_NOWEP
+ IPW_DEBUG_INFO(DRV_NAME ": Compiled with WEP disabled.\n");
+#endif
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+ IPW_DEBUG_INFO(DRV_NAME ": Compiled with LEGACY FW load.\n");
+#endif
+
+ ret = pci_module_init(&ipw2100_pci_driver);
+
+#ifdef CONFIG_IPW_DEBUG
+ ipw2100_debug_level = debug;
+ driver_create_file(&ipw2100_pci_driver.driver,
+ &driver_attr_debug_level);
+#endif
+
+ return ret;
+}
+
+
+/**
+ * Cleanup ipw2100 driver registration
+ */
+static void __exit ipw2100_exit(void)
+{
+ /* FIXME: IPG: check that we have no instances of the devices open */
+#ifdef CONFIG_IPW_DEBUG
+ driver_remove_file(&ipw2100_pci_driver.driver,
+ &driver_attr_debug_level);
+#endif
+ pci_unregister_driver(&ipw2100_pci_driver);
+}
+
+module_init(ipw2100_init);
+module_exit(ipw2100_exit);
+
+#define WEXT_USECHANNELS 1
+
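+/* Channel center frequencies in MHz for the 14 2.4GHz 802.11b channels,
+ * indexed by channel number - 1; channel 14 (2484 MHz) is only
+ * authorized for use in Japan. */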
+const long ipw2100_frequencies[] = {
+ 2412, 2417, 2422, 2427,
+ 2432, 2437, 2442, 2447,
+ 2452, 2457, 2462, 2467,
+ 2472, 2484
+};
+
+#define FREQ_COUNT (sizeof(ipw2100_frequencies) / \
+ sizeof(ipw2100_frequencies[0]))
+
+const long ipw2100_rates_11b[] = {
+ 1000000,
+ 2000000,
+ 5500000,
+ 11000000
+};
+
+#define RATE_COUNT (sizeof(ipw2100_rates_11b) / sizeof(ipw2100_rates_11b[0]))
+
+static int ipw2100_wx_get_name(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ if (!(priv->status & STATUS_ASSOCIATED))
+ strcpy(wrqu->name, "unassociated");
+ else
+ snprintf(wrqu->name, IFNAMSIZ, "IEEE 802.11b");
+
+ IPW_DEBUG_WX("Name: %s\n", wrqu->name);
+ return 0;
+}
+
+
+static int ipw2100_wx_set_freq(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct iw_freq *fwrq = &wrqu->freq;
+ int err = 0;
+
+ if (priv->ieee->iw_mode == IW_MODE_INFRA)
+ return -EOPNOTSUPP;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ /* if setting by freq convert to channel */
+ if (fwrq->e == 1) {
+ if ((fwrq->m >= (int) 2.412e8 &&
+ fwrq->m <= (int) 2.487e8)) {
+ int f = fwrq->m / 100000;
+ int c = 0;
+
+ while ((c < REG_MAX_CHANNEL) &&
+ (f != ipw2100_frequencies[c]))
+ c++;
+
+ /* hack to fall through */
+ fwrq->e = 0;
+ fwrq->m = c + 1;
+ }
+ }
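+ /* Example: a request for 2.437GHz arrives in the encoding this
+ * branch expects as m = 243700000, e = 1 (i.e. m * 10^e Hz); f
+ * becomes 2437, matching ipw2100_frequencies[5], so the request
+ * is rewritten as channel m = 6, e = 0 and handled by the channel
+ * path below. */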
+
+ if (fwrq->e > 0 || fwrq->m > 1000)
+ return -EOPNOTSUPP;
+ else { /* Set the channel */
+ IPW_DEBUG_WX("SET Freq/Channel -> %d \n", fwrq->m);
+ err = ipw2100_set_channel(priv, fwrq->m, 0);
+ }
+
+ done:
+ return err;
+}
+
+
+static int ipw2100_wx_get_freq(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ wrqu->freq.e = 0;
+
+ /* If we are associated, trying to associate, or have a statically
+ * configured CHANNEL then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_CHANNEL ||
+ priv->status & STATUS_ASSOCIATED)
+ wrqu->freq.m = priv->channel;
+ else
+ wrqu->freq.m = 0;
+
+ IPW_DEBUG_WX("GET Freq/Channel -> %d \n", priv->channel);
+ return 0;
+
+}
+
+static int ipw2100_wx_set_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ IPW_DEBUG_WX("SET Mode -> %d \n", wrqu->mode);
+
+ if (wrqu->mode == priv->ieee->iw_mode)
+ return 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ switch (wrqu->mode) {
+#ifdef CONFIG_IPW2100_PROMISC
+ case IW_MODE_MONITOR:
+ err = ipw2100_switch_mode(priv, IW_MODE_MONITOR);
+ break;
+#endif /* CONFIG_IPW2100_PROMISC */
+ case IW_MODE_ADHOC:
+ err = ipw2100_switch_mode(priv, IW_MODE_ADHOC);
+ break;
+ case IW_MODE_INFRA:
+ case IW_MODE_AUTO:
+ default:
+ err = ipw2100_switch_mode(priv, IW_MODE_INFRA);
+ break;
+ }
+
+done:
+ return err;
+}
+
+static int ipw2100_wx_get_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ wrqu->mode = priv->ieee->iw_mode;
+ IPW_DEBUG_WX("GET Mode -> %d\n", wrqu->mode);
+
+ return 0;
+}
+
+
+#define POWER_MODES 5
+
+/* Values are in microsecond */
+const s32 timeout_duration[POWER_MODES] = {
+ 350000,
+ 250000,
+ 75000,
+ 37000,
+ 25000,
+};
+
+const s32 period_duration[POWER_MODES] = {
+ 400000,
+ 700000,
+ 1000000,
+ 1000000,
+ 1000000
+};
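+/* The two tables above are used below only to report the supported
+ * power-management timeout and period ranges (min/max pmt and pmp) to
+ * wireless extensions in ipw2100_wx_get_range(). */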
+
+static int ipw2100_wx_get_range(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct iw_range *range = (struct iw_range *)extra;
+ u16 val;
+ int i, level;
+
+ wrqu->data.length = sizeof(*range);
+ memset(range, 0, sizeof(*range));
+
+ /* Let's try to keep this struct in the same order as in
+ * linux/include/wireless.h
+ */
+
+ /* TODO: See what values we can set, and remove the ones we can't
+ * set, or fill them with some default data.
+ */
+
+ /* ~5 Mb/s real (802.11b) */
+ range->throughput = 5 * 1000 * 1000;
+
+// range->sensitivity; /* signal level threshold range */
+
+ range->max_qual.qual = 100;
+ /* TODO: Find real max RSSI and stick here */
+ range->max_qual.level = 0;
+ range->max_qual.noise = 0;
+ range->max_qual.updated = 7; /* Updated all three */
+
+ range->avg_qual.qual = 70; /* > 8% missed beacons is 'bad' */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
+ range->avg_qual.level = 20 + IPW2100_RSSI_TO_DBM;
+ range->avg_qual.noise = 0;
+ range->avg_qual.updated = 7; /* Updated all three */
+
+ range->num_bitrates = RATE_COUNT;
+
+ for (i = 0; i < RATE_COUNT && i < IW_MAX_BITRATES; i++) {
+ range->bitrate[i] = ipw2100_rates_11b[i];
+ }
+
+ range->min_rts = MIN_RTS_THRESHOLD;
+ range->max_rts = MAX_RTS_THRESHOLD;
+ range->min_frag = MIN_FRAG_THRESHOLD;
+ range->max_frag = MAX_FRAG_THRESHOLD;
+
+ range->min_pmp = period_duration[0]; /* Minimal PM period */
+ range->max_pmp = period_duration[POWER_MODES-1];/* Maximal PM period */
+ range->min_pmt = timeout_duration[POWER_MODES-1]; /* Minimal PM timeout */
+ range->max_pmt = timeout_duration[0];/* Maximal PM timeout */
+
+ /* How to decode max/min PM period */
+ range->pmp_flags = IW_POWER_PERIOD;
+ /* How to decode max/min PM timeout */
+ range->pmt_flags = IW_POWER_TIMEOUT;
+ /* What PM options are supported */
+ range->pm_capa = IW_POWER_TIMEOUT | IW_POWER_PERIOD;
+
+ range->encoding_size[0] = 5;
+ range->encoding_size[1] = 13; /* Different token sizes */
+ range->num_encoding_sizes = 2; /* Number of entries in the list */
+ range->max_encoding_tokens = WEP_KEYS; /* Max number of tokens */
+// range->encoding_login_index; /* token index for login token */
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ range->txpower_capa = IW_TXPOW_DBM;
+ range->num_txpower = IW_MAX_TXPOWER;
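+ /* Step from IPW_TX_POWER_MAX_DBM down to IPW_TX_POWER_MIN_DBM in
+ * IW_MAX_TXPOWER even steps; level is kept in 1/16 dBm fixed point
+ * so the repeated integer subtraction loses less precision. */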
+ for (i = 0, level = (IPW_TX_POWER_MAX_DBM * 16); i < IW_MAX_TXPOWER;
+ i++, level -= ((IPW_TX_POWER_MAX_DBM - IPW_TX_POWER_MIN_DBM) * 16) /
+ (IW_MAX_TXPOWER - 1))
+ range->txpower[i] = level / 16;
+ } else {
+ range->txpower_capa = 0;
+ range->num_txpower = 0;
+ }
+
+
+ /* Set the Wireless Extension versions */
+ range->we_version_compiled = WIRELESS_EXT;
+ range->we_version_source = 16;
+
+// range->retry_capa; /* What retry options are supported */
+// range->retry_flags; /* How to decode max/min retry limit */
+// range->r_time_flags; /* How to decode max/min retry life */
+// range->min_retry; /* Minimal number of retries */
+// range->max_retry; /* Maximal number of retries */
+// range->min_r_time; /* Minimal retry lifetime */
+// range->max_r_time; /* Maximal retry lifetime */
+
+ range->num_channels = FREQ_COUNT;
+
+ val = 0;
+ for (i = 0; i < FREQ_COUNT; i++) {
+ // TODO: Include only legal frequencies for some countries
+// if (local->channel_mask & (1 << i)) {
+ range->freq[val].i = i + 1;
+ range->freq[val].m = ipw2100_frequencies[i] * 100000;
+ range->freq[val].e = 1;
+ val++;
+// }
+ if (val == IW_MAX_FREQUENCIES)
+ break;
+ }
+ range->num_frequency = val;
+
+ IPW_DEBUG_WX("GET Range\n");
+
+ return 0;
+}
+
+static int ipw2100_wx_set_wap(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ static const unsigned char any[] = {
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ };
+ static const unsigned char off[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ };
+
+ // sanity checks
+ if (wrqu->ap_addr.sa_family != ARPHRD_ETHER)
+ return -EINVAL;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (!memcmp(any, wrqu->ap_addr.sa_data, ETH_ALEN) ||
+ !memcmp(off, wrqu->ap_addr.sa_data, ETH_ALEN)) {
+ /* we disable mandatory BSSID association */
+ IPW_DEBUG_WX("exit - disable mandatory BSSID\n");
+ priv->config &= ~CFG_STATIC_BSSID;
+ err = ipw2100_set_mandatory_bssid(priv, NULL, 0);
+ goto done;
+ }
+
+ priv->config |= CFG_STATIC_BSSID;
+ memcpy(priv->mandatory_bssid_mac, wrqu->ap_addr.sa_data, ETH_ALEN);
+
+ err = ipw2100_set_mandatory_bssid(priv, wrqu->ap_addr.sa_data, 0);
+
+ IPW_DEBUG_WX("SET BSSID -> %02X:%02X:%02X:%02X:%02X:%02X\n",
+ wrqu->ap_addr.sa_data[0] & 0xff,
+ wrqu->ap_addr.sa_data[1] & 0xff,
+ wrqu->ap_addr.sa_data[2] & 0xff,
+ wrqu->ap_addr.sa_data[3] & 0xff,
+ wrqu->ap_addr.sa_data[4] & 0xff,
+ wrqu->ap_addr.sa_data[5] & 0xff);
+
+ done:
+ return err;
+}
+
+static int ipw2100_wx_get_wap(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ /* If we are associated, trying to associate, or have a statically
+ * configured BSSID then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_BSSID ||
+ priv->status & STATUS_ASSOCIATED) {
+ wrqu->ap_addr.sa_family = ARPHRD_ETHER;
+ memcpy(wrqu->ap_addr.sa_data, &priv->bssid, ETH_ALEN);
+ } else
+ memset(wrqu->ap_addr.sa_data, 0, ETH_ALEN);
+
+ IPW_DEBUG_WX("Getting WAP BSSID: " MAC_FMT "\n",
+ MAC_ARG(wrqu->ap_addr.sa_data));
+ return 0;
+}
+
+static int ipw2100_wx_set_essid(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ char *essid = ""; /* ANY */
+ int length = 0;
+ int err = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (wrqu->essid.flags && wrqu->essid.length) {
+ length = wrqu->essid.length - 1;
+ essid = extra;
+ }
+
+ if (length == 0) {
+ IPW_DEBUG_WX("Setting ESSID to ANY\n");
+ priv->config &= ~CFG_STATIC_ESSID;
+ err = ipw2100_set_essid(priv, NULL, 0, 0);
+ goto done;
+ }
+
+ length = min(length, IW_ESSID_MAX_SIZE);
+
+ priv->config |= CFG_STATIC_ESSID;
+
+ if (priv->essid_len == length && !memcmp(priv->essid, extra, length)) {
+ IPW_DEBUG_WX("ESSID set to current ESSID.\n");
+ err = 0;
+ goto done;
+ }
+
+ IPW_DEBUG_WX("Setting ESSID: '%s' (%d)\n", escape_essid(essid, length),
+ length);
+
+ priv->essid_len = length;
+ memcpy(priv->essid, essid, priv->essid_len);
+
+ err = ipw2100_set_essid(priv, essid, length, 0);
+
+ done:
+ return err;
+}
+
+static int ipw2100_wx_get_essid(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ /* If we are associated, trying to associate, or have a statically
+ * configured ESSID then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_ESSID ||
+ priv->status & STATUS_ASSOCIATED) {
+ IPW_DEBUG_WX("Getting essid: '%s'\n",
+ escape_essid(priv->essid, priv->essid_len));
+ memcpy(extra, priv->essid, priv->essid_len);
+ wrqu->essid.length = priv->essid_len;
+ wrqu->essid.flags = 1; /* active */
+ } else {
+ IPW_DEBUG_WX("Getting essid: ANY\n");
+ wrqu->essid.length = 0;
+ wrqu->essid.flags = 0; /* active */
+ }
+
+ return 0;
+}
+
+static int ipw2100_wx_set_nick(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ if (wrqu->data.length > IW_ESSID_MAX_SIZE)
+ return -E2BIG;
+
+ wrqu->data.length = min((size_t)wrqu->data.length, sizeof(priv->nick));
+ memset(priv->nick, 0, sizeof(priv->nick));
+ memcpy(priv->nick, extra, wrqu->data.length);
+
+ IPW_DEBUG_WX("SET Nickname -> %s \n", priv->nick);
+
+ return 0;
+}
+
+static int ipw2100_wx_get_nick(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ wrqu->data.length = strlen(priv->nick) + 1;
+ memcpy(extra, priv->nick, wrqu->data.length);
+ wrqu->data.flags = 1; /* active */
+
+ IPW_DEBUG_WX("GET Nickname -> %s \n", extra);
+
+ return 0;
+}
+
+static int ipw2100_wx_set_rate(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ u32 target_rate = wrqu->bitrate.value;
+ u32 rate;
+ int err = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ rate = 0;
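+ /* A fixed request selects exactly the matching TX_RATE_* bit; an
+ * auto request (fixed == 0) also sets every rate below the target
+ * so the adapter can fall back to slower rates. */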
+
+ if (target_rate == 1000000 ||
+ (!wrqu->bitrate.fixed && target_rate > 1000000))
+ rate |= TX_RATE_1_MBIT;
+ if (target_rate == 2000000 ||
+ (!wrqu->bitrate.fixed && target_rate > 2000000))
+ rate |= TX_RATE_2_MBIT;
+ if (target_rate == 5500000 ||
+ (!wrqu->bitrate.fixed && target_rate > 5500000))
+ rate |= TX_RATE_5_5_MBIT;
+ if (target_rate == 11000000 ||
+ (!wrqu->bitrate.fixed && target_rate > 11000000))
+ rate |= TX_RATE_11_MBIT;
+ if (rate == 0)
+ rate = DEFAULT_TX_RATES;
+
+ err = ipw2100_set_tx_rates(priv, rate, 0);
+
+ IPW_DEBUG_WX("SET Rate -> %04X \n", rate);
+ done:
+ return err;
+}
+
+
+static int ipw2100_wx_get_rate(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int val;
+ int len = sizeof(val);
+ int err;
+
+ if (!(priv->status & STATUS_ENABLED) ||
+ priv->status & STATUS_RF_KILL_MASK ||
+ !(priv->status & STATUS_ASSOCIATED)) {
+ wrqu->bitrate.value = 0;
+ return 0;
+ }
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ err = ipw2100_get_ordinal(priv, IPW_ORD_CURRENT_TX_RATE, &val, &len);
+ if (err) {
+ IPW_DEBUG_WX("failed querying ordinals.\n");
+ return err;
+ }
+
+ switch (val & TX_RATE_MASK) {
+ case TX_RATE_1_MBIT:
+ wrqu->bitrate.value = 1000000;
+ break;
+ case TX_RATE_2_MBIT:
+ wrqu->bitrate.value = 2000000;
+ break;
+ case TX_RATE_5_5_MBIT:
+ wrqu->bitrate.value = 5500000;
+ break;
+ case TX_RATE_11_MBIT:
+ wrqu->bitrate.value = 11000000;
+ break;
+ default:
+ wrqu->bitrate.value = 0;
+ }
+
+ IPW_DEBUG_WX("GET Rate -> %d \n", wrqu->bitrate.value);
+
+ done:
+ return err;
+}
+
+static int ipw2100_wx_set_rts(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int value, err;
+
+ /* Auto RTS not yet supported */
+ if (wrqu->rts.fixed == 0)
+ return -EINVAL;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (wrqu->rts.disabled)
+ value = priv->rts_threshold | RTS_DISABLED;
+ else {
+ if (wrqu->rts.value < 1 ||
+ wrqu->rts.value > 2304)
+ return -EINVAL;
+ value = wrqu->rts.value;
+ }
+
+ err = ipw2100_set_rts_threshold(priv, value);
+
+ IPW_DEBUG_WX("SET RTS Threshold -> 0x%08X \n", value);
+ done:
+ return err;
+}
+
+static int ipw2100_wx_get_rts(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ wrqu->rts.value = priv->rts_threshold & ~RTS_DISABLED;
+ wrqu->rts.fixed = 1; /* no auto select */
+
+ /* If RTS is set to the default value, then it is disabled */
+ wrqu->rts.disabled = (priv->rts_threshold & RTS_DISABLED) ? 1 : 0;
+
+ IPW_DEBUG_WX("GET RTS Threshold -> 0x%08X \n", wrqu->rts.value);
+
+ return 0;
+}
+
+static int ipw2100_wx_set_txpow(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err, value;
+
+ if (priv->ieee->iw_mode != IW_MODE_ADHOC)
+ return -EINVAL;
+
+ if (wrqu->txpower.disabled == 1 || wrqu->txpower.fixed == 0)
+ value = IPW_TX_POWER_DEFAULT;
+ else {
+ if (wrqu->txpower.value < IPW_TX_POWER_MIN_DBM ||
+ wrqu->txpower.value > IPW_TX_POWER_MAX_DBM)
+ return -EINVAL;
+
+ value = (wrqu->txpower.value - IPW_TX_POWER_MIN_DBM) * 16 /
+ (IPW_TX_POWER_MAX_DBM - IPW_TX_POWER_MIN_DBM);
+ }
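+ /* Map the requested dBm linearly onto the hardware's 0..16 power
+ * scale; ipw2100_wx_get_txpow() below performs (roughly) the
+ * inverse mapping. */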
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ err = ipw2100_set_tx_power(priv, value);
+
+ IPW_DEBUG_WX("SET TX Power -> %d \n", value);
+
+ done:
+
+ return err;
+}
+
+static int ipw2100_wx_get_txpow(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ if (priv->ieee->iw_mode != IW_MODE_ADHOC) {
+ wrqu->power.disabled = 1;
+ return 0;
+ }
+
+ if (priv->tx_power == IPW_TX_POWER_DEFAULT) {
+ wrqu->power.fixed = 0;
+ wrqu->power.value = IPW_TX_POWER_MAX_DBM;
+ wrqu->power.disabled = 1;
+ } else {
+ wrqu->power.disabled = 0;
+ wrqu->power.fixed = 1;
+ wrqu->power.value =
+ (priv->tx_power *
+ (IPW_TX_POWER_MAX_DBM - IPW_TX_POWER_MIN_DBM)) /
+ (IPW_TX_POWER_MAX - IPW_TX_POWER_MIN) +
+ IPW_TX_POWER_MIN_DBM;
+ }
+
+ wrqu->power.flags = IW_TXPOW_DBM;
+
+ IPW_DEBUG_WX("GET TX Power -> %d \n", wrqu->power.value);
+
+ return 0;
+}
+
+static int ipw2100_wx_set_frag(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ if (!wrqu->frag.fixed)
+ return -EINVAL;
+
+ if (wrqu->frag.disabled) {
+ priv->frag_threshold |= FRAG_DISABLED;
+ priv->ieee->fts = DEFAULT_FTS;
+ } else {
+ if (wrqu->frag.value < MIN_FRAG_THRESHOLD ||
+ wrqu->frag.value > MAX_FRAG_THRESHOLD)
+ return -EINVAL;
+
+ priv->ieee->fts = wrqu->frag.value & ~0x1;
+ priv->frag_threshold = priv->ieee->fts;
+ }
+
+ IPW_DEBUG_WX("SET Frag Threshold -> %d \n", priv->ieee->fts);
+
+ return 0;
+}
+
+static int ipw2100_wx_get_frag(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ wrqu->frag.value = priv->frag_threshold & ~FRAG_DISABLED;
+ wrqu->frag.fixed = 0; /* no auto select */
+ wrqu->frag.disabled = (priv->frag_threshold & FRAG_DISABLED) ? 1 : 0;
+
+ IPW_DEBUG_WX("GET Frag Threshold -> %d \n", wrqu->frag.value);
+
+ return 0;
+}
+
+static int ipw2100_wx_set_retry(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ if (wrqu->retry.flags & IW_RETRY_LIFETIME ||
+ wrqu->retry.disabled)
+ return -EINVAL;
+
+ if (!(wrqu->retry.flags & IW_RETRY_LIMIT))
+ return 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (wrqu->retry.flags & IW_RETRY_MIN) {
+ err = ipw2100_set_short_retry(priv, wrqu->retry.value);
+ IPW_DEBUG_WX("SET Short Retry Limit -> %d \n",
+ wrqu->retry.value);
+ goto done;
+ }
+
+ if (wrqu->retry.flags & IW_RETRY_MAX) {
+ err = ipw2100_set_long_retry(priv, wrqu->retry.value);
+ IPW_DEBUG_WX("SET Long Retry Limit -> %d \n",
+ wrqu->retry.value);
+ goto done;
+ }
+
+ err = ipw2100_set_short_retry(priv, wrqu->retry.value);
+ if (!err)
+ err = ipw2100_set_long_retry(priv, wrqu->retry.value);
+
+ IPW_DEBUG_WX("SET Both Retry Limits -> %d \n", wrqu->retry.value);
+
+ done:
+ return err;
+}
+
+static int ipw2100_wx_get_retry(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ wrqu->retry.disabled = 0; /* can't be disabled */
+
+ if ((wrqu->retry.flags & IW_RETRY_TYPE) ==
+ IW_RETRY_LIFETIME)
+ return -EINVAL;
+
+ if (wrqu->retry.flags & IW_RETRY_MAX) {
+ wrqu->retry.flags = IW_RETRY_LIMIT | IW_RETRY_MAX;
+ wrqu->retry.value = priv->long_retry_limit;
+ } else {
+ wrqu->retry.flags =
+ (priv->short_retry_limit !=
+ priv->long_retry_limit) ?
+ IW_RETRY_LIMIT | IW_RETRY_MIN : IW_RETRY_LIMIT;
+
+ wrqu->retry.value = priv->short_retry_limit;
+ }
+
+ IPW_DEBUG_WX("GET Retry -> %d \n", wrqu->retry.value);
+
+ return 0;
+}
+
+static int ipw2100_wx_set_scan(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ IPW_DEBUG_WX("Initiating scan...\n");
+ if (ipw2100_set_scan_options(priv) ||
+ ipw2100_start_scan(priv)) {
+ IPW_DEBUG_WX("Start scan failed.\n");
+
+ /* TODO: Mark a scan as pending so that a scan is started once
+ * the hardware is initialized */
+ }
+
+ done:
+ return err;
+}
+
+static int ipw2100_wx_get_scan(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_get_scan(priv->ieee, info, wrqu, extra);
+}
+
+
+/*
+ * Implementation based on code in hostap-driver v0.1.3 hostap_ioctl.c
+ */
+static int ipw2100_wx_set_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_set_encode(priv->ieee, info, wrqu, key);
+}
+
+static int ipw2100_wx_get_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_get_encode(priv->ieee, info, wrqu, key);
+}
+
+static int ipw2100_wx_set_power(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (wrqu->power.disabled) {
+ priv->power_mode = IPW_POWER_LEVEL(priv->power_mode);
+ err = ipw2100_set_power_mode(priv, IPW_POWER_MODE_CAM);
+ IPW_DEBUG_WX("SET Power Management Mode -> off\n");
+ goto done;
+ }
+
+ switch (wrqu->power.flags & IW_POWER_MODE) {
+ case IW_POWER_ON: /* If not specified */
+ case IW_POWER_MODE: /* If set all mask */
+ case IW_POWER_ALL_R: /* If explicitly stated all */
+ break;
+ default: /* Otherwise we don't support it */
+ IPW_DEBUG_WX("SET PM Mode: %X not supported.\n",
+ wrqu->power.flags);
+ err = -EOPNOTSUPP;
+ goto done;
+ }
+
+ /* If the user hasn't specified a power management mode yet, default
+ * to BATTERY */
+ priv->power_mode = IPW_POWER_ENABLED | priv->power_mode;
+ err = ipw2100_set_power_mode(priv, IPW_POWER_LEVEL(priv->power_mode));
+
+ IPW_DEBUG_WX("SET Power Management Mode -> 0x%02X\n",
+ priv->power_mode);
+
+ done:
+ return err;
+
+}
+
+static int ipw2100_wx_get_power(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ if (!(priv->power_mode & IPW_POWER_ENABLED)) {
+ wrqu->power.disabled = 1;
+ } else {
+ wrqu->power.disabled = 0;
+ wrqu->power.flags = 0;
+ }
+
+ IPW_DEBUG_WX("GET Power Management Mode -> %02X\n", priv->power_mode);
+
+ return 0;
+}
+
+
+/*
+ *
+ * IWPRIV handlers
+ *
+ */
+#ifdef CONFIG_IPW2100_PROMISC
+static int ipw2100_wx_set_promisc(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int *parms = (int *)extra;
+ int enable = (parms[0] > 0);
+ int err = 0;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (enable) {
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR) {
+ err = ipw2100_set_channel(priv, parms[1], 0);
+ goto done;
+ }
+ priv->channel = parms[1];
+ err = ipw2100_switch_mode(priv, IW_MODE_MONITOR);
+ } else {
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR)
+ err = ipw2100_switch_mode(priv, priv->last_mode);
+ }
+ done:
+ return err;
+}
+
+static int ipw2100_wx_reset(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ if (priv->status & STATUS_INITIALIZED)
+ ipw2100_reset_adapter(priv);
+ return 0;
+}
+
+#endif
+
+static int ipw2100_wx_set_powermode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err = 0, mode = *(int *)extra;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if ((mode < 1) || (mode > POWER_MODES))
+ mode = IPW_POWER_AUTO;
+
+ if (priv->power_mode != mode)
+ err = ipw2100_set_power_mode(priv, mode);
+ done:
+ return err;
+}
+
+#define MAX_POWER_STRING 80
+static int ipw2100_wx_get_powermode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int level = IPW_POWER_LEVEL(priv->power_mode);
+ s32 timeout, period;
+
+ if (!(priv->power_mode & IPW_POWER_ENABLED)) {
+ snprintf(extra, MAX_POWER_STRING,
+ "Power save level: %d (Off)", level);
+ } else {
+ switch (level) {
+ case IPW_POWER_MODE_CAM:
+ snprintf(extra, MAX_POWER_STRING,
+ "Power save level: %d (None)", level);
+ break;
+ case IPW_POWER_AUTO:
+ snprintf(extra, MAX_POWER_STRING,
+ "Power save level: %d (Auto)", 0);
+ break;
+ default:
+ timeout = timeout_duration[level - 1] / 1000;
+ period = period_duration[level - 1] / 1000;
+ snprintf(extra, MAX_POWER_STRING,
+ "Power save level: %d "
+ "(Timeout %dms, Period %dms)",
+ level, timeout, period);
+ }
+ }
+
+ wrqu->data.length = strlen(extra) + 1;
+
+ return 0;
+}
+
+
+static int ipw2100_wx_set_preamble(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ int err, mode = *(int *)extra;
+
+ if (!(priv->status & STATUS_INITIALIZED)) {
+ err = -EIO;
+ goto done;
+ }
+
+ if (mode == 1)
+ priv->config |= CFG_LONG_PREAMBLE;
+ else if (mode == 0)
+ priv->config &= ~CFG_LONG_PREAMBLE;
+ else
+ return -EINVAL;
+
+ err = ipw2100_system_config(priv, 0);
+
+done:
+ return err;
+}
+
+static int ipw2100_wx_get_preamble(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ /*
+ * No check of STATUS_INITIALIZED required
+ */
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+
+ if (priv->config & CFG_LONG_PREAMBLE)
+ snprintf(wrqu->name, IFNAMSIZ, "long (1)");
+ else
+ snprintf(wrqu->name, IFNAMSIZ, "auto (0)");
+
+ return 0;
+}
+
+static iw_handler ipw2100_wx_handlers[] =
+{
+ NULL, /* SIOCSIWCOMMIT */
+ ipw2100_wx_get_name, /* SIOCGIWNAME */
+ NULL, /* SIOCSIWNWID */
+ NULL, /* SIOCGIWNWID */
+ ipw2100_wx_set_freq, /* SIOCSIWFREQ */
+ ipw2100_wx_get_freq, /* SIOCGIWFREQ */
+ ipw2100_wx_set_mode, /* SIOCSIWMODE */
+ ipw2100_wx_get_mode, /* SIOCGIWMODE */
+ NULL, /* SIOCSIWSENS */
+ NULL, /* SIOCGIWSENS */
+ NULL, /* SIOCSIWRANGE */
+ ipw2100_wx_get_range, /* SIOCGIWRANGE */
+ NULL, /* SIOCSIWPRIV */
+ NULL, /* SIOCGIWPRIV */
+ NULL, /* SIOCSIWSTATS */
+ NULL, /* SIOCGIWSTATS */
+ NULL, /* SIOCSIWSPY */
+ NULL, /* SIOCGIWSPY */
+ NULL, /* SIOCSIWTHRSPY */
+ NULL, /* SIOCGIWTHRSPY */
+ ipw2100_wx_set_wap, /* SIOCSIWAP */
+ ipw2100_wx_get_wap, /* SIOCGIWAP */
+ NULL, /* -- hole -- */
+ NULL, /* SIOCGIWAPLIST -- deprecated */
+ ipw2100_wx_set_scan, /* SIOCSIWSCAN */
+ ipw2100_wx_get_scan, /* SIOCGIWSCAN */
+ ipw2100_wx_set_essid, /* SIOCSIWESSID */
+ ipw2100_wx_get_essid, /* SIOCGIWESSID */
+ ipw2100_wx_set_nick, /* SIOCSIWNICKN */
+ ipw2100_wx_get_nick, /* SIOCGIWNICKN */
+ NULL, /* -- hole -- */
+ NULL, /* -- hole -- */
+ ipw2100_wx_set_rate, /* SIOCSIWRATE */
+ ipw2100_wx_get_rate, /* SIOCGIWRATE */
+ ipw2100_wx_set_rts, /* SIOCSIWRTS */
+ ipw2100_wx_get_rts, /* SIOCGIWRTS */
+ ipw2100_wx_set_frag, /* SIOCSIWFRAG */
+ ipw2100_wx_get_frag, /* SIOCGIWFRAG */
+ ipw2100_wx_set_txpow, /* SIOCSIWTXPOW */
+ ipw2100_wx_get_txpow, /* SIOCGIWTXPOW */
+ ipw2100_wx_set_retry, /* SIOCSIWRETRY */
+ ipw2100_wx_get_retry, /* SIOCGIWRETRY */
+ ipw2100_wx_set_encode, /* SIOCSIWENCODE */
+ ipw2100_wx_get_encode, /* SIOCGIWENCODE */
+ ipw2100_wx_set_power, /* SIOCSIWPOWER */
+ ipw2100_wx_get_power, /* SIOCGIWPOWER */
+};
+
+#define IPW2100_PRIV_SET_PROMISC SIOCIWFIRSTPRIV
+#define IPW2100_PRIV_RESET SIOCIWFIRSTPRIV+1
+#define IPW2100_PRIV_SET_POWER SIOCIWFIRSTPRIV+2
+#define IPW2100_PRIV_GET_POWER SIOCIWFIRSTPRIV+3
+#define IPW2100_PRIV_SET_LONGPREAMBLE SIOCIWFIRSTPRIV+4
+#define IPW2100_PRIV_GET_LONGPREAMBLE SIOCIWFIRSTPRIV+5
+
+static const struct iw_priv_args ipw2100_private_args[] = {
+
+#ifdef CONFIG_IPW2100_PROMISC
+ {
+ IPW2100_PRIV_SET_PROMISC,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 2, 0, "monitor"
+ },
+ {
+ IPW2100_PRIV_RESET,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 0, 0, "reset"
+ },
+#endif /* CONFIG_IPW2100_PROMISC */
+
+ {
+ IPW2100_PRIV_SET_POWER,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 1, 0, "set_power"
+ },
+ {
+ IPW2100_PRIV_GET_POWER,
+ 0, IW_PRIV_TYPE_CHAR | IW_PRIV_SIZE_FIXED | MAX_POWER_STRING, "get_power"
+ },
+ {
+ IPW2100_PRIV_SET_LONGPREAMBLE,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 1, 0, "set_preamble"
+ },
+ {
+ IPW2100_PRIV_GET_LONGPREAMBLE,
+ 0, IW_PRIV_TYPE_CHAR | IW_PRIV_SIZE_FIXED | IFNAMSIZ, "get_preamble"
+ },
+};
+
+static iw_handler ipw2100_private_handler[] = {
+#ifdef CONFIG_IPW2100_PROMISC
+ ipw2100_wx_set_promisc,
+ ipw2100_wx_reset,
+#else /* CONFIG_IPW2100_PROMISC */
+ NULL,
+ NULL,
+#endif /* CONFIG_IPW2100_PROMISC */
+ ipw2100_wx_set_powermode,
+ ipw2100_wx_get_powermode,
+ ipw2100_wx_set_preamble,
+ ipw2100_wx_get_preamble,
+};
+
+struct iw_handler_def ipw2100_wx_handler_def =
+{
+ .standard = ipw2100_wx_handlers,
+ .num_standard = sizeof(ipw2100_wx_handlers) / sizeof(iw_handler),
+ .num_private = sizeof(ipw2100_private_handler) / sizeof(iw_handler),
+ .num_private_args = sizeof(ipw2100_private_args) /
+ sizeof(struct iw_priv_args),
+ .private = (iw_handler *)ipw2100_private_handler,
+ .private_args = (struct iw_priv_args *)ipw2100_private_args,
+};
+
+/*
+ * Get wireless statistics.
+ * Called by /proc/net/wireless
+ * Also called by SIOCGIWSTATS
+ */
+struct iw_statistics *ipw2100_wx_wireless_stats(struct net_device * dev)
+{
+ enum {
+ POOR = 30,
+ FAIR = 60,
+ GOOD = 80,
+ VERY_GOOD = 90,
+ EXCELLENT = 95,
+ PERFECT = 100
+ };
+ int rssi_qual;
+ int tx_qual;
+ int beacon_qual;
+
+ struct ipw2100_priv *priv = netdev_priv(dev);
+ struct iw_statistics *wstats;
+ u32 rssi, quality, tx_retries, missed_beacons, tx_failures;
+ u32 ord_len = sizeof(u32);
+
+ if (!priv)
+ return (struct iw_statistics *) NULL;
+
+ wstats = &priv->wstats;
+
+ /* if hw is disabled, then ipw2100_get_ordinal() can't be called.
+ * ipw2100_wx_wireless_stats seems to be called before fw is
+ * initialized. STATUS_ASSOCIATED will only be set if the hw is up
+ * and associated; if not associated, the values are all meaningless
+ * anyway, so zero them all and mark them INVALID */
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ wstats->miss.beacon = 0;
+ wstats->discard.retries = 0;
+ wstats->qual.qual = 0;
+ wstats->qual.level = 0;
+ wstats->qual.noise = 0;
+ wstats->qual.updated = 7;
+ wstats->qual.updated |= IW_QUAL_NOISE_INVALID |
+ IW_QUAL_QUAL_INVALID | IW_QUAL_LEVEL_INVALID;
+ return wstats;
+ }
+
+ if (ipw2100_get_ordinal(priv, IPW_ORD_STAT_PERCENT_MISSED_BCNS,
+ &missed_beacons, &ord_len))
+ goto fail_get_ordinal;
+
+ /* If we don't have a connection, the quality and level are 0 */
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ wstats->qual.qual = 0;
+ wstats->qual.level = 0;
+ } else {
+ if (ipw2100_get_ordinal(priv, IPW_ORD_RSSI_AVG_CURR,
+ &rssi, &ord_len))
+ goto fail_get_ordinal;
+ wstats->qual.level = rssi + IPW2100_RSSI_TO_DBM;
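+ /* Piecewise-linear map from average RSSI onto the 0..100 quality scale */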
+ if (rssi < 10)
+ rssi_qual = rssi * POOR / 10;
+ else if (rssi < 15)
+ rssi_qual = (rssi - 10) * (FAIR - POOR) / 5 + POOR;
+ else if (rssi < 20)
+ rssi_qual = (rssi - 15) * (GOOD - FAIR) / 5 + FAIR;
+ else if (rssi < 30)
+ rssi_qual = (rssi - 20) * (VERY_GOOD - GOOD) /
+ 10 + GOOD;
+ else
+ rssi_qual = (rssi - 30) * (PERFECT - VERY_GOOD) /
+ 10 + VERY_GOOD;
+
+ if (ipw2100_get_ordinal(priv, IPW_ORD_STAT_PERCENT_RETRIES,
+ &tx_retries, &ord_len))
+ goto fail_get_ordinal;
+
+ if (tx_retries > 75)
+ tx_qual = (90 - tx_retries) * POOR / 15;
+ else if (tx_retries > 70)
+ tx_qual = (75 - tx_retries) * (FAIR - POOR) / 5 + POOR;
+ else if (tx_retries > 65)
+ tx_qual = (70 - tx_retries) * (GOOD - FAIR) / 5 + FAIR;
+ else if (tx_retries > 50)
+ tx_qual = (65 - tx_retries) * (VERY_GOOD - GOOD) /
+ 15 + GOOD;
+ else
+ tx_qual = (50 - tx_retries) *
+ (PERFECT - VERY_GOOD) / 50 + VERY_GOOD;
+
+ if (missed_beacons > 50)
+ beacon_qual = (60 - missed_beacons) * POOR / 10;
+ else if (missed_beacons > 40)
+ beacon_qual = (50 - missed_beacons) * (FAIR - POOR) /
+ 10 + POOR;
+ else if (missed_beacons > 32)
+ beacon_qual = (40 - missed_beacons) * (GOOD - FAIR) /
+ 18 + FAIR;
+ else if (missed_beacons > 20)
+ beacon_qual = (32 - missed_beacons) *
+ (VERY_GOOD - GOOD) / 20 + GOOD;
+ else
+ beacon_qual = (20 - missed_beacons) *
+ (PERFECT - VERY_GOOD) / 20 + VERY_GOOD;
+
+ quality = min(beacon_qual, min(tx_qual, rssi_qual));
+
+#ifdef CONFIG_IPW_DEBUG
+ if (beacon_qual == quality)
+ IPW_DEBUG_WX("Quality clamped by Missed Beacons\n");
+ else if (tx_qual == quality)
+ IPW_DEBUG_WX("Quality clamped by Tx Retries\n");
+ else if (quality != 100)
+ IPW_DEBUG_WX("Quality clamped by Signal Strength\n");
+ else
+ IPW_DEBUG_WX("Quality not clamped.\n");
+#endif
+
+ wstats->qual.qual = quality;
+ wstats->qual.level = rssi + IPW2100_RSSI_TO_DBM;
+ }
+
+ wstats->qual.noise = 0;
+ wstats->qual.updated = 7;
+ wstats->qual.updated |= IW_QUAL_NOISE_INVALID;
+
+ /* FIXME: this is percent and not a # */
+ wstats->miss.beacon = missed_beacons;
+
+ if (ipw2100_get_ordinal(priv, IPW_ORD_STAT_TX_FAILURES,
+ &tx_failures, &ord_len))
+ goto fail_get_ordinal;
+ wstats->discard.retries = tx_failures;
+
+ return wstats;
+
+ fail_get_ordinal:
+ IPW_DEBUG_WX("failed querying ordinals.\n");
+
+ return (struct iw_statistics *) NULL;
+}
+
+void ipw2100_wx_event_work(struct ipw2100_priv *priv)
+{
+ union iwreq_data wrqu;
+ int len = ETH_ALEN;
+
+ if (priv->status & STATUS_STOPPING)
+ return;
+
+ IPW_DEBUG_WX("enter\n");
+
+
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+
+ /* Fetch BSSID from the hardware */
+ if (!(priv->status & STATUS_ASSOCIATED) ||
+ priv->status & STATUS_RF_KILL_MASK ||
+ ipw2100_get_ordinal(priv, IPW_ORD_STAT_ASSN_AP_BSSID,
+ &priv->bssid, &len)) {
+ memset(wrqu.ap_addr.sa_data, 0, ETH_ALEN);
+ } else {
+ memcpy(wrqu.ap_addr.sa_data, priv->bssid, ETH_ALEN);
+ memcpy(&priv->ieee->bssid, priv->bssid, ETH_ALEN);
+ }
+
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ /* This is a disassociation event, so kick the firmware to
+ * look for another AP */
+ if (priv->config & CFG_STATIC_ESSID)
+ ipw2100_set_essid(priv, priv->essid, priv->essid_len, 0);
+ else
+ ipw2100_set_essid(priv, NULL, 0, 0);
+ }
+
+
+ wireless_send_event(priv->net_dev, SIOCGIWAP, &wrqu, NULL);
+}
+
+#define IPW2100_FW_MAJOR_VERSION 1
+#define IPW2100_FW_MINOR_VERSION 3
+
+#define IPW2100_FW_MINOR(x) (((x) >> 8) & 0xff)
+#define IPW2100_FW_MAJOR(x) ((x) & 0xff)
+
+#define IPW2100_FW_VERSION ((IPW2100_FW_MINOR_VERSION << 8) | \
+ IPW2100_FW_MAJOR_VERSION)
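+/* e.g. version 1.3 is encoded as 0x0301 -- the minor number is in the high byte */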
+
+#define IPW2100_FW_PREFIX "ipw2100-" __stringify(IPW2100_FW_MAJOR_VERSION) \
+"." __stringify(IPW2100_FW_MINOR_VERSION)
+
+#define IPW2100_FW_NAME(x) IPW2100_FW_PREFIX "" x ".fw"
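+/* Expands to e.g. "ipw2100-1.3.fw"; ipw2100_get_firmware() below passes
+ * "-i" for IBSS, "-p" for promiscuous/monitor and "" for BSS as x */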
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+
+static char *firmware = NULL;
+
+/* Module parameter for the path to the firmware */
+#include <linux/moduleparam.h>
+module_param(firmware, charp, 0);
+
+MODULE_PARM_DESC(firmware, "complete path to firmware file");
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+
+/*
+
+BINARY FIRMWARE HEADER FORMAT
+
+offset length desc
+0 2 version
+2 2 mode == 0:BSS,1:IBSS,2:PROMISC
+4 4 fw_size
+8 4 uc_size
+12 fw_size firmware data
+12 + fw_size uc_size microcode data
+
+*/
+
+struct ipw2100_fw_header {
+ short version;
+ short mode;
+ unsigned int fw_size;
+ unsigned int uc_size;
+} __attribute__ ((packed));
+
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+
+/*
+ *
+ * The following was originally based on the mod_firmware_load in
+ * drivers/sound/sound_firmware.c. Primary changes revolved around
+ * making it work for firmware images > 128k and to support having
+ * both a firmware and microcode image in the file loaded.
+ *
+ */
+static void ipw2100_fw_free(struct ipw2100_fw *fw)
+{
+ struct ipw2100_fw_chunk *c;
+ struct list_head *e;
+
+ while (!list_empty(&fw->fw.chunk_list)) {
+ e = fw->fw.chunk_list.next;
+ c = list_entry(e, struct ipw2100_fw_chunk, list);
+ list_del(e);
+ vfree(c->buf);
+ vfree(c);
+ }
+
+ while (!list_empty(&fw->uc.chunk_list)) {
+ e = fw->uc.chunk_list.next;
+ c = list_entry(e, struct ipw2100_fw_chunk, list);
+ list_del(e);
+ vfree(c->buf);
+ vfree(c);
+ }
+}
+
+
+static int ipw2100_fw_load(struct file *filp, struct ipw2100_fw_chunk_set *cs, long size)
+{
+ struct ipw2100_fw_chunk *c;
+ int i = 0;
+
+ /* Break firmware image into chunks of 128k */
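+ /* (1 << 17 == 128KiB; a partial trailing chunk still needs an entry) */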
+ cs->size = size;
+ cs->chunks = cs->size >> 17;
+
+ if (cs->size & 0x1FFFF)
+ cs->chunks++;
+
+ IPW_DEBUG_FW("Loading %ld bytes from %u chunks\n",
+ cs->size, cs->chunks);
+
+ /* Load the chunks */
+ while (size > 0) {
+ i++;
+
+ c = (struct ipw2100_fw_chunk *)vmalloc(
+ sizeof(struct ipw2100_fw_chunk));
+ if (c == NULL) {
+ IPW_DEBUG_INFO("Out of memory loading firmware "
+ "chunk %d.\n", i);
+ goto fail;
+ }
+ c->pos = 0;
+
+ if (size >= 0x20000)
+ c->len = 0x20000;
+ else
+ c->len = size;
+
+ c->buf = (unsigned char *)vmalloc(c->len);
+ if (c->buf == NULL) {
+ IPW_DEBUG_INFO("Out of memory loading firmware "
+ "chunk %d.\n", i);
+ goto fail;
+
+ }
+ if (vfs_read(filp, c->buf, c->len, &filp->f_pos) != c->len) {
+ IPW_DEBUG_INFO("Failed to read chunk firmware "
+ "chunk %d.\n", i);
+ goto fail;
+ }
+
+ list_add_tail(&c->list, &cs->chunk_list);
+
+ IPW_DEBUG_FW("Chunk %d loaded: %lu bytes\n",
+ i, c->len);
+ size -= c->len;
+ }
+
+ return 0;
+
+ fail:
+ return 1;
+}
+
+static int ipw2100_do_mod_firmware_load(const char *fn, struct ipw2100_fw *fw)
+{
+ struct file *filp;
+ long l;
+ struct ipw2100_fw_header h;
+
+ /* Make sure the lists are init'd so that error paths can safely walk
+ * them to free potentially allocated storage */
+ INIT_LIST_HEAD(&fw->fw.chunk_list);
+ INIT_LIST_HEAD(&fw->uc.chunk_list);
+
+ filp = filp_open(fn, 0, 0);
+ if (IS_ERR(filp)) {
+ IPW_DEBUG_INFO("Unable to load '%s'.\n", fn);
+ return 1;
+ }
+
+ l = i_size_read(filp->f_dentry->d_inode);
+ IPW_DEBUG_FW("Loading %ld bytes for firmware '%s'\n", l, fn);
+
+ if (vfs_read(filp, (char *)&h, sizeof(h), &filp->f_pos) != sizeof(h)) {
+ IPW_DEBUG_INFO("Failed to read '%s'.\n", fn);
+ goto fail;
+ }
+
+ if (IPW2100_FW_MAJOR(h.version) != IPW2100_FW_MAJOR_VERSION) {
+ IPW_DEBUG_WARNING("Firmware image not compatible "
+ "(detected version id of %u). "
+ "See Documentation/networking/README.ipw2100\n",
+ h.version);
+ goto fail;
+ }
+
+ fw->version = h.version;
+
+ if (ipw2100_fw_load(filp, &fw->fw, h.fw_size))
+ goto fail;
+
+ if (ipw2100_fw_load(filp, &fw->uc, h.uc_size))
+ goto fail;
+
+ filp_close(filp, current->files);
+ return 0;
+
+ fail:
+ ipw2100_fw_free(fw);
+ filp_close(filp, current->files);
+ return 1;
+}
+
+static int ipw2100_mod_firmware_load(const char *fn, struct ipw2100_fw *fw)
+{
+ int r;
+ mm_segment_t fs = get_fs();
+ set_fs(get_ds());
+ r = ipw2100_do_mod_firmware_load(fn, fw);
+ set_fs(fs);
+ return r;
+}
+
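+/* Copy @len bytes from the chunk list, starting at node @e; spills into the
+ * following chunk when the current one is exhausted and returns the node
+ * from which the next read should continue. */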
+static inline struct list_head *ipw2100_fw_read(
+ struct list_head *e, struct ipw2100_fw_chunk_set *cs,
+ unsigned char *data, size_t len)
+{
+ struct ipw2100_fw_chunk *c = list_entry(e, struct ipw2100_fw_chunk,
+ list);
+ unsigned int avail = c->len - c->pos;
+ if (avail <= len) {
+ struct ipw2100_fw_chunk *tmp;
+
+ memcpy(data, c->buf + c->pos, avail);
+ c->pos = 0;
+
+ IPW_DEBUG_INFO("advancing to next chunk...\n");
+
+ e = e->next;
+ tmp = list_entry(e, struct ipw2100_fw_chunk, list);
+
+ if (avail != len) {
+ memcpy(data + avail,
+ tmp->buf + tmp->pos,
+ len - avail);
+ tmp->pos += len - avail;
+ }
+
+ return e;
+ }
+
+ memcpy(data, c->buf + c->pos, len);
+ c->pos += len;
+
+ return e;
+}
+
+static inline struct list_head *ipw2100_fw_readw(
+ struct list_head *e, struct ipw2100_fw_chunk_set *cs,
+ unsigned short *data)
+{
+ return ipw2100_fw_read(e, cs, (unsigned char *)data, sizeof(*data));
+}
+
+static inline struct list_head *ipw2100_fw_readl(
+ struct list_head *e, struct ipw2100_fw_chunk_set *cs,
+ unsigned int *data)
+{
+ return ipw2100_fw_read(e, cs, (unsigned char *)data, sizeof(*data));
+}
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+static int ipw2100_mod_firmware_load(struct ipw2100_fw *fw)
+{
+ struct ipw2100_fw_header *h =
+ (struct ipw2100_fw_header *)fw->fw_entry->data;
+
+ if (IPW2100_FW_MAJOR(h->version) != IPW2100_FW_MAJOR_VERSION) {
+ IPW_DEBUG_WARNING("Firmware image not compatible "
+ "(detected version id of %u). "
+ "See Documentation/networking/README.ipw2100\n",
+ h->version);
+ return 1;
+ }
+
+ fw->version = h->version;
+ fw->fw.data = fw->fw_entry->data + sizeof(struct ipw2100_fw_header);
+ fw->fw.size = h->fw_size;
+ fw->uc.data = fw->fw.data + h->fw_size;
+ fw->uc.size = h->uc_size;
+
+ return 0;
+}
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+
+int ipw2100_get_firmware(struct ipw2100_priv *priv, struct ipw2100_fw *fw)
+{
+ char *fw_name;
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+ int err = 0;
+
+ IPW_DEBUG_INFO("%s: Using legacy firmware load.\n",
+ priv->net_dev->name);
+
+ if (!firmware) {
+ switch (priv->ieee->iw_mode) {
+ case IW_MODE_ADHOC:
+ fw_name = "/etc/firmware/" IPW2100_FW_NAME("-i");
+ break;
+#ifdef CONFIG_IPW2100_PROMISC
+ case IW_MODE_MONITOR:
+ fw_name = "/etc/firmware/" IPW2100_FW_NAME("-p");
+ break;
+#endif
+ case IW_MODE_INFRA:
+ default:
+ fw_name = "/etc/firmware/" IPW2100_FW_NAME("");
+ break;
+ }
+ } else
+ fw_name = firmware;
+
+ err = ipw2100_mod_firmware_load(fw_name, fw);
+ if (err) {
+ IPW_DEBUG_ERROR("%s: Firmware '%s' not available. "
+ "See Documentation/networking/README.ipw2100\n",
+ priv->net_dev->name, fw_name);
+ return -EIO;
+ }
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+ int rc;
+
+ IPW_DEBUG_INFO("%s: Using hotplug firmware load.\n",
+ priv->net_dev->name);
+
+ switch (priv->ieee->iw_mode) {
+ case IW_MODE_ADHOC:
+ fw_name = IPW2100_FW_NAME("-i");
+ break;
+#ifdef CONFIG_IPW2100_PROMISC
+ case IW_MODE_MONITOR:
+ fw_name = IPW2100_FW_NAME("-p");
+ break;
+#endif
+ case IW_MODE_INFRA:
+ default:
+ fw_name = IPW2100_FW_NAME("");
+ break;
+ }
+
+ rc = request_firmware(&fw->fw_entry, fw_name, &priv->pci_dev->dev);
+
+ if (rc < 0) {
+ IPW_DEBUG_ERROR(
+ "%s: Firmware '%s' not available or load failed.\n",
+ priv->net_dev->name, fw_name);
+ return rc;
+ }
+ IPW_DEBUG_INFO("firmware data %p size %d", fw->fw_entry->data,
+ fw->fw_entry->size);
+
+ ipw2100_mod_firmware_load(fw);
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ return 0;
+}
+
+void ipw2100_release_firmware(struct ipw2100_priv *priv,
+ struct ipw2100_fw *fw)
+{
+ fw->version = 0;
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+
+ ipw2100_fw_free(fw);
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ if (fw->fw_entry)
+ release_firmware(fw->fw_entry);
+ fw->fw_entry = NULL;
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+}
+
+
+int ipw2100_get_fwversion(struct ipw2100_priv *priv, char *buf, size_t max)
+{
+ char ver[MAX_FW_VERSION_LEN];
+ u32 len = MAX_FW_VERSION_LEN;
+ u32 tmp;
+ int i;
+ /* firmware version is an ascii string (max len of 14) */
+ if (ipw2100_get_ordinal(priv, IPW_ORD_STAT_FW_VER_NUM,
+ ver, &len))
+ return -EIO;
+ tmp = max;
+ if (len >= max)
+ len = max - 1;
+ for (i = 0; i < len; i++)
+ buf[i] = ver[i];
+ buf[i] = '\0';
+ return tmp;
+}
+
+int ipw2100_get_ucodeversion(struct ipw2100_priv *priv, char *buf, size_t max)
+{
+ u32 ver;
+ u32 len = sizeof(ver);
+ /* microcode version is a 32 bit integer */
+ if (ipw2100_get_ordinal(priv, IPW_ORD_UCODE_VERSION,
+ &ver, &len))
+ return -EIO;
+ return snprintf(buf, max, "%08X", ver);
+}
+
+/*
+ * On exit, the firmware will have been freed from the fw list
+ */
+int ipw2100_fw_download(struct ipw2100_priv *priv, struct ipw2100_fw *fw)
+{
+ /* firmware is constructed of N contiguous entries, each entry is
+ * structured as:
+ *
+ * offset size desc
+ * 0 4 address to write to
+ * 4 2 length of data run
+ * 6 length data
+ */
+ unsigned int addr;
+ unsigned short len;
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+ unsigned char data[32];
+
+ struct ipw2100_fw_chunk_set *cs = &fw->fw;
+ struct list_head *e = cs->chunk_list.next;
+ unsigned int size = cs->size;
+
+ while (size > 0) {
+ e = ipw2100_fw_readl(e, cs, &addr);
+ size -= sizeof(addr);
+
+ e = ipw2100_fw_readw(e, cs, &len);
+ size -= sizeof(len);
+
+ if (len > 32) {
+ IPW_DEBUG_ERROR(
+ "Invalid firmware run-length of %d bytes\n",
+ len);
+ return -EINVAL;
+ }
+
+ e = ipw2100_fw_read(e, cs, data, len);
+ size -= len;
+
+ write_nic_memory(priv->net_dev, addr, len, data);
+ }
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ const unsigned char *firmware_data = fw->fw.data;
+ unsigned int firmware_data_left = fw->fw.size;
+
+ while (firmware_data_left > 0) {
+ addr = *(u32 *)(firmware_data);
+ firmware_data += 4;
+ firmware_data_left -= 4;
+
+ len = *(u16 *)(firmware_data);
+ firmware_data += 2;
+ firmware_data_left -= 2;
+
+ if (len > 32) {
+ IPW_DEBUG_ERROR(
+ "Invalid firmware run-length of %d bytes\n",
+ len);
+ return -EINVAL;
+ }
+
+ write_nic_memory(priv->net_dev, addr, len, firmware_data);
+ firmware_data += len;
+ firmware_data_left -= len;
+ }
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ return 0;
+}
+
+struct symbol_alive_response {
+ u8 cmd_id;
+ u8 seq_num;
+ u8 ucode_rev;
+ u8 eeprom_valid;
+ u16 valid_flags;
+ u8 IEEE_addr[6];
+ u16 flags;
+ u16 pcb_rev;
+ u16 clock_settle_time; // 1us LSB
+ u16 powerup_settle_time; // 1us LSB
+ u16 hop_settle_time; // 1us LSB
+ u8 date[3]; // month, day, year
+ u8 time[2]; // hours, minutes
+ u8 ucode_valid;
+} __attribute__ ((packed));
+
+int ipw2100_ucode_download(struct ipw2100_priv *priv, struct ipw2100_fw *fw)
+{
+ struct net_device *dev = priv->net_dev;
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+
+ struct ipw2100_fw_chunk_set *cs = &fw->uc;
+ struct list_head *e = cs->chunk_list.next;
+ unsigned int size = cs->size;
+ unsigned short uc_data;
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ const unsigned char *microcode_data = fw->uc.data;
+ unsigned int microcode_data_left = fw->uc.size;
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ struct symbol_alive_response response;
+ int i, j;
+ u8 data;
+
+ /* Symbol control */
+ write_nic_word(dev, IPW2100_CONTROL_REG, 0x703);
+ readl((void *)(dev->base_addr));
+ write_nic_word(dev, IPW2100_CONTROL_REG, 0x707);
+ readl((void *)(dev->base_addr));
+
+ /* HW config */
+ write_nic_byte(dev, 0x210014, 0x72); /* fifo width =16 */
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210014, 0x72); /* fifo width =16 */
+ readl((void *)(dev->base_addr));
+
+ /* EN_CS_ACCESS bit to reset control store pointer */
+ write_nic_byte(dev, 0x210000, 0x40);
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210000, 0x0);
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210000, 0x40);
+ readl((void *)(dev->base_addr));
+
+ /* copy microcode from buffer into Symbol */
+
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+
+ while (size > 0) {
+ e = ipw2100_fw_readw(e, cs, &uc_data);
+ size -= sizeof(uc_data);
+ write_nic_byte(dev, 0x210010, uc_data & 0xFF);
+ write_nic_byte(dev, 0x210010, (uc_data >> 8) & 0xFF);
+ }
+
+#else /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ while (microcode_data_left > 0) {
+ write_nic_byte(dev, 0x210010, *microcode_data++);
+ write_nic_byte(dev, 0x210010, *microcode_data++);
+ microcode_data_left -= 2;
+ }
+
+#endif /* CONFIG_IPW2100_LEGACY_FW_LOAD */
+
+ /* EN_CS_ACCESS bit to reset the control store pointer */
+ write_nic_byte(dev, 0x210000, 0x0);
+ readl((void *)(dev->base_addr));
+
+ /* Enable System (Reg 0)
+ * first enable causes garbage in RX FIFO */
+ write_nic_byte(dev, 0x210000, 0x0);
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210000, 0x80);
+ readl((void *)(dev->base_addr));
+
+ /* Reset External Baseband Reg */
+ write_nic_word(dev, IPW2100_CONTROL_REG, 0x703);
+ readl((void *)(dev->base_addr));
+ write_nic_word(dev, IPW2100_CONTROL_REG, 0x707);
+ readl((void *)(dev->base_addr));
+
+ /* HW Config (Reg 5) */
+ write_nic_byte(dev, 0x210014, 0x72); // fifo width =16
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210014, 0x72); // fifo width =16
+ readl((void *)(dev->base_addr));
+
+ /* Enable System (Reg 0)
+ * second enable should be OK */
+ write_nic_byte(dev, 0x210000, 0x00); // clear enable system
+ readl((void *)(dev->base_addr));
+ write_nic_byte(dev, 0x210000, 0x80); // set enable system
+
+ /* check Symbol is enabled - upped this from 5 as it wasn't always
+ * catching the update */
+ for (i = 0; i < 10; i++) {
+ udelay(10);
+
+ /* check Dino is enabled bit */
+ read_nic_byte(dev, 0x210000, &data);
+ if (data & 0x1)
+ break;
+ }
+
+ if (i == 10) {
+ IPW_DEBUG_ERROR("%s: Error initializing Symbol\n",
+ dev->name);
+ return -EIO;
+ }
+
+ /* Get Symbol alive response */
+ for (i = 0; i < 30; i++) {
+ /* Read alive response structure */
+ for (j = 0;
+ j < (sizeof(struct symbol_alive_response) >> 1);
+ j++)
+ read_nic_word(dev, 0x210004,
+ ((u16 *)&response) + j);
+
+ if ((response.cmd_id == 1) &&
+ (response.ucode_valid == 0x1))
+ break;
+ udelay(10);
+ }
+
+ if (i == 30) {
+ IPW_DEBUG_ERROR("%s: No response from Symbol - hw not alive\n",
+ dev->name);
+ printk_buf(IPW_DL_ERROR, (u8*)&response, sizeof(response));
+ return -EIO;
+ }
+
+ return 0;
+}
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+******************************************************************************/
+#ifndef _IPW2100_H
+#define _IPW2100_H
+
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/list.h>
+#include <linux/delay.h>
+#include <linux/skbuff.h>
+#include <asm/io.h>
+#include <linux/socket.h>
+#include <linux/if_arp.h>
+#include <linux/wireless.h>
+#include <linux/version.h>
+#include <net/iw_handler.h> // new driver API
+
+#include "ieee80211.h"
+
+#include <linux/workqueue.h>
+
+#ifndef IRQ_NONE
+typedef void irqreturn_t;
+#define IRQ_NONE
+#define IRQ_HANDLED
+#define IRQ_RETVAL(x)
+#endif
+
+#if WIRELESS_EXT < 17
+#define IW_QUAL_QUAL_INVALID 0x10
+#define IW_QUAL_LEVEL_INVALID 0x20
+#define IW_QUAL_NOISE_INVALID 0x40
+#endif
+
+#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,5) )
+#define pci_dma_sync_single_for_cpu pci_dma_sync_single
+#define pci_dma_sync_single_for_device pci_dma_sync_single
+#endif
+
+#ifndef HAVE_FREE_NETDEV
+#define free_netdev(x) kfree(x)
+#endif
+
+
+
+struct ipw2100_priv;
+struct ipw2100_tx_packet;
+struct ipw2100_rx_packet;
+
+#ifdef CONFIG_IPW_DEBUG
+enum { IPW_DEBUG_ENABLED = 1 };
+extern u32 ipw2100_debug_level;
+#define IPW_DEBUG(level, message...) \
+do { \
+ if (ipw2100_debug_level & (level)) { \
+ printk(KERN_DEBUG "ipw2100: %c %s ", \
+ in_interrupt() ? 'I' : 'U', __FUNCTION__); \
+ printk(message); \
+ } \
+} while (0)
+#else
+enum { IPW_DEBUG_ENABLED = 0 };
+#define IPW_DEBUG(level, message...) do {} while (0)
+#endif /* CONFIG_IPW_DEBUG */
+
+#define IPW_DL_UNINIT 0x80000000
+#define IPW_DL_NONE 0x00000000
+#define IPW_DL_ALL 0x7FFFFFFF
+
+/*
+ * To use the debug system;
+ *
+ * If you are defining a new debug classification, simply add it to the #define
+ * list here in the form of:
+ *
+ * #define IPW_DL_xxxx VALUE
+ *
+ * shifting value to the left one bit from the previous entry. xxxx should be
+ * the name of the classification (for example, WEP)
+ *
+ * You then need to either add an IPW_DEBUG_xxxx() macro definition for your
+ * classification, or use IPW_DEBUG(IPW_DL_xxxx, ...) whenever you want
+ * to send output to that classification.
+ *
+ * To add your debug level to the list of levels seen when you perform
+ *
+ * % cat /proc/net/ipw2100/debug_level
+ *
+ * you simply need to add your entry to the ipw2100_debug_levels array.
+ *
+ * If you do not see debug_level in /proc/net/ipw2100, then you do not have
+ * CONFIG_IPW_DEBUG defined in your kernel configuration.
+ *
+ */
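+/*
+ * Example (hypothetical classification): a new WPA class would be added as
+ *
+ * #define IPW_DL_WPA BIT(29)
+ * #define IPW_DEBUG_WPA(f...) IPW_DEBUG(IPW_DL_WPA, ## f)
+ *
+ * followed by an entry in the ipw2100_debug_levels array.
+ */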
+
+#define IPW_DL_ERROR BIT(0)
+#define IPW_DL_WARNING BIT(1)
+#define IPW_DL_INFO BIT(2)
+#define IPW_DL_WX BIT(3)
+#define IPW_DL_HC BIT(5)
+#define IPW_DL_STATE BIT(6)
+
+#define IPW_DL_NOTIF BIT(10)
+#define IPW_DL_SCAN BIT(11)
+#define IPW_DL_ASSOC BIT(12)
+#define IPW_DL_DROP BIT(13)
+
+#define IPW_DL_IOCTL BIT(14)
+#define IPW_DL_RF_KILL BIT(17)
+
+
+#define IPW_DL_MANAGE BIT(15)
+#define IPW_DL_FW BIT(16)
+
+#define IPW_DL_FRAG BIT(21)
+#define IPW_DL_WEP BIT(22)
+#define IPW_DL_TX BIT(23)
+#define IPW_DL_RX BIT(24)
+#define IPW_DL_ISR BIT(25)
+#define IPW_DL_IO BIT(26)
+#define IPW_DL_TRACE BIT(28)
+
+#define IPW_DEBUG_ERROR(f, a...) printk(KERN_ERR DRV_NAME ": " f, ## a)
+#define IPW_DEBUG_WARNING(f, a...) printk(KERN_WARNING DRV_NAME ": " f, ## a)
+#define IPW_DEBUG_INFO(f...) IPW_DEBUG(IPW_DL_INFO, ## f)
+#define IPW_DEBUG_WX(f...) IPW_DEBUG(IPW_DL_WX, ## f)
+#define IPW_DEBUG_SCAN(f...) IPW_DEBUG(IPW_DL_SCAN, ## f)
+#define IPW_DEBUG_NOTIF(f...) IPW_DEBUG(IPW_DL_NOTIF, ## f)
+#define IPW_DEBUG_TRACE(f...) IPW_DEBUG(IPW_DL_TRACE, ## f)
+#define IPW_DEBUG_RX(f...) IPW_DEBUG(IPW_DL_RX, ## f)
+#define IPW_DEBUG_TX(f...) IPW_DEBUG(IPW_DL_TX, ## f)
+#define IPW_DEBUG_ISR(f...) IPW_DEBUG(IPW_DL_ISR, ## f)
+#define IPW_DEBUG_MANAGEMENT(f...) IPW_DEBUG(IPW_DL_MANAGE, ## f)
+#define IPW_DEBUG_WEP(f...) IPW_DEBUG(IPW_DL_WEP, ## f)
+#define IPW_DEBUG_HC(f...) IPW_DEBUG(IPW_DL_HC, ## f)
+#define IPW_DEBUG_FRAG(f...) IPW_DEBUG(IPW_DL_FRAG, ## f)
+#define IPW_DEBUG_FW(f...) IPW_DEBUG(IPW_DL_FW, ## f)
+#define IPW_DEBUG_RF_KILL(f...) IPW_DEBUG(IPW_DL_RF_KILL, ## f)
+#define IPW_DEBUG_DROP(f...) IPW_DEBUG(IPW_DL_DROP, ## f)
+#define IPW_DEBUG_IO(f...) IPW_DEBUG(IPW_DL_IO, ## f)
+#define IPW_DEBUG_IOCTL(f...) IPW_DEBUG(IPW_DL_IOCTL, ## f)
+#define IPW_DEBUG_STATE(f, a...) IPW_DEBUG(IPW_DL_STATE | IPW_DL_ASSOC | IPW_DL_INFO, f, ## a)
+#define IPW_DEBUG_ASSOC(f, a...) IPW_DEBUG(IPW_DL_ASSOC | IPW_DL_INFO, f, ## a)
+
+
+#define VERIFY(f) \
+do { \
+ int status = (f); \
+ if (status) \
+ return status; \
+} while (0)
+
+enum {
+ IPW_HW_STATE_DISABLED = 1,
+ IPW_HW_STATE_ENABLED = 0
+};
+
+struct ssid_context {
+ char ssid[IW_ESSID_MAX_SIZE + 1];
+ int ssid_len;
+ unsigned char bssid[ETH_ALEN];
+ int port_type;
+ int channel;
+
+};
+
+extern const char *port_type_str[];
+extern const char *band_str[];
+
+#define NUMBER_OF_BD_PER_COMMAND_PACKET 1
+#define NUMBER_OF_BD_PER_DATA_PACKET 2
+
+#define IPW_MAX_BDS 6
+#define NUMBER_OF_OVERHEAD_BDS_PER_PACKETR 2
+#define NUMBER_OF_BDS_TO_LEAVE_FOR_COMMANDS 1
+
+#define REQUIRED_SPACE_IN_RING_FOR_COMMAND_PACKET \
+ (IPW_BD_QUEUE_W_R_MIN_SPARE + NUMBER_OF_BD_PER_COMMAND_PACKET)
+
+struct bd_status {
+ union {
+ struct { u8 nlf:1, txType:2, intEnabled:1, reserved:4;} fields;
+ u8 field;
+ } info;
+} __attribute__ ((packed));
+
+#define IPW_BUFDESC_LAST_FRAG 0
+
+struct ipw2100_bd {
+ u32 host_addr;
+ u32 buf_length;
+ struct bd_status status;
+ /* number of fragments for frame (should be set only for
+ * 1st TBD) */
+ u8 num_fragments;
+ u8 reserved[6];
+} __attribute__ ((packed));
+
+#define IPW_BD_QUEUE_LENGTH(n) (1<<n)
+#define IPW_BD_ALIGNMENT(L) (L*sizeof(struct ipw2100_bd))
+
+#define IPW_BD_STATUS_TX_FRAME_802_3 0x00
+#define IPW_BD_STATUS_TX_FRAME_NOT_LAST_FRAGMENT 0x01
+#define IPW_BD_STATUS_TX_FRAME_COMMAND 0x02
+#define IPW_BD_STATUS_TX_FRAME_802_11 0x04
+#define IPW_BD_STATUS_TX_INTERRUPT_ENABLE 0x08
+
+struct ipw2100_bd_queue {
+ /* driver (virtual) pointer to queue */
+ struct ipw2100_bd *drv;
+
+ /* firmware (physical) pointer to queue */
+ dma_addr_t nic;
+
+ /* Length of phy memory allocated for BDs */
+ u32 size;
+
+ /* Number of BDs in queue (and in array) */
+ u32 entries;
+
+ /* Number of available BDs (invalid for NIC BDs) */
+ u32 available;
+
+ /* Offset of oldest used BD in array (next one to
+ * check for completion) */
+ u32 oldest;
+
+ /* Offset of next available (unused) BD */
+ u32 next;
+};
+
+#define RX_QUEUE_LENGTH 256
+#define TX_QUEUE_LENGTH 256
+#define HW_QUEUE_LENGTH 256
+
+#define TX_PENDED_QUEUE_LENGTH (TX_QUEUE_LENGTH / NUMBER_OF_BD_PER_DATA_PACKET)
+
+#define STATUS_TYPE_MASK 0x0000000f
+#define COMMAND_STATUS_VAL 0
+#define STATUS_CHANGE_VAL 1
+#define P80211_DATA_VAL 2
+#define P8023_DATA_VAL 3
+#define HOST_NOTIFICATION_VAL 4
+
+#define IPW2100_RSSI_TO_DBM (-98)
+
+struct ipw2100_status {
+ u32 frame_size;
+ u16 status_fields;
+ u8 flags;
+#define IPW_STATUS_FLAG_DECRYPTED BIT(0)
+#define IPW_STATUS_FLAG_WEP_ENCRYPTED BIT(1)
+#define IPW_STATUS_FLAG_CRC_ERROR BIT(2)
+ u8 rssi;
+} __attribute__ ((packed));
+
+struct ipw2100_status_queue {
+ /* driver (virtual) pointer to queue */
+ struct ipw2100_status *drv;
+
+ /* firmware (physical) pointer to queue */
+ dma_addr_t nic;
+
+ /* Length of phy memory allocated for BDs */
+ u32 size;
+};
+
+#define HOST_COMMAND_PARAMS_REG_LEN 100
+#define CMD_STATUS_PARAMS_REG_LEN 3
+
+#define IPW_WPA_CAPABILITIES 0x1
+#define IPW_WPA_LISTENINTERVAL 0x2
+#define IPW_WPA_AP_ADDRESS 0x4
+
+#define IPW_MAX_VAR_IE_LEN ((HOST_COMMAND_PARAMS_REG_LEN - 4) * sizeof(u32))
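+/* The (100 - 4) remaining u32 parameter slots leave 384 bytes for variable IEs */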
+
+struct ipw2100_wpa_assoc_frame {
+ u16 fixed_ie_mask;
+ struct {
+ u16 capab_info;
+ u16 listen_interval;
+ u8 current_ap[ETH_ALEN];
+ } fixed_ies;
+ u32 var_ie_len;
+ u8 var_ie[IPW_MAX_VAR_IE_LEN];
+};
+
+#define IPW_BSS 1
+#define IPW_MONITOR 2
+#define IPW_IBSS 3
+
+/**
+ * @struct ipw2100_cmd_header
+ * @brief H/W command structure.
+ */
+struct ipw2100_cmd_header {
+ u32 host_command_reg;
+ u32 host_command_reg1;
+ u32 sequence;
+ u32 host_command_len_reg;
+ u32 host_command_params_reg[HOST_COMMAND_PARAMS_REG_LEN];
+ u32 cmd_status_reg;
+ u32 cmd_status_params_reg[CMD_STATUS_PARAMS_REG_LEN];
+ u32 rxq_base_ptr;
+ u32 rxq_next_ptr;
+ u32 rxq_host_ptr;
+ u32 txq_base_ptr;
+ u32 txq_next_ptr;
+ u32 txq_host_ptr;
+ u32 tx_status_reg;
+ u32 reserved;
+ u32 status_change_reg;
+ u32 reserved1[3];
+ u32 *ordinal1_ptr;
+ u32 *ordinal2_ptr;
+} __attribute__ ((packed));
+
+struct ipw2100_data_header {
+ u32 host_command_reg;
+ u32 host_command_reg1;
+ u8 encrypted; // BOOLEAN in win! TRUE if frame is encrypted by the driver
+ u8 needs_encryption; // BOOLEAN in win! TRUE if frame needs to be encrypted by the NIC
+ u8 wep_index; // 0 no key, 1-4 key index, 0xff immediate key
+ u8 key_size; // 0 no imm key, 0x5 64bit encr, 0xd 128bit encr, 0x10 128bit encr and 128bit IV
+ u8 key[16];
+ u8 reserved[10]; // f/w reserved
+ u8 src_addr[ETH_ALEN];
+ u8 dst_addr[ETH_ALEN];
+ u16 fragment_size;
+} __attribute__ ((packed));
+
+// Host command data structure
+struct host_command {
+ u32 host_command; // COMMAND ID
+ u32 host_command1; // COMMAND ID
+ u32 host_command_sequence; // UNIQUE COMMAND NUMBER (ID)
+ u32 host_command_length; // LENGTH
+ u32 host_command_parameters[HOST_COMMAND_PARAMS_REG_LEN]; // COMMAND PARAMETERS
+} __attribute__ ((packed));
+
+
+typedef enum {
+ POWER_ON_RESET,
+ EXIT_POWER_DOWN_RESET,
+ SW_RESET,
+ EEPROM_RW,
+ SW_RE_INIT
+} ipw2100_reset_event;
+
+enum {
+ COMMAND = 0xCAFE,
+ DATA,
+ RX
+};
+
+
+struct ipw2100_tx_packet {
+ int type;
+ int index;
+ union {
+ struct { /* COMMAND */
+ struct ipw2100_cmd_header* cmd;
+ dma_addr_t cmd_phys;
+ } c_struct;
+ struct { /* DATA */
+ struct ipw2100_data_header* data;
+ dma_addr_t data_phys;
+ struct ieee80211_txb *txb;
+ } d_struct;
+ } info;
+ int jiffy_start;
+
+ struct list_head list;
+};
+
+
+struct ipw2100_rx_packet {
+ struct ipw2100_rx *rxp;
+ dma_addr_t dma_addr;
+ int jiffy_start;
+ struct sk_buff *skb;
+ struct list_head list;
+};
+
+#define FRAG_DISABLED BIT(31)
+#define RTS_DISABLED BIT(31)
+#define MAX_RTS_THRESHOLD 2304U
+#define MIN_RTS_THRESHOLD 1U
+#define DEFAULT_RTS_THRESHOLD 1000U
+
+#define DEFAULT_BEACON_INTERVAL 100U
+#define DEFAULT_SHORT_RETRY_LIMIT 7U
+#define DEFAULT_LONG_RETRY_LIMIT 4U
+
+struct ipw2100_ordinals {
+ u32 table1_addr;
+ u32 table2_addr;
+ u32 table1_size;
+ u32 table2_size;
+};
+
+/* Host Notification header */
+struct ipw2100_notification {
+ u32 hnhdr_subtype; /* type of host notification */
+ u32 hnhdr_size; /* size in bytes of data
+ or number of entries, if table.
+ Does NOT include header */
+} __attribute__ ((packed));
+
+#define MAX_KEY_SIZE 16
+#define MAX_KEYS 8
+
+#define IPW2100_WEP_ENABLE BIT(1)
+#define IPW2100_WEP_DROP_CLEAR BIT(2)
+
+#define IPW_NONE_CIPHER BIT(0)
+#define IPW_WEP40_CIPHER BIT(1)
+#define IPW_WEP104_CIPHER BIT(5)
+#define IPW_TKIP_CIPHER BIT(2)
+#define IPW_CKIP_CIPHER BIT(6)
+#define IPW_CCMP_CIPHER BIT(4)
+
+#define IPW_AUTH_OPEN 0
+#define IPW_AUTH_SHARED 1
+
+struct statistic {
+ int value;
+ int hi;
+ int lo;
+};
+
+#define INIT_STAT(x) do { \
+ (x)->value = (x)->hi = 0; \
+ (x)->lo = 0x7fffffff; \
+} while (0)
+#define SET_STAT(x,y) do { \
+ (x)->value = y; \
+ if ((x)->value > (x)->hi) (x)->hi = (x)->value; \
+ if ((x)->value < (x)->lo) (x)->lo = (x)->value; \
+} while (0)
+#define INC_STAT(x) do { if (++(x)->value > (x)->hi) (x)->hi = (x)->value; } \
+while (0)
+#define DEC_STAT(x) do { if (--(x)->value < (x)->lo) (x)->lo = (x)->value; } \
+while (0)
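+/* Illustrative use: INIT_STAT(&priv->txq_stat) at setup time, then
+ * SET_STAT()/INC_STAT()/DEC_STAT() as entries move through the queues */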
+
+#define IPW2100_ERROR_QUEUE 5
+
+/* Power management code: enable or disable? */
+enum {
+#ifdef CONFIG_PM
+ IPW2100_PM_DISABLED = 0,
+ PM_STATE_SIZE = 16,
+#else
+ IPW2100_PM_DISABLED = 1,
+ PM_STATE_SIZE = 0,
+#endif
+};
+
+#define STATUS_POWERED BIT(0)
+#define STATUS_CMD_ACTIVE BIT(1) /**< host command in progress */
+#define STATUS_RUNNING BIT(2) /* Card initialized, but not enabled */
+#define STATUS_ENABLED BIT(3) /* Card enabled -- can scan,Tx,Rx */
+#define STATUS_STOPPING BIT(4) /* Card is in shutdown phase */
+#define STATUS_INITIALIZED BIT(5) /* Card is ready for external calls */
+#define STATUS_ASSOCIATED BIT(9)
+#define STATUS_INT_ENABLED BIT(11)
+#define STATUS_RF_KILL_HW BIT(12)
+#define STATUS_RF_KILL_SW BIT(13)
+#define STATUS_RF_KILL_MASK (STATUS_RF_KILL_HW | STATUS_RF_KILL_SW)
+#define STATUS_EXIT_PENDING BIT(14)
+
+#define STATUS_SCAN_PENDING BIT(23)
+#define STATUS_SCANNING BIT(24)
+#define STATUS_SCAN_ABORTING BIT(25)
+#define STATUS_SCAN_COMPLETE BIT(26)
+#define STATUS_WX_EVENT_PENDING BIT(27)
+#define STATUS_RESET_PENDING BIT(29)
+#define STATUS_SECURITY_UPDATED BIT(30) /* Security sync needed */
+
+
+
+/* Internal NIC states */
+#define IPW_STATE_INITIALIZED BIT(0)
+#define IPW_STATE_COUNTRY_FOUND BIT(1)
+#define IPW_STATE_ASSOCIATED BIT(2)
+#define IPW_STATE_ASSN_LOST BIT(3)
+#define IPW_STATE_ASSN_CHANGED BIT(4)
+#define IPW_STATE_SCAN_COMPLETE BIT(5)
+#define IPW_STATE_ENTERED_PSP BIT(6)
+#define IPW_STATE_LEFT_PSP BIT(7)
+#define IPW_STATE_RF_KILL BIT(8)
+#define IPW_STATE_DISABLED BIT(9)
+#define IPW_STATE_POWER_DOWN BIT(10)
+#define IPW_STATE_SCANNING BIT(11)
+
+
+
+#define CFG_STATIC_CHANNEL BIT(0) /* Restrict assoc. to single channel */
+#define CFG_STATIC_ESSID BIT(1) /* Restrict assoc. to single SSID */
+#define CFG_STATIC_BSSID BIT(2) /* Restrict assoc. to single BSSID */
+#define CFG_CUSTOM_MAC BIT(3)
+#define CFG_LONG_PREAMBLE BIT(4)
+#define CFG_ASSOCIATE BIT(6)
+#define CFG_FIXED_RATE BIT(7)
+#define CFG_ADHOC_CREATE BIT(8)
+#define CFG_C3_DISABLED BIT(9)
+#define CFG_PASSIVE_SCAN BIT(10)
+
+#define CAP_SHARED_KEY BIT(0) /* Off = OPEN */
+#define CAP_PRIVACY_ON BIT(1) /* Off = No privacy */
+
+struct ipw2100_priv {
+
+ int stop_hang_check; /* Set 1 when shutting down to kill hang_check */
+ int stop_rf_kill; /* Set 1 when shutting down to kill rf_kill */
+
+ struct ieee80211_device *ieee;
+ unsigned long status;
+ unsigned long config;
+ unsigned long capability;
+
+ /* Statistics */
+ int resets;
+ int reset_backoff;
+
+ /* Context */
+ u8 essid[IW_ESSID_MAX_SIZE];
+ u8 essid_len;
+ u8 bssid[ETH_ALEN];
+ u8 channel;
+ int last_mode;
+ int cstate_limit;
+
+ unsigned long connect_start;
+ unsigned long last_reset;
+
+ u32 channel_mask;
+ u32 fatal_error;
+ u32 fatal_errors[IPW2100_ERROR_QUEUE];
+ u32 fatal_index;
+ int eeprom_version;
+ int firmware_version;
+ unsigned long hw_features;
+ int hangs;
+ u32 last_rtc;
+ int dump_raw; /* 1 to dump raw bytes in /sys/.../memory */
+ u8* snapshot[0x30];
+
+ u8 mandatory_bssid_mac[ETH_ALEN];
+ u8 mac_addr[ETH_ALEN];
+
+ int power_mode;
+
+ /* WEP data */
+ struct ieee80211_security sec;
+ int messages_sent;
+
+
+ int short_retry_limit;
+ int long_retry_limit;
+
+ u32 rts_threshold;
+ u32 frag_threshold;
+
+ int in_isr;
+
+ u32 tx_rates;
+ int tx_power;
+ u32 beacon_interval;
+
+ char nick[IW_ESSID_MAX_SIZE + 1];
+
+ struct ipw2100_status_queue status_queue;
+
+ struct statistic txq_stat;
+ struct statistic rxq_stat;
+ struct ipw2100_bd_queue rx_queue;
+ struct ipw2100_bd_queue tx_queue;
+ struct ipw2100_rx_packet *rx_buffers;
+
+ struct statistic fw_pend_stat;
+ struct list_head fw_pend_list;
+
+ struct statistic msg_free_stat;
+ struct statistic msg_pend_stat;
+ struct list_head msg_free_list;
+ struct list_head msg_pend_list;
+ struct ipw2100_tx_packet *msg_buffers;
+
+ struct statistic tx_free_stat;
+ struct statistic tx_pend_stat;
+ struct list_head tx_free_list;
+ struct list_head tx_pend_list;
+ struct ipw2100_tx_packet *tx_buffers;
+
+ struct ipw2100_ordinals ordinals;
+
+ struct pci_dev *pci_dev;
+
+ struct proc_dir_entry *dir_dev;
+
+ struct net_device *net_dev;
+ struct iw_statistics wstats;
+
+ struct tasklet_struct irq_tasklet;
+
+ struct workqueue_struct *workqueue;
+ struct work_struct reset_work;
+ struct work_struct security_work;
+ struct work_struct wx_event_work;
+ struct work_struct hang_check;
+ struct work_struct rf_kill;
+
+ u32 interrupts;
+ int tx_interrupts;
+ int rx_interrupts;
+ int inta_other;
+
+ spinlock_t low_lock;
+
+ wait_queue_head_t wait_command_queue;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10)
+ u32 pm_state[PM_STATE_SIZE];
+#endif
+};
+
+
+/*********************************************************
+ * Host Command -> From Driver to FW
+ *********************************************************/
+
+/**
+ * Host command identifiers
+ */
+#define HOST_COMPLETE 2
+#define SYSTEM_CONFIG 6
+#define SSID 8
+#define MANDATORY_BSSID 9
+#define AUTHENTICATION_TYPE 10
+#define ADAPTER_ADDRESS 11
+#define PORT_TYPE 12
+#define INTERNATIONAL_MODE 13
+#define CHANNEL 14
+#define RTS_THRESHOLD 15
+#define FRAG_THRESHOLD 16
+#define POWER_MODE 17
+#define TX_RATES 18
+#define BASIC_TX_RATES 19
+#define WEP_KEY_INFO 20
+#define WEP_KEY_INDEX 25
+#define WEP_FLAGS 26
+#define ADD_MULTICAST 27
+#define CLEAR_ALL_MULTICAST 28
+#define BEACON_INTERVAL 29
+#define ATIM_WINDOW 30
+#define CLEAR_STATISTICS 31
+#define SEND 33
+#define TX_POWER_INDEX 36
+#define BROADCAST_SCAN 43
+#define CARD_DISABLE 44
+#define PREFERRED_BSSID 45
+#define SET_SCAN_OPTIONS 46
+#define SCAN_DWELL_TIME 47
+#define SWEEP_TABLE 48
+#define AP_OR_STATION_TABLE 49
+#define GROUP_ORDINALS 50
+#define SHORT_RETRY_LIMIT 51
+#define LONG_RETRY_LIMIT 52
+
+#define HOST_PRE_POWER_DOWN 58
+#define CARD_DISABLE_PHY_OFF 61
+#define MSDU_TX_RATES 62
+
+
+// Rogue AP Detection
+#define SET_STATION_STAT_BITS 64
+#define CLEAR_STATIONS_STAT_BITS 65
+#define LEAP_ROGUE_MODE 66 //TODO tbw replaced by CFG_LEAP_ROGUE_AP
+#define SET_SECURITY_INFORMATION 67
+#define DISASSOCIATION_BSSID 68
+#define SET_WPA_IE 69
+
+
+
+// system configuration bit mask:
+//#define IPW_CFG_ANTENNA_SETTING 0x03
+//#define IPW_CFG_ANTENNA_A 0x01
+//#define IPW_CFG_ANTENNA_B 0x02
+#define IPW_CFG_PROMISCUOUS 0x00004
+//#define IPW_CFG_TX_STATUS_ENABLE 0x00008
+#define IPW_CFG_PREAMBLE_AUTO 0x00010
+#define IPW_CFG_IBSS_AUTO_START 0x00020
+//#define IPW_CFG_KERBEROS_ENABLE 0x00040
+#define IPW_CFG_LOOPBACK 0x00100
+//#define IPW_CFG_WNMP_PING_PASS 0x00200
+//#define IPW_CFG_DEBUG_ENABLE 0x00400
+#define IPW_CFG_ANSWER_BCSSID_PROBE 0x00800
+//#define IPW_CFG_BT_PRIORITY 0x01000
+#define IPW_CFG_BT_SIDEBAND_SIGNAL 0x02000
+#define IPW_CFG_802_1x_ENABLE 0x04000
+#define IPW_CFG_BSS_MASK 0x08000
+#define IPW_CFG_IBSS_MASK 0x10000
+//#define IPW_CFG_DYNAMIC_CW 0x10000
+
+#define IPW_SCAN_NOASSOCIATE BIT(0)
+#define IPW_SCAN_MIXED_CELL BIT(1)
+/* RESERVED BIT(2) */
+#define IPW_SCAN_PASSIVE BIT(3)
+
+#define IPW_NIC_FATAL_ERROR 0x2A7F0
+#define IPW_ERROR_ADDR(x) (x & 0x3FFFF)
+#define IPW_ERROR_CODE(x) ((x & 0xFF000000) >> 24)
+#define IPW2100_ERR_C3_CORRUPTION (0x10 << 24)
+#define IPW2100_ERR_MSG_TIMEOUT (0x11 << 24)
+#define IPW2100_ERR_FW_LOAD (0x12 << 24)
+
+#define IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND 0x200
+#define IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x0D80
+
+#define IPW_MEM_HOST_SHARED_RX_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x40)
+#define IPW_MEM_HOST_SHARED_RX_STATUS_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x44)
+#define IPW_MEM_HOST_SHARED_RX_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x48)
+#define IPW_MEM_HOST_SHARED_RX_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0xa0)
+
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x00)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x04)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x80)
+
+#define IPW_MEM_HOST_SHARED_RX_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x20)
+
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND)
+
+
+#if 0
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_0_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x00)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_0_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x04)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_1_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x08)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_1_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x0c)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_2_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x10)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_2_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x14)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_3_BD_BASE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x18)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_3_BD_SIZE (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x1c)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_0_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x80)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_1_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x84)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_2_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x88)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_3_READ_INDEX (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x8c)
+
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_BD_BASE(QueueNum) \
+ (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + (QueueNum<<3))
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_BD_SIZE(QueueNum) \
+ (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x0004+(QueueNum<<3))
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_READ_INDEX(QueueNum) \
+ (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x0080+(QueueNum<<2))
+
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_0_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x00)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_1_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x04)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_2_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x08)
+#define IPW_MEM_HOST_SHARED_TX_QUEUE_3_WRITE_INDEX \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x0c)
+#define IPW_MEM_HOST_SHARED_SLAVE_MODE_INT_REGISTER \
+ (IPW_MEM_SRAM_HOST_INTERRUPT_AREA_LOWER_BOUND + 0x78)
+
+#endif
+
+#define IPW_MEM_HOST_SHARED_ORDINALS_TABLE_1 (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x180)
+#define IPW_MEM_HOST_SHARED_ORDINALS_TABLE_2 (IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND + 0x184)
+
+#define IPW2100_INTA_TX_TRANSFER (0x00000001) // Bit 0 (LSB)
+#define IPW2100_INTA_RX_TRANSFER (0x00000002) // Bit 1
+#define IPW2100_INTA_TX_COMPLETE (0x00000004) // Bit 2
+#define IPW2100_INTA_EVENT_INTERRUPT (0x00000008) // Bit 3
+#define IPW2100_INTA_STATUS_CHANGE (0x00000010) // Bit 4
+#define IPW2100_INTA_BEACON_PERIOD_EXPIRED (0x00000020) // Bit 5
+#define IPW2100_INTA_SLAVE_MODE_HOST_COMMAND_DONE (0x00010000) // Bit 16
+#define IPW2100_INTA_FW_INIT_DONE (0x01000000) // Bit 24
+#define IPW2100_INTA_FW_CALIBRATION_CALC (0x02000000) // Bit 25
+#define IPW2100_INTA_FATAL_ERROR (0x40000000) // Bit 30
+#define IPW2100_INTA_PARITY_ERROR (0x80000000) // Bit 31 (MSB)
+
+#define IPW_AUX_HOST_RESET_REG_PRINCETON_RESET (0x00000001)
+#define IPW_AUX_HOST_RESET_REG_FORCE_NMI (0x00000002)
+#define IPW_AUX_HOST_RESET_REG_PCI_HOST_CLUSTER_FATAL_NMI (0x00000004)
+#define IPW_AUX_HOST_RESET_REG_CORE_FATAL_NMI (0x00000008)
+#define IPW_AUX_HOST_RESET_REG_SW_RESET (0x00000080)
+#define IPW_AUX_HOST_RESET_REG_MASTER_DISABLED (0x00000100)
+#define IPW_AUX_HOST_RESET_REG_STOP_MASTER (0x00000200)
+
+#define IPW_AUX_HOST_GP_CNTRL_BIT_CLOCK_READY (0x00000001) // Bit 0 (LSB)
+#define IPW_AUX_HOST_GP_CNTRL_BIT_HOST_ALLOWS_STANDBY (0x00000002) // Bit 1
+#define IPW_AUX_HOST_GP_CNTRL_BIT_INIT_DONE (0x00000004) // Bit 2
+#define IPW_AUX_HOST_GP_CNTRL_BITS_SYS_CONFIG (0x000007c0) // Bits 6-10
+#define IPW_AUX_HOST_GP_CNTRL_BIT_BUS_TYPE (0x00000200) // Bit 9
+#define IPW_AUX_HOST_GP_CNTRL_BIT_BAR0_BLOCK_SIZE (0x00000400) // Bit 10
+#define IPW_AUX_HOST_GP_CNTRL_BIT_USB_MODE (0x20000000) // Bit 29
+#define IPW_AUX_HOST_GP_CNTRL_BIT_HOST_FORCES_SYS_CLK (0x40000000) // Bit 30
+#define IPW_AUX_HOST_GP_CNTRL_BIT_FW_FORCES_SYS_CLK (0x80000000) // Bit 31 (MSB)
+
+#define IPW_BIT_GPIO_GPIO1_MASK 0x0000000C
+#define IPW_BIT_GPIO_GPIO3_MASK 0x000000C0
+#define IPW_BIT_GPIO_GPIO1_ENABLE 0x00000008
+#define IPW_BIT_GPIO_RF_KILL 0x00010000
+
+#define IPW_BIT_GPIO_LED_OFF 0x00002000 // Bit 13 = 1
+
+#define IPW_REG_DOMAIN_0_OFFSET 0x0000
+#define IPW_REG_DOMAIN_1_OFFSET IPW_MEM_SRAM_HOST_SHARED_LOWER_BOUND
+
+#define IPW_REG_INTA IPW_REG_DOMAIN_0_OFFSET + 0x0008
+#define IPW_REG_INTA_MASK IPW_REG_DOMAIN_0_OFFSET + 0x000C
+#define IPW_REG_INDIRECT_ACCESS_ADDRESS IPW_REG_DOMAIN_0_OFFSET + 0x0010
+#define IPW_REG_INDIRECT_ACCESS_DATA IPW_REG_DOMAIN_0_OFFSET + 0x0014
+#define IPW_REG_AUTOINCREMENT_ADDRESS IPW_REG_DOMAIN_0_OFFSET + 0x0018
+#define IPW_REG_AUTOINCREMENT_DATA IPW_REG_DOMAIN_0_OFFSET + 0x001C
+#define IPW_REG_RESET_REG IPW_REG_DOMAIN_0_OFFSET + 0x0020
+#define IPW_REG_GP_CNTRL IPW_REG_DOMAIN_0_OFFSET + 0x0024
+#define IPW_REG_GPIO IPW_REG_DOMAIN_0_OFFSET + 0x0030
+#define IPW_REG_FW_TYPE IPW_REG_DOMAIN_1_OFFSET + 0x0188
+#define IPW_REG_FW_VERSION IPW_REG_DOMAIN_1_OFFSET + 0x018C
+#define IPW_REG_FW_COMPATABILITY_VERSION IPW_REG_DOMAIN_1_OFFSET + 0x0190
+
+#define IPW_REG_INDIRECT_ADDR_MASK 0x00FFFFFC
+
+#define IPW_INTERRUPT_MASK 0xC1010013
+
+#define IPW2100_CONTROL_REG 0x220000
+#define IPW2100_CONTROL_PHY_OFF 0x8
+
+#define IPW2100_COMMAND 0x00300004
+#define IPW2100_COMMAND_PHY_ON 0x0
+#define IPW2100_COMMAND_PHY_OFF 0x1
+
+/* in DEBUG_AREA, values of memory always 0xd55555d5 */
+#define IPW_REG_DOA_DEBUG_AREA_START IPW_REG_DOMAIN_0_OFFSET + 0x0090
+#define IPW_REG_DOA_DEBUG_AREA_END IPW_REG_DOMAIN_0_OFFSET + 0x00FF
+#define IPW_DATA_DOA_DEBUG_VALUE 0xd55555d5
+
+#define IPW_INTERNAL_REGISTER_HALT_AND_RESET 0x003000e0
+
+#define IPW_WAIT_CLOCK_STABILIZATION_DELAY 50 // microseconds
+#define IPW_WAIT_RESET_ARC_COMPLETE_DELAY 10 // microseconds
+#define IPW_WAIT_RESET_MASTER_ASSERT_COMPLETE_DELAY 10 // microseconds
+
+// BD ring queue read/write difference
+#define IPW_BD_QUEUE_W_R_MIN_SPARE 2
+
+#define IPW_CACHE_LINE_LENGTH_DEFAULT 0x80
+
+#define IPW_CARD_DISABLE_PHY_OFF_COMPLETE_WAIT 100 // 100 milliseconds
+#define IPW_PREPARE_POWER_DOWN_COMPLETE_WAIT 100 // 100 milliseconds
+
+
+
+
+#define IPW_HEADER_802_11_SIZE sizeof(struct ieee80211_header_data)
+#define IPW_MAX_80211_PAYLOAD_SIZE 2304U
+#define IPW_MAX_802_11_PAYLOAD_LENGTH 2312
+#define IPW_MAX_ACCEPTABLE_TX_FRAME_LENGTH 1536
+#define IPW_MIN_ACCEPTABLE_RX_FRAME_LENGTH 60
+#define IPW_MAX_ACCEPTABLE_RX_FRAME_LENGTH \
+ (IPW_MAX_ACCEPTABLE_TX_FRAME_LENGTH + IPW_HEADER_802_11_SIZE - \
+ sizeof(struct ethhdr))
+
+#define IPW_802_11_FCS_LENGTH 4
+#define IPW_RX_NIC_BUFFER_LENGTH \
+ (IPW_MAX_802_11_PAYLOAD_LENGTH + IPW_HEADER_802_11_SIZE + \
+ IPW_802_11_FCS_LENGTH)
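+/*
+ * Worked out, assuming the usual 24-byte three-address 802.11 header
+ * for sizeof(struct ieee80211_header_data): 2312 + 24 + 4 = 2340 bytes
+ * per NIC RX buffer.
+ */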
+
+#define IPW_802_11_PAYLOAD_OFFSET \
+ (sizeof(struct ieee80211_header_data) + \
+ sizeof(struct ieee80211_snap_hdr))
+
+struct ipw2100_rx {
+ union {
+ unsigned char payload[IPW_RX_NIC_BUFFER_LENGTH];
+ struct ieee80211_hdr header;
+ u32 status;
+ struct ipw2100_notification notification;
+ struct ipw2100_cmd_header command;
+ } rx_data;
+} __attribute__ ((packed));
+
+// Bits 0-7 are for 802.11b tx rates; bits 5-7 are reserved
+#define TX_RATE_1_MBIT 0x0001
+#define TX_RATE_2_MBIT 0x0002
+#define TX_RATE_5_5_MBIT 0x0004
+#define TX_RATE_11_MBIT 0x0008
+#define TX_RATE_MASK 0x000F
+#define DEFAULT_TX_RATES 0x000F
+
+#define IPW_POWER_MODE_CAM 0x00 //(always on)
+#define IPW_POWER_INDEX_1 0x01
+#define IPW_POWER_INDEX_2 0x02
+#define IPW_POWER_INDEX_3 0x03
+#define IPW_POWER_INDEX_4 0x04
+#define IPW_POWER_INDEX_5 0x05
+#define IPW_POWER_AUTO 0x06
+#define IPW_POWER_MASK 0x0F
+#define IPW_POWER_ENABLED 0x10
+#define IPW_POWER_LEVEL(x) ((x) & IPW_POWER_MASK)
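+/*
+ * Example: a power setting ORs the enable flag with an index, and
+ * IPW_POWER_LEVEL() strips the flag again, so
+ * IPW_POWER_LEVEL(IPW_POWER_ENABLED | IPW_POWER_INDEX_3) == IPW_POWER_INDEX_3.
+ */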
+
+#define IPW_TX_POWER_AUTO 0
+#define IPW_TX_POWER_ENHANCED 1
+
+#define IPW_TX_POWER_DEFAULT 32
+#define IPW_TX_POWER_MIN 0
+#define IPW_TX_POWER_MAX 16
+#define IPW_TX_POWER_MIN_DBM (-12)
+#define IPW_TX_POWER_MAX_DBM 16
+
+#define FW_SCAN_DONOT_ASSOCIATE 0x0001 // Don't attempt to associate after scan
+#define FW_SCAN_PASSIVE 0x0008 // Force PASSIVE scan
+
+#define REG_MIN_CHANNEL 0
+#define REG_MAX_CHANNEL 14
+
+#define REG_CHANNEL_MASK 0x00003FFF
+#define IPW_IBSS_11B_DEFAULT_MASK 0x87ff
+
+#define DIVERSITY_EITHER 0 // Use both antennas
+#define DIVERSITY_ANTENNA_A 1 // Use antenna A
+#define DIVERSITY_ANTENNA_B 2 // Use antenna B
+
+
+#define HOST_COMMAND_WAIT 0
+#define HOST_COMMAND_NO_WAIT 1
+
+#define LOCK_NONE 0
+#define LOCK_DRIVER 1
+#define LOCK_FW 2
+
+#define TYPE_SWEEP_ORD 0x000D
+#define TYPE_IBSS_STTN_ORD 0x000E
+#define TYPE_BSS_AP_ORD 0x000F
+#define TYPE_RAW_BEACON_ENTRY 0x0010
+#define TYPE_CALIBRATION_DATA 0x0011
+#define TYPE_ROGUE_AP_DATA 0x0012
+#define TYPE_ASSOCIATION_REQUEST 0x0013
+#define TYPE_REASSOCIATION_REQUEST 0x0014
+
+
+#define HW_FEATURE_RFKILL (0x0001)
+#define RF_KILLSWITCH_OFF (1)
+#define RF_KILLSWITCH_ON (0)
+
+#define IPW_COMMAND_POOL_SIZE 40
+
+#define IPW_START_ORD_TAB_1 1
+#define IPW_START_ORD_TAB_2 1000
+
+#define IPW_ORD_TAB_1_ENTRY_SIZE sizeof(u32)
+
+#define IS_ORDINAL_TABLE_ONE(mgr,id) \
+ ((id >= IPW_START_ORD_TAB_1) && (id < mgr->table1_size))
+#define IS_ORDINAL_TABLE_TWO(mgr,id) \
+ ((id >= IPW_START_ORD_TAB_2) && (id < (mgr->table2_size + IPW_START_ORD_TAB_2)))
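+/*
+ * Sketch of how the two predicates split the ordinal space ("mgr" is any
+ * ordinals structure exposing table1_size/table2_size; read_table1() and
+ * read_table2() are hypothetical helpers named only for this example):
+ *
+ *	if (IS_ORDINAL_TABLE_ONE(mgr, id))
+ *		val = read_table1(mgr, id);	// fixed-size u32 entry
+ *	else if (IS_ORDINAL_TABLE_TWO(mgr, id))
+ *		len = read_table2(mgr, id, buf);	// variable-length entry
+ */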
+
+#define BSS_ID_LENGTH 6
+
+// Fixed size data: Ordinal Table 1
+typedef enum _ORDINAL_TABLE_1 { // NS - means Not Supported by FW
+// Transmit statistics
+ IPW_ORD_STAT_TX_HOST_REQUESTS = 1,// # of requested Host Tx's (MSDU)
+ IPW_ORD_STAT_TX_HOST_COMPLETE, // # of successful Host Tx's (MSDU)
+ IPW_ORD_STAT_TX_DIR_DATA, // # of successful Directed Tx's (MSDU)
+
+ IPW_ORD_STAT_TX_DIR_DATA1 = 4, // # of successful Directed Tx's (MSDU) @ 1MB
+ IPW_ORD_STAT_TX_DIR_DATA2, // # of successful Directed Tx's (MSDU) @ 2MB
+ IPW_ORD_STAT_TX_DIR_DATA5_5, // # of successful Directed Tx's (MSDU) @ 5.5MB
+ IPW_ORD_STAT_TX_DIR_DATA11, // # of successful Directed Tx's (MSDU) @ 11MB
+ IPW_ORD_STAT_TX_DIR_DATA22, // # of successful Directed Tx's (MSDU) @ 22MB
+
+ IPW_ORD_STAT_TX_NODIR_DATA1 = 13,// # of successful Non_Directed Tx's (MSDU) @ 1MB
+ IPW_ORD_STAT_TX_NODIR_DATA2, // # of successful Non_Directed Tx's (MSDU) @ 2MB
+ IPW_ORD_STAT_TX_NODIR_DATA5_5, // # of successful Non_Directed Tx's (MSDU) @ 5.5MB
+ IPW_ORD_STAT_TX_NODIR_DATA11, // # of successful Non_Directed Tx's (MSDU) @ 11MB
+
+ IPW_ORD_STAT_NULL_DATA = 21, // # of successful NULL data Tx's
+ IPW_ORD_STAT_TX_RTS, // # of successful Tx RTS
+ IPW_ORD_STAT_TX_CTS, // # of successful Tx CTS
+ IPW_ORD_STAT_TX_ACK, // # of successful Tx ACK
+ IPW_ORD_STAT_TX_ASSN, // # of successful Association Tx's
+ IPW_ORD_STAT_TX_ASSN_RESP, // # of successful Association response Tx's
+ IPW_ORD_STAT_TX_REASSN, // # of successful Reassociation Tx's
+ IPW_ORD_STAT_TX_REASSN_RESP, // # of successful Reassociation response Tx's
+ IPW_ORD_STAT_TX_PROBE, // # of probes successfully transmitted
+ IPW_ORD_STAT_TX_PROBE_RESP, // # of probe responses successfully transmitted
+ IPW_ORD_STAT_TX_BEACON, // # of tx beacon
+ IPW_ORD_STAT_TX_ATIM, // # of Tx ATIM
+ IPW_ORD_STAT_TX_DISASSN, // # of successful Disassociation TX
+ IPW_ORD_STAT_TX_AUTH, // # of successful Authentication Tx
+ IPW_ORD_STAT_TX_DEAUTH, // # of successful Deauthentication TX
+
+ IPW_ORD_STAT_TX_TOTAL_BYTES = 41,// Total successful Tx data bytes
+ IPW_ORD_STAT_TX_RETRIES, // # of Tx retries
+ IPW_ORD_STAT_TX_RETRY1, // # of Tx retries at 1MBPS
+ IPW_ORD_STAT_TX_RETRY2, // # of Tx retries at 2MBPS
+ IPW_ORD_STAT_TX_RETRY5_5, // # of Tx retries at 5.5MBPS
+ IPW_ORD_STAT_TX_RETRY11, // # of Tx retries at 11MBPS
+
+ IPW_ORD_STAT_TX_FAILURES = 51, // # of Tx Failures
+ IPW_ORD_STAT_TX_ABORT_AT_HOP, //NS // # of Tx's aborted at hop time
+ IPW_ORD_STAT_TX_MAX_TRIES_IN_HOP,// # of times max tries in a hop failed
+ IPW_ORD_STAT_TX_ABORT_LATE_DMA, //NS // # of times tx aborted due to late dma setup
+ IPW_ORD_STAT_TX_ABORT_STX, //NS // # of times backoff aborted
+ IPW_ORD_STAT_TX_DISASSN_FAIL, // # of times disassociation failed
+ IPW_ORD_STAT_TX_ERR_CTS, // # of missed/bad CTS frames
+ IPW_ORD_STAT_TX_BPDU, //NS // # of spanning tree BPDUs sent
+ IPW_ORD_STAT_TX_ERR_ACK, // # of tx err due to acks
+
+ // Receive statistics
+ IPW_ORD_STAT_RX_HOST = 61, // # of packets passed to host
+ IPW_ORD_STAT_RX_DIR_DATA, // # of directed packets
+ IPW_ORD_STAT_RX_DIR_DATA1, // # of directed packets at 1MB
+ IPW_ORD_STAT_RX_DIR_DATA2, // # of directed packets at 2MB
+ IPW_ORD_STAT_RX_DIR_DATA5_5, // # of directed packets at 5.5MB
+ IPW_ORD_STAT_RX_DIR_DATA11, // # of directed packets at 11MB
+ IPW_ORD_STAT_RX_DIR_DATA22, // # of directed packets at 22MB
+
+ IPW_ORD_STAT_RX_NODIR_DATA = 71,// # of nondirected packets
+ IPW_ORD_STAT_RX_NODIR_DATA1, // # of nondirected packets at 1MB
+ IPW_ORD_STAT_RX_NODIR_DATA2, // # of nondirected packets at 2MB
+ IPW_ORD_STAT_RX_NODIR_DATA5_5, // # of nondirected packets at 5.5MB
+ IPW_ORD_STAT_RX_NODIR_DATA11, // # of nondirected packets at 11MB
+
+ IPW_ORD_STAT_RX_NULL_DATA = 80, // # of null data rx's
+ IPW_ORD_STAT_RX_POLL, //NS // # of poll rx
+ IPW_ORD_STAT_RX_RTS, // # of Rx RTS
+ IPW_ORD_STAT_RX_CTS, // # of Rx CTS
+ IPW_ORD_STAT_RX_ACK, // # of Rx ACK
+ IPW_ORD_STAT_RX_CFEND, // # of Rx CF End
+ IPW_ORD_STAT_RX_CFEND_ACK, // # of Rx CF End + CF Ack
+ IPW_ORD_STAT_RX_ASSN, // # of Association Rx's
+ IPW_ORD_STAT_RX_ASSN_RESP, // # of Association response Rx's
+ IPW_ORD_STAT_RX_REASSN, // # of Reassociation Rx's
+ IPW_ORD_STAT_RX_REASSN_RESP, // # of Reassociation response Rx's
+ IPW_ORD_STAT_RX_PROBE, // # of probe Rx's
+ IPW_ORD_STAT_RX_PROBE_RESP, // # of probe response Rx's
+ IPW_ORD_STAT_RX_BEACON, // # of Rx beacon
+ IPW_ORD_STAT_RX_ATIM, // # of Rx ATIM
+ IPW_ORD_STAT_RX_DISASSN, // # of disassociation Rx
+ IPW_ORD_STAT_RX_AUTH, // # of authentication Rx
+ IPW_ORD_STAT_RX_DEAUTH, // # of deauthentication Rx
+
+ IPW_ORD_STAT_RX_TOTAL_BYTES = 101,// Total rx data bytes received
+ IPW_ORD_STAT_RX_ERR_CRC, // # of packets with Rx CRC error
+ IPW_ORD_STAT_RX_ERR_CRC1, // # of Rx CRC errors at 1MB
+ IPW_ORD_STAT_RX_ERR_CRC2, // # of Rx CRC errors at 2MB
+ IPW_ORD_STAT_RX_ERR_CRC5_5, // # of Rx CRC errors at 5.5MB
+ IPW_ORD_STAT_RX_ERR_CRC11, // # of Rx CRC errors at 11MB
+
+ IPW_ORD_STAT_RX_DUPLICATE1 = 112, // # of duplicate rx packets at 1MB
+ IPW_ORD_STAT_RX_DUPLICATE2, // # of duplicate rx packets at 2MB
+ IPW_ORD_STAT_RX_DUPLICATE5_5, // # of duplicate rx packets at 5.5MB
+ IPW_ORD_STAT_RX_DUPLICATE11, // # of duplicate rx packets at 11MB
+ IPW_ORD_STAT_RX_DUPLICATE = 119, // # of duplicate rx packets
+
+ IPW_ORD_PERS_DB_LOCK = 120, // # locking fw permanent db
+ IPW_ORD_PERS_DB_SIZE, // # size of fw permanent db
+ IPW_ORD_PERS_DB_ADDR, // # address of fw permanent db
+ IPW_ORD_STAT_RX_INVALID_PROTOCOL, // # of rx frames with invalid protocol
+ IPW_ORD_SYS_BOOT_TIME, // # Boot time
+ IPW_ORD_STAT_RX_NO_BUFFER, // # of rx frames rejected due to no buffer
+ IPW_ORD_STAT_RX_ABORT_LATE_DMA, //NS // # of rx frames rejected due to dma setup too late
+ IPW_ORD_STAT_RX_ABORT_AT_HOP, //NS // # of rx frames aborted due to hop
+ IPW_ORD_STAT_RX_MISSING_FRAG, // # of rx frames dropped due to missing fragment
+ IPW_ORD_STAT_RX_ORPHAN_FRAG, // # of rx frames dropped due to non-sequential fragment
+ IPW_ORD_STAT_RX_ORPHAN_FRAME, // # of rx frames dropped due to unmatched 1st frame
+ IPW_ORD_STAT_RX_FRAG_AGEOUT, // # of rx frames dropped due to uncompleted frame
+ IPW_ORD_STAT_RX_BAD_SSID, //NS // Bad SSID (unused)
+ IPW_ORD_STAT_RX_ICV_ERRORS, // # of ICV errors during decryption
+
+// PSP Statistics
+ IPW_ORD_STAT_PSP_SUSPENSION = 137,// # of times adapter suspended
+ IPW_ORD_STAT_PSP_BCN_TIMEOUT, // # of beacon timeout
+ IPW_ORD_STAT_PSP_POLL_TIMEOUT, // # of poll response timeouts
+ IPW_ORD_STAT_PSP_NONDIR_TIMEOUT,// # of timeouts waiting for last broadcast/multicast pkt
+ IPW_ORD_STAT_PSP_RX_DTIMS, // # of PSP DTIMs received
+ IPW_ORD_STAT_PSP_RX_TIMS, // # of PSP TIMs received
+ IPW_ORD_STAT_PSP_STATION_ID, // PSP Station ID
+
+// Association and roaming
+ IPW_ORD_LAST_ASSN_TIME = 147, // RTC time of last association
+ IPW_ORD_STAT_PERCENT_MISSED_BCNS,// current calculation of % missed beacons
+ IPW_ORD_STAT_PERCENT_RETRIES, // current calculation of % missed tx retries
+ IPW_ORD_ASSOCIATED_AP_PTR, // If associated, this is a ptr to the associated
+ // AP table entry; set to 0 if not associated
+ IPW_ORD_AVAILABLE_AP_CNT, // # of APs described in the AP table
+ IPW_ORD_AP_LIST_PTR, // Ptr to list of available APs
+ IPW_ORD_STAT_AP_ASSNS, // # of associations
+ IPW_ORD_STAT_ASSN_FAIL, // # of association failures
+ IPW_ORD_STAT_ASSN_RESP_FAIL, // # of failures due to response fail
+ IPW_ORD_STAT_FULL_SCANS, // # of full scans
+
+ IPW_ORD_CARD_DISABLED, // # Card Disabled
+ IPW_ORD_STAT_ROAM_INHIBIT, // # of times roaming was inhibited due to ongoing activity
+ IPW_FILLER_40,
+ IPW_ORD_RSSI_AT_ASSN = 160, // RSSI of associated AP at time of association
+ IPW_ORD_STAT_ASSN_CAUSE1, // # of reassociations due to no tx from AP in last N
+ // hops or no probe responses in last 3 minutes
+ IPW_ORD_STAT_ASSN_CAUSE2, // # of reassociations due to poor tx/rx quality
+ IPW_ORD_STAT_ASSN_CAUSE3, // # of reassociations due to tx/rx quality with excessive
+ // load at the AP
+ IPW_ORD_STAT_ASSN_CAUSE4, // # of reassociations due to AP RSSI level fell below
+ // eligible group
+ IPW_ORD_STAT_ASSN_CAUSE5, // # of reassociations due to load leveling
+ IPW_ORD_STAT_ASSN_CAUSE6, //NS // # of reassociations due to dropped by AP
+ IPW_FILLER_41,
+ IPW_FILLER_42,
+ IPW_FILLER_43,
+ IPW_ORD_STAT_AUTH_FAIL, // # of times authentication failed
+ IPW_ORD_STAT_AUTH_RESP_FAIL, // # of times authentication response failed
+ IPW_ORD_STATION_TABLE_CNT, // # of entries in association table
+
+// Other statistics
+ IPW_ORD_RSSI_AVG_CURR = 173, // Current avg RSSI
+ IPW_ORD_STEST_RESULTS_CURR, //NS // Current self test results word
+ IPW_ORD_STEST_RESULTS_CUM, //NS // Cumulative self test results word
+ IPW_ORD_SELF_TEST_STATUS, //NS //
+ IPW_ORD_POWER_MGMT_MODE, // Power mode - 0=CAM, 1=PSP
+ IPW_ORD_POWER_MGMT_INDEX, //NS //
+ IPW_ORD_COUNTRY_CODE, // IEEE country code as recv'd from beacon
+ IPW_ORD_COUNTRY_CHANNELS, // channels supported by country
+// IPW_ORD_COUNTRY_CHANNELS:
+// For 11b the lower two bytes hold channels 1-14;
+// the upper two bytes are unused.
+ IPW_ORD_RESET_CNT, // # of adapter resets (warm)
+ IPW_ORD_BEACON_INTERVAL, // Beacon interval
+
+ IPW_ORD_PRINCETON_VERSION = 184, //NS // Princeton Version
+ IPW_ORD_ANTENNA_DIVERSITY, // TRUE if antenna diversity is disabled
+ IPW_ORD_CCA_RSSI, //NS // CCA RSSI value (factory programmed)
+ IPW_ORD_STAT_EEPROM_UPDATE, //NS // # of times config EEPROM updated
+ IPW_ORD_DTIM_PERIOD, // # of beacon intervals between DTIMs
+ IPW_ORD_OUR_FREQ, // current radio freq lower digits - channel ID
+
+ IPW_ORD_RTC_TIME = 190, // current RTC time
+ IPW_ORD_PORT_TYPE, // operating mode
+ IPW_ORD_CURRENT_TX_RATE, // current tx rate
+ IPW_ORD_SUPPORTED_RATES, // Bitmap of supported tx rates
+ IPW_ORD_ATIM_WINDOW, // current ATIM Window
+ IPW_ORD_BASIC_RATES, // bitmap of basic tx rates
+ IPW_ORD_NIC_HIGHEST_RATE, // highest tx rate supported by the NIC
+ IPW_ORD_AP_HIGHEST_RATE, // highest tx rate supported by the AP
+ IPW_ORD_CAPABILITIES, // Management frame capability field
+ IPW_ORD_AUTH_TYPE, // Type of authentication
+ IPW_ORD_RADIO_TYPE, // Adapter card platform type
+ IPW_ORD_RTS_THRESHOLD = 201, // Min length of packet after which RTS handshaking is used
+ IPW_ORD_INT_MODE, // International mode
+ IPW_ORD_FRAGMENTATION_THRESHOLD, // protocol frag threshold
+ IPW_ORD_EEPROM_SRAM_DB_BLOCK_START_ADDRESS, // EEPROM offset in SRAM
+ IPW_ORD_EEPROM_SRAM_DB_BLOCK_SIZE, // EEPROM size in SRAM
+ IPW_ORD_EEPROM_SKU_CAPABILITY, // EEPROM SKU Capability
+ IPW_ORD_EEPROM_IBSS_11B_CHANNELS, // EEPROM IBSS 11b channel set
+
+ IPW_ORD_MAC_VERSION = 209, // MAC Version
+ IPW_ORD_MAC_REVISION, // MAC Revision
+ IPW_ORD_RADIO_VERSION, // Radio Version
+ IPW_ORD_NIC_MANF_DATE_TIME, // MANF Date/Time STAMP
+ IPW_ORD_UCODE_VERSION, // Ucode Version
+ IPW_ORD_HW_RF_SWITCH_STATE = 214, // HW RF Kill Switch State
+} ORDINALTABLE1;
+// END OF TABLE 1
+
+// ordinal table 2
+// Variable length data:
+#define IPW_FIRST_VARIABLE_LENGTH_ORDINAL 1001
+
+typedef enum _ORDINAL_TABLE_2 { // NS - means Not Supported by FW
+ IPW_ORD_STAT_BASE = 1000, // contains number of variable ORDs
+ IPW_ORD_STAT_ADAPTER_MAC = 1001, // 6 bytes: our adapter MAC address
+ IPW_ORD_STAT_PREFERRED_BSSID = 1002, // 6 bytes: BSSID of the preferred AP
+ IPW_ORD_STAT_MANDATORY_BSSID = 1003, // 6 bytes: BSSID of the mandatory AP
+ IPW_FILL_1, //NS //
+ IPW_ORD_STAT_COUNTRY_TEXT = 1005, // 36 bytes: country name text; first two bytes are the country code
+ IPW_ORD_STAT_ASSN_SSID = 1006, // 32 bytes: ESSID String
+ IPW_ORD_STATION_TABLE = 1007, // ? bytes: Station/AP table (via Direct SSID Scans)
+ IPW_ORD_STAT_SWEEP_TABLE = 1008, // ? bytes: Sweep/Host Table table (via Broadcast Scans)
+ IPW_ORD_STAT_ROAM_LOG = 1009, // ? bytes: Roaming log
+ IPW_ORD_STAT_RATE_LOG = 1010, //NS // 0 bytes: Rate log
+ IPW_ORD_STAT_FIFO = 1011, //NS // 0 bytes: Fifo buffer data structures
+ IPW_ORD_STAT_FW_VER_NUM = 1012, // 14 bytes: fw version ID string as in (a.bb.ccc; "0.08.011")
+ IPW_ORD_STAT_FW_DATE = 1013, // 14 bytes: fw date string (mmm dd yyyy; "Mar 13 2002")
+ IPW_ORD_STAT_ASSN_AP_BSSID = 1014, // 6 bytes: MAC address of associated AP
+ IPW_ORD_STAT_DEBUG = 1015, //NS // ? bytes:
+ IPW_ORD_STAT_NIC_BPA_NUM = 1016, // 11 bytes: NIC BPA number in ASCII
+ IPW_ORD_STAT_UCODE_DATE = 1017, // 5 bytes: uCode date
+ IPW_ORD_SECURITY_NGOTIATION_RESULT = 1018,
+} ORDINALTABLE2; // NS - means Not Supported by FW
+
+#define IPW_LAST_VARIABLE_LENGTH_ORDINAL 1018
+
+#ifndef WIRELESS_SPY
+#define WIRELESS_SPY // enable iwspy support
+#endif
+
+extern struct iw_handler_def ipw2100_wx_handler_def;
+extern struct iw_statistics *ipw2100_wx_wireless_stats(struct net_device * dev);
+extern void ipw2100_wx_event_work(struct ipw2100_priv *priv);
+
+#define IPW_HOST_FW_SHARED_AREA0 0x0002f200
+#define IPW_HOST_FW_SHARED_AREA0_END 0x0002f510 // 0x310 bytes
+
+#define IPW_HOST_FW_SHARED_AREA1 0x0002f610
+#define IPW_HOST_FW_SHARED_AREA1_END 0x0002f630 // 0x20 bytes
+
+#define IPW_HOST_FW_SHARED_AREA2 0x0002fa00
+#define IPW_HOST_FW_SHARED_AREA2_END 0x0002fa20 // 0x20 bytes
+
+#define IPW_HOST_FW_SHARED_AREA3 0x0002fc00
+#define IPW_HOST_FW_SHARED_AREA3_END 0x0002fc10 // 0x10 bytes
+
+#define IPW_HOST_FW_INTERRUPT_AREA 0x0002ff80
+#define IPW_HOST_FW_INTERRUPT_AREA_END 0x00030000 // 0x80 bytes
+
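+/*
+ * Firmware images are held either as a list of chunks (legacy loader,
+ * CONFIG_IPW2100_LEGACY_FW_LOAD) or as one const blob handed back by the
+ * kernel firmware_class interface (the fw_entry member below).
+ */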
+struct ipw2100_fw_chunk {
+ unsigned char *buf;
+ long len;
+ long pos;
+ struct list_head list;
+};
+
+struct ipw2100_fw_chunk_set {
+#ifdef CONFIG_IPW2100_LEGACY_FW_LOAD
+ struct list_head chunk_list;
+ unsigned int chunks;
+#else
+ const void *data;
+#endif
+ unsigned long size;
+};
+
+struct ipw2100_fw {
+ int version;
+ struct ipw2100_fw_chunk_set fw;
+ struct ipw2100_fw_chunk_set uc;
+#ifndef CONFIG_IPW2100_LEGACY_FW_LOAD
+ const struct firmware *fw_entry;
+#endif
+};
+
+int ipw2100_get_firmware(struct ipw2100_priv *priv, struct ipw2100_fw *fw);
+void ipw2100_release_firmware(struct ipw2100_priv *priv, struct ipw2100_fw *fw);
+int ipw2100_fw_download(struct ipw2100_priv *priv, struct ipw2100_fw *fw);
+int ipw2100_ucode_download(struct ipw2100_priv *priv, struct ipw2100_fw *fw);
+
+#define MAX_FW_VERSION_LEN 14
+
+int ipw2100_get_fwversion(struct ipw2100_priv *priv, char *buf, size_t max);
+int ipw2100_get_ucodeversion(struct ipw2100_priv *priv, char *buf, size_t max);
+
+#endif /* _IPW2100_H */
--- /dev/null
+
+"This software program is licensed subject to the GNU General Public License
+(GPL). Version 2, June 1991, available at
+<http://www.fsf.org/copyleft/gpl.html>"
+
+GNU General Public License
+
+Version 2, June 1991
+
+Copyright (C) 1989, 1991 Free Software Foundation, Inc.
+59 Temple Place - Suite 330, Boston, MA 02111-1307, USA
+
+Everyone is permitted to copy and distribute verbatim copies of this license
+document, but changing it is not allowed.
+
+Preamble
+
+The licenses for most software are designed to take away your freedom to
+share and change it. By contrast, the GNU General Public License is intended
+to guarantee your freedom to share and change free software--to make sure
+the software is free for all its users. This General Public License applies
+to most of the Free Software Foundation's software and to any other program
+whose authors commit to using it. (Some other Free Software Foundation
+software is covered by the GNU Library General Public License instead.) You
+can apply it to your programs, too.
+
+When we speak of free software, we are referring to freedom, not price. Our
+General Public Licenses are designed to make sure that you have the freedom
+to distribute copies of free software (and charge for this service if you
+wish), that you receive source code or can get it if you want it, that you
+can change the software or use pieces of it in new free programs; and that
+you know you can do these things.
+
+To protect your rights, we need to make restrictions that forbid anyone to
+deny you these rights or to ask you to surrender the rights. These
+restrictions translate to certain responsibilities for you if you distribute
+copies of the software, or if you modify it.
+
+For example, if you distribute copies of such a program, whether gratis or
+for a fee, you must give the recipients all the rights that you have. You
+must make sure that they, too, receive or can get the source code. And you
+must show them these terms so they know their rights.
+
+We protect your rights with two steps: (1) copyright the software, and (2)
+offer you this license which gives you legal permission to copy, distribute
+and/or modify the software.
+
+Also, for each author's protection and ours, we want to make certain that
+everyone understands that there is no warranty for this free software. If
+the software is modified by someone else and passed on, we want its
+recipients to know that what they have is not the original, so that any
+problems introduced by others will not reflect on the original authors'
+reputations.
+
+Finally, any free program is threatened constantly by software patents. We
+wish to avoid the danger that redistributors of a free program will
+individually obtain patent licenses, in effect making the program
+proprietary. To prevent this, we have made it clear that any patent must be
+licensed for everyone's free use or not licensed at all.
+
+The precise terms and conditions for copying, distribution and modification
+follow.
+
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+0. This License applies to any program or other work which contains a notice
+ placed by the copyright holder saying it may be distributed under the
+ terms of this General Public License. The "Program", below, refers to any
+ such program or work, and a "work based on the Program" means either the
+ Program or any derivative work under copyright law: that is to say, a
+ work containing the Program or a portion of it, either verbatim or with
+ modifications and/or translated into another language. (Hereinafter,
+ translation is included without limitation in the term "modification".)
+ Each licensee is addressed as "you".
+
+ Activities other than copying, distribution and modification are not
+ covered by this License; they are outside its scope. The act of running
+ the Program is not restricted, and the output from the Program is covered
+ only if its contents constitute a work based on the Program (independent
+ of having been made by running the Program). Whether that is true depends
+ on what the Program does.
+
+1. You may copy and distribute verbatim copies of the Program's source code
+ as you receive it, in any medium, provided that you conspicuously and
+ appropriately publish on each copy an appropriate copyright notice and
+ disclaimer of warranty; keep intact all the notices that refer to this
+ License and to the absence of any warranty; and give any other recipients
+ of the Program a copy of this License along with the Program.
+
+ You may charge a fee for the physical act of transferring a copy, and you
+ may at your option offer warranty protection in exchange for a fee.
+
+2. You may modify your copy or copies of the Program or any portion of it,
+ thus forming a work based on the Program, and copy and distribute such
+ modifications or work under the terms of Section 1 above, provided that
+ you also meet all of these conditions:
+
+ * a) You must cause the modified files to carry prominent notices stating
+ that you changed the files and the date of any change.
+
+ * b) You must cause any work that you distribute or publish, that in
+ whole or in part contains or is derived from the Program or any part
+ thereof, to be licensed as a whole at no charge to all third parties
+ under the terms of this License.
+
+ * c) If the modified program normally reads commands interactively when
+ run, you must cause it, when started running for such interactive
+ use in the most ordinary way, to print or display an announcement
+ including an appropriate copyright notice and a notice that there is
+ no warranty (or else, saying that you provide a warranty) and that
+ users may redistribute the program under these conditions, and
+ telling the user how to view a copy of this License. (Exception: if
+ the Program itself is interactive but does not normally print such
+ an announcement, your work based on the Program is not required to
+ print an announcement.)
+
+ These requirements apply to the modified work as a whole. If identifiable
+ sections of that work are not derived from the Program, and can be
+ reasonably considered independent and separate works in themselves, then
+ this License, and its terms, do not apply to those sections when you
+ distribute them as separate works. But when you distribute the same
+ sections as part of a whole which is a work based on the Program, the
+ distribution of the whole must be on the terms of this License, whose
+ permissions for other licensees extend to the entire whole, and thus to
+ each and every part regardless of who wrote it.
+
+ Thus, it is not the intent of this section to claim rights or contest
+ your rights to work written entirely by you; rather, the intent is to
+ exercise the right to control the distribution of derivative or
+ collective works based on the Program.
+
+ In addition, mere aggregation of another work not based on the Program
+ with the Program (or with a work based on the Program) on a volume of a
+ storage or distribution medium does not bring the other work under the
+ scope of this License.
+
+3. You may copy and distribute the Program (or a work based on it, under
+ Section 2) in object code or executable form under the terms of Sections
+ 1 and 2 above provided that you also do one of the following:
+
+ * a) Accompany it with the complete corresponding machine-readable source
+ code, which must be distributed under the terms of Sections 1 and 2
+ above on a medium customarily used for software interchange; or,
+
+ * b) Accompany it with a written offer, valid for at least three years,
+ to give any third party, for a charge no more than your cost of
+ physically performing source distribution, a complete machine-
+ readable copy of the corresponding source code, to be distributed
+ under the terms of Sections 1 and 2 above on a medium customarily
+ used for software interchange; or,
+
+ * c) Accompany it with the information you received as to the offer to
+ distribute corresponding source code. (This alternative is allowed
+ only for noncommercial distribution and only if you received the
+ program in object code or executable form with such an offer, in
+ accord with Subsection b above.)
+
+ The source code for a work means the preferred form of the work for
+ making modifications to it. For an executable work, complete source code
+ means all the source code for all modules it contains, plus any
+ associated interface definition files, plus the scripts used to control
+ compilation and installation of the executable. However, as a special
+ exception, the source code distributed need not include anything that is
+ normally distributed (in either source or binary form) with the major
+ components (compiler, kernel, and so on) of the operating system on which
+ the executable runs, unless that component itself accompanies the
+ executable.
+
+ If distribution of executable or object code is made by offering access
+ to copy from a designated place, then offering equivalent access to copy
+ the source code from the same place counts as distribution of the source
+ code, even though third parties are not compelled to copy the source
+ along with the object code.
+
+4. You may not copy, modify, sublicense, or distribute the Program except as
+ expressly provided under this License. Any attempt otherwise to copy,
+ modify, sublicense or distribute the Program is void, and will
+ automatically terminate your rights under this License. However, parties
+ who have received copies, or rights, from you under this License will not
+ have their licenses terminated so long as such parties remain in full
+ compliance.
+
+5. You are not required to accept this License, since you have not signed
+ it. However, nothing else grants you permission to modify or distribute
+ the Program or its derivative works. These actions are prohibited by law
+ if you do not accept this License. Therefore, by modifying or
+ distributing the Program (or any work based on the Program), you
+ indicate your acceptance of this License to do so, and all its terms and
+ conditions for copying, distributing or modifying the Program or works
+ based on it.
+
+6. Each time you redistribute the Program (or any work based on the
+ Program), the recipient automatically receives a license from the
+ original licensor to copy, distribute or modify the Program subject to
+ these terms and conditions. You may not impose any further restrictions
+ on the recipients' exercise of the rights granted herein. You are not
+ responsible for enforcing compliance by third parties to this License.
+
+7. If, as a consequence of a court judgment or allegation of patent
+ infringement or for any other reason (not limited to patent issues),
+ conditions are imposed on you (whether by court order, agreement or
+ otherwise) that contradict the conditions of this License, they do not
+ excuse you from the conditions of this License. If you cannot distribute
+ so as to satisfy simultaneously your obligations under this License and
+ any other pertinent obligations, then as a consequence you may not
+ distribute the Program at all. For example, if a patent license would
+ not permit royalty-free redistribution of the Program by all those who
+ receive copies directly or indirectly through you, then the only way you
+ could satisfy both it and this License would be to refrain entirely from
+ distribution of the Program.
+
+ If any portion of this section is held invalid or unenforceable under any
+ particular circumstance, the balance of the section is intended to apply
+ and the section as a whole is intended to apply in other circumstances.
+
+ It is not the purpose of this section to induce you to infringe any
+ patents or other property right claims or to contest validity of any
+ such claims; this section has the sole purpose of protecting the
+ integrity of the free software distribution system, which is implemented
+ by public license practices. Many people have made generous contributions
+ to the wide range of software distributed through that system in
+ reliance on consistent application of that system; it is up to the
+ author/donor to decide if he or she is willing to distribute software
+ through any other system and a licensee cannot impose that choice.
+
+ This section is intended to make thoroughly clear what is believed to be
+ a consequence of the rest of this License.
+
+8. If the distribution and/or use of the Program is restricted in certain
+ countries either by patents or by copyrighted interfaces, the original
+ copyright holder who places the Program under this License may add an
+ explicit geographical distribution limitation excluding those countries,
+ so that distribution is permitted only in or among countries not thus
+ excluded. In such case, this License incorporates the limitation as if
+ written in the body of this License.
+
+9. The Free Software Foundation may publish revised and/or new versions of
+ the General Public License from time to time. Such new versions will be
+ similar in spirit to the present version, but may differ in detail to
+ address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the Program
+ specifies a version number of this License which applies to it and "any
+ later version", you have the option of following the terms and
+ conditions either of that version or of any later version published by
+ the Free Software Foundation. If the Program does not specify a version
+ number of this License, you may choose any version ever published by the
+ Free Software Foundation.
+
+10. If you wish to incorporate parts of the Program into other free programs
+ whose distribution conditions are different, write to the author to ask
+ for permission. For software which is copyrighted by the Free Software
+ Foundation, write to the Free Software Foundation; we sometimes make
+ exceptions for this. Our decision will be guided by the two goals of
+ preserving the free status of all derivatives of our free software and
+ of promoting the sharing and reuse of software generally.
+
+ NO WARRANTY
+
+11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
+ FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
+ OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
+ PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER
+ EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+ WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE
+ ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH
+ YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL
+ NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
+ WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
+ REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR
+ DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL
+ DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM
+ (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
+ INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF
+ THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR
+ OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+END OF TERMS AND CONDITIONS
+
+How to Apply These Terms to Your New Programs
+
+If you develop a new program, and you want it to be of the greatest
+possible use to the public, the best way to achieve this is to make it free
+software which everyone can redistribute and change under these terms.
+
+To do so, attach the following notices to the program. It is safest to
+attach them to the start of each source file to most effectively convey the
+exclusion of warranty; and each file should have at least the "copyright"
+line and a pointer to where the full notice is found.
+
+one line to give the program's name and an idea of what it does.
+Copyright (C) yyyy name of author
+
+This program is free software; you can redistribute it and/or modify it
+under the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 2 of the License, or (at your option)
+any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+more details.
+
+You should have received a copy of the GNU General Public License along with
+this program; if not, write to the Free Software Foundation, Inc., 59
+Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+Also add information on how to contact you by electronic and paper mail.
+
+If the program is interactive, make it output a short notice like this when
+it starts in an interactive mode:
+
+Gnomovision version 69, Copyright (C) year name of author Gnomovision comes
+with ABSOLUTELY NO WARRANTY; for details type 'show w'. This is free
+software, and you are welcome to redistribute it under certain conditions;
+type 'show c' for details.
+
+The hypothetical commands 'show w' and 'show c' should show the appropriate
+parts of the General Public License. Of course, the commands you use may be
+called something other than 'show w' and 'show c'; they could even be
+mouse-clicks or menu items--whatever suits your program.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the program, if
+necessary. Here is a sample; alter the names:
+
+Yoyodyne, Inc., hereby disclaims all copyright interest in the program
+'Gnomovision' (which makes passes at compilers) written by James Hacker.
+
+signature of Ty Coon, 1 April 1989
+Ty Coon, President of Vice
+
+This General Public License does not permit incorporating your program into
+proprietary programs. If your program is a subroutine library, you may
+consider it more useful to permit linking proprietary applications with the
+library. If this is what you want to do, use the GNU Library General Public
+License instead of this License.
--- /dev/null
+#
+# Makefile for the Linux Wireless network device drivers.
+#
+# Original makefile by Peter Johanson
+
+EXTRA_CFLAGS += -I$(TOPDIR)/drivers/net/wireless/ieee80211
+
+ifneq ($(CONFIG_IPW_DEBUG),)
+ EXTRA_CFLAGS += -g -Wa,-adhlms=$@.lst
+endif
+
+list-m :=
+list-$(CONFIG_IPW2200) += ipw2200
+
+obj-$(CONFIG_IPW2200) += ipw2200.o
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ 802.11 status code portion of this file from ethereal-0.10.6:
+ Copyright 2000, Axis Communications AB
+ Ethereal - Network traffic analyzer
+ By Gerald Combs <gerald@ethereal.com>
+ Copyright 1998 Gerald Combs
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+******************************************************************************/
+
+#include "ipw2200.h"
+#include "ieee80211.h"
+
+#define IPW2200_VERSION "0.13"
+#define DRV_DESCRIPTION "Intel(R) PRO/Wireless 2200/2915 Network Driver"
+#define DRV_COPYRIGHT "Copyright(c) 2003-2004 Intel Corporation"
+#define DRV_VERSION IPW2200_VERSION
+
+MODULE_LICENSE("GPL");
+
+static int debug = 0;
+static int channel = 0;
+static char *ifname;
+static int mode = 0;
+
+static u32 ipw_debug_level;
+static int associate = 1;
+static int adhoc_create = 1;
+static int disable = 0;
+static const char ipw_modes[] = {
+ 'a', 'b', 'g', '?'
+};
+
+static void ipw_rx(struct ipw_priv *priv);
+static int ipw_queue_tx_reclaim(struct ipw_priv *priv,
+ struct clx2_tx_queue *txq, int qindex);
+static int ipw_queue_reset(struct ipw_priv *priv);
+
+static int ipw_queue_tx_hcmd(struct ipw_priv *priv, int hcmd, void *buf,
+ int len, int sync);
+
+static void ipw_tx_queue_free(struct ipw_priv *);
+
+static struct ipw_rx_queue *ipw_rx_queue_alloc(struct ipw_priv *);
+static void ipw_rx_queue_free(struct ipw_priv *, struct ipw_rx_queue *);
+static void ipw_rx_queue_replenish(void *);
+
+static int ipw_up(struct ipw_priv *);
+static void ipw_down(struct ipw_priv *);
+static int ipw_config(struct ipw_priv *);
+static int init_supported_rates(struct ipw_priv *priv, struct ipw_supported_rates *prates);
+
+static u8 band_b_active_channel[MAX_B_CHANNELS] = {
+ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 0
+};
+static u8 band_a_active_channel[MAX_A_CHANNELS] = {
+ 36, 40, 44, 48, 149, 153, 157, 161, 165, 52, 56, 60, 64, 0
+};
+
+static char *snprint_line(char *buf, size_t count,
+ const u8 *data, u32 len, u32 ofs)
+{
+ int out, i, j, l;
+ char c;
+
+ out = snprintf(buf, count, "%08X", ofs);
+
+ for (l = 0, i = 0; i < 2; i++) {
+ out += snprintf(buf + out, count - out, " ");
+ for (j = 0; j < 8 && l < len; j++, l++)
+ out += snprintf(buf + out, count - out, "%02X ",
+ data[(i * 8 + j)]);
+ for (; j < 8; j++)
+ out += snprintf(buf + out, count - out, " ");
+ }
+
+ out += snprintf(buf + out, count - out, " ");
+ for (l = 0, i = 0; i < 2; i++) {
+ out += snprintf(buf + out, count - out, " ");
+ for (j = 0; j < 8 && l < len; j++, l++) {
+ c = data[(i * 8 + j)];
+ if (!isascii(c) || !isprint(c))
+ c = '.';
+
+ out += snprintf(buf + out, count - out, "%c", c);
+ }
+
+ for (; j < 8; j++)
+ out += snprintf(buf + out, count - out, " ");
+ }
+
+ return buf;
+}
+
+static void printk_buf(int level, const u8 *data, u32 len)
+{
+ char line[81];
+ u32 ofs = 0;
+ if (!(ipw_debug_level & level))
+ return;
+
+ while (len) {
+ printk(KERN_DEBUG "%s\n",
+ snprint_line(line, sizeof(line), &data[ofs],
+ min(len, 16U), ofs));
+ ofs += 16;
+ len -= min(len, 16U);
+ }
+}
+
+static u32 _ipw_read_reg32(struct ipw_priv *priv, u32 reg);
+#define ipw_read_reg32(a, b) _ipw_read_reg32(a, b)
+
+static u8 _ipw_read_reg8(struct ipw_priv *ipw, u32 reg);
+#define ipw_read_reg8(a, b) _ipw_read_reg8(a, b)
+
+static void _ipw_write_reg8(struct ipw_priv *priv, u32 reg, u8 value);
+static inline void ipw_write_reg8(struct ipw_priv *a, u32 b, u8 c)
+{
+ IPW_DEBUG_IO("%s %d: write_indirect8(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(b), (u32)(c));
+ _ipw_write_reg8(a, b, c);
+}
+
+static void _ipw_write_reg16(struct ipw_priv *priv, u32 reg, u16 value);
+static inline void ipw_write_reg16(struct ipw_priv *a, u32 b, u16 c)
+{
+ IPW_DEBUG_IO("%s %d: write_indirect16(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(b), (u32)(c));
+ _ipw_write_reg16(a, b, c);
+}
+
+static void _ipw_write_reg32(struct ipw_priv *priv, u32 reg, u32 value);
+static inline void ipw_write_reg32(struct ipw_priv *a, u32 b, u32 c)
+{
+ IPW_DEBUG_IO("%s %d: write_indirect32(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(b), (u32)(c));
+ _ipw_write_reg32(a, b, c);
+}
+
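+/*
+ * Direct (memory-mapped) register accessors.  The ipw_* wrappers add a
+ * debug trace; note that the write wrappers expand to two statements, so
+ * they must not be used as the sole body of an unbraced if/else.
+ */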
+#define _ipw_write8(ipw, ofs, val) writeb((val), (void*)(ipw)->hw_base + (ofs))
+#define ipw_write8(ipw, ofs, val) \
+ IPW_DEBUG_IO("%s %d: write_direct8(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(ofs), (u32)(val)); \
+ _ipw_write8(ipw, ofs, val)
+
+#define _ipw_write16(ipw, ofs, val) writew((val), (void*)(ipw)->hw_base + (ofs))
+#define ipw_write16(ipw, ofs, val) \
+ IPW_DEBUG_IO("%s %d: write_direct16(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(ofs), (u32)(val)); \
+ _ipw_write16(ipw, ofs, val)
+
+#define _ipw_write32(ipw, ofs, val) writel((val), (void*)(ipw)->hw_base + (ofs))
+#define ipw_write32(ipw, ofs, val) \
+ IPW_DEBUG_IO("%s %d: write_direct32(0x%08X, 0x%08X)\n", __FILE__, __LINE__, (u32)(ofs), (u32)(val)); \
+ _ipw_write32(ipw, ofs, val)
+
+#define _ipw_read8(ipw, ofs) readb((void*)(ipw)->hw_base + (ofs))
+static inline u8 __ipw_read8(char *f, u32 l, struct ipw_priv *ipw, u32 ofs) {
+ IPW_DEBUG_IO("%s %d: read_direct8(0x%08X)\n", f, l, (u32)(ofs));
+ return _ipw_read8(ipw, ofs);
+}
+#define ipw_read8(ipw, ofs) __ipw_read8(__FILE__, __LINE__, ipw, ofs)
+
+#define _ipw_read16(ipw, ofs) readw((void*)(ipw)->hw_base + (ofs))
+static inline u16 __ipw_read16(char *f, u32 l, struct ipw_priv *ipw, u32 ofs) {
+ IPW_DEBUG_IO("%s %d: read_direct16(0x%08X)\n", f, l, (u32)(ofs));
+ return _ipw_read16(ipw, ofs);
+}
+#define ipw_read16(ipw, ofs) __ipw_read16(__FILE__, __LINE__, ipw, ofs)
+
+#define _ipw_read32(ipw, ofs) readl((void*)(ipw)->hw_base + (ofs))
+static inline u32 __ipw_read32(char *f, u32 l, struct ipw_priv *ipw, u32 ofs) {
+ IPW_DEBUG_IO("%s %d: read_direct32(0x%08X)\n", f, l, (u32)(ofs));
+ return _ipw_read32(ipw, ofs);
+}
+#define ipw_read32(ipw, ofs) __ipw_read32(__FILE__, __LINE__, ipw, ofs)
+
+static void _ipw_read_indirect(struct ipw_priv *, u32, u8 *, int);
+#define ipw_read_indirect(a, b, c, d) \
+ IPW_DEBUG_IO("%s %d: read_indirect(0x%08X) %d bytes\n", __FILE__, __LINE__, (u32)(b), d); \
+ _ipw_read_indirect(a, b, c, d)
+
+static void _ipw_write_indirect(struct ipw_priv *priv, u32 addr, u8 *data, int num);
+#define ipw_write_indirect(a, b, c, d) \
+ IPW_DEBUG_IO("%s %d: write_indirect(0x%08X) %d bytes\n", __FILE__, __LINE__, (u32)(b), d); \
+ _ipw_write_indirect(a, b, c, d)
+
+/* indirect writes */
+static void _ipw_write_reg32(struct ipw_priv *priv, u32 reg,
+ u32 value)
+{
+ IPW_DEBUG_IO(" %p : reg = 0x%8X : value = 0x%8X\n",
+ priv, reg, value);
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, reg);
+ _ipw_write32(priv, CX2_INDIRECT_DATA, value);
+}
+
+
+static void _ipw_write_reg8(struct ipw_priv *priv, u32 reg, u8 value)
+{
+ IPW_DEBUG_IO(" reg = 0x%8X : value = 0x%8X\n", reg, value);
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, reg & CX2_INDIRECT_ADDR_MASK);
+ _ipw_write8(priv, CX2_INDIRECT_DATA, value);
+ IPW_DEBUG_IO(" reg = 0x%8X : value = 0x%8X\n",
+ (int)priv->hw_base + CX2_INDIRECT_DATA,
+ value);
+}
+
+static void _ipw_write_reg16(struct ipw_priv *priv, u32 reg,
+ u16 value)
+{
+ IPW_DEBUG_IO(" reg = 0x%8X : value = 0x%8X\n", reg, value);
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, reg & CX2_INDIRECT_ADDR_MASK);
+ _ipw_write16(priv, CX2_INDIRECT_DATA, value);
+}
+
+/* indirect reads */
+
+static u8 _ipw_read_reg8(struct ipw_priv *priv, u32 reg)
+{
+ u32 word;
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, reg & CX2_INDIRECT_ADDR_MASK);
+ IPW_DEBUG_IO(" reg = 0x%8X : \n", reg);
+ word = _ipw_read32(priv, CX2_INDIRECT_DATA);
+ return (word >> ((reg & 0x3)*8)) & 0xff;
+}
+
+static u32 _ipw_read_reg32(struct ipw_priv *priv, u32 reg)
+{
+ u32 value;
+
+ IPW_DEBUG_IO("%p : reg = 0x%08x\n", priv, reg);
+
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, reg);
+ value = _ipw_read32(priv, CX2_INDIRECT_DATA);
+ IPW_DEBUG_IO(" reg = 0x%4X : value = 0x%4x \n", reg, value);
+ return value;
+}
+
+/* iterative/auto-increment 32 bit reads and writes */
+static void _ipw_read_indirect(struct ipw_priv *priv, u32 addr, u8 * buf,
+ int num)
+{
+ u32 aligned_addr = addr & CX2_INDIRECT_ADDR_MASK;
+ u32 dif_len = addr - aligned_addr;
+ u32 aligned_len;
+ u32 i;
+
+ IPW_DEBUG_IO("addr = %i, buf = %p, num = %i\n", addr, buf, num);
+
+ /* Read the first dword's unaligned bytes individually */
+ if (unlikely(dif_len)) {
+ /* Start reading at aligned_addr + dif_len */
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, aligned_addr);
+ for (i = dif_len; i < 4 && num > 0; i++, num--, buf++)
+ *buf = _ipw_read8(priv, CX2_INDIRECT_DATA + i);
+ aligned_addr += 4;
+ }
+
+ /* Read DWs through autoinc register */
+ _ipw_write32(priv, CX2_AUTOINC_ADDR, aligned_addr);
+ aligned_len = num & CX2_INDIRECT_ADDR_MASK;
+ for (i = 0; i < aligned_len; i += 4, buf += 4, aligned_addr += 4)
+ *(u32*)buf = ipw_read32(priv, CX2_AUTOINC_DATA);
+
+ /* Read the trailing bytes, if any */
+ dif_len = num - aligned_len;
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, aligned_addr);
+ for (i = 0; i < dif_len; i++, buf++)
+ *buf = ipw_read8(priv, CX2_INDIRECT_DATA + i);
+}
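+
+/*
+ * Example walk-through: reading num = 6 bytes from an address with
+ * dif_len = 2 first fetches bytes 2-3 of the opening dword one at a
+ * time, then pulls one full dword through the auto-increment window,
+ * leaving no trailing bytes (6 - 2 - 4 = 0).
+ */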
+
+static void _ipw_write_indirect(struct ipw_priv *priv, u32 addr, u8 *buf,
+ int num)
+{
+ u32 aligned_addr = addr & CX2_INDIRECT_ADDR_MASK;
+ u32 dif_len = addr - aligned_addr;
+ u32 aligned_len;
+ u32 i;
+
+ IPW_DEBUG_IO("addr = %i, buf = %p, num = %i\n", addr, buf, num);
+
+ /* Write the first dword's unaligned bytes individually */
+ if (unlikely(dif_len)) {
+ /* Start writing at aligned_addr + dif_len */
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, aligned_addr);
+ for (i = dif_len; i < 4 && num > 0; i++, num--, buf++)
+ _ipw_write8(priv, CX2_INDIRECT_DATA + i, *buf);
+ aligned_addr += 4;
+ }
+
+ /* Write DWs through autoinc register */
+ _ipw_write32(priv, CX2_AUTOINC_ADDR, aligned_addr);
+ aligned_len = num & CX2_INDIRECT_ADDR_MASK;
+ for (i = 0; i < aligned_len; i += 4, buf += 4, aligned_addr += 4)
+ _ipw_write32(priv, CX2_AUTOINC_DATA, *(u32*)buf);
+
+ /* Write the trailing bytes, if any */
+ dif_len = num - aligned_len;
+ _ipw_write32(priv, CX2_INDIRECT_ADDR, aligned_addr);
+ for (i = 0; i < dif_len; i++, buf++)
+ _ipw_write8(priv, CX2_INDIRECT_DATA + i, *buf);
+}
+
+static void ipw_write_direct(struct ipw_priv *priv, u32 addr, void *buf,
+ int num)
+{
+ memcpy_toio((void*)(priv->hw_base + addr), buf, num);
+}
+
+static inline void ipw_set_bit(struct ipw_priv *priv, u32 reg, u32 mask)
+{
+ ipw_write32(priv, reg, ipw_read32(priv, reg) | mask);
+}
+
+static inline void ipw_clear_bit(struct ipw_priv *priv, u32 reg, u32 mask)
+{
+ ipw_write32(priv, reg, ipw_read32(priv, reg) & ~mask);
+}
+
+static inline void ipw_enable_interrupts(struct ipw_priv *priv)
+{
+ if (priv->status & STATUS_INT_ENABLED)
+ return;
+ priv->status |= STATUS_INT_ENABLED;
+ ipw_write32(priv, CX2_INTA_MASK_R, CX2_INTA_MASK_ALL);
+}
+
+static inline void ipw_disable_interrupts(struct ipw_priv *priv)
+{
+ if (!(priv->status & STATUS_INT_ENABLED))
+ return;
+ priv->status &= ~STATUS_INT_ENABLED;
+ ipw_write32(priv, CX2_INTA_MASK_R, ~CX2_INTA_MASK_ALL);
+}
+
+#ifdef CONFIG_IPW_DEBUG
+static char *ipw_error_desc(u32 val)
+{
+ switch (val) {
+ case IPW_FW_ERROR_OK:
+ return "ERROR_OK";
+ case IPW_FW_ERROR_FAIL:
+ return "ERROR_FAIL";
+ case IPW_FW_ERROR_MEMORY_UNDERFLOW:
+ return "MEMORY_UNDERFLOW";
+ case IPW_FW_ERROR_MEMORY_OVERFLOW:
+ return "MEMORY_OVERFLOW";
+ case IPW_FW_ERROR_BAD_PARAM:
+ return "ERROR_BAD_PARAM";
+ case IPW_FW_ERROR_BAD_CHECKSUM:
+ return "ERROR_BAD_CHECKSUM";
+ case IPW_FW_ERROR_NMI_INTERRUPT:
+ return "ERROR_NMI_INTERRUPT";
+ case IPW_FW_ERROR_BAD_DATABASE:
+ return "ERROR_BAD_DATABASE";
+ case IPW_FW_ERROR_ALLOC_FAIL:
+ return "ERROR_ALLOC_FAIL";
+ case IPW_FW_ERROR_DMA_UNDERRUN:
+ return "ERROR_DMA_UNDERRUN";
+ case IPW_FW_ERROR_DMA_STATUS:
+ return "ERROR_DMA_STATUS";
+ case IPW_FW_ERROR_DINOSTATUS_ERROR:
+ return "ERROR_DINOSTATUS_ERROR";
+ case IPW_FW_ERROR_EEPROMSTATUS_ERROR:
+ return "ERROR_EEPROMSTATUS_ERROR";
+ case IPW_FW_ERROR_SYSASSERT:
+ return "ERROR_SYSASSERT";
+ case IPW_FW_ERROR_FATAL_ERROR:
+ return "ERROR_FATALSTATUS_ERROR";
+ default:
+ return "UNKNOWNSTATUS_ERROR";
+ }
+}
+#endif /* CONFIG_IPW_DEBUG */
+
+static void ipw_dump_nic_error_log(struct ipw_priv *priv)
+{
+ u32 desc, time, blink1, blink2, ilink1, ilink2, idata, i, count, base;
+
+ base = ipw_read32(priv, IPWSTATUS_ERROR_LOG);
+ count = ipw_read_reg32(priv, base);
+
+ if (ERROR_START_OFFSET <= count * ERROR_ELEM_SIZE)
+ IPW_ERROR("Start IPW Error Log Dump:\n");
+
+ for (i = ERROR_START_OFFSET;
+ i <= count * ERROR_ELEM_SIZE;
+ i += ERROR_ELEM_SIZE) {
+ desc = ipw_read_reg32(priv, base + i);
+ time = ipw_read_reg32(priv, base + i + 1*sizeof(u32));
+ blink1 = ipw_read_reg32(priv, base + i + 2*sizeof(u32));
+ blink2 = ipw_read_reg32(priv, base + i + 3*sizeof(u32));
+ ilink1 = ipw_read_reg32(priv, base + i + 4*sizeof(u32));
+ ilink2 = ipw_read_reg32(priv, base + i + 5*sizeof(u32));
+ idata = ipw_read_reg32(priv, base + i + 6*sizeof(u32));
+
+#ifdef CONFIG_IPW_DEBUG
+ IPW_ERROR(
+ "%s %i 0x%08x 0x%08x 0x%08x 0x%08x 0x%08x\n",
+ ipw_error_desc(desc), time, blink1, blink2,
+ ilink1, ilink2, idata);
+#endif
+ }
+}
+
+static void ipw_dump_nic_event_log(struct ipw_priv *priv)
+{
+ u32 ev, time, data, i, count, base;
+
+ base = ipw_read32(priv, IPW_EVENT_LOG);
+ count = ipw_read_reg32(priv, base);
+
+ if (EVENT_START_OFFSET <= count * EVENT_ELEM_SIZE)
+ IPW_ERROR("Start IPW Event Log Dump:\n");
+
+ for (i = EVENT_START_OFFSET;
+ i <= count * EVENT_ELEM_SIZE;
+ i += EVENT_ELEM_SIZE) {
+ ev = ipw_read_reg32(priv, base + i);
+ time = ipw_read_reg32(priv, base + i + 1*sizeof(u32));
+ data = ipw_read_reg32(priv, base + i + 2*sizeof(u32));
+
+#ifdef CONFIG_IPW_DEBUG
+ IPW_ERROR("%i\t0x%08x\t%i\n", time, data, ev);
+#endif
+ }
+}
+
+static int ipw_get_ordinal(struct ipw_priv *priv, u32 ord, void *val,
+ u32 *len)
+{
+ u32 addr, field_info, field_len, field_count, total_len;
+
+ IPW_DEBUG_ORD("ordinal = %i\n", ord);
+
+ if (!priv || !val || !len) {
+ IPW_DEBUG_ORD("Invalid argument\n");
+ return -EINVAL;
+ }
+
+ /* verify device ordinal tables have been initialized */
+ if (!priv->table0_addr || !priv->table1_addr || !priv->table2_addr) {
+ IPW_DEBUG_ORD("Access ordinals before initialization\n");
+ return -EINVAL;
+ }
+
+ switch (IPW_ORD_TABLE_ID_MASK & ord) {
+ case IPW_ORD_TABLE_0_MASK:
+ /*
+ * TABLE 0: Direct access to a table of 32 bit values
+ *
+ * This is a very simple table with the data directly
+ * read from the table
+ */
+
+ /* remove the table id from the ordinal */
+ ord &= IPW_ORD_TABLE_VALUE_MASK;
+
+ /* boundary check */
+ if (ord > priv->table0_len) {
+ IPW_DEBUG_ORD("ordinal value (%i) longer then "
+ "max (%i)\n", ord, priv->table0_len);
+ return -EINVAL;
+ }
+
+ /* verify we have enough room to store the value */
+ if (*len < sizeof(u32)) {
+ IPW_DEBUG_ORD("ordinal buffer length too small, "
+ "need %d\n", sizeof(u32));
+ return -EINVAL;
+ }
+
+ IPW_DEBUG_ORD("Reading TABLE0[%i] from offset 0x%08x\n",
+ ord, priv->table0_addr + (ord << 2));
+
+ *len = sizeof(u32);
+ ord <<= 2;
+ *((u32 *)val) = ipw_read32(priv, priv->table0_addr + ord);
+ break;
+
+ case IPW_ORD_TABLE_1_MASK:
+ /*
+ * TABLE 1: Indirect access to a table of 32 bit values
+ *
+ * This is a fairly large table of u32 values each
+ * representing starting addr for the data (which is
+ * also a u32)
+ */
+
+ /* remove the table id from the ordinal */
+ ord &= IPW_ORD_TABLE_VALUE_MASK;
+
+ /* boundary check */
+ if (ord > priv->table1_len) {
+ IPW_DEBUG_ORD("ordinal value too long\n");
+ return -EINVAL;
+ }
+
+ /* verify we have enough room to store the value */
+ if (*len < sizeof(u32)) {
+ IPW_DEBUG_ORD("ordinal buffer length too small, "
+ "need %d\n", sizeof(u32));
+ return -EINVAL;
+ }
+
+ *((u32 *)val) = ipw_read_reg32(priv, (priv->table1_addr + (ord << 2)));
+ *len = sizeof(u32);
+ break;
+
+ case IPW_ORD_TABLE_2_MASK:
+ /*
+ * TABLE 2: Indirect access to a table of variable sized values
+ *
+ * This table consists of six values, each containing
+ * - a dword with the starting offset of the data
+ * - a dword with the length in the first 16 bits
+ * and the count in the second 16 bits
+ */
+
+ /* remove the table id from the ordinal */
+ ord &= IPW_ORD_TABLE_VALUE_MASK;
+
+ /* boundary check */
+ if (ord > priv->table2_len) {
+ IPW_DEBUG_ORD("ordinal value too long\n");
+ return -EINVAL;
+ }
+
+ /* get the address of statistic */
+ addr = ipw_read_reg32(priv, priv->table2_addr + (ord << 3));
+
+ /* get the second DW of statistics ;
+ * two 16-bit words - first is length, second is count */
+ field_info = ipw_read_reg32(priv, priv->table2_addr + (ord << 3) + sizeof(u32));
+
+ /* get each entry length */
+ field_len = *((u16 *)&field_info);
+
+ /* get number of entries */
+ field_count = *(((u16 *)&field_info) + 1);
+
+ /* abort if there is not enough memory */
+ total_len = field_len * field_count;
+ if (total_len > *len) {
+ *len = total_len;
+ return -EINVAL;
+ }
+
+ *len = total_len;
+ if (!total_len)
+ return 0;
+
+ IPW_DEBUG_ORD("addr = 0x%08x, total_len = %i, "
+ "field_info = 0x%08x\n",
+ addr, total_len, field_info);
+ ipw_read_indirect(priv, addr, val, total_len);
+ break;
+
+ default:
+ IPW_DEBUG_ORD("Invalid ordinal!\n");
+ return -EINVAL;
+
+ }
+
+
+ return 0;
+}
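+
+/*
+ * Typical caller pattern (the sysfs handlers below use exactly this):
+ *
+ *	u32 tmp = 0, len = sizeof(u32);
+ *
+ *	if (ipw_get_ordinal(priv, IPW_ORD_STAT_UCODE_VERSION, &tmp, &len))
+ *		return 0;	// ordinal not available
+ *	// on success, len holds the number of bytes written to tmp
+ */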
+
+static void ipw_init_ordinals(struct ipw_priv *priv)
+{
+ priv->table0_addr = IPW_ORDINALS_TABLE_LOWER;
+ priv->table0_len = ipw_read32(priv, priv->table0_addr);
+
+ IPW_DEBUG_ORD("table 0 offset at 0x%08x, len = %i\n",
+ priv->table0_addr, priv->table0_len);
+
+ priv->table1_addr = ipw_read32(priv, IPW_ORDINALS_TABLE_1);
+ priv->table1_len = ipw_read_reg32(priv, priv->table1_addr);
+
+ IPW_DEBUG_ORD("table 1 offset at 0x%08x, len = %i\n",
+ priv->table1_addr, priv->table1_len);
+
+ priv->table2_addr = ipw_read32(priv, IPW_ORDINALS_TABLE_2);
+ priv->table2_len = ipw_read_reg32(priv, priv->table2_addr);
+ priv->table2_len &= 0x0000ffff; /* use first two bytes */
+
+ IPW_DEBUG_ORD("table 2 offset at 0x%08x, len = %i\n",
+ priv->table2_addr, priv->table2_len);
+
+}
+
+/*
+ * The following adds a new attribute to the sysfs representation
+ * of this device driver (i.e. a new file in /sys/bus/pci/drivers/ipw/)
+ * used for controlling the debug level.  A usage example follows the
+ * attribute definition below.
+ *
+ * See the level definitions in ipw for details.
+ */
+static ssize_t show_debug_level(struct device_driver *d, char *buf)
+{
+ return sprintf(buf, "0x%08X\n", ipw_debug_level);
+}
+static ssize_t store_debug_level(struct device_driver *d, const char *buf,
+ size_t count)
+{
+ char *p = (char *)buf;
+ u32 val;
+
+ if (p[1] == 'x' || p[1] == 'X' || p[0] == 'x' || p[0] == 'X') {
+ p++;
+ if (p[0] == 'x' || p[0] == 'X')
+ p++;
+ val = simple_strtoul(p, &p, 16);
+ } else
+ val = simple_strtoul(p, &p, 10);
+ if (p == buf)
+ printk(KERN_INFO DRV_NAME
+ ": %s is not in hex or decimal form.\n", buf);
+ else
+ ipw_debug_level = val;
+
+ return strnlen(buf, count);
+}
+
+static DRIVER_ATTR(debug_level, S_IWUSR | S_IRUGO,
+ show_debug_level, store_debug_level);
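+
+/*
+ * Usage example (the store handler accepts hex, with or without a
+ * leading "0x", or decimal):
+ *
+ *	# cat /sys/bus/pci/drivers/ipw/debug_level
+ *	# echo 0x00000001 > /sys/bus/pci/drivers/ipw/debug_level
+ */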
+
+static ssize_t show_status(struct device *d, char *buf)
+{
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+ return sprintf(buf, "0x%08x\n", (int)p->status);
+}
+static DEVICE_ATTR(status, S_IRUGO, show_status, NULL);
+
+static ssize_t show_cfg(struct device *d, char *buf)
+{
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+ return sprintf(buf, "0x%08x\n", (int)p->config);
+}
+static DEVICE_ATTR(cfg, S_IRUGO, show_cfg, NULL);
+
+static ssize_t show_nic_type(struct device *d, char *buf)
+{
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+ u8 type = p->eeprom[EEPROM_NIC_TYPE];
+
+ switch (type) {
+ case EEPROM_NIC_TYPE_STANDARD:
+ return sprintf(buf, "STANDARD\n");
+ case EEPROM_NIC_TYPE_DELL:
+ return sprintf(buf, "DELL\n");
+ case EEPROM_NIC_TYPE_FUJITSU:
+ return sprintf(buf, "FUJITSU\n");
+ case EEPROM_NIC_TYPE_IBM:
+ return sprintf(buf, "IBM\n");
+ case EEPROM_NIC_TYPE_HP:
+ return sprintf(buf, "HP\n");
+ }
+
+ return sprintf(buf, "UNKNOWN\n");
+}
+static DEVICE_ATTR(nic_type, S_IRUGO, show_nic_type, NULL);
+
+static ssize_t dump_error_log(struct device *d, const char *buf,
+ size_t count)
+{
+ char *p = (char *)buf;
+
+ if (p[0] == '1')
+ ipw_dump_nic_error_log((struct ipw_priv*)d->driver_data);
+
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(dump_errors, S_IWUSR, NULL, dump_error_log);
+
+static ssize_t dump_event_log(struct device *d, const char *buf,
+ size_t count)
+{
+ char *p = (char *)buf;
+
+ if (p[0] == '1')
+ ipw_dump_nic_event_log((struct ipw_priv*)d->driver_data);
+
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(dump_events, S_IWUSR, NULL, dump_event_log);
+
+static ssize_t show_ucode_version(struct device *d, char *buf)
+{
+ u32 len = sizeof(u32), tmp = 0;
+ struct ipw_priv *p = (struct ipw_priv*)d->driver_data;
+
+ if (ipw_get_ordinal(p, IPW_ORD_STAT_UCODE_VERSION, &tmp, &len))
+ return 0;
+
+ return sprintf(buf, "0x%08x\n", tmp);
+}
+static DEVICE_ATTR(ucode_version, S_IRUGO, show_ucode_version, NULL);
+
+static ssize_t show_rtc(struct device *d, char *buf)
+{
+ u32 len = sizeof(u32), tmp = 0;
+ struct ipw_priv *p = (struct ipw_priv*)d->driver_data;
+
+ if (ipw_get_ordinal(p, IPW_ORD_STAT_RTC, &tmp, &len))
+ return 0;
+
+ return sprintf(buf, "0x%08x\n", tmp);
+}
+static DEVICE_ATTR(rtc, S_IRUGO, show_rtc, NULL);
+
+/*
+ * Add a device attribute to view/control the delay between eeprom
+ * operations.
+ */
+static ssize_t show_eeprom_delay(struct device *d, char *buf)
+{
+ int n = ((struct ipw_priv*)d->driver_data)->eeprom_delay;
+ return sprintf(buf, "%i\n", n);
+}
+static ssize_t store_eeprom_delay(struct device *d, const char *buf,
+ size_t count)
+{
+ struct ipw_priv *p = (struct ipw_priv*)d->driver_data;
+ sscanf(buf, "%i", &p->eeprom_delay);
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(eeprom_delay, S_IWUSR|S_IRUGO,
+ show_eeprom_delay, store_eeprom_delay);
+
+static ssize_t show_command_event_reg(struct device *d, char *buf)
+{
+ u32 reg = 0;
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+
+ reg = ipw_read_reg32(p, CX2_INTERNAL_CMD_EVENT);
+ return sprintf(buf, "0x%08x\n", reg);
+}
+static ssize_t store_command_event_reg(struct device *d,
+ const char *buf,
+ size_t count)
+{
+ u32 reg;
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+
+ sscanf(buf, "%x", ®);
+ ipw_write_reg32(p, CX2_INTERNAL_CMD_EVENT, reg);
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(command_event_reg, S_IWUSR|S_IRUGO,
+ show_command_event_reg, store_command_event_reg);
+
+static ssize_t show_mem_gpio_reg(struct device *d, char *buf)
+{
+ u32 reg = 0;
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+
+ reg = ipw_read_reg32(p, 0x301100);
+ return sprintf(buf, "0x%08x\n", reg);
+}
+static ssize_t store_mem_gpio_reg(struct device *d,
+ const char *buf,
+ size_t count)
+{
+ u32 reg;
+ struct ipw_priv *p = (struct ipw_priv *)d->driver_data;
+
+ sscanf(buf, "%x", ®);
+ ipw_write_reg32(p, 0x301100, reg);
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(mem_gpio_reg, S_IWUSR|S_IRUGO,
+ show_mem_gpio_reg, store_mem_gpio_reg);
+
+static ssize_t show_indirect_dword(struct device *d, char *buf)
+{
+ u32 reg = 0;
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+ if (priv->status & STATUS_INDIRECT_DWORD)
+ reg = ipw_read_reg32(priv, priv->indirect_dword);
+ else
+ reg = 0;
+
+ return sprintf(buf, "0x%08x\n", reg);
+}
+static ssize_t store_indirect_dword(struct device *d,
+ const char *buf,
+ size_t count)
+{
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+
+ sscanf(buf, "%x", &priv->indirect_dword);
+ priv->status |= STATUS_INDIRECT_DWORD;
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(indirect_dword, S_IWUSR|S_IRUGO,
+ show_indirect_dword, store_indirect_dword);
+
+static ssize_t show_indirect_byte(struct device *d, char *buf)
+{
+ u8 reg = 0;
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+ if (priv->status & STATUS_INDIRECT_BYTE)
+ reg = ipw_read_reg8(priv, priv->indirect_byte);
+ else
+ reg = 0;
+
+ return sprintf(buf, "0x%02x\n", reg);
+}
+static ssize_t store_indirect_byte(struct device *d,
+ const char *buf,
+ size_t count)
+{
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+
+ sscanf(buf, "%x", &priv->indirect_byte);
+ priv->status |= STATUS_INDIRECT_BYTE;
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(indirect_byte, S_IWUSR|S_IRUGO,
+ show_indirect_byte, store_indirect_byte);
+
+static ssize_t show_direct_dword(struct device *d, char *buf)
+{
+ u32 reg = 0;
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+
+ if (priv->status & STATUS_DIRECT_DWORD)
+ reg = ipw_read32(priv, priv->direct_dword);
+ else
+ reg = 0;
+
+ return sprintf(buf, "0x%08x\n", reg);
+}
+static ssize_t store_direct_dword(struct device *d,
+ const char *buf,
+ size_t count)
+{
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+
+ sscanf(buf, "%x", &priv->direct_dword);
+ priv->status |= STATUS_DIRECT_DWORD;
+ return strnlen(buf, count);
+}
+static DEVICE_ATTR(direct_dword, S_IWUSR|S_IRUGO,
+ show_direct_dword, store_direct_dword);
+
+
+static inline int rf_kill_active(struct ipw_priv *priv)
+{
+ if (0 == (ipw_read32(priv, 0x30) & 0x10000))
+ priv->status |= STATUS_RF_KILL_HW;
+ else
+ priv->status &= ~STATUS_RF_KILL_HW;
+
+ return (priv->status & STATUS_RF_KILL_HW) ? 1 : 0;
+}
+
+static ssize_t show_rf_kill(struct device *d, char *buf)
+{
+ /* 0 - RF kill not enabled
+ 1 - SW based RF kill active (sysfs)
+ 2 - HW based RF kill active
+ 3 - Both HW and SW based RF kill active
+ (a usage sketch follows the rf_kill attribute below) */
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+ int val = ((priv->status & STATUS_RF_KILL_SW) ? 0x1 : 0x0) |
+ (rf_kill_active(priv) ? 0x2 : 0x0);
+ return sprintf(buf, "%i\n", val);
+}
+
+static int ipw_radio_kill_sw(struct ipw_priv *priv, int disable_radio)
+{
+ if ((disable_radio ? 1 : 0) ==
+ (priv->status & STATUS_RF_KILL_SW ? 1 : 0))
+ return 0;
+
+ IPW_DEBUG_RF_KILL("Manual SW RF Kill set to: RADIO %s\n",
+ disable_radio ? "OFF" : "ON");
+
+ if (disable_radio) {
+ priv->status |= STATUS_RF_KILL_SW;
+
+ if (priv->workqueue) {
+ cancel_delayed_work(&priv->request_scan);
+ }
+ wake_up_interruptible(&priv->wait_command_queue);
+ queue_work(priv->workqueue, &priv->down);
+ } else {
+ priv->status &= ~STATUS_RF_KILL_SW;
+ if (rf_kill_active(priv)) {
+ IPW_DEBUG_RF_KILL("Can not turn radio back on - "
+ "disabled by HW switch\n");
+ /* Make sure the RF_KILL check timer is running */
+ cancel_delayed_work(&priv->rf_kill);
+ queue_delayed_work(priv->workqueue, &priv->rf_kill,
+ 2 * HZ);
+ } else
+ queue_work(priv->workqueue, &priv->up);
+ }
+
+ return 1;
+}
+
+static ssize_t store_rf_kill(struct device *d, const char *buf, size_t count)
+{
+ struct ipw_priv *priv = (struct ipw_priv *)d->driver_data;
+
+ ipw_radio_kill_sw(priv, buf[0] == '1');
+
+ return count;
+}
+static DEVICE_ATTR(rf_kill, S_IWUSR|S_IRUGO, show_rf_kill, store_rf_kill);
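+
+/*
+ * A brief usage sketch for the rf_kill attribute (the device sysfs
+ * directory shown is a placeholder):
+ *
+ * # query the state: 0 = off, 1 = SW kill, 2 = HW kill, 3 = both
+ * cat /sys/bus/pci/devices/<dev>/rf_kill
+ * # assert the software RF kill, taking the interface down
+ * echo 1 > /sys/bus/pci/devices/<dev>/rf_kill
+ */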
+
+static void ipw_irq_tasklet(struct ipw_priv *priv)
+{
+ u32 inta, inta_mask, handled = 0;
+ unsigned long flags;
+ int rc = 0;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ inta = ipw_read32(priv, CX2_INTA_RW);
+ inta_mask = ipw_read32(priv, CX2_INTA_MASK_R);
+ inta &= (CX2_INTA_MASK_ALL & inta_mask);
+
+ /* Add any cached INTA values that need to be handled */
+ inta |= priv->isr_inta;
+
+ /* handle all the justifications for the interrupt */
+ if (inta & CX2_INTA_BIT_RX_TRANSFER) {
+ ipw_rx(priv);
+ handled |= CX2_INTA_BIT_RX_TRANSFER;
+ }
+
+ if (inta & CX2_INTA_BIT_TX_CMD_QUEUE) {
+ IPW_DEBUG_HC("Command completed.\n");
+ rc = ipw_queue_tx_reclaim(priv, &priv->txq_cmd, -1);
+ priv->status &= ~STATUS_HCMD_ACTIVE;
+ wake_up_interruptible(&priv->wait_command_queue);
+ handled |= CX2_INTA_BIT_TX_CMD_QUEUE;
+ }
+
+ if (inta & CX2_INTA_BIT_TX_QUEUE_1) {
+ IPW_DEBUG_TX("TX_QUEUE_1\n");
+ rc = ipw_queue_tx_reclaim(priv, &priv->txq[0], 0);
+ handled |= CX2_INTA_BIT_TX_QUEUE_1;
+ }
+
+ if (inta & CX2_INTA_BIT_TX_QUEUE_2) {
+ IPW_DEBUG_TX("TX_QUEUE_2\n");
+ rc = ipw_queue_tx_reclaim(priv, &priv->txq[1], 1);
+ handled |= CX2_INTA_BIT_TX_QUEUE_2;
+ }
+
+ if (inta & CX2_INTA_BIT_TX_QUEUE_3) {
+ IPW_DEBUG_TX("TX_QUEUE_3\n");
+ rc = ipw_queue_tx_reclaim(priv, &priv->txq[2], 2);
+ handled |= CX2_INTA_BIT_TX_QUEUE_3;
+ }
+
+ if (inta & CX2_INTA_BIT_TX_QUEUE_4) {
+ IPW_DEBUG_TX("TX_QUEUE_4\n");
+ rc = ipw_queue_tx_reclaim(priv, &priv->txq[3], 3);
+ handled |= CX2_INTA_BIT_TX_QUEUE_4;
+ }
+
+ if (inta & CX2_INTA_BIT_STATUS_CHANGE) {
+ IPW_WARNING("STATUS_CHANGE\n");
+ handled |= CX2_INTA_BIT_STATUS_CHANGE;
+ }
+
+ if (inta & CX2_INTA_BIT_BEACON_PERIOD_EXPIRED) {
+ IPW_WARNING("TX_PERIOD_EXPIRED\n");
+ handled |= CX2_INTA_BIT_BEACON_PERIOD_EXPIRED;
+ }
+
+ if (inta & CX2_INTA_BIT_SLAVE_MODE_HOST_CMD_DONE) {
+ IPW_WARNING("HOST_CMD_DONE\n");
+ handled |= CX2_INTA_BIT_SLAVE_MODE_HOST_CMD_DONE;
+ }
+
+ if (inta & CX2_INTA_BIT_FW_INITIALIZATION_DONE) {
+ IPW_WARNING("FW_INITIALIZATION_DONE\n");
+ priv->status |= STATUS_FW_READY;
+ handled |= CX2_INTA_BIT_FW_INITIALIZATION_DONE;
+ }
+
+ if (inta & CX2_INTA_BIT_FW_CARD_DISABLE_PHY_OFF_DONE) {
+ IPW_WARNING("PHY_OFF_DONE\n");
+ handled |= CX2_INTA_BIT_FW_CARD_DISABLE_PHY_OFF_DONE;
+ }
+
+ if (inta & CX2_INTA_BIT_RF_KILL_DONE) {
+ IPW_DEBUG_RF_KILL("RF_KILL_DONE\n");
+ priv->status |= STATUS_RF_KILL_HW;
+ wake_up_interruptible(&priv->wait_command_queue);
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+ cancel_delayed_work(&priv->request_scan);
+ queue_delayed_work(priv->workqueue, &priv->rf_kill, 2 * HZ);
+ handled |= CX2_INTA_BIT_RF_KILL_DONE;
+ }
+
+ if (inta & CX2_INTA_BIT_FATAL_ERROR) {
+ IPW_ERROR("Fatal error\n");
+ ipw_dump_nic_error_log(priv);
+ ipw_dump_nic_event_log(priv);
+ queue_work(priv->workqueue, &priv->adapter_restart);
+ handled |= CX2_INTA_BIT_FATAL_ERROR;
+ }
+
+ if (inta & CX2_INTA_BIT_PARITY_ERROR) {
+ IPW_ERROR("Parity error\n");
+ handled |= CX2_INTA_BIT_PARITY_ERROR;
+ }
+
+ if (handled != inta) {
+ IPW_ERROR("Unhandled INTA bits 0x%08x\n",
+ inta & ~handled);
+ }
+
+ /* enable all interrupts */
+ ipw_enable_interrupts(priv);
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+#ifdef CONFIG_IPW_DEBUG
+#define IPW_CMD(x) case IPW_CMD_ ## x : return #x
+static char *get_cmd_string(u8 cmd)
+{
+ switch (cmd) {
+ IPW_CMD(HOST_COMPLETE);
+ IPW_CMD(POWER_DOWN);
+ IPW_CMD(SYSTEM_CONFIG);
+ IPW_CMD(MULTICAST_ADDRESS);
+ IPW_CMD(SSID);
+ IPW_CMD(ADAPTER_ADDRESS);
+ IPW_CMD(PORT_TYPE);
+ IPW_CMD(RTS_THRESHOLD);
+ IPW_CMD(FRAG_THRESHOLD);
+ IPW_CMD(POWER_MODE);
+ IPW_CMD(WEP_KEY);
+ IPW_CMD(TGI_TX_KEY);
+ IPW_CMD(SCAN_REQUEST);
+ IPW_CMD(SCAN_REQUEST_EXT);
+ IPW_CMD(ASSOCIATE);
+ IPW_CMD(SUPPORTED_RATES);
+ IPW_CMD(SCAN_ABORT);
+ IPW_CMD(TX_FLUSH);
+ IPW_CMD(QOS_PARAMETERS);
+ IPW_CMD(DINO_CONFIG);
+ IPW_CMD(RSN_CAPABILITIES);
+ IPW_CMD(RX_KEY);
+ IPW_CMD(CARD_DISABLE);
+ IPW_CMD(SEED_NUMBER);
+ IPW_CMD(TX_POWER);
+ IPW_CMD(COUNTRY_INFO);
+ IPW_CMD(AIRONET_INFO);
+ IPW_CMD(AP_TX_POWER);
+ IPW_CMD(CCKM_INFO);
+ IPW_CMD(CCX_VER_INFO);
+ IPW_CMD(SET_CALIBRATION);
+ IPW_CMD(SENSITIVITY_CALIB);
+ IPW_CMD(RETRY_LIMIT);
+ IPW_CMD(IPW_PRE_POWER_DOWN);
+ IPW_CMD(VAP_BEACON_TEMPLATE);
+ IPW_CMD(VAP_DTIM_PERIOD);
+ IPW_CMD(EXT_SUPPORTED_RATES);
+ IPW_CMD(VAP_LOCAL_TX_PWR_CONSTRAINT);
+ IPW_CMD(VAP_QUIET_INTERVALS);
+ IPW_CMD(VAP_CHANNEL_SWITCH);
+ IPW_CMD(VAP_MANDATORY_CHANNELS);
+ IPW_CMD(VAP_CELL_PWR_LIMIT);
+ IPW_CMD(VAP_CF_PARAM_SET);
+ IPW_CMD(VAP_SET_BEACONING_STATE);
+ IPW_CMD(MEASUREMENT);
+ IPW_CMD(POWER_CAPABILITY);
+ IPW_CMD(SUPPORTED_CHANNELS);
+ IPW_CMD(TPC_REPORT);
+ IPW_CMD(WME_INFO);
+ IPW_CMD(PRODUCTION_COMMAND);
+ default:
+ return "UNKNOWN";
+ }
+}
+#endif /* CONFIG_IPW_DEBUG */
+
+#define HOST_COMPLETE_TIMEOUT HZ
+static int ipw_send_cmd(struct ipw_priv *priv, struct host_cmd *cmd)
+{
+ int rc = 0;
+
+ if (priv->status & STATUS_HCMD_ACTIVE) {
+ IPW_ERROR("Already sending a command\n");
+ return -1;
+ }
+
+ priv->status |= STATUS_HCMD_ACTIVE;
+
+ IPW_DEBUG_HC("Sending %s command (#%d), %d bytes\n",
+ get_cmd_string(cmd->cmd), cmd->cmd, cmd->len);
+ printk_buf(IPW_DL_HOST_COMMAND, (u8*)cmd->param, cmd->len);
+
+ rc = ipw_queue_tx_hcmd(priv, cmd->cmd, &cmd->param, cmd->len, 0);
+ if (rc)
+ return rc;
+
+ rc = wait_event_interruptible_timeout(
+ priv->wait_command_queue, !(priv->status & STATUS_HCMD_ACTIVE),
+ HOST_COMPLETE_TIMEOUT);
+ if (rc == 0) {
+ IPW_DEBUG_INFO("Command completion failed out after %dms.\n",
+ HOST_COMPLETE_TIMEOUT / (HZ / 100));
+ return -EIO;
+ }
+ if (priv->status & STATUS_RF_KILL_MASK) {
+ IPW_DEBUG_INFO("Command aborted due to RF Kill Switch\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static int ipw_send_host_complete(struct ipw_priv *priv)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_HOST_COMPLETE,
+ .len = 0
+ };
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send HOST_COMPLETE command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_system_config(struct ipw_priv *priv,
+ struct ipw_sys_config *config)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SYSTEM_CONFIG,
+ .len = sizeof(*config)
+ };
+
+ if (!priv || !config) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, config, sizeof(*config));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SYSTEM_CONFIG command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_ssid(struct ipw_priv *priv, u8 *ssid, int len)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SSID,
+ .len = min(len, IW_ESSID_MAX_SIZE)
+ };
+
+ if (!priv || !ssid) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, ssid, cmd.len);
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SSID command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_adapter_address(struct ipw_priv *priv, u8 *mac)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_ADAPTER_ADDRESS,
+ .len = ETH_ALEN
+ };
+
+ if (!priv || !mac) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ IPW_DEBUG_INFO("%s: Setting MAC to " MAC_FMT "\n",
+ priv->net_dev->name, MAC_ARG(mac));
+
+ memcpy(&cmd.param, mac, ETH_ALEN);
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send ADAPTER_ADDRESS command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_scan_request_ext(struct ipw_priv *priv,
+ struct ipw_scan_request_ext *request)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SCAN_REQUEST_EXT,
+ .len = sizeof(*request)
+ };
+
+ if (!priv || !request) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, request, sizeof(*request));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SCAN_REQUEST_EXT command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_scan_abort(struct ipw_priv *priv)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SCAN_ABORT,
+ .len = 0
+ };
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SCAN_ABORT command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_set_sensitivity(struct ipw_priv *priv, u16 sens)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SENSITIVITY_CALIB,
+ .len = sizeof(struct ipw_sensitivity_calib)
+ };
+ struct ipw_sensitivity_calib *calib = (struct ipw_sensitivity_calib *)
+ &cmd.param;
+ calib->beacon_rssi_raw = sens;
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SENSITIVITY CALIB command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_associate(struct ipw_priv *priv,
+ struct ipw_associate *associate)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_ASSOCIATE,
+ .len = sizeof(*associate)
+ };
+
+ if (!priv || !associate) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, associate, sizeof(*associate));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send ASSOCIATE command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_supported_rates(struct ipw_priv *priv,
+ struct ipw_supported_rates *rates)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SUPPORTED_RATES,
+ .len = sizeof(*rates)
+ };
+
+ if (!priv || !rates) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, rates, sizeof(*rates));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SUPPORTED_RATES command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_set_random_seed(struct ipw_priv *priv)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_SEED_NUMBER,
+ .len = sizeof(u32)
+ };
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ get_random_bytes(&cmd.param, sizeof(u32));
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send SEED_NUMBER command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_tx_power(struct ipw_priv *priv,
+ struct ipw_tx_power *power)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_TX_POWER,
+ .len = sizeof(*power)
+ };
+
+ if (!priv || !power) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, power, sizeof(*power));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send TX_POWER command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_rts_threshold(struct ipw_priv *priv, u16 rts)
+{
+ struct ipw_rts_threshold rts_threshold = {
+ .rts_threshold = rts,
+ };
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_RTS_THRESHOLD,
+ .len = sizeof(rts_threshold)
+ };
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, &rts_threshold, sizeof(rts_threshold));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send RTS_THRESHOLD command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_frag_threshold(struct ipw_priv *priv, u16 frag)
+{
+ struct ipw_frag_threshold frag_threshold = {
+ .frag_threshold = frag,
+ };
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_FRAG_THRESHOLD,
+ .len = sizeof(frag_threshold)
+ };
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ memcpy(&cmd.param, &frag_threshold, sizeof(frag_threshold));
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send FRAG_THRESHOLD command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+static int ipw_send_power_mode(struct ipw_priv *priv, u32 mode)
+{
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_POWER_MODE,
+ .len = sizeof(u32)
+ };
+ u32 *param = (u32*)(&cmd.param);
+
+ if (!priv) {
+ IPW_ERROR("Invalid args\n");
+ return -1;
+ }
+
+ /* If on battery, set to power index 3; if on AC, set to CAM;
+ * otherwise pass the user-supplied level through */
+ switch (mode) {
+ case IPW_POWER_BATTERY:
+ *param = IPW_POWER_INDEX_3;
+ break;
+ case IPW_POWER_AC:
+ *param = IPW_POWER_MODE_CAM;
+ break;
+ default:
+ *param = mode;
+ break;
+ }
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send POWER_MODE command\n");
+ return -1;
+ }
+
+ return 0;
+}
+
+/*
+ * The IPW device contains a Microwire compatible EEPROM that stores
+ * various data like the MAC address. Usually the firmware has exclusive
+ * access to the eeprom, but during device initialization (before the
+ * device driver has sent the HostComplete command to the firmware) the
+ * device driver has read access to the EEPROM by way of indirect addressing
+ * through a couple of memory mapped registers.
+ *
+ * The following is a simplified implementation for pulling data out of
+ * the eeprom, along with some helper functions to find information in
+ * the per device private data's copy of the eeprom.
+ *
+ * NOTE: To better understand how these functions work (i.e. what is a chip
+ * select and why do we have to keep driving the eeprom clock?), read
+ * just about any data sheet for a Microwire compatible EEPROM.
+ */
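+
+/*
+ * A rough sketch of the Microwire READ transaction the helpers below
+ * implement, assuming a typical 93Cxx-style part (start bit, 2-bit
+ * opcode, 8-bit address, 16 data bits clocked back out):
+ *
+ * raise CS (eeprom_cs)
+ * shift out: 1 (start), the two opcode bits, A7..A0 MSB first
+ * clock SK 16 times, sampling DO each cycle -> one 16-bit word
+ * drop CS (eeprom_disable_cs)
+ *
+ * Each shifted bit is a DI level latched on a rising SK edge, which is
+ * why eeprom_write_bit() pulses EEPROM_BIT_SK around EEPROM_BIT_DI.
+ */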
+
+/* write a 32 bit value into the indirect accessor register */
+static inline void eeprom_write_reg(struct ipw_priv *p, u32 data)
+{
+ ipw_write_reg32(p, FW_MEM_REG_EEPROM_ACCESS, data);
+
+ /* the eeprom requires some time to complete the operation */
+ udelay(p->eeprom_delay);
+
+ return;
+}
+
+/* perform a chip select operation */
+static inline void eeprom_cs(struct ipw_priv *priv)
+{
+ eeprom_write_reg(priv, 0);
+ eeprom_write_reg(priv, EEPROM_BIT_CS);
+ eeprom_write_reg(priv, EEPROM_BIT_CS | EEPROM_BIT_SK);
+ eeprom_write_reg(priv, EEPROM_BIT_CS);
+}
+
+/* perform a chip de-select operation */
+static inline void eeprom_disable_cs(struct ipw_priv *priv)
+{
+ eeprom_write_reg(priv, EEPROM_BIT_CS);
+ eeprom_write_reg(priv, 0);
+ eeprom_write_reg(priv, EEPROM_BIT_SK);
+}
+
+/* push a single bit down to the eeprom */
+static inline void eeprom_write_bit(struct ipw_priv *p, u8 bit)
+{
+ int d = (bit ? EEPROM_BIT_DI : 0);
+ eeprom_write_reg(p, EEPROM_BIT_CS | d);
+ eeprom_write_reg(p, EEPROM_BIT_CS | d | EEPROM_BIT_SK);
+}
+
+/* push an opcode followed by an address down to the eeprom */
+static void eeprom_op(struct ipw_priv *priv, u8 op, u8 addr)
+{
+ int i;
+
+ eeprom_cs(priv);
+ eeprom_write_bit(priv, 1);
+ eeprom_write_bit(priv, op & 2);
+ eeprom_write_bit(priv, op & 1);
+ for (i = 7; i >= 0; i--)
+ eeprom_write_bit(priv, addr & (1 << i));
+}
+
+/* pull 16 bits off the eeprom, one bit at a time */
+static u16 eeprom_read_u16(struct ipw_priv *priv, u8 addr)
+{
+ int i;
+ u16 r = 0;
+
+ /* Send READ Opcode */
+ eeprom_op(priv, EEPROM_CMD_READ, addr);
+
+ /* Send dummy bit */
+ eeprom_write_reg(priv, EEPROM_BIT_CS);
+
+ /* Read the 16-bit value off the eeprom one bit at a time */
+ for (i = 0; i < 16; i++) {
+ u32 data = 0;
+ eeprom_write_reg(priv, EEPROM_BIT_CS | EEPROM_BIT_SK);
+ eeprom_write_reg(priv, EEPROM_BIT_CS);
+ data = ipw_read_reg32(priv, FW_MEM_REG_EEPROM_ACCESS);
+ r = (r << 1) | ((data & EEPROM_BIT_DO) ? 1 : 0);
+ }
+
+ /* Send another dummy bit */
+ eeprom_write_reg(priv, 0);
+ eeprom_disable_cs(priv);
+
+ return r;
+}
+
+/* helper function for pulling the mac address out of the private
+ * data's copy of the eeprom data */
+static void eeprom_parse_mac(struct ipw_priv *priv, u8 *mac)
+{
+ u8 *ee = (u8 *)priv->eeprom;
+ memcpy(mac, &ee[EEPROM_MAC_ADDRESS], 6);
+}
+
+/*
+ * Either the device driver (i.e. the host) or the firmware can
+ * load eeprom data into the designated region in SRAM. If neither
+ * happens then the FW will shutdown with a fatal error.
+ *
+ * In order to signal the FW to load the EEPROM, the EEPROM_LOAD_DISABLE
+ * word in its designated region of shared SRAM needs to be non-zero.
+ */
+static void ipw_eeprom_init_sram(struct ipw_priv *priv)
+{
+ int i;
+ u16 *eeprom = (u16 *)priv->eeprom;
+
+ IPW_DEBUG_TRACE(">>\n");
+
+ /* read entire contents of eeprom into private buffer */
+ for (i = 0; i < 128; i++)
+ eeprom[i] = eeprom_read_u16(priv, (u8)i);
+
+ /*
+ If the data looks correct, then copy it to our private
+ copy. Otherwise let the firmware know to perform the operation
+ on its own.
+ */
+ if (priv->eeprom[EEPROM_VERSION] != 0) {
+ IPW_DEBUG_INFO("Writing EEPROM data into SRAM\n");
+
+ /* write the eeprom data to sram */
+ for (i = 0; i < CX2_EEPROM_IMAGE_SIZE; i++)
+ ipw_write8(priv, IPW_EEPROM_DATA + i,
+ priv->eeprom[i]);
+
+ /* Do not load eeprom data on fatal error or suspend */
+ ipw_write32(priv, IPW_EEPROM_LOAD_DISABLE, 0);
+ } else {
+ IPW_DEBUG_INFO("Enabling FW initializationg of SRAM\n");
+
+ /* Load eeprom data on fatal error or suspend */
+ ipw_write32(priv, IPW_EEPROM_LOAD_DISABLE, 1);
+ }
+
+ IPW_DEBUG_TRACE("<<\n");
+}
+
+
+static inline void ipw_zero_memory(struct ipw_priv *priv, u32 start, u32 count)
+{
+ count >>= 2;
+ if (!count)
+ return;
+ _ipw_write32(priv, CX2_AUTOINC_ADDR, start);
+ while (count--)
+ _ipw_write32(priv, CX2_AUTOINC_DATA, 0);
+}
+
+static inline void ipw_fw_dma_reset_command_blocks(struct ipw_priv *priv)
+{
+ ipw_zero_memory(priv, CX2_SHARED_SRAM_DMA_CONTROL,
+ CB_NUMBER_OF_ELEMENTS_SMALL *
+ sizeof(struct command_block));
+}
+
+static int ipw_fw_dma_enable(struct ipw_priv *priv)
+{
+ /* start the dma engine, but with no transfers yet */
+
+ IPW_DEBUG_FW(">> : \n");
+
+ /* Start the dma */
+ ipw_fw_dma_reset_command_blocks(priv);
+
+ /* Write CB base address */
+ ipw_write_reg32(priv, CX2_DMA_I_CB_BASE, CX2_SHARED_SRAM_DMA_CONTROL);
+
+ IPW_DEBUG_FW("<< : \n");
+ return 0;
+}
+
+static void ipw_fw_dma_abort(struct ipw_priv *priv)
+{
+ u32 control = 0;
+
+ IPW_DEBUG_FW(">> :\n");
+
+ /* set the Stop and Abort bit */
+ control = DMA_CONTROL_SMALL_CB_CONST_VALUE | DMA_CB_STOP_AND_ABORT;
+ ipw_write_reg32(priv, CX2_DMA_I_DMA_CONTROL, control);
+ priv->sram_desc.last_cb_index = 0;
+
+ IPW_DEBUG_FW("<< \n");
+}
+
+static int ipw_fw_dma_write_command_block(struct ipw_priv *priv, int index, struct command_block *cb)
+{
+ u32 address = CX2_SHARED_SRAM_DMA_CONTROL + (sizeof(struct command_block) * index);
+ IPW_DEBUG_FW(">> :\n");
+
+ ipw_write_indirect(priv, address, (u8*)cb, sizeof(struct command_block));
+
+ IPW_DEBUG_FW("<< :\n");
+ return 0;
+
+}
+
+static int ipw_fw_dma_kick(struct ipw_priv *priv)
+{
+ u32 control = 0;
+ u32 index=0;
+
+ IPW_DEBUG_FW(">> :\n");
+
+ for (index = 0; index < priv->sram_desc.last_cb_index; index++)
+ ipw_fw_dma_write_command_block(priv, index, &priv->sram_desc.cb_list[index]);
+
+ /* Enable the DMA in the CSR register */
+ ipw_clear_bit(priv, CX2_RESET_REG,
+ CX2_RESET_REG_MASTER_DISABLED | CX2_RESET_REG_STOP_MASTER);
+
+ /* Set the Start bit. */
+ control = DMA_CONTROL_SMALL_CB_CONST_VALUE | DMA_CB_START;
+ ipw_write_reg32(priv, CX2_DMA_I_DMA_CONTROL, control);
+
+ IPW_DEBUG_FW("<< :\n");
+ return 0;
+}
+
+static void ipw_fw_dma_dump_command_block(struct ipw_priv *priv)
+{
+ u32 address;
+ u32 register_value = 0;
+ u32 cb_fields_address = 0;
+
+ IPW_DEBUG_FW(">> :\n");
+ address = ipw_read_reg32(priv, CX2_DMA_I_CURRENT_CB);
+ IPW_DEBUG_FW_INFO("Current CB is 0x%x \n", address);
+
+ /* Read the DMA Control register */
+ register_value = ipw_read_reg32(priv, CX2_DMA_I_DMA_CONTROL);
+ IPW_DEBUG_FW_INFO("CX2_DMA_I_DMA_CONTROL is 0x%x \n", register_value);
+
+ /* Print the CB values */
+ cb_fields_address = address;
+ register_value = ipw_read_reg32(priv, cb_fields_address);
+ IPW_DEBUG_FW_INFO("Current CB ControlField is 0x%x \n", register_value);
+
+ cb_fields_address += sizeof(u32);
+ register_value = ipw_read_reg32(priv, cb_fields_address);
+ IPW_DEBUG_FW_INFO("Current CB Source Field is 0x%x \n", register_value);
+
+ cb_fields_address += sizeof(u32);
+ register_value = ipw_read_reg32(priv, cb_fields_address);
+ IPW_DEBUG_FW_INFO("Current CB Destination Field is 0x%x \n",
+ register_value);
+
+ cb_fields_address += sizeof(u32);
+ register_value = ipw_read_reg32(priv, cb_fields_address);
+ IPW_DEBUG_FW_INFO("Current CB Status Field is 0x%x \n", register_value);
+
+ IPW_DEBUG_FW("<< :\n");
+}
+
+static int ipw_fw_dma_command_block_index(struct ipw_priv *priv)
+{
+ u32 current_cb_address = 0;
+ u32 current_cb_index = 0;
+
+ IPW_DEBUG_FW(">> :\n");
+ current_cb_address = ipw_read_reg32(priv, CX2_DMA_I_CURRENT_CB);
+
+ current_cb_index = (current_cb_address - CX2_SHARED_SRAM_DMA_CONTROL) /
+ sizeof(struct command_block);
+
+ IPW_DEBUG_FW_INFO("Current CB index 0x%x address = 0x%X \n",
+ current_cb_index, current_cb_address);
+
+ IPW_DEBUG_FW("<< :\n");
+ return current_cb_index;
+}
+
+static int ipw_fw_dma_add_command_block(struct ipw_priv *priv,
+ u32 src_address,
+ u32 dest_address,
+ u32 length,
+ int interrupt_enabled,
+ int is_last)
+{
+
+ u32 control = CB_VALID | CB_SRC_LE | CB_DEST_LE | CB_SRC_AUTOINC |
+ CB_SRC_IO_GATED | CB_DEST_AUTOINC | CB_SRC_SIZE_LONG |
+ CB_DEST_SIZE_LONG;
+ struct command_block *cb;
+ u32 last_cb_element=0;
+
+ IPW_DEBUG_FW_INFO("src_address=0x%x dest_address=0x%x length=0x%x\n",
+ src_address, dest_address, length);
+
+ if (priv->sram_desc.last_cb_index >= CB_NUMBER_OF_ELEMENTS_SMALL)
+ return -1;
+
+ last_cb_element = priv->sram_desc.last_cb_index;
+ cb = &priv->sram_desc.cb_list[last_cb_element];
+ priv->sram_desc.last_cb_index++;
+
+ /* Calculate the new CB control word */
+ if (interrupt_enabled)
+ control |= CB_INT_ENABLED;
+
+ if (is_last)
+ control |= CB_LAST_VALID;
+
+ control |= length;
+
+ /* Calculate the CB Element's checksum value */
+ cb->status = control ^ src_address ^ dest_address;
+
+ /* Copy the Source and Destination addresses */
+ cb->dest_addr = (void *)dest_address;
+ cb->source_addr = (void *)src_address;
+
+ /* Copy the Control Word last */
+ cb->control = control;
+
+ return 0;
+}
+
+static int ipw_fw_dma_add_buffer(struct ipw_priv *priv,
+ u32 src_phys,
+ u32 dest_address,
+ u32 length)
+{
+ u32 bytes_left = length;
+ u32 src_offset = 0;
+ u32 dest_offset = 0;
+ int status = 0;
+ IPW_DEBUG_FW(">> \n");
+ IPW_DEBUG_FW_INFO("src_phys=0x%x dest_address=0x%x length=0x%x\n",
+ src_phys, dest_address, length);
+ while (bytes_left > CB_MAX_LENGTH) {
+ status = ipw_fw_dma_add_command_block(priv,
+ src_phys + src_offset,
+ dest_address + dest_offset,
+ CB_MAX_LENGTH, 0, 0);
+ if (status) {
+ IPW_DEBUG_FW_INFO(": Failed\n");
+ return -1;
+ } else
+ IPW_DEBUG_FW_INFO(": Added new cb\n");
+
+ src_offset += CB_MAX_LENGTH;
+ dest_offset += CB_MAX_LENGTH;
+ bytes_left -= CB_MAX_LENGTH;
+ }
+
+ /* add the buffer tail */
+ if (bytes_left > 0) {
+ status = ipw_fw_dma_add_command_block(
+ priv, src_phys + src_offset,
+ dest_address + dest_offset,
+ bytes_left, 0, 0);
+ if (status) {
+ IPW_DEBUG_FW_INFO(": Failed on the buffer tail\n");
+ return -1;
+ } else
+ IPW_DEBUG_FW_INFO(": Adding new cb - the buffer tail\n");
+ }
+
+ IPW_DEBUG_FW("<< \n");
+ return 0;
+}
+
+static int ipw_fw_dma_wait(struct ipw_priv *priv)
+{
+ u32 current_index = 0;
+ u32 watchdog = 0;
+
+ IPW_DEBUG_FW(">> : \n");
+
+ current_index = ipw_fw_dma_command_block_index(priv);
+ IPW_DEBUG_FW_INFO("sram_desc.last_cb_index:0x%8X\n",
+ (int) priv->sram_desc.last_cb_index);
+
+ while (current_index < priv->sram_desc.last_cb_index) {
+ udelay(50);
+ current_index = ipw_fw_dma_command_block_index(priv);
+
+ watchdog++;
+
+ if (watchdog > 400) {
+ IPW_DEBUG_FW_INFO("Timeout\n");
+ ipw_fw_dma_dump_command_block(priv);
+ ipw_fw_dma_abort(priv);
+ return -1;
+ }
+ }
+
+ ipw_fw_dma_abort(priv);
+
+ /* Disable the DMA in the CSR register */
+ ipw_set_bit(priv, CX2_RESET_REG,
+ CX2_RESET_REG_MASTER_DISABLED | CX2_RESET_REG_STOP_MASTER);
+
+ IPW_DEBUG_FW("<< dmaWaitSync \n");
+ return 0;
+}
+
+/**
+ * Check that card is still alive.
+ * Reads debug register from domain0.
+ * If card is present, pre-defined value should
+ * be found there.
+ *
+ * @param priv
+ * @return 1 if card is present, 0 otherwise
+ */
+static inline int ipw_alive(struct ipw_priv *priv)
+{
+ return ipw_read32(priv, 0x90) == 0xd55555d5;
+}
+
+static inline int ipw_poll_bit(struct ipw_priv *priv, u32 addr, u32 mask,
+ int timeout)
+{
+ int i = 0;
+
+ do {
+ if ((ipw_read32(priv, addr) & mask) == mask)
+ return i;
+ mdelay(10);
+ i += 10;
+ } while (i < timeout);
+
+ return -ETIME;
+}
+
+/* These functions load the firmware and micro code for the operation of
+ * the ipw hardware. They assume the buffer has all the bits for the
+ * image and that the caller is handling the memory allocation and clean up.
+ */
+
+
+static int ipw_stop_master(struct ipw_priv * priv)
+{
+ int rc;
+
+ IPW_DEBUG_TRACE(">> \n");
+ /* stop master. typical delay - 0 */
+ ipw_set_bit(priv, CX2_RESET_REG, CX2_RESET_REG_STOP_MASTER);
+
+ rc = ipw_poll_bit(priv, CX2_RESET_REG,
+ CX2_RESET_REG_MASTER_DISABLED, 100);
+ if (rc < 0) {
+ IPW_ERROR("stop master failed in 10ms\n");
+ return -1;
+ }
+
+ IPW_DEBUG_INFO("stop master %dms\n", rc);
+
+ return rc;
+}
+
+static void ipw_arc_release(struct ipw_priv *priv)
+{
+ IPW_DEBUG_TRACE(">> \n");
+ mdelay(5);
+
+ ipw_clear_bit(priv, CX2_RESET_REG, CBD_RESET_REG_PRINCETON_RESET);
+
+ /* no one knows timing, for safety add some delay */
+ mdelay(5);
+}
+
+struct fw_header {
+ u32 version;
+ u32 mode;
+};
+
+struct fw_chunk {
+ u32 address;
+ u32 length;
+};
+
+#define IPW_FW_MAJOR_VERSION 2
+#define IPW_FW_MINOR_VERSION 0
+
+#define IPW_FW_MINOR(x) ((x & 0xff00) >> 8)
+#define IPW_FW_MAJOR(x) (x & 0xff)
+
+#define IPW_FW_VERSION ((IPW_FW_MINOR_VERSION << 8) | \
+ IPW_FW_MAJOR_VERSION)
+
+#define IPW_FW_PREFIX "ipw-" __stringify(IPW_FW_MAJOR_VERSION) \
+"." __stringify(IPW_FW_MINOR_VERSION) "-"
+
+#if 0
+#define IPW_FW_NAME(x) IPW_FW_PREFIX "" x ".fw"
+#else
+#define IPW_FW_NAME(x) "ipw2200_" x ".fw"
+#endif
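+
+/*
+ * For illustration: with the #else branch above active,
+ * IPW_FW_NAME("boot") expands to "ipw2200_boot.fw", while the disabled
+ * branch would build "ipw-2.0-boot.fw" from IPW_FW_PREFIX. Likewise,
+ * IPW_FW_VERSION packs the minor version into bits 8-15 and the major
+ * version into bits 0-7, which IPW_FW_MINOR()/IPW_FW_MAJOR() unpack.
+ */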
+
+static int ipw_load_ucode(struct ipw_priv *priv, u8 * data,
+ size_t len)
+{
+ int rc = 0, i, addr;
+ u8 cr = 0;
+ u16 *image;
+
+ image = (u16 *)data;
+
+ IPW_DEBUG_TRACE(">> \n");
+
+ rc = ipw_stop_master(priv);
+
+ if (rc < 0)
+ return rc;
+
+// spin_lock_irqsave(&priv->lock, flags);
+
+ for (addr = CX2_SHARED_LOWER_BOUND;
+ addr < CX2_REGISTER_DOMAIN1_END; addr += 4) {
+ ipw_write32(priv, addr, 0);
+ }
+
+ /* no ucode (yet) */
+ memset(&priv->dino_alive, 0, sizeof(priv->dino_alive));
+ /* destroy DMA queues */
+ /* reset sequence */
+
+ ipw_write_reg32(priv, CX2_MEM_HALT_AND_RESET, CX2_BIT_HALT_RESET_ON);
+ ipw_arc_release(priv);
+ ipw_write_reg32(priv, CX2_MEM_HALT_AND_RESET, CX2_BIT_HALT_RESET_OFF);
+ mdelay(1);
+
+ /* reset PHY */
+ ipw_write_reg32(priv, CX2_INTERNAL_CMD_EVENT, CX2_BASEBAND_POWER_DOWN);
+ mdelay(1);
+
+ ipw_write_reg32(priv, CX2_INTERNAL_CMD_EVENT, 0);
+ mdelay(1);
+
+ /* enable ucode store */
+ ipw_write_reg8(priv, DINO_CONTROL_REG, 0x0);
+ ipw_write_reg8(priv, DINO_CONTROL_REG, DINO_ENABLE_CS);
+ mdelay(1);
+
+ /* write ucode */
+ /**
+ * @bug
+ * Do NOT set indirect address register once and then
+ * store data to indirect data register in the loop.
+ * It seems very reasonable, but in this case DINO do not
+ * accept ucode. It is essential to set address each time.
+ */
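+ /*
+ * In other words, the working pattern is the loop below, where
+ * every ipw_write_reg16() call latches the
+ * CX2_BASEBAND_CONTROL_STORE address again before storing a
+ * word -- not "set the indirect address register once, then
+ * store each word through the indirect data register", which
+ * DINO silently rejects.
+ */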
+ /* load new ipw uCode */
+ for (i = 0; i < len / 2; i++)
+ ipw_write_reg16(priv, CX2_BASEBAND_CONTROL_STORE, image[i]);
+
+ /* enable DINO */
+ ipw_write_reg8(priv, CX2_BASEBAND_CONTROL_STATUS, 0);
+ ipw_write_reg8(priv, CX2_BASEBAND_CONTROL_STATUS,
+ DINO_ENABLE_SYSTEM);
+
+ /* this is where the igx / win driver deviates from the VAP driver. */
+
+ /* wait for alive response */
+ for (i = 0; i < 100; i++) {
+ /* poll for incoming data */
+ cr = ipw_read_reg8(priv, CX2_BASEBAND_CONTROL_STATUS);
+ if (cr & DINO_RXFIFO_DATA)
+ break;
+ mdelay(1);
+ }
+
+ if (cr & DINO_RXFIFO_DATA) {
+ /* the alive command response size is NOT a multiple of 4 */
+ u32 response_buffer[(sizeof(priv->dino_alive) + 3) / 4];
+
+ for (i = 0; i < ARRAY_SIZE(response_buffer); i++)
+ response_buffer[i] =
+ ipw_read_reg32(priv,
+ CX2_BASEBAND_RX_FIFO_READ);
+ memcpy(&priv->dino_alive, response_buffer,
+ sizeof(priv->dino_alive));
+ if (priv->dino_alive.alive_command == 1
+ && priv->dino_alive.ucode_valid == 1) {
+ rc = 0;
+ IPW_DEBUG_INFO(
+ "Microcode OK, rev. %d (0x%x) dev. %d (0x%x) "
+ "of %02d/%02d/%02d %02d:%02d\n",
+ priv->dino_alive.software_revision,
+ priv->dino_alive.software_revision,
+ priv->dino_alive.device_identifier,
+ priv->dino_alive.device_identifier,
+ priv->dino_alive.time_stamp[0],
+ priv->dino_alive.time_stamp[1],
+ priv->dino_alive.time_stamp[2],
+ priv->dino_alive.time_stamp[3],
+ priv->dino_alive.time_stamp[4]);
+ } else {
+ IPW_DEBUG_INFO("Microcode is not alive\n");
+ rc = -EINVAL;
+ }
+ } else {
+ IPW_DEBUG_INFO("No alive response from DINO\n");
+ rc = -ETIME;
+ }
+
+ /* disable DINO; otherwise for some reason the
+ firmware has problems getting the alive response */
+ ipw_write_reg8(priv, CX2_BASEBAND_CONTROL_STATUS, 0);
+
+// spin_unlock_irqrestore(&priv->lock, flags);
+
+ return rc;
+}
+
+static int ipw_load_firmware(struct ipw_priv *priv, u8 * data,
+ size_t len)
+{
+ int rc = -1;
+ int offset = 0;
+ struct fw_chunk *chunk;
+ dma_addr_t shared_phys;
+ u8 *shared_virt;
+
+ IPW_DEBUG_TRACE("<< : \n");
+ shared_virt = pci_alloc_consistent(priv->pci_dev, len, &shared_phys);
+
+ if (!shared_virt)
+ return -ENOMEM;
+
+ memmove(shared_virt, data, len);
+
+ /* Start the Dma */
+ rc = ipw_fw_dma_enable(priv);
+
+ if (priv->sram_desc.last_cb_index > 0) {
+ /* the DMA is already set up; this would be a bug */
+ BUG();
+ goto out;
+ }
+
+ do {
+ chunk = (struct fw_chunk *)(data + offset);
+ offset += sizeof(struct fw_chunk);
+ /* build DMA packet and queue up for sending */
+ /* dma to chunk->address, the chunk->length bytes from data +
+ * offset */
+ /* Dma loading */
+ rc = ipw_fw_dma_add_buffer(priv, shared_phys + offset,
+ chunk->address, chunk->length);
+ if (rc) {
+ IPW_DEBUG_INFO("dmaAddBuffer Failed\n");
+ goto out;
+ }
+
+ offset += chunk->length;
+ } while (offset < len);
+
+ /* Run the DMA and wait for the answer*/
+ rc = ipw_fw_dma_kick(priv);
+ if (rc) {
+ IPW_ERROR("dmaKick Failed\n");
+ goto out;
+ }
+
+ rc = ipw_fw_dma_wait(priv);
+ if (rc) {
+ IPW_ERROR("dmaWaitSync Failed\n");
+ goto out;
+ }
+ out:
+ pci_free_consistent(priv->pci_dev, len, shared_virt, shared_phys);
+ return rc;
+}
+
+/* stop nic */
+static int ipw_stop_nic(struct ipw_priv *priv)
+{
+ int rc = 0;
+
+ /* stop*/
+ ipw_write32(priv, CX2_RESET_REG, CX2_RESET_REG_STOP_MASTER);
+
+ rc = ipw_poll_bit(priv, CX2_RESET_REG,
+ CX2_RESET_REG_MASTER_DISABLED, 500);
+ if (rc < 0) {
+ IPW_ERROR("wait for reg master disabled failed\n");
+ return rc;
+ }
+
+ ipw_set_bit(priv, CX2_RESET_REG, CBD_RESET_REG_PRINCETON_RESET);
+
+ return rc;
+}
+
+static void ipw_start_nic(struct ipw_priv *priv)
+{
+ IPW_DEBUG_TRACE(">>\n");
+
+ /* prvHwStartNic release ARC*/
+ ipw_clear_bit(priv, CX2_RESET_REG,
+ CX2_RESET_REG_MASTER_DISABLED |
+ CX2_RESET_REG_STOP_MASTER |
+ CBD_RESET_REG_PRINCETON_RESET);
+
+ /* enable power management */
+ ipw_set_bit(priv, CX2_GP_CNTRL_RW, CX2_GP_CNTRL_BIT_HOST_ALLOWS_STANDBY);
+
+ IPW_DEBUG_TRACE("<<\n");
+}
+
+static int ipw_init_nic(struct ipw_priv *priv)
+{
+ int rc;
+
+ IPW_DEBUG_TRACE(">>\n");
+ /* reset */
+ /*prvHwInitNic */
+ /* set "initialization complete" bit to move adapter to D0 state */
+ ipw_set_bit(priv, CX2_GP_CNTRL_RW, CX2_GP_CNTRL_BIT_INIT_DONE);
+
+ /* low-level PLL activation */
+ ipw_write32(priv, CX2_READ_INT_REGISTER, CX2_BIT_INT_HOST_SRAM_READ_INT_REGISTER);
+
+ /* wait for clock stabilization */
+ rc = ipw_poll_bit(priv, CX2_GP_CNTRL_RW,
+ CX2_GP_CNTRL_BIT_CLOCK_READY, 250);
+ if (rc < 0)
+ IPW_DEBUG_INFO("FAILED wait for clock stabilization\n");
+
+ /* assert SW reset */
+ ipw_set_bit(priv, CX2_RESET_REG, CX2_RESET_REG_SW_RESET);
+
+ udelay(10);
+
+ /* set "initialization complete" bit to move adapter to D0 state */
+ ipw_set_bit(priv, CX2_GP_CNTRL_RW, CX2_GP_CNTRL_BIT_INIT_DONE);
+
+ IPW_DEBUG_TRACE(">>\n");
+ return 0;
+}
+
+
+/* Call this function from process context, it will sleep in request_firmware.
+ * Probe is an ok place to call this from.
+ */
+static int ipw_reset_nic(struct ipw_priv *priv)
+{
+ int rc = 0;
+
+ IPW_DEBUG_TRACE(">>\n");
+
+ rc = ipw_init_nic(priv);
+
+ IPW_DEBUG_TRACE("<<\n");
+ return rc;
+}
+
+static int ipw_get_fw(struct ipw_priv *priv,
+ const struct firmware **fw, const char *name)
+{
+ struct fw_header *header;
+ int rc;
+
+ /* ask firmware_class module to get the boot firmware off disk */
+ rc = request_firmware(fw, name, &priv->pci_dev->dev);
+ if (rc < 0) {
+ IPW_ERROR("%s load failed\n", name);
+ return rc;
+ }
+
+ header = (struct fw_header *)(*fw)->data;
+ if (IPW_FW_MAJOR(header->version) != IPW_FW_MAJOR_VERSION) {
+ IPW_ERROR("'%s' firmware version not compatible (%d != %d)\n",
+ name,
+ IPW_FW_MAJOR(header->version), IPW_FW_MAJOR_VERSION);
+ return rc;
+ }
+
+ IPW_DEBUG_INFO("Loading firmware '%s' file v%d.%d (%d bytes)\n",
+ name,
+ IPW_FW_MAJOR(header->version),
+ IPW_FW_MINOR(header->version),
+ (*fw)->size - sizeof(struct fw_header));
+ return 0;
+}
+
+#define CX2_RX_BUF_SIZE (3000)
+
+static inline void ipw_rx_queue_reset(struct ipw_priv *priv,
+ struct ipw_rx_queue *rxq)
+{
+ int i;
+
+ INIT_LIST_HEAD(&rxq->rx_free);
+ INIT_LIST_HEAD(&rxq->rx_used);
+
+ /* Fill the rx_used queue with _all_ of the Rx buffers */
+ for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++) {
+ /* In the reset function, these buffers may have been allocated
+ * to an SKB, so we need to unmap and free potential storage */
+ if (rxq->pool[i].skb != NULL) {
+ pci_unmap_single(priv->pci_dev, rxq->pool[i].dma_addr,
+ CX2_RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(rxq->pool[i].skb);
+ }
+ list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
+ }
+
+ /* Set us so that we have processed and used all buffers, but have
+ * not restocked the Rx queue with fresh buffers */
+ rxq->read = rxq->write = 0;
+ rxq->processed = RX_QUEUE_SIZE - 1;
+ rxq->free_count = 0;
+}
+
+#ifdef CONFIG_PM
+static int fw_loaded = 0;
+static const struct firmware *bootfw = NULL;
+static const struct firmware *firmware = NULL;
+static const struct firmware *ucode = NULL;
+#endif
+
+static int ipw_load(struct ipw_priv *priv)
+{
+#ifndef CONFIG_PM
+ const struct firmware *bootfw = NULL;
+ const struct firmware *firmware = NULL;
+ const struct firmware *ucode = NULL;
+#endif
+ int rc = 0, retries = 3;
+
+#ifdef CONFIG_PM
+ if (!fw_loaded) {
+#endif
+ rc = ipw_get_fw(priv, &bootfw, IPW_FW_NAME("boot"));
+ if (rc)
+ goto error;
+
+ rc = ipw_get_fw(priv, &ucode, IPW_FW_NAME("ucode"));
+ if (rc)
+ goto error;
+
+ switch (priv->ieee->iw_mode) {
+ case IW_MODE_ADHOC:
+ rc = ipw_get_fw(priv, &firmware, IPW_FW_NAME("ibss"));
+ break;
+
+#ifdef CONFIG_IPW_PROMISC
+ case IW_MODE_MONITOR:
+ rc = ipw_get_fw(priv, &firmware, IPW_FW_NAME("sniffer"));
+ break;
+#endif
+ case IW_MODE_INFRA:
+ rc = ipw_get_fw(priv, &firmware, IPW_FW_NAME("bss"));
+ break;
+
+ default:
+ rc = -EINVAL;
+ }
+
+ if (rc)
+ goto error;
+
+#ifdef CONFIG_PM
+ fw_loaded = 1;
+ }
+#endif
+
+ if (!priv->rxq)
+ priv->rxq = ipw_rx_queue_alloc(priv);
+ else
+ ipw_rx_queue_reset(priv, priv->rxq);
+ if (!priv->rxq) {
+ IPW_ERROR("Unable to initialize Rx queue\n");
+ goto error;
+ }
+
+ retry:
+ /* Ensure interrupts are disabled */
+ ipw_write32(priv, CX2_INTA_MASK_R, ~CX2_INTA_MASK_ALL);
+ priv->status &= ~STATUS_INT_ENABLED;
+
+ /* ack pending interrupts */
+ ipw_write32(priv, CX2_INTA_RW, CX2_INTA_MASK_ALL);
+
+ ipw_stop_nic(priv);
+
+ rc = ipw_reset_nic(priv);
+ if (rc) {
+ IPW_ERROR("Unable to reset NIC\n");
+ goto error;
+ }
+
+ ipw_zero_memory(priv, CX2_NIC_SRAM_LOWER_BOUND,
+ CX2_NIC_SRAM_UPPER_BOUND - CX2_NIC_SRAM_LOWER_BOUND);
+
+ /* DMA the initial boot firmware into the device */
+ rc = ipw_load_firmware(priv, bootfw->data + sizeof(struct fw_header),
+ bootfw->size - sizeof(struct fw_header));
+ if (rc < 0) {
+ IPW_ERROR("Unable to load boot firmware\n");
+ goto error;
+ }
+
+ /* kick start the device */
+ ipw_start_nic(priv);
+
+ /* wait for the device to finish its initial startup sequence */
+ rc = ipw_poll_bit(priv, CX2_INTA_RW,
+ CX2_INTA_BIT_FW_INITIALIZATION_DONE, 500);
+ if (rc < 0) {
+ IPW_ERROR("device failed to boot initial fw image\n");
+ goto error;
+ }
+ IPW_DEBUG_INFO("initial device response after %dms\n", rc);
+
+ /* ack fw init done interrupt */
+ ipw_write32(priv, CX2_INTA_RW, CX2_INTA_BIT_FW_INITIALIZATION_DONE);
+
+ /* DMA the ucode into the device */
+ rc = ipw_load_ucode(priv, ucode->data + sizeof(struct fw_header),
+ ucode->size - sizeof(struct fw_header));
+ if (rc < 0) {
+ IPW_ERROR("Unable to load ucode\n");
+ goto error;
+ }
+
+ /* stop nic */
+ ipw_stop_nic(priv);
+
+ /* DMA bss firmware into the device */
+ rc = ipw_load_firmware(priv, firmware->data +
+ sizeof(struct fw_header),
+ firmware->size - sizeof(struct fw_header));
+ if (rc < 0 ) {
+ IPW_ERROR("Unable to load firmware\n");
+ goto error;
+ }
+
+ ipw_write32(priv, IPW_EEPROM_LOAD_DISABLE, 0);
+
+ rc = ipw_queue_reset(priv);
+ if (rc) {
+ IPW_ERROR("Unable to initialize queues\n");
+ goto error;
+ }
+
+ /* Ensure interrupts are disabled */
+ ipw_write32(priv, CX2_INTA_MASK_R, ~CX2_INTA_MASK_ALL);
+
+ /* kick start the device */
+ ipw_start_nic(priv);
+
+ if (ipw_read32(priv, CX2_INTA_RW) & CX2_INTA_BIT_PARITY_ERROR) {
+ if (retries > 0) {
+ IPW_WARNING("Parity error. Retrying init.\n");
+ retries--;
+ goto retry;
+ }
+
+ IPW_ERROR("TODO: Handle parity error -- schedule restart?\n");
+ rc = -EIO;
+ goto error;
+ }
+
+ /* wait for the device */
+ rc = ipw_poll_bit(priv, CX2_INTA_RW,
+ CX2_INTA_BIT_FW_INITIALIZATION_DONE, 500);
+ if (rc < 0) {
+ IPW_ERROR("device failed to start after 500ms\n");
+ goto error;
+ }
+ IPW_DEBUG_INFO("device response after %dms\n", rc);
+
+ /* ack fw init done interrupt */
+ ipw_write32(priv, CX2_INTA_RW, CX2_INTA_BIT_FW_INITIALIZATION_DONE);
+
+ /* read eeprom data and initialize the eeprom region of sram */
+ priv->eeprom_delay = 1;
+ ipw_eeprom_init_sram(priv);
+
+ /* enable interrupts */
+ ipw_enable_interrupts(priv);
+
+ ipw_rx_queue_replenish(priv);
+ ipw_write32(priv, CX2_RX_READ_INDEX, priv->rxq->read);
+
+ /* ack pending interrupts */
+ ipw_write32(priv, CX2_INTA_RW, CX2_INTA_MASK_ALL);
+
+#ifndef CONFIG_PM
+ release_firmware(bootfw);
+ release_firmware(ucode);
+ release_firmware(firmware);
+#endif
+ return 0;
+
+ error:
+ if (priv->rxq) {
+ ipw_rx_queue_free(priv, priv->rxq);
+ priv->rxq = NULL;
+ }
+ ipw_tx_queue_free(priv);
+ if (bootfw)
+ release_firmware(bootfw);
+ if (ucode)
+ release_firmware(ucode);
+ if (firmware)
+ release_firmware(firmware);
+#ifdef CONFIG_PM
+ fw_loaded = 0;
+#endif
+
+ return rc;
+}
+
+/**
+ * DMA services
+ *
+ * Theory of operation
+ *
+ * A queue is a circular buffer with 'Read' and 'Write' pointers.
+ * Two empty entries are always kept in the buffer to protect from overflow.
+ *
+ * For the Tx queue, there are low mark and high mark limits. If, after
+ * queuing a packet for Tx, the free space becomes < low mark, the Tx queue
+ * is stopped. When reclaiming packets (on the 'tx done' IRQ), if the free
+ * space becomes > high mark, the Tx queue is resumed.
+ *
+ * The IPW operates with six queues: one receive queue in the device's
+ * sram, one transmit queue for sending commands to the device firmware,
+ * and four transmit queues for data.
+ *
+ * The four transmit queues allow for performing quality of service (qos)
+ * transmissions as per the 802.11 protocol. Currently Linux does not
+ * provide a mechanism to the user for utilizing prioritized queues, so
+ * we only utilize the first data transmit queue (queue1).
+ * A worked example of the free-space arithmetic follows this comment.
+ */
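+
+/*
+ * A worked example of the free-space arithmetic in ipw_queue_space()
+ * below, assuming an illustrative queue with n_bd = 64 entries,
+ * first_empty = 10 (write pointer) and last_used = 5 (read pointer):
+ *
+ * s = last_used - first_empty = 5 - 10 = -5
+ * s <= 0, so s += n_bd -> 59
+ * s -= 2 (overflow reserve) -> 57 usable slots
+ *
+ * The two-entry reserve keeps the write pointer from catching up to the
+ * read pointer, so a full queue is never mistaken for an empty one.
+ */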
+
+/**
+ * Return the number of free slots in a queue, keeping the two-entry
+ * overflow reserve described above.
+ */
+
+static inline int ipw_queue_space(const struct clx2_queue *q)
+{
+ int s = q->last_used - q->first_empty;
+ if (s <= 0)
+ s += q->n_bd;
+ s -= 2; /* keep some reserve to not confuse empty and full situations */
+ if (s < 0)
+ s = 0;
+ return s;
+}
+
+static inline int ipw_queue_inc_wrap(int index, int n_bd)
+{
+ return (++index == n_bd) ? 0 : index;
+}
+
+/**
+ * Initialize common DMA queue structure
+ *
+ * @param q queue to init
+ * @param count Number of BD's to allocate. Should be power of 2
+ * @param read Address for 'read' register
+ * (not offset within BAR, full address)
+ * @param write Address for 'write' register
+ * (not offset within BAR, full address)
+ * @param base Address for 'base' register
+ * (not offset within BAR, full address)
+ * @param size Address for 'size' register
+ * (not offset within BAR, full address)
+ *
+ * A short worked example of the watermark values follows the function below.
+ */
+static void ipw_queue_init(struct ipw_priv *priv, struct clx2_queue *q,
+ int count, u32 read, u32 write,
+ u32 base, u32 size)
+{
+ q->n_bd = count;
+
+ q->low_mark = q->n_bd / 4;
+ if (q->low_mark < 4)
+ q->low_mark = 4;
+
+ q->high_mark = q->n_bd / 8;
+ if (q->high_mark < 2)
+ q->high_mark = 2;
+
+ q->first_empty = q->last_used = 0;
+ q->reg_r = read;
+ q->reg_w = write;
+
+ ipw_write32(priv, base, q->dma_addr);
+ ipw_write32(priv, size, count);
+ ipw_write32(priv, read, 0);
+ ipw_write32(priv, write, 0);
+
+ _ipw_read32(priv, 0x90);
+}
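+
+/*
+ * For illustration: a queue initialized with count = 32 gets
+ * low_mark = 32 / 4 = 8 and high_mark = 32 / 8 = 4, with floors of 4
+ * and 2 for very small queues. (Note that ipw_tx_skb() further below
+ * stops the net queue once ipw_queue_space() drops under high_mark.)
+ */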
+
+static int ipw_queue_tx_init(struct ipw_priv *priv,
+ struct clx2_tx_queue *q,
+ int count, u32 read, u32 write,
+ u32 base, u32 size)
+{
+ struct pci_dev *dev = priv->pci_dev;
+
+ q->txb = kmalloc(sizeof(q->txb[0]) * count, GFP_KERNEL);
+ if (!q->txb) {
+ IPW_ERROR("vmalloc for auxilary BD structures failed\n");
+ return -ENOMEM;
+ }
+
+ q->bd = pci_alloc_consistent(dev, sizeof(q->bd[0]) * count,
+ &q->q.dma_addr);
+ if (!q->bd) {
+ IPW_ERROR("pci_alloc_consistent(%d) failed\n",
+ sizeof(q->bd[0]) * count);
+ kfree(q->txb);
+ q->txb = NULL;
+ return -ENOMEM;
+ }
+
+ ipw_queue_init(priv, &q->q, count, read, write, base, size);
+ return 0;
+}
+
+/**
+ * Free one TFD, the one at index [txq->q.last_used].
+ * Do NOT advance any indexes
+ *
+ * @param dev
+ * @param txq
+ */
+static void ipw_queue_tx_free_tfd(struct ipw_priv *priv,
+ struct clx2_tx_queue *txq)
+{
+ struct tfd_frame *bd = &txq->bd[txq->q.last_used];
+ struct pci_dev *dev = priv->pci_dev;
+ int i;
+
+ /* classify bd */
+ if (bd->control_flags.message_type == TX_HOST_COMMAND_TYPE)
+ /* nothing to cleanup after for host commands */
+ return;
+
+ /* sanity check */
+ if (bd->u.data.num_chunks > NUM_TFD_CHUNKS) {
+ IPW_ERROR("Too many chunks: %i\n", bd->u.data.num_chunks);
+ /** @todo issue fatal error, it is quite serious situation */
+ return;
+ }
+
+ /* unmap chunks if any */
+ for (i = 0; i < bd->u.data.num_chunks; i++) {
+ pci_unmap_single(dev, bd->u.data.chunk_ptr[i],
+ bd->u.data.chunk_len[i], PCI_DMA_TODEVICE);
+ if (txq->txb[txq->q.last_used]) {
+ ieee80211_txb_free(txq->txb[txq->q.last_used]);
+ txq->txb[txq->q.last_used] = NULL;
+ }
+ }
+}
+
+/**
+ * Deallocate DMA queue.
+ *
+ * Empty queue by removing and destroying all BD's.
+ * Free all buffers.
+ *
+ * @param dev
+ * @param q
+ */
+static void ipw_queue_tx_free(struct ipw_priv *priv,
+ struct clx2_tx_queue *txq)
+{
+ struct clx2_queue *q = &txq->q;
+ struct pci_dev *dev = priv->pci_dev;
+
+ if (q->n_bd == 0)
+ return;
+
+ /* first, empty all BD's */
+ for (; q->first_empty != q->last_used;
+ q->last_used = ipw_queue_inc_wrap(q->last_used, q->n_bd)) {
+ ipw_queue_tx_free_tfd(priv, txq);
+ }
+
+ /* free buffers belonging to queue itself */
+ pci_free_consistent(dev, sizeof(txq->bd[0])*q->n_bd, txq->bd,
+ q->dma_addr);
+ kfree(txq->txb);
+
+ /* 0 fill whole structure */
+ memset(txq, 0, sizeof(*txq));
+}
+
+
+/**
+ * Destroy all DMA queues and structures
+ *
+ * @param priv
+ */
+static void ipw_tx_queue_free(struct ipw_priv *priv)
+{
+ /* Tx CMD queue */
+ ipw_queue_tx_free(priv, &priv->txq_cmd);
+
+ /* Tx queues */
+ ipw_queue_tx_free(priv, &priv->txq[0]);
+ ipw_queue_tx_free(priv, &priv->txq[1]);
+ ipw_queue_tx_free(priv, &priv->txq[2]);
+ ipw_queue_tx_free(priv, &priv->txq[3]);
+}
+
+static inline void __maybe_wake_tx(struct ipw_priv *priv)
+{
+ if (netif_running(priv->net_dev)) {
+ switch (priv->port_type) {
+ case DCR_TYPE_MU_BSS:
+ case DCR_TYPE_MU_IBSS:
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ return;
+ }
+ }
+ netif_wake_queue(priv->net_dev);
+ }
+
+}
+
+static inline void ipw_create_bssid(struct ipw_priv *priv, u8 *bssid)
+{
+ /* First 3 bytes are manufacturer */
+ bssid[0] = priv->mac_addr[0];
+ bssid[1] = priv->mac_addr[1];
+ bssid[2] = priv->mac_addr[2];
+
+ /* Last bytes are random */
+ get_random_bytes(&bssid[3], ETH_ALEN-3);
+
+ bssid[0] &= 0xfe; /* clear multicast bit */
+ bssid[0] |= 0x02; /* set local assignment bit (IEEE802) */
+}
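+
+/*
+ * For illustration, assuming a hypothetical adapter MAC of
+ * 00:12:34:56:78:9a: the generated ad-hoc BSSID keeps the 00:12:34
+ * manufacturer prefix and randomizes the last three bytes, ending up
+ * as 02:12:34:xx:yy:zz once the multicast bit is cleared and the
+ * locally administered bit is set.
+ */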
+
+static inline u8 ipw_add_station(struct ipw_priv *priv, u8 *bssid)
+{
+ struct ipw_station_entry entry;
+ int i;
+
+ for (i = 0; i < priv->num_stations; i++)
+ if (!memcmp(priv->stations[i], bssid, ETH_ALEN))
+ return i;
+
+ if (i == MAX_STATIONS)
+ return IPW_INVALID_STATION;
+
+ entry.reserved = 0;
+ entry.support_mode = 0;
+ memcpy(entry.mac_addr, bssid, ETH_ALEN);
+ memcpy(priv->stations[i], bssid, ETH_ALEN);
+ ipw_write_direct(priv, IPW_STATION_TABLE_LOWER + i * sizeof(entry),
+ &entry,
+ sizeof(entry));
+ priv->num_stations++;
+
+ return i;
+}
+
+static inline u8 ipw_find_station(struct ipw_priv *priv, u8 *bssid)
+{
+ int i;
+
+ for (i = 0; i < priv->num_stations; i++)
+ if (!memcmp(priv->stations[i], bssid, ETH_ALEN))
+ return i;
+
+ return IPW_INVALID_STATION;
+}
+
+static int ipw_tx_skb(struct ipw_priv *priv, struct sk_buff *skb)
+{
+ int rc = 0, i = 0;
+ struct tfd_frame *tfd;
+ struct clx2_tx_queue *txq = &priv->txq[0];
+ struct clx2_queue *q = &txq->q;
+ struct ieee80211_txb *txb;
+ u16 frame_ctl;
+ u8 dst[ETH_ALEN];
+ u8 src[ETH_ALEN];
+
+ memcpy(dst, skb->data, ETH_ALEN);
+ memcpy(src, skb->data + ETH_ALEN, ETH_ALEN);
+
+ txb = ieee80211_skb_to_txb(priv->ieee, skb);
+ if (txb == NULL) {
+ IPW_DEBUG_DROP("Failed to Tx packet\n");
+
+ /*
+ this is the one and only error path that should free
+ the skb here, since ieee80211_skb_to_txb frees the skb
+ itself when there are no errors
+ */
+ dev_kfree_skb(skb);
+ return -EIO;
+ }
+
+ /* If the call succeeded, but packet was dropped (valid scenario
+ * in WPA configurations) */
+ if (txb->nr_frags == 0) {
+ rc = 0;
+ goto out_free_txb;
+ }
+
+ if (txb->nr_frags > NUM_TFD_CHUNKS - 1) {
+ /* TODO: merge fragments back into a single buffer */
+ IPW_ERROR("Too many 80211 fragments, lower the "
+ "device FTS\n");
+ goto out_free_txb;
+ }
+
+ tfd = &txq->bd[q->first_empty];
+ txq->txb[q->first_empty] = txb;
+ memset(tfd, 0, sizeof(*tfd));
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ u8 id = ipw_find_station(priv, dst);
+ if (id == IPW_INVALID_STATION) {
+ if (is_broadcast_ether_addr(dst) ||
+ is_multicast_ether_addr(dst)) {
+ id = ipw_add_station(priv, dst);
+ } else {
+ IPW_ERROR("Attempt to send data to "
+ "non-existent cell: " MAC_FMT "\n",
+ MAC_ARG(dst));
+ rc = -EIO;
+ goto out_free_txb;
+ }
+ }
+ tfd->u.data.station_number = id;
+ }
+
+ tfd->control_flags.message_type = TX_FRAME_TYPE;
+ tfd->control_flags.control_bits = TFD_NEED_IRQ_MASK;
+
+ tfd->u.data.cmd_id = DINO_CMD_TX;
+ tfd->u.data.len = txb->payload_size;
+ tfd->u.data.tx_flags = DCT_FLAG_NO_WEP | DCT_FLAG_ACK_REQD;
+
+ if (priv->ieee->modulation & IEEE80211_OFDM_MODULATION)
+ tfd->u.data.tx_flags_ext = DCT_FLAG_EXT_MODE_OFDM;
+ else
+ tfd->u.data.tx_flags_ext = DCT_FLAG_EXT_MODE_CCK;
+ if (priv->config & CFG_PREAMBLE)
+ tfd->u.data.tx_flags |= DCT_FLAG_SHORT_PREMBL;
+ frame_ctl = IEEE80211_FTYPE_DATA | IEEE80211_STYPE_DATA;
+ if (priv->ieee->iw_mode == IW_MODE_INFRA) {
+ frame_ctl |= IEEE80211_FCTL_TODS;
+ /* To DS: Addr1 = BSSID, Addr2 = SA, Addr3 = DA */
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr1, priv->ieee->bssid,
+ ETH_ALEN);
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr2, src, ETH_ALEN);
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr3, dst, ETH_ALEN);
+ } else if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ /* not From/To DS: Addr1 = DA, Addr2 = SA, Addr3 = BSSID */
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr1, dst, ETH_ALEN);
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr2, src, ETH_ALEN);
+ memcpy(tfd->u.data.tfd.tfd_24.mchdr.addr3, priv->ieee->bssid,
+ ETH_ALEN);
+ } else {
+ IPW_ERROR("Unknown ieee->iw_mode in Tx: %d\n",
+ priv->ieee->iw_mode);
+ }
+ if (txb->encrypted)
+ frame_ctl |= IEEE80211_FCTL_WEP;
+
+ tfd->u.data.tfd.tfd_24.mchdr.ctrl1 = frame_ctl & 0xff;
+ tfd->u.data.tfd.tfd_24.mchdr.ctrl2 = (frame_ctl >> 8) & 0xff;
+
+ /* payload */
+ tfd->u.data.num_chunks = txb->nr_frags;
+ for (i = 0; i < tfd->u.data.num_chunks; i++) {
+ IPW_DEBUG_TX("Dumping TX packet frag %i of %i (%d bytes):\n",
+ i, tfd->u.data.num_chunks,
+ txb->fragments[i]->len);
+ printk_buf(IPW_DL_TX, txb->fragments[i]->data,
+ txb->fragments[i]->len);
+
+ tfd->u.data.chunk_ptr[i] = pci_map_single(
+ priv->pci_dev, txb->fragments[i]->data,
+ txb->fragments[i]->len, PCI_DMA_TODEVICE);
+ tfd->u.data.chunk_len[i] = txb->fragments[i]->len;
+ }
+
+ /* kick DMA */
+ q->first_empty = ipw_queue_inc_wrap(q->first_empty, q->n_bd);
+ ipw_write32(priv, q->reg_w, q->first_empty);
+
+ /* Keep TIMEOUT from happening while we're still developing... */
+ priv->net_dev->trans_start = jiffies;
+
+ if (ipw_queue_space(q) < q->high_mark) {
+ IPW_DEBUG_INFO("Stopping network queue\n");
+ netif_stop_queue(priv->net_dev);
+ }
+
+ goto done;
+
+ out_free_txb:
+ priv->net_dev->trans_start = jiffies;
+ ieee80211_txb_free(txb);
+ txq->txb[q->first_empty] = NULL;
+ done:
+ return rc;
+}
+
+static void ipw_disassociate(void *data)
+{
+ struct ipw_priv *priv = data;
+ int err;
+
+ if (!(priv->status & (STATUS_ASSOCIATING | STATUS_ASSOCIATED))) {
+ IPW_DEBUG_ASSOC("Disassociating while not associated.\n");
+ return;
+ }
+
+ IPW_DEBUG_ASSOC("Disassocation attempt from " MAC_FMT " "
+ "on channel %d.\n",
+ MAC_ARG(priv->assoc_request.bssid),
+ priv->assoc_request.channel);
+ priv->assoc_request.assoc_type = HC_DISASSOCIATE;
+ err = ipw_send_associate(priv, &priv->assoc_request);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send [dis]associate command "
+ "failed.\n");
+ return;
+ }
+
+ priv->status &= ~(STATUS_ASSOCIATING | STATUS_ASSOCIATED);
+ priv->status |= STATUS_DISASSOCIATING;
+}
+
+static void notify_wx_assoc_event(struct ipw_priv *priv)
+{
+ union iwreq_data wrqu;
+ wrqu.ap_addr.sa_family = ARPHRD_ETHER;
+ if (priv->status & STATUS_ASSOCIATED)
+ memcpy(wrqu.ap_addr.sa_data, priv->bssid, ETH_ALEN);
+ else
+ memset(wrqu.ap_addr.sa_data, 0, ETH_ALEN);
+ wireless_send_event(priv->net_dev, SIOCGIWAP, &wrqu, NULL);
+}
+
+static inline void ipw_reset_counters(struct ipw_priv *priv)
+{
+ priv->missed_beacons = 0;
+ priv->tx_packets = 0;
+}
+
+struct ipw_status_code {
+ u16 status;
+ const char *reason;
+};
+
+static const struct ipw_status_code ipw_status_codes[] = {
+ {0x00, "Successful"},
+ {0x01, "Unspecified failure"},
+ {0x0A, "Cannot support all requested capabilities in the "
+ "Capability information field"},
+ {0x0B, "Reassociation denied due to inability to confirm that "
+ "association exists"},
+ {0x0C, "Association denied due to reason outside the scope of this "
+ "standard"},
+ {0x0D, "Responding station does not support the specified authentication "
+ "algorithm"},
+ {0x0E, "Received an Authentication frame with authentication sequence "
+ "transaction sequence number out of expected sequence"},
+ {0x0F, "Authentication rejected because of challenge failure"},
+ {0x10, "Authentication rejected due to timeout waiting for next "
+ "frame in sequence"},
+ {0x11, "Association denied because AP is unable to handle additional "
+ "associated stations"},
+ {0x12, "Association denied due to requesting station not supporting all "
+ "of the datarates in the BSSBasicServiceSet Parameter"},
+ {0x13, "Association denied due to requesting station not supporting "
+ "short preamble operation"},
+ {0x14, "Association denied due to requesting station not supporting "
+ "PBCC encoding"},
+ {0x15, "Association denied due to requesting station not supporting "
+ "channel agility"},
+ {0x19, "Association denied due to requesting station not supporting "
+ "short slot operation"},
+ {0x1A, "Association denied due to requesting station not supporting "
+ "DSSS-OFDM operation"},
+ {0x28, "Invalid Information Element"},
+ {0x29, "Group Cipher is not valid"},
+ {0x2A, "Pairwise Cipher is not valid"},
+ {0x2B, "AKMP is not valid"},
+ {0x2C, "Unsupported RSN IE version"},
+ {0x2D, "Invalid RSN IE Capabilities"},
+ {0x2E, "Cipher suite is rejected per security policy"},
+};
+
+static const char *ipw_get_status_code(u16 status)
+{
+ int i;
+ for (i = 0; i < ARRAY_SIZE(ipw_status_codes); i++)
+ if (ipw_status_codes[i].status == status)
+ return ipw_status_codes[i].reason;
+ return "Unknown status value.";
+}
+
+/**
+ * Handle host notification packet.
+ * Called from interrupt routine
+ */
+static inline void ipw_rx_notification(struct ipw_priv* priv,
+ struct ipw_rx_notification *notif)
+{
+ IPW_DEBUG_NOTIF("type = %i (%d bytes)\n",
+ notif->subtype, notif->size);
+
+ switch (notif->subtype) {
+ case HOST_NOTIFICATION_STATUS_ASSOCIATED: {
+ struct notif_association *assoc = &notif->u.assoc;
+
+ switch (assoc->state) {
+ case CMAS_ASSOCIATED: {
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "associated: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+
+ switch (priv->ieee->iw_mode) {
+ case IW_MODE_INFRA:
+ memcpy(priv->ieee->bssid, priv->bssid,
+ ETH_ALEN);
+ break;
+
+ case IW_MODE_ADHOC:
+ memcpy(priv->ieee->bssid, priv->bssid,
+ ETH_ALEN);
+ break;
+ }
+
+ ipw_reset_counters(priv);
+
+ priv->status &= ~STATUS_ASSOCIATING;
+ priv->status |= STATUS_ASSOCIATED;
+
+ netif_carrier_on(priv->net_dev);
+ if (netif_queue_stopped(priv->net_dev)) {
+ IPW_DEBUG_NOTIF("waking queue\n");
+ netif_wake_queue(priv->net_dev);
+ } else {
+ IPW_DEBUG_NOTIF("starting queue\n");
+ netif_start_queue(priv->net_dev);
+ }
+
+ notify_wx_assoc_event(priv);
+ break;
+ }
+
+ case CMAS_AUTHENTICATED: {
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "authenticated: '%s' " MAC_FMT "\n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+ break;
+ }
+
+ case CMAS_INIT: {
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "disassociated: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+
+ priv->status &= ~(
+ STATUS_DISASSOCIATING |
+ STATUS_ASSOCIATING |
+ STATUS_ASSOCIATED |
+ STATUS_AUTH);
+
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+ notify_wx_assoc_event(priv);
+
+ /* Cancel any queued work ... */
+ cancel_delayed_work(&priv->request_scan);
+
+ /* Queue up another scan... */
+ queue_work(priv->workqueue, &priv->request_scan);
+ break;
+ }
+
+ default:
+ IPW_ERROR("assoc: unknown (%d)\n",
+ assoc->state);
+ break;
+ }
+
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_AUTHENTICATE: {
+ struct notif_authenticate *auth = &notif->u.auth;
+ switch (auth->state) {
+ case CMAS_AUTHENTICATED:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE,
+ "authenticated: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+ priv->status |= STATUS_AUTH;
+ break;
+
+ case CMAS_INIT:
+ if (priv->status & STATUS_AUTH) {
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "authentication failed (0x%04X): %s\n",
+ ntohs(auth->status),
+ ipw_get_status_code(ntohs(auth->status)));
+ }
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "deauthenticated: '%s' " MAC_FMT "\n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+
+ priv->status &= ~(STATUS_ASSOCIATING |
+ STATUS_ASSOCIATED |
+ STATUS_AUTH);
+
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+ notify_wx_assoc_event(priv);
+ queue_work(priv->workqueue, &priv->request_scan);
+ break;
+
+ case CMAS_TX_AUTH_SEQ_1:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_1\n");
+ break;
+ case CMAS_RX_AUTH_SEQ_2:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_2\n");
+ break;
+ case CMAS_AUTH_SEQ_1_PASS:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_1_PASS\n");
+ break;
+ case CMAS_AUTH_SEQ_1_FAIL:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_1_FAIL\n");
+ break;
+ case CMAS_TX_AUTH_SEQ_3:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_3\n");
+ break;
+ case CMAS_RX_AUTH_SEQ_4:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "RX_AUTH_SEQ_4\n");
+ break;
+ case CMAS_AUTH_SEQ_2_PASS:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUTH_SEQ_2_PASS\n");
+ break;
+ case CMAS_AUTH_SEQ_2_FAIL:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "AUT_SEQ_2_FAIL\n");
+ break;
+ case CMAS_TX_ASSOC:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "TX_ASSOC\n");
+ break;
+ case CMAS_RX_ASSOC_RESP:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "RX_ASSOC_RESP\n");
+ break;
+ case CMAS_ASSOCIATED:
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE | IPW_DL_ASSOC,
+ "ASSOCIATED\n");
+ break;
+ default:
+ IPW_DEBUG_NOTIF("auth: failure - %d\n", auth->state);
+ break;
+ }
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_SCAN_CHANNEL_RESULT: {
+ struct notif_channel_result *x = &notif->u.channel_result;
+
+ if (notif->size == sizeof(*x)) {
+ IPW_DEBUG_SCAN("Scan result for channel %d\n",
+ x->channel_num);
+ } else {
+ IPW_DEBUG_SCAN("Scan result of wrong size %d "
+ "(should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ }
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_SCAN_COMPLETED: {
+ struct notif_scan_complete *x = &notif->u.scan_complete;
+ if (notif->size == sizeof(*x)) {
+ IPW_DEBUG_SCAN("Scan completed: type %d, %d channels, "
+ "%d status\n",
+ x->scan_type,
+ x->num_channels,
+ x->status);
+ } else {
+ IPW_ERROR("Scan completed of wrong size %d "
+ "(should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ }
+
+ priv->status &= ~(STATUS_SCANNING | STATUS_SCAN_ABORTING);
+ if (priv->status & STATUS_SCAN_PENDING)
+ queue_work(priv->workqueue, &priv->request_scan);
+ else if (!(priv->status & (STATUS_ASSOCIATED |
+ STATUS_ASSOCIATING)))
+ queue_work(priv->workqueue, &priv->associate);
+ else if (priv->config & CFG_ASSOCIATE)
+ queue_delayed_work(
+ priv->workqueue, &priv->request_scan,
+ SCAN_INTERVAL);
+ priv->ieee->scans++;
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_FRAG_LENGTH: {
+ struct notif_frag_length *x = &notif->u.frag_len;
+
+ if (notif->size == sizeof(*x)) {
+ IPW_ERROR("Frag length: %d\n", x->frag_length);
+ } else {
+ IPW_ERROR("Frag length of wrong size %d "
+ "(should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ }
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_LINK_DETERIORATION: {
+ struct notif_link_deterioration *x =
+ &notif->u.link_deterioration;
+ if (notif->size == sizeof(*x)) {
+ IPW_DEBUG(IPW_DL_NOTIF | IPW_DL_STATE,
+ "link deterioration: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+ memcpy(&priv->last_link_deterioration, x, sizeof(*x));
+ } else {
+ IPW_ERROR("Link Deterioration of wrong size %d "
+ "(should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ }
+ break;
+ }
+
+ case HOST_NOTIFICATION_DINO_CONFIG_RESPONSE: {
+ IPW_ERROR("Dino config\n");
+ if (priv->hcmd && priv->hcmd->cmd == HOST_CMD_DINO_CONFIG) {
+ /* TODO: Do anything special? */
+ } else {
+ IPW_ERROR("Unexpected DINO_CONFIG_RESPONSE\n");
+ }
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_BEACON_STATE: {
+ struct notif_beacon_state *x = &notif->u.beacon_state;
+ if (notif->size != sizeof(*x)) {
+ IPW_ERROR("Beacon state of wrong size %d (should "
+ "be %d)\n", notif->size, (int)sizeof(*x));
+ break;
+ }
+
+ if (x->state == HOST_NOTIFICATION_STATUS_BEACON_MISSING) {
+ if (priv->status & STATUS_SCANNING) {
+ /* TODO: Stop scan to keep fw from getting
+ * stuck... */
+ IPW_WARNING("TODO: Stop scan\n");
+ }
+
+ if (x->number - priv->missed_beacons >
+ priv->missed_beacon_threshold &&
+ priv->status & STATUS_ASSOCIATED) {
+ IPW_DEBUG(IPW_DL_INFO | IPW_DL_NOTIF,
+ "Missed beacon: %d - disassociate\n",
+ x->number);
+ priv->missed_beacons = x->number;
+ queue_work(priv->workqueue,
+ &priv->disassociate);
+ } else if (x->number - priv->missed_beacons >
+ priv->roaming_threshold) {
+ IPW_DEBUG_NOTIF("Missed beacon: %d - initiate "
+ "roaming\n",
+ x->number);
+ /* TODO: Roaming... */
+ priv->missed_beacons = x->number;
+ } else {
+ IPW_DEBUG_NOTIF("Missed beacon: %d\n",
+ x->number);
+ }
+ }
+
+
+ break;
+ }
+
+ case HOST_NOTIFICATION_STATUS_TGI_TX_KEY: {
+ struct notif_tgi_tx_key *x = &notif->u.tgi_tx_key;
+ if (notif->size == sizeof(*x)) {
+ IPW_ERROR("TGi Tx Key: state 0x%02x sec type "
+ "0x%02x station %d\n",
+ x->key_state, x->security_type,
+ x->station_index);
+ break;
+ }
+
+ IPW_ERROR("TGi Tx Key of wrong size %d (should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ break;
+ }
+
+ case HOST_NOTIFICATION_CALIB_KEEP_RESULTS: {
+ struct notif_calibration *x = &notif->u.calibration;
+
+ if (notif->size == sizeof(*x)) {
+ memcpy(&priv->calib, x, sizeof(*x));
+ IPW_DEBUG_INFO("TODO: Calibration\n");
+ break;
+ }
+
+ IPW_ERROR("Calibration of wrong size %d (should be %d)\n",
+ notif->size, (int)sizeof(*x));
+ break;
+ }
+
+ default:
+ IPW_ERROR("Unknown notification: "
+ "subtype=%d,flags=0x%2x,size=%d\n",
+ notif->subtype, notif->flags, notif->size);
+ }
+}
+
+/**
+ * Destroys all DMA structures and initialises them again
+ *
+ * @param priv
+ * @return error code
+ */
+static int ipw_queue_reset(struct ipw_priv *priv)
+{
+ int rc = 0;
+ /** @todo customize queue sizes */
+ int nTx = 64, nTxCmd = 8;
+ ipw_tx_queue_free(priv);
+ /* Tx CMD queue */
+ rc = ipw_queue_tx_init(priv, &priv->txq_cmd, nTxCmd,
+ CX2_TX_CMD_QUEUE_READ_INDEX,
+ CX2_TX_CMD_QUEUE_WRITE_INDEX,
+ CX2_TX_CMD_QUEUE_BD_BASE,
+ CX2_TX_CMD_QUEUE_BD_SIZE);
+ if (rc) {
+ IPW_ERROR("Tx Cmd queue init failed\n");
+ goto error;
+ }
+ /* Tx queue(s) */
+ rc = ipw_queue_tx_init(priv, &priv->txq[0], nTx,
+ CX2_TX_QUEUE_0_READ_INDEX,
+ CX2_TX_QUEUE_0_WRITE_INDEX,
+ CX2_TX_QUEUE_0_BD_BASE,
+ CX2_TX_QUEUE_0_BD_SIZE);
+ if (rc) {
+ IPW_ERROR("Tx 0 queue init failed\n");
+ goto error;
+ }
+ rc = ipw_queue_tx_init(priv, &priv->txq[1], nTx,
+ CX2_TX_QUEUE_1_READ_INDEX,
+ CX2_TX_QUEUE_1_WRITE_INDEX,
+ CX2_TX_QUEUE_1_BD_BASE,
+ CX2_TX_QUEUE_1_BD_SIZE);
+ if (rc) {
+ IPW_ERROR("Tx 1 queue init failed\n");
+ goto error;
+ }
+ rc = ipw_queue_tx_init(priv, &priv->txq[2], nTx,
+ CX2_TX_QUEUE_2_READ_INDEX,
+ CX2_TX_QUEUE_2_WRITE_INDEX,
+ CX2_TX_QUEUE_2_BD_BASE,
+ CX2_TX_QUEUE_2_BD_SIZE);
+ if (rc) {
+ IPW_ERROR("Tx 2 queue init failed\n");
+ goto error;
+ }
+ rc = ipw_queue_tx_init(priv, &priv->txq[3], nTx,
+ CX2_TX_QUEUE_3_READ_INDEX,
+ CX2_TX_QUEUE_3_WRITE_INDEX,
+ CX2_TX_QUEUE_3_BD_BASE,
+ CX2_TX_QUEUE_3_BD_SIZE);
+ if (rc) {
+ IPW_ERROR("Tx 3 queue init failed\n");
+ goto error;
+ }
+ /* statistics */
+ priv->rx_bufs_min = 0;
+ priv->rx_pend_max = 0;
+ return rc;
+
+ error:
+ ipw_tx_queue_free(priv);
+ return rc;
+}
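+
+/* An equivalent table-driven bring-up of the four Tx queues above could
+ * look like the sketch below (illustrative only, names hypothetical;
+ * the CX2_* register constants are not assumed to be contiguous, hence
+ * the explicit table):
+ *
+ *	static const struct { u32 rd, wr, base, size; } tx_regs[4] = {
+ *		{ CX2_TX_QUEUE_0_READ_INDEX, CX2_TX_QUEUE_0_WRITE_INDEX,
+ *		  CX2_TX_QUEUE_0_BD_BASE, CX2_TX_QUEUE_0_BD_SIZE },
+ *		... same pattern for queues 1, 2 and 3 ...
+ *	};
+ *
+ *	for (i = 0; i < 4; i++) {
+ *		rc = ipw_queue_tx_init(priv, &priv->txq[i], nTx,
+ *				       tx_regs[i].rd, tx_regs[i].wr,
+ *				       tx_regs[i].base, tx_regs[i].size);
+ *		if (rc) {
+ *			IPW_ERROR("Tx %d queue init failed\n", i);
+ *			goto error;
+ *		}
+ *	}
+ */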
+
+/**
+ * Reclaim Tx queue entries no longer used by the NIC.
+ *
+ * When the FW advances the 'R' index, all entries between the old and
+ * new 'R' index need to be reclaimed. As a result, some free space
+ * forms. If there is enough free space (> low mark), wake the Tx queue.
+ *
+ * @note Need to protect against garbage in 'R' index
+ * @param priv
+ * @param txq
+ * @param qindex
+ * @return Number of used entries remaining in the queue
+ */
+static int ipw_queue_tx_reclaim(struct ipw_priv *priv,
+ struct clx2_tx_queue *txq, int qindex)
+{
+ u32 hw_tail;
+ int used;
+ struct clx2_queue *q = &txq->q;
+
+ hw_tail = ipw_read32(priv, q->reg_r);
+ if (hw_tail >= q->n_bd) {
+ IPW_ERROR
+ ("Read index for DMA queue (%d) is out of range [0-%d)\n",
+ hw_tail, q->n_bd);
+ goto done;
+ }
+ for (; q->last_used != hw_tail;
+ q->last_used = ipw_queue_inc_wrap(q->last_used, q->n_bd)) {
+ ipw_queue_tx_free_tfd(priv, txq);
+ priv->tx_packets++;
+ }
+ done:
+ if (ipw_queue_space(q) > q->low_mark && qindex >= 0) {
+ __maybe_wake_tx(priv);
+ }
+ used = q->first_empty - q->last_used;
+ if (used < 0)
+ used += q->n_bd;
+
+ return used;
+}
+
+static int ipw_queue_tx_hcmd(struct ipw_priv *priv, int hcmd, void *buf,
+ int len, int sync)
+{
+ struct clx2_tx_queue *txq = &priv->txq_cmd;
+ struct clx2_queue *q = &txq->q;
+ struct tfd_frame *tfd;
+
+ if (ipw_queue_space(q) < (sync ? 1 : 2)) {
+ IPW_ERROR("No space for Tx\n");
+ return -EBUSY;
+ }
+
+ tfd = &txq->bd[q->first_empty];
+ txq->txb[q->first_empty] = NULL;
+
+ memset(tfd, 0, sizeof(*tfd));
+ tfd->control_flags.message_type = TX_HOST_COMMAND_TYPE;
+ tfd->control_flags.control_bits = TFD_NEED_IRQ_MASK;
+ priv->hcmd_seq++;
+ tfd->u.cmd.index = hcmd;
+ tfd->u.cmd.length = len;
+ memcpy(tfd->u.cmd.payload, buf, len);
+ q->first_empty = ipw_queue_inc_wrap(q->first_empty, q->n_bd);
+ ipw_write32(priv, q->reg_w, q->first_empty);
+ /* read back, presumably to flush the posted write to the
+ * write-index register above (offset kept as in the original) */
+ _ipw_read32(priv, 0x90);
+
+ return 0;
+}
+
+
+
+/*
+ * Rx theory of operation
+ *
+ * The host allocates 32 DMA target addresses and passes the host address
+ * to the firmware at register CX2_RFDS_TABLE_LOWER + N * RFD_SIZE where N is
+ * 0 to 31
+ *
+ * Rx Queue Indexes
+ * The host/firmware share two index registers for managing the Rx buffers.
+ *
+ * The READ index maps to the first position that the firmware may be writing
+ * to -- the driver can read up to (but not including) this position and get
+ * good data.
+ * The READ index is managed by the firmware once the card is enabled.
+ *
+ * The WRITE index maps to the last position the driver has read from -- the
+ * position preceding WRITE is the last slot the firmware can place a packet.
+ *
+ * The queue is empty (no good data) if WRITE = READ - 1, and is full if
+ * WRITE = READ.
+ *
+ * During initialization the host sets up the READ queue position to the first
+ * INDEX position, and WRITE to the last (READ - 1 wrapped)
+ *
+ * When the firmware places a packet in a buffer it will advance the READ index
+ * and fire the RX interrupt. The driver can then query the READ index and
+ * process as many packets as possible, moving the WRITE index forward as it
+ * resets the Rx queue buffers with new memory.
+ *
+ * The management in the driver is as follows:
+ * + A list of pre-allocated SKBs is stored in ipw->rxq->rx_free. When
+ * ipw->rxq->free_count drops to or below RX_LOW_WATERMARK, work is scheduled
+ * to replenish the ipw->rxq->rx_free.
+ * + In ipw_rx_queue_replenish (scheduled) if 'processed' != 'read' then the
+ * ipw->rxq is replenished and the READ INDEX is updated (updating the
+ * 'processed' and 'read' driver indexes as well)
+ * + A received packet is processed and handed to the kernel network stack,
+ * detached from the ipw->rxq. The driver 'processed' index is updated.
+ * + The Host/Firmware ipw->rxq is replenished at tasklet time from the rx_free
+ * list. If there are no allocated buffers in ipw->rxq->rx_free, the READ
+ * INDEX is not incremented and ipw->status(RX_STALLED) is set. If there
+ * were enough free buffers and RX_STALLED is set it is cleared.
+ *
+ *
+ * Driver sequence:
+ *
+ * ipw_rx_queue_alloc() Allocates rx_free
+ * ipw_rx_queue_replenish() Replenishes rx_free list from rx_used, and calls
+ * ipw_rx_queue_restock
+ * ipw_rx_queue_restock() Moves available buffers from rx_free into Rx
+ * queue, updates firmware pointers, and updates
+ * the WRITE index. If insufficient rx_free buffers
+ * are available, schedules ipw_rx_queue_replenish
+ *
+ * -- enable interrupts --
+ * ISR - ipw_rx() Detach ipw_rx_mem_buffers from pool up to the
+ * READ INDEX, detaching the SKB from the pool.
+ * Moves the packet buffer from queue to rx_used.
+ * Calls ipw_rx_queue_restock to refill any empty
+ * slots.
+ * ...
+ *
+ */
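+
+/*
+ * A minimal sketch (illustrative only, names hypothetical) of the index
+ * math the scheme above implies, with both indexes kept modulo
+ * RX_QUEUE_SIZE:
+ *
+ *	static inline int rxq_empty(u32 read, u32 write)
+ *	{
+ *		return write == (read + RX_QUEUE_SIZE - 1) % RX_QUEUE_SIZE;
+ *	}
+ *
+ *	static inline int rxq_full(u32 read, u32 write)
+ *	{
+ *		return write == read;
+ *	}
+ *
+ * i.e. WRITE = READ - 1 (wrapped) is the empty state and WRITE = READ
+ * the full one, exactly as stated above.
+ */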
+
+/*
+ * If there are slots in the RX queue that need to be restocked,
+ * and we have free pre-allocated buffers, fill the ranks as much
+ * as we can pulling from rx_free.
+ *
+ * This moves the 'write' index forward to catch up with 'processed', and
+ * also updates the memory address in the firmware to reference the new
+ * target buffer.
+ */
+static void ipw_rx_queue_restock(struct ipw_priv *priv)
+{
+ struct ipw_rx_queue *rxq = priv->rxq;
+ struct list_head *element;
+ struct ipw_rx_mem_buffer *rxb;
+
+ int write = rxq->write;
+
+ while ((rxq->write != rxq->processed) && (rxq->free_count)) {
+ element = rxq->rx_free.next;
+ rxb = list_entry(element, struct ipw_rx_mem_buffer, list);
+ list_del(element);
+
+ ipw_write32(priv, CX2_RFDS_TABLE_LOWER + rxq->write * RFD_SIZE,
+ rxb->dma_addr);
+ rxq->queue[rxq->write] = rxb;
+ rxq->write = (rxq->write + 1) % RX_QUEUE_SIZE;
+ rxq->free_count--;
+ }
+
+ /* If the pre-allocated buffer pool is dropping low, schedule to
+ * refill it */
+ if (rxq->free_count <= RX_LOW_WATERMARK)
+ queue_work(priv->workqueue, &priv->rx_replenish);
+
+ /* If we've added more space for the firmware to place data, tell it */
+ if (write != rxq->write)
+ ipw_write32(priv, CX2_RX_WRITE_INDEX, rxq->write);
+}
+
+/*
+ * Move all used packet from rx_used to rx_free, allocating a new SKB for each.
+ * Also restock the Rx queue via ipw_rx_queue_restock.
+ *
+ * This is called as a scheduled work item (except during initialization)
+ */
+static void ipw_rx_queue_replenish(void *data)
+{
+ struct ipw_priv *priv = data;
+ struct ipw_rx_queue *rxq = priv->rxq;
+ struct list_head *element;
+ struct ipw_rx_mem_buffer *rxb;
+
+ while (!list_empty(&rxq->rx_used)) {
+ element = rxq->rx_used.next;
+ rxb = list_entry(element, struct ipw_rx_mem_buffer, list);
+ rxb->skb = alloc_skb(CX2_RX_BUF_SIZE, GFP_ATOMIC);
+ if (!rxb->skb) {
+ printk(KERN_CRIT "%s: Can not allocate SKB buffers.\n",
+ priv->net_dev->name);
+ /* We don't reschedule replenish work here -- we will
+ * call the restock method and if it still needs
+ * more buffers it will schedule replenish */
+ break;
+ }
+ list_del(element);
+
+ rxb->rxb = (struct ipw_rx_buffer *)rxb->skb->data;
+ rxb->dma_addr = pci_map_single(
+ priv->pci_dev, rxb->skb->data, CX2_RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+
+ list_add_tail(&rxb->list, &rxq->rx_free);
+ rxq->free_count++;
+ }
+
+ ipw_rx_queue_restock(priv);
+}
+
+/* Assumes that the skb field of the buffers in 'pool' is kept accurate.
+ * If an SKB has been detached, the pool entry needs its skb set to NULL.
+ * This free routine walks the list of pool entries and, if the skb is
+ * non-NULL, unmaps and frees it.
+ */
+static void ipw_rx_queue_free(struct ipw_priv *priv,
+ struct ipw_rx_queue *rxq)
+{
+ int i;
+
+ if (!rxq)
+ return;
+
+ for (i = 0; i < RX_QUEUE_SIZE + RX_FREE_BUFFERS; i++) {
+ if (rxq->pool[i].skb != NULL) {
+ pci_unmap_single(priv->pci_dev, rxq->pool[i].dma_addr,
+ CX2_RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+ dev_kfree_skb(rxq->pool[i].skb);
+ }
+ }
+
+ kfree(rxq);
+}
+
+static struct ipw_rx_queue *ipw_rx_queue_alloc(struct ipw_priv *priv)
+{
+ struct ipw_rx_queue *rxq;
+ int i;
+
+ rxq = kmalloc(sizeof(*rxq), GFP_KERNEL);
+ if (!rxq)
+ return NULL;
+ memset(rxq, 0, sizeof(*rxq));
+ INIT_LIST_HEAD(&rxq->rx_free);
+ INIT_LIST_HEAD(&rxq->rx_used);
+
+ /* Fill the rx_used queue with _all_ of the Rx buffers */
+ for (i = 0; i < RX_FREE_BUFFERS + RX_QUEUE_SIZE; i++)
+ list_add_tail(&rxq->pool[i].list, &rxq->rx_used);
+
+ /* Set us up so that we have processed and used all buffers, but have
+ * not restocked the Rx queue with fresh buffers; the restock loop can
+ * then fill all but one slot before 'write' catches 'processed' */
+ rxq->read = rxq->write = 0;
+ rxq->processed = RX_QUEUE_SIZE - 1;
+ rxq->free_count = 0;
+
+ return rxq;
+}
+
+static int ipw_is_rate_in_mask(struct ipw_priv *priv, int ieee_mode, u8 rate)
+{
+ rate &= ~IEEE80211_BASIC_RATE_MASK;
+ if (ieee_mode == IEEE_A) {
+ switch (rate) {
+ case IEEE80211_OFDM_RATE_6MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_6MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_9MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_9MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_12MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_12MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_18MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_18MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_24MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_24MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_36MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_36MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_48MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_48MB_MASK ?
+ 1 : 0;
+ case IEEE80211_OFDM_RATE_54MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_54MB_MASK ?
+ 1 : 0;
+ default:
+ return 0;
+ }
+ }
+
+ /* B and G mixed */
+ switch (rate) {
+ case IEEE80211_CCK_RATE_1MB:
+ return priv->rates_mask & IEEE80211_CCK_RATE_1MB_MASK ? 1 : 0;
+ case IEEE80211_CCK_RATE_2MB:
+ return priv->rates_mask & IEEE80211_CCK_RATE_2MB_MASK ? 1 : 0;
+ case IEEE80211_CCK_RATE_5MB:
+ return priv->rates_mask & IEEE80211_CCK_RATE_5MB_MASK ? 1 : 0;
+ case IEEE80211_CCK_RATE_11MB:
+ return priv->rates_mask & IEEE80211_CCK_RATE_11MB_MASK ? 1 : 0;
+ }
+
+ /* If we are limited to B modulations, bail at this point */
+ if (ieee_mode == IEEE_B)
+ return 0;
+
+ /* G */
+ switch (rate) {
+ case IEEE80211_OFDM_RATE_6MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_6MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_9MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_9MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_12MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_12MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_18MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_18MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_24MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_24MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_36MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_36MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_48MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_48MB_MASK ? 1 : 0;
+ case IEEE80211_OFDM_RATE_54MB:
+ return priv->rates_mask & IEEE80211_OFDM_RATE_54MB_MASK ? 1 : 0;
+ }
+
+ return 0;
+}
+
+static int ipw_compatible_rates(struct ipw_priv *priv,
+ const struct ieee80211_network *network,
+ struct ipw_supported_rates *rates)
+{
+ int num_rates, i;
+
+ memset(rates, 0, sizeof(*rates));
+ num_rates = min(network->rates_len, (u8)IPW_MAX_RATES);
+ rates->num_rates = 0;
+ for (i = 0; i < num_rates; i++) {
+ if (!ipw_is_rate_in_mask(priv, network->mode, network->rates[i])) {
+ IPW_DEBUG_SCAN("Rate %02X masked : 0x%08X\n",
+ network->rates[i], priv->rates_mask);
+ continue;
+ }
+
+ rates->supported_rates[rates->num_rates++] = network->rates[i];
+ }
+
+ num_rates = min(network->rates_ex_len, (u8)(IPW_MAX_RATES - num_rates));
+ for (i = 0; i < num_rates; i++) {
+ if (!ipw_is_rate_in_mask(priv, network->mode, network->rates_ex[i])) {
+ IPW_DEBUG_SCAN("Rate %02X masked : 0x%08X\n",
+ network->rates_ex[i], priv->rates_mask);
+ continue;
+ }
+
+ rates->supported_rates[rates->num_rates++] = network->rates_ex[i];
+ }
+
+ return rates->num_rates;
+}
+
+static inline void ipw_copy_rates(struct ipw_supported_rates *dest,
+ const struct ipw_supported_rates *src)
+{
+ u8 i;
+ for (i = 0; i < src->num_rates; i++)
+ dest->supported_rates[i] = src->supported_rates[i];
+ dest->num_rates = src->num_rates;
+}
+
+/* TODO: Look at sniffed packets in the air to determine if the basic rate
+ * mask should ever be used -- right now all callers to add the scan rates are
+ * set with the modulation = CCK, so BASIC_RATE_MASK is never set... */
+static void ipw_add_cck_scan_rates(struct ipw_supported_rates *rates,
+ u8 modulation, u32 rate_mask)
+{
+ u8 basic_mask = (IEEE80211_OFDM_MODULATION == modulation) ?
+ IEEE80211_BASIC_RATE_MASK : 0;
+
+ if (rate_mask & IEEE80211_CCK_RATE_1MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_BASIC_RATE_MASK | IEEE80211_CCK_RATE_1MB;
+
+ if (rate_mask & IEEE80211_CCK_RATE_2MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_BASIC_RATE_MASK | IEEE80211_CCK_RATE_2MB;
+
+ if (rate_mask & IEEE80211_CCK_RATE_5MB_MASK)
+ rates->supported_rates[rates->num_rates++] = basic_mask |
+ IEEE80211_CCK_RATE_5MB;
+
+ if (rate_mask & IEEE80211_CCK_RATE_11MB_MASK)
+ rates->supported_rates[rates->num_rates++] = basic_mask |
+ IEEE80211_CCK_RATE_11MB;
+}
+
+static void ipw_add_ofdm_scan_rates(struct ipw_supported_rates *rates,
+ u8 modulation, u32 rate_mask)
+{
+ u8 basic_mask = (IEEE80211_OFDM_MODULATION == modulation) ?
+ IEEE80211_BASIC_RATE_MASK : 0;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_6MB_MASK)
+ rates->supported_rates[rates->num_rates++] = basic_mask |
+ IEEE80211_OFDM_RATE_6MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_9MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_OFDM_RATE_9MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_12MB_MASK)
+ rates->supported_rates[rates->num_rates++] = basic_mask |
+ IEEE80211_OFDM_RATE_12MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_18MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_OFDM_RATE_18MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_24MB_MASK)
+ rates->supported_rates[rates->num_rates++] = basic_mask |
+ IEEE80211_OFDM_RATE_24MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_36MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_OFDM_RATE_36MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_48MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_OFDM_RATE_48MB;
+
+ if (rate_mask & IEEE80211_OFDM_RATE_54MB_MASK)
+ rates->supported_rates[rates->num_rates++] =
+ IEEE80211_OFDM_RATE_54MB;
+}
+
+struct ipw_network_match {
+ struct ieee80211_network *network;
+ struct ipw_supported_rates rates;
+};
+
+static int ipw_best_network(
+ struct ipw_priv *priv,
+ struct ipw_network_match *match,
+ struct ieee80211_network *network)
+{
+ struct ipw_supported_rates rates;
+
+ /* Verify that this network's capability is compatible with the
+ * current mode (AdHoc or Infrastructure) */
+ if ((priv->ieee->iw_mode == IW_MODE_INFRA &&
+ !(network->capability & WLAN_CAPABILITY_BSS)) ||
+ (priv->ieee->iw_mode == IW_MODE_ADHOC &&
+ !(network->capability & WLAN_CAPABILITY_IBSS))) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded due to "
+ "capability mismatch.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 0;
+ }
+
+ /* If we do not have an ESSID for this AP, we can not associate with
+ * it */
+ if (network->flags & NETWORK_EMPTY_ESSID) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of hidden ESSID.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 0;
+ }
+
+ /* If the old network rate is better than this one, don't bother
+ * testing everything else. */
+ if (match->network && match->network->stats.rssi >
+ network->stats.rssi) {
+ char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
+ strncpy(escaped,
+ escape_essid(network->ssid, network->ssid_len),
+ sizeof(escaped));
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded because "
+ "'%s (" MAC_FMT ")' has a stronger signal.\n",
+ escaped, MAC_ARG(network->bssid),
+ escape_essid(match->network->ssid,
+ match->network->ssid_len),
+ MAC_ARG(match->network->bssid));
+ return 0;
+ }
+
+ /* If this network has already had an association attempt within the
+ * last 3 seconds, do not try and associate again... */
+ if (time_after(network->last_associate + HZ * 3, jiffies)) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of storming.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 0;
+ }
+
+ /* Now go through and see if the requested network is valid... */
+ if (priv->ieee->scan_age != 0 &&
+ jiffies - network->last_scanned > priv->ieee->scan_age) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of age: %lums.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid),
+ 1000 * (jiffies - network->last_scanned) / HZ);
+ return 0;
+ }
+
+ if ((priv->config & CFG_STATIC_CHANNEL) &&
+ (network->channel != priv->channel)) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of channel mismatch: %d != %d.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid),
+ network->channel, priv->channel);
+ return 0;
+ }
+
+ /* Verify privacy compatibility */
+ if (((priv->capability & CAP_PRIVACY_ON) ? 1 : 0) !=
+ ((network->capability & WLAN_CAPABILITY_PRIVACY) ? 1 : 0)) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of privacy mismatch: %s != %s.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid),
+ priv->capability & CAP_PRIVACY_ON ? "on" :
+ "off",
+ network->capability &
+ WLAN_CAPABILITY_PRIVACY ? "on" : "off");
+ return 0;
+ }
+
+ if ((priv->config & CFG_STATIC_BSSID) &&
+ memcmp(network->bssid, priv->bssid, ETH_ALEN)) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of BSSID mismatch: " MAC_FMT ".\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid),
+ MAC_ARG(priv->bssid));
+ return 0;
+ }
+
+ /* If an ESSID has been configured but this AP did not
+ * broadcast its ESSID, then it is a valid match. Otherwise,
+ * compare the broadcast ESSID to ours */
+ if ((priv->config & CFG_STATIC_ESSID) &&
+ !(network->flags & NETWORK_EMPTY_ESSID) &&
+ memcmp(network->ssid, priv->essid, min(network->ssid_len,
+ priv->essid_len))) {
+ char escaped[IW_ESSID_MAX_SIZE * 2 + 1];
+ strncpy(escaped, escape_essid(
+ network->ssid, network->ssid_len),
+ sizeof(escaped));
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of ESSID mismatch: '%s'.\n",
+ escaped, MAC_ARG(network->bssid),
+ escape_essid(priv->essid, priv->essid_len));
+ return 0;
+ }
+
+ /* Filter out any incompatible freq / mode combinations */
+ if (!ieee80211_is_valid_mode(priv->ieee, network->mode)) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of invalid frequency/mode "
+ "combination.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 0;
+ }
+
+ ipw_compatible_rates(priv, network, &rates);
+ if (rates.num_rates == 0) {
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' excluded "
+ "because of no compatible rates.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+ return 0;
+ }
+
+ /* TODO: Perform any further minimal comparative tests. We do not
+ * want to put too much policy logic here; intelligent scan selection
+ * should occur within a generic IEEE 802.11 user space tool. */
+
+ /* Set up 'new' AP to this network */
+ ipw_copy_rates(&match->rates, &rates);
+ match->network = network;
+
+ IPW_DEBUG_ASSOC("Network '%s (" MAC_FMT ")' is a viable match.\n",
+ escape_essid(network->ssid, network->ssid_len),
+ MAC_ARG(network->bssid));
+
+ return 1;
+}
+
+static u8 ipw_calc_rssi_dbm(u8 rssi, u8 agc)
+{
+ u16 tmp1 = (agc & BIT(6)) ? 16 : 0;
+ u16 tmp2 = (agc & BIT(5)) ? 16 : 0;
+ u16 tmp3 = agc & 0xF;
+ tmp3 += tmp3 >> 1;
+ return rssi - (tmp1 + tmp2 + tmp3 + 35);
+}
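+
+/* Reading of the arithmetic above (inferred from the code; the constants
+ * appear to be empirical calibration values): AGC bits 6 and 5 each add
+ * a 16 dB attenuation step, the low AGC nibble contributes 1.5 dB per
+ * unit (n + n/2), and 35 dB is a fixed offset. For example, agc = 0x6A
+ * gives 16 + 16 + 15 + 35 = 82, so a raw rssi of 100 comes out as 18. */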
+
+static void ipw_adhoc_create(struct ipw_priv *priv,
+ struct ieee80211_network *network)
+{
+ ipw_create_bssid(priv, network->bssid);
+ network->ssid_len = priv->essid_len;
+ memcpy(network->ssid, priv->essid, priv->essid_len);
+ memset(&network->stats, 0, sizeof(network->stats));
+ network->capability = WLAN_CAPABILITY_IBSS;
+ network->channel = priv->channel;
+ network->rates_len = min(priv->rates.num_rates, MAX_RATES_LENGTH);
+ memcpy(network->rates, priv->rates.supported_rates,
+ network->rates_len);
+ network->rates_ex_len = priv->rates.num_rates - network->rates_len;
+ memcpy(network->rates_ex,
+ &priv->rates.supported_rates[network->rates_len],
+ network->rates_ex_len);
+ network->last_scanned = 0;
+ network->mode = priv->ieee->mode;
+ network->flags = 0;
+ network->last_associate = 0;
+ network->time_stamp[0] = 0;
+ network->time_stamp[1] = 0;
+ network->beacon_interval = 100; /* Default */
+ network->listen_interval = 10; /* Default */
+ network->atim_window = 0; /* Default */
+#ifdef CONFIG_IEEE80211_WPA
+ network->wpa_ie_len = 0;
+ network->rsn_ie_len = 0;
+#endif /* CONFIG_IEEE80211_WPA */
+}
+
+static void ipw_send_wep_keys(struct ipw_priv *priv)
+{
+ struct ipw_wep_key *key;
+ int i;
+ struct host_cmd cmd = {
+ .cmd = IPW_CMD_WEP_KEY,
+ .len = sizeof(*key)
+ };
+
+ key = (struct ipw_wep_key *)&cmd.param;
+ key->cmd_id = DINO_CMD_WEP_KEY;
+ key->seq_num = 0;
+
+ for (i = 0; i < 4; i++) {
+ key->key_index = i;
+ if (!(priv->sec.flags & (1 << i))) {
+ key->key_size = 0;
+ } else {
+ key->key_size = priv->sec.key_sizes[i];
+ memcpy(key->key, priv->sec.keys[i], key->key_size);
+ }
+
+ if (ipw_send_cmd(priv, &cmd)) {
+ IPW_ERROR("failed to send WEP_KEY command\n");
+ return;
+ }
+ }
+}
+
+static void ipw_associate(void *data)
+{
+ struct ipw_priv *priv = data;
+
+ struct ieee80211_network *network = NULL;
+ struct ipw_network_match match = {
+ .network = NULL
+ };
+ struct ipw_supported_rates *rates;
+ struct list_head *element;
+ int err;
+
+ list_for_each_entry(network, &priv->ieee->network_list, list)
+ ipw_best_network(priv, &match, network);
+
+ network = match.network;
+ rates = &match.rates;
+
+ if (network == NULL &&
+ priv->ieee->iw_mode == IW_MODE_ADHOC &&
+ priv->config & CFG_ADHOC_CREATE &&
+ priv->config & CFG_STATIC_ESSID &&
+ priv->config & CFG_STATIC_CHANNEL &&
+ !list_empty(&priv->ieee->network_free_list)) {
+ element = priv->ieee->network_free_list.next;
+ network = list_entry(element, struct ieee80211_network,
+ list);
+ ipw_adhoc_create(priv, network);
+ rates = &priv->rates;
+ list_del(element);
+ list_add_tail(&network->list, &priv->ieee->network_list);
+ }
+
+ /* If we reached the end of the list, then we don't have any valid
+ * matching APs */
+ if (!network) {
+#ifdef CONFIG_IPW_DEBUG
+ IPW_DEBUG_INFO("Scan completed, no valid APs matched "
+ "[CFG 0x%08lX]\n", priv->config);
+ if (priv->config & CFG_STATIC_CHANNEL)
+ IPW_DEBUG_INFO("Channel locked to %d\n",
+ priv->channel);
+ else
+ IPW_DEBUG_INFO("Channel unlocked.\n");
+ if (priv->config & CFG_STATIC_ESSID)
+ IPW_DEBUG_INFO("ESSID locked to '%s'\n",
+ escape_essid(priv->essid,
+ priv->essid_len));
+ else
+ IPW_DEBUG_INFO("ESSID unlocked.\n");
+ if (priv->config & CFG_STATIC_BSSID)
+ IPW_DEBUG_INFO("BSSID locked to %d\n", priv->channel);
+ else
+ IPW_DEBUG_INFO("BSSID unlocked.\n");
+ if (priv->capability & CAP_PRIVACY_ON)
+ IPW_DEBUG_INFO("PRIVACY on\n");
+ else
+ IPW_DEBUG_INFO("PRIVACY off\n");
+ IPW_DEBUG_INFO("RATE MASK: 0x%08X\n", priv->rates_mask);
+#endif
+
+ queue_delayed_work(priv->workqueue, &priv->request_scan,
+ SCAN_INTERVAL);
+
+ return;
+ }
+
+ if (priv->config & CFG_FIXED_RATE) {
+ /* TODO: Verify that this works... */
+ struct ipw_fixed_rate fr = {
+ .tx_rates = priv->rates_mask
+ };
+ u32 reg;
+ u16 mask = 0;
+
+ /* Identify 'current FW band' and match it with the fixed
+ * Tx rates */
+
+ switch (priv->ieee->freq_band) {
+ case IEEE80211_52GHZ_BAND: /* A only */
+ /* IEEE_A */
+ if (priv->rates_mask & ~IEEE80211_OFDM_RATES_MASK) {
+ /* Invalid fixed rate mask */
+ fr.tx_rates = 0;
+ break;
+ }
+
+ fr.tx_rates >>= IEEE80211_OFDM_SHIFT_MASK_A;
+ break;
+
+ default: /* 2.4Ghz or Mixed */
+ /* IEEE_B */
+ if (network->mode == IEEE_B) {
+ if (fr.tx_rates & ~IEEE80211_CCK_RATES_MASK) {
+ /* Invalid fixed rate mask */
+ fr.tx_rates = 0;
+ }
+ break;
+ }
+
+ /* IEEE_G */
+ if (fr.tx_rates & ~(IEEE80211_CCK_RATES_MASK |
+ IEEE80211_OFDM_RATES_MASK)) {
+ /* Invalid fixed rate mask */
+ fr.tx_rates = 0;
+ break;
+ }
+
+ if (IEEE80211_OFDM_RATE_6MB_MASK & fr.tx_rates) {
+ mask |= (IEEE80211_OFDM_RATE_6MB_MASK >> 1);
+ fr.tx_rates &= ~IEEE80211_OFDM_RATE_6MB_MASK;
+ }
+
+ if (IEEE80211_OFDM_RATE_9MB_MASK & fr.tx_rates) {
+ mask |= (IEEE80211_OFDM_RATE_9MB_MASK >> 1);
+ fr.tx_rates &= ~IEEE80211_OFDM_RATE_9MB_MASK;
+ }
+
+ if (IEEE80211_OFDM_RATE_12MB_MASK & fr.tx_rates) {
+ mask |= (IEEE80211_OFDM_RATE_12MB_MASK >> 1);
+ fr.tx_rates &= ~IEEE80211_OFDM_RATE_12MB_MASK;
+ }
+
+ fr.tx_rates |= mask;
+ break;
+ }
+
+ reg = ipw_read32(priv, IPW_MEM_FIXED_OVERRIDE);
+ ipw_write_reg32(priv, reg, *(u32*)&fr);
+ }
+
+ if (!(priv->config & CFG_STATIC_ESSID)) {
+ priv->essid_len = min(network->ssid_len, (u8)IW_ESSID_MAX_SIZE);
+ memcpy(priv->essid, network->ssid, priv->essid_len);
+ }
+
+ IPW_DEBUG_ASSOC("Assocation attempt: '%s', channel %d, "
+ "802.11%c [%d], enc=%s%s.\n",
+ escape_essid(priv->essid, priv->essid_len),
+ network->channel, ipw_modes[network->mode],
+ rates->num_rates,
+ priv->capability & CAP_PRIVACY_ON ? "on " : "off",
+ priv->capability & CAP_PRIVACY_ON ?
+ (priv->capability & CAP_SHARED_KEY ? "(shared)" :
+ "(open)") : "");
+
+ network->last_associate = jiffies;
+ priv->ieee->mode = network->mode;
+
+ memset(&priv->assoc_request, 0, sizeof(priv->assoc_request));
+ priv->assoc_request.channel = network->channel;
+ if ((priv->capability & CAP_PRIVACY_ON) &&
+ (priv->capability & CAP_SHARED_KEY)) {
+ priv->assoc_request.auth_type = AUTH_SHARED_KEY;
+ priv->assoc_request.auth_key = priv->sec.active_key;
+ } else {
+ priv->assoc_request.auth_type = AUTH_OPEN;
+ priv->assoc_request.auth_key = 0;
+ }
+
+ if (priv->capability & CAP_PRIVACY_ON)
+ ipw_send_wep_keys(priv);
+
+ priv->assoc_request.ieee_mode = priv->ieee->mode;
+
+ priv->assoc_request.beacon_interval = network->beacon_interval;
+ if ((priv->ieee->iw_mode == IW_MODE_ADHOC) &&
+ (network->time_stamp[0] == 0) &&
+ (network->time_stamp[1] == 0)) {
+ priv->assoc_request.assoc_type = HC_IBSS_START;
+ priv->assoc_request.assoc_tsf_msw = 0;
+ priv->assoc_request.assoc_tsf_lsw = 0;
+ } else {
+ priv->assoc_request.assoc_type = HC_ASSOCIATE;
+ priv->assoc_request.assoc_tsf_msw = network->time_stamp[1];
+ priv->assoc_request.assoc_tsf_lsw = network->time_stamp[0];
+ }
+
+ memcpy(&priv->assoc_request.bssid, network->bssid, ETH_ALEN);
+
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC) {
+ memset(&priv->assoc_request.dest, 0xFF, ETH_ALEN);
+ priv->assoc_request.atim_window = network->atim_window;
+ } else {
+ memcpy(&priv->assoc_request.dest, network->bssid, ETH_ALEN);
+ priv->assoc_request.atim_window = 0;
+ }
+
+ priv->assoc_request.capability = network->capability;
+ priv->assoc_request.listen_interval = network->listen_interval;
+
+ err = ipw_send_ssid(priv, priv->essid, priv->essid_len);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send SSID command failed.\n");
+ return;
+ }
+
+ rates->ieee_mode = priv->ieee->mode;
+ rates->purpose = IPW_RATE_CONNECT;
+ ipw_send_supported_rates(priv, rates);
+
+ if (network->mode == IEEE_G)
+ priv->sys_config.dot11g_auto_detection = 1;
+ else
+ priv->sys_config.dot11g_auto_detection = 0;
+ err = ipw_send_system_config(priv, &priv->sys_config);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send sys config command failed.\n");
+ return;
+ }
+
+ err = ipw_set_sensitivity(priv, network->stats.rssi ?
+ network->stats.rssi + 112 : 0);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send associate command failed.\n");
+ return;
+ }
+
+ err = ipw_send_associate(priv, &priv->assoc_request);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send associate command failed.\n");
+ return;
+ }
+
+ priv->channel = network->channel;
+ memcpy(priv->bssid, network->bssid, ETH_ALEN);
+
+ priv->status |= STATUS_ASSOCIATING;
+
+ IPW_DEBUG(IPW_DL_STATE, "associating: '%s' " MAC_FMT " \n",
+ escape_essid(priv->essid, priv->essid_len),
+ MAC_ARG(priv->bssid));
+}
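+
+/* For reference, the host-command sequence ipw_associate() issues once a
+ * network has been selected (as read from the code above): WEP keys (if
+ * privacy is on), SSID, supported rates (IPW_RATE_CONNECT), system config
+ * (with 802.11g auto-detection keyed to the network mode), sensitivity
+ * (RSSI + 112 when known), and finally the associate request itself. */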
+
+static inline void ipw_handle_data_packet(struct ipw_priv *priv,
+ struct ipw_rx_mem_buffer *rxb,
+ struct ieee80211_rx_stats *stats)
+{
+ struct ipw_rx_packet *pkt = (struct ipw_rx_packet *)rxb->skb->data;
+
+ /* We only process data packets if the
+ * interface is open */
+ if (unlikely((pkt->u.frame.length + IPW_RX_FRAME_SIZE) >
+ skb_tailroom(rxb->skb))) {
+ priv->ieee->stats.rx_errors++;
+ priv->wstats.discard.misc++;
+ IPW_DEBUG_DROP("Corruption detected! Oh no!\n");
+ return;
+ } else if (unlikely(!netif_running(priv->net_dev))) {
+ priv->ieee->stats.rx_dropped++;
+ priv->wstats.discard.misc++;
+ IPW_DEBUG_DROP("Dropping packet while interface is not up.\n");
+ return;
+ }
+
+ /* Advance skb->data to the start of the actual payload */
+ skb_reserve(rxb->skb, offsetof(struct ipw_rx_packet, u.frame.data));
+
+ /* Set the size of the skb to the size of the frame */
+ skb_put(rxb->skb, pkt->u.frame.length);
+
+ IPW_DEBUG_RX("Rx packet of %d bytes.\n", rxb->skb->len);
+
+ if (!ieee80211_rx(priv->ieee, rxb->skb, stats))
+ priv->ieee->stats.rx_errors++;
+ else /* ieee80211_rx succeeded, so it now owns the SKB */
+ rxb->skb = NULL;
+}
+
+
+/*
+ * Main entry function for receiving a packet with 802.11 headers. This
+ * should be called whenever the FW has notified us that there is a new
+ * skb in the receive queue.
+ */
+static void ipw_rx(struct ipw_priv *priv)
+{
+ struct ipw_rx_mem_buffer *rxb;
+ struct ipw_rx_packet *pkt;
+ struct ieee80211_hdr *header;
+ u32 r, w, i;
+
+ r = ipw_read32(priv, CX2_RX_READ_INDEX);
+ w = ipw_read32(priv, CX2_RX_WRITE_INDEX);
+ i = (priv->rxq->processed + 1) % RX_QUEUE_SIZE;
+
+ while (i != r) {
+ rxb = priv->rxq->queue[i];
+#ifdef CONFIG_IPW_DEBUG
+ if (unlikely(rxb == NULL)) {
+ printk(KERN_CRIT "Queue not allocated!\n");
+ break;
+ }
+#endif
+ priv->rxq->queue[i] = NULL;
+
+ pci_dma_sync_single_for_cpu(priv->pci_dev, rxb->dma_addr,
+ CX2_RX_BUF_SIZE,
+ PCI_DMA_FROMDEVICE);
+
+ pkt = (struct ipw_rx_packet *)rxb->skb->data;
+ IPW_DEBUG_RX("Packet: type=%02X seq=%02X bits=%02X\n",
+ pkt->header.message_type,
+ pkt->header.rx_seq_num,
+ pkt->header.control_bits);
+
+ switch (pkt->header.message_type) {
+ case RX_FRAME_TYPE: /* 802.11 frame */ {
+ struct ieee80211_rx_stats stats = {
+ .rssi = ipw_calc_rssi_dbm(
+ pkt->u.frame.rssi, pkt->u.frame.agc),
+ .signal = pkt->u.frame.signal,
+ .rate = pkt->u.frame.rate,
+ .mac_time = jiffies,
+ .received_channel = pkt->u.frame.received_channel,
+ .freq = (pkt->u.frame.control & BIT(0)) ? IEEE80211_24GHZ_BAND : IEEE80211_52GHZ_BAND,
+ .len = pkt->u.frame.length,
+ };
+
+ if (stats.rssi != 0)
+ stats.mask |= IEEE80211_STATMASK_RSSI;
+ if (stats.signal != 0)
+ stats.mask |= IEEE80211_STATMASK_SIGNAL;
+ if (stats.rate != 0)
+ stats.mask |= IEEE80211_STATMASK_RATE;
+
+ priv->last_rx_rssi = stats.rssi;
+ priv->last_rx_rate = stats.rate;
+
+ header = (struct ieee80211_hdr *)(rxb->skb->data +
+ IPW_RX_FRAME_SIZE);
+
+ IPW_DEBUG_RX("Frame: len=%u\n", pkt->u.frame.length);
+
+#ifdef CONFIG_IPW_PROMISC
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR) {
+ ipw_handle_data_packet(priv, rxb, &stats);
+ break;
+ }
+#endif
+
+ if (pkt->u.frame.length < frame_hdr_len(header)) {
+ IPW_DEBUG_DROP("Received packet is too small. "
+ "Dropping.\n");
+ priv->ieee->stats.rx_errors++;
+ priv->wstats.discard.misc++;
+ break;
+ }
+
+ header = (struct ieee80211_hdr *)(rxb->skb->data +
+ IPW_RX_FRAME_SIZE);
+ switch (WLAN_FC_GET_TYPE(header->frame_ctl)) {
+ case IEEE80211_FTYPE_MGMT:
+ ieee80211_rx_mgt(priv->ieee, header, &stats);
+ if (priv->ieee->iw_mode == IW_MODE_ADHOC &&
+ ((WLAN_FC_GET_STYPE(header->frame_ctl) ==
+ IEEE80211_STYPE_PROBE_RESP) ||
+ (WLAN_FC_GET_STYPE(header->frame_ctl) ==
+ IEEE80211_STYPE_BEACON)) &&
+ !memcmp(header->addr3, priv->bssid, ETH_ALEN)) {
+ IPW_DEBUG_SCAN("Adding AdHoc station: "
+ MAC_FMT "\n",
+ MAC_ARG(header->addr2));
+ ipw_add_station(priv, header->addr2);
+ }
+ break;
+
+ case IEEE80211_FTYPE_CTL:
+ break;
+
+ case IEEE80211_FTYPE_DATA:
+ ipw_handle_data_packet(priv, rxb, &stats);
+ break;
+ }
+ break;
+ }
+
+ case RX_HOST_NOTIFICATION_TYPE: {
+ IPW_DEBUG_RX("Notification: subtype=%02X flags=%02X size=%d\n",
+ pkt->u.notification.subtype,
+ pkt->u.notification.flags,
+ pkt->u.notification.size);
+ ipw_rx_notification(priv, &pkt->u.notification);
+ break;
+ }
+
+ default:
+ IPW_DEBUG_RX("Bad Rx packet of type %d\n",
+ pkt->header.message_type);
+ break;
+ }
+
+ /* For now we just don't re-use anything. We can tweak this
+ * later to try and re-use notification packets and SKBs that
+ * fail to Rx correctly */
+ if (rxb->skb != NULL) {
+ dev_kfree_skb_any(rxb->skb);
+ rxb->skb = NULL;
+ }
+
+ pci_unmap_single(priv->pci_dev, rxb->dma_addr,
+ CX2_RX_BUF_SIZE, PCI_DMA_FROMDEVICE);
+ list_add_tail(&rxb->list, &priv->rxq->rx_used);
+
+ i = (i + 1) % RX_QUEUE_SIZE;
+ }
+
+ /* Backtrack one entry, i.e. processed = (i - 1) mod RX_QUEUE_SIZE */
+ priv->rxq->processed = (i ? i : RX_QUEUE_SIZE) - 1;
+
+ ipw_rx_queue_restock(priv);
+}
+
+static void ipw_adapter_restart(void *adapter)
+{
+ struct ipw_priv *priv = adapter;
+
+ if (priv->status & STATUS_RF_KILL_MASK)
+ return;
+
+ ipw_down(priv);
+ if (ipw_up(priv)) {
+ IPW_ERROR("Failed to up device\n");
+ return;
+ }
+}
+
+
+
+static inline int ipw_abort_scan(struct ipw_priv *priv)
+{
+ int err;
+
+ if (priv->status & STATUS_SCAN_ABORTING) {
+ IPW_DEBUG_HC("Ignoring concurrent scan abort request.\n");
+ return 0;
+ }
+ priv->status |= STATUS_SCAN_ABORTING;
+
+ err = ipw_send_scan_abort(priv);
+ if (err) {
+ IPW_DEBUG_HC("Request to abort scan failed.\n");
+ return err;
+ }
+
+ return 0;
+}
+
+static int ipw_request_scan(struct ipw_priv *priv)
+{
+ struct ipw_scan_request_ext scan;
+ int number_of_channel = 1;
+ int number_of_a_channel = 1;
+ int i, err, scan_type;
+
+ if (priv->status & STATUS_EXIT_PENDING) {
+ IPW_DEBUG_SCAN("Aborting scan due to device shutdown\n");
+ priv->status |= STATUS_SCAN_PENDING;
+ return 0;
+ }
+
+ if (priv->status & STATUS_SCANNING) {
+ IPW_DEBUG_HC("Concurrent scan requested. Aborting first.\n");
+ priv->status |= STATUS_SCAN_PENDING;
+ ipw_abort_scan(priv);
+ return 0;
+ }
+
+ if (priv->status & STATUS_SCAN_ABORTING) {
+ IPW_DEBUG_HC("Scan request while abort pending. Queuing.\n");
+ priv->status |= STATUS_SCAN_PENDING;
+ return 0;
+ }
+
+ if (priv->status & STATUS_RF_KILL_MASK) {
+ IPW_DEBUG_HC("Aborting scan due to RF Kill activation\n");
+ priv->status |= STATUS_SCAN_PENDING;
+ return 0;
+ }
+
+ memset(&scan, 0, sizeof(scan));
+
+ scan.dwell_time[IPW_SCAN_ACTIVE_BROADCAST_SCAN] = 20;
+ scan.dwell_time[IPW_SCAN_ACTIVE_BROADCAST_AND_DIRECT_SCAN] = 100;
+ scan.dwell_time[IPW_SCAN_PASSIVE_FULL_DWELL_SCAN] = 120;
+
+ scan.full_scan_index = ieee80211_get_scans(priv->ieee);
+ /* Ensure that every other scan is a fast channel hop scan */
+ if (priv->config & CFG_STATIC_ESSID && (priv->ieee->scans % 2)) {
+ err = ipw_send_ssid(priv, priv->essid, priv->essid_len);
+ if (err) {
+ IPW_DEBUG_HC("Attempt to send SSID command failed.\n");
+ return err;
+ }
+
+ scan_type = IPW_SCAN_ACTIVE_BROADCAST_AND_DIRECT_SCAN;
+ } else {
+ scan_type = IPW_SCAN_ACTIVE_BROADCAST_SCAN;
+ }
+
+ if (priv->ieee->freq_band & IEEE80211_52GHZ_BAND) {
+ for (i = 0; i < MAX_A_CHANNELS; i++) {
+ if (band_a_active_channel[i] == 0)
+ break;
+ scan.channels_list[number_of_a_channel] = band_a_active_channel[i];
+ ipw_set_scan_type(&scan, number_of_a_channel, scan_type);
+ number_of_a_channel++;
+ }
+ scan.channels_list[0] = (u8)(IEEE_A << 6) | (number_of_a_channel - 1);
+ number_of_channel = number_of_a_channel + 1;
+ }
+
+ if (number_of_a_channel == 1) {
+ number_of_a_channel = 0;
+ number_of_channel = 1;
+ }
+
+ if (priv->ieee->freq_band & IEEE80211_24GHZ_BAND) {
+ for (i = 0; i < MAX_B_CHANNELS; i++) {
+ if (band_b_active_channel[i] == 0)
+ break;
+ scan.channels_list[number_of_channel] = band_b_active_channel[i];
+ ipw_set_scan_type(&scan, number_of_channel, scan_type);
+ number_of_channel++;
+ }
+
+ scan.channels_list[number_of_a_channel] = (u8)(IEEE_B << 6) | (number_of_channel - number_of_a_channel - 1);
+ }
+
+ /* TODO -- Remove the currently active channel from the list of
+ * channels set for active scan */
+
+ err = ipw_send_scan_request_ext(priv, &scan);
+ if (err) {
+ IPW_DEBUG_HC("Sending scan command failed: %08X\n",
+ err);
+ return -EIO;
+ }
+
+ priv->status |= STATUS_SCANNING;
+ priv->status &= ~STATUS_SCAN_PENDING;
+
+ return 0;
+}
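+
+/* Note on the channel list built above (as read from the code): the
+ * channels_list[] array is a sequence of per-band groups, each introduced
+ * by a header byte (band << 6) | count followed by that band's channel
+ * numbers. E.g. two A-band channels 36 and 40 plus one B-band channel 1
+ * would yield: (IEEE_A << 6) | 2, 36, 40, (IEEE_B << 6) | 1, 1. */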
+
+/*
+ * This file defines the Wireless Extension handlers. It does not
+ * define any methods of hardware manipulation and relies on the
+ * functions defined in ipw_main to provide the HW interaction.
+ *
+ * The exception to this is the use of the ipw_get_ordinal()
+ * function used to poll the hardware vs. making unnecessary calls.
+ *
+ */
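+
+/*
+ * For orientation, a sketch of how such handlers are typically wired up
+ * via the legacy Wireless Extension API (illustrative; array contents
+ * and registration point are assumptions, not taken from this patch):
+ *
+ *	static iw_handler ipw_wx_handlers[] = {
+ *		[SIOCGIWNAME - SIOCIWFIRST] = ipw_wx_get_name,
+ *		[SIOCSIWFREQ - SIOCIWFIRST] = ipw_wx_set_freq,
+ *		[SIOCGIWFREQ - SIOCIWFIRST] = ipw_wx_get_freq,
+ *		... and so on for the remaining handlers ...
+ *	};
+ *
+ *	static struct iw_handler_def ipw_wx_handler_def = {
+ *		.standard	= ipw_wx_handlers,
+ *		.num_standard	= ARRAY_SIZE(ipw_wx_handlers),
+ *	};
+ *
+ * with, at device setup time:
+ *
+ *	net_dev->wireless_handlers = &ipw_wx_handler_def;
+ */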
+
+static int ipw_wx_get_name(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ if (!(priv->status & STATUS_ASSOCIATED))
+ strcpy(wrqu->name, "unassociated");
+ else
+ snprintf(wrqu->name, IFNAMSIZ, "IEEE 802.11%c",
+ ipw_modes[priv->ieee->mode]);
+ IPW_DEBUG_WX("Name: %s\n", wrqu->name);
+ return 0;
+}
+
+static int ipw_set_channel(struct ipw_priv *priv, u8 channel)
+{
+ if (channel == 0) {
+ IPW_DEBUG_INFO("Setting channel to ANY (0)\n");
+ priv->config &= ~CFG_STATIC_CHANNEL;
+ if (!(priv->status & (STATUS_SCANNING | STATUS_ASSOCIATED |
+ STATUS_ASSOCIATING))) {
+ IPW_DEBUG_ASSOC("Attempting to associate with new "
+ "parameters.\n");
+ ipw_associate(priv);
+ }
+
+ return 0;
+ }
+
+ priv->config |= CFG_STATIC_CHANNEL;
+
+ if (priv->channel == channel) {
+ IPW_DEBUG_INFO(
+ "Request to set channel to current value (%d)\n",
+ channel);
+ return 0;
+ }
+
+ IPW_DEBUG_INFO("Setting channel to %i\n", (int)channel);
+ priv->channel = channel;
+
+ /* If we are currently associated, or trying to associate
+ * then see if this is a new channel (causing us to disassociate) */
+ if (priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ IPW_DEBUG_ASSOC("Disassociating due to channel change.\n");
+ ipw_disassociate(priv);
+ }
+
+ ipw_associate(priv);
+
+ return 0;
+}
+
+static int ipw_wx_set_freq(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ struct iw_freq *fwrq = &wrqu->freq;
+
+ /* if setting by freq convert to channel */
+ if (fwrq->e == 1) {
+ if ((fwrq->m >= (int) 2.412e8 &&
+ fwrq->m <= (int) 2.487e8)) {
+ int f = fwrq->m / 100000;
+ int c = 0;
+
+ while ((c < REG_MAX_CHANNEL) &&
+ (f != ipw_frequencies[c]))
+ c++;
+
+ /* hack to fall through */
+ fwrq->e = 0;
+ fwrq->m = c + 1;
+ }
+ }
+
+ if (fwrq->e > 0 || fwrq->m > 1000)
+ return -EOPNOTSUPP;
+
+ IPW_DEBUG_WX("SET Freq/Channel -> %d \n", fwrq->m);
+ return ipw_set_channel(priv, (u8)fwrq->m);
+}
+
+
+static int ipw_wx_get_freq(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ wrqu->freq.e = 0;
+
+ /* If we are associated, trying to associate, or have a statically
+ * configured CHANNEL then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_CHANNEL ||
+ priv->status & (STATUS_ASSOCIATING | STATUS_ASSOCIATED))
+ wrqu->freq.m = priv->channel;
+ else
+ wrqu->freq.m = 0;
+
+ IPW_DEBUG_WX("GET Freq/Channel -> %d \n", priv->channel);
+ return 0;
+}
+
+static int ipw_wx_set_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int err = 0;
+
+ IPW_DEBUG_WX("Set MODE: %d\n", wrqu->mode);
+
+ if (wrqu->mode == priv->ieee->iw_mode)
+ return 0;
+
+ switch (wrqu->mode) {
+#ifdef CONFIG_IPW_PROMISC
+ case IW_MODE_MONITOR:
+#endif
+ case IW_MODE_ADHOC:
+ case IW_MODE_INFRA:
+ break;
+ case IW_MODE_AUTO:
+ wrqu->mode = IW_MODE_INFRA;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+#ifdef CONFIG_IPW_PROMISC
+ if (priv->ieee->iw_mode == IW_MODE_MONITOR)
+ priv->net_dev->type = ARPHRD_ETHER;
+
+ if (wrqu->mode == IW_MODE_MONITOR)
+ priv->net_dev->type = ARPHRD_IEEE80211;
+#endif /* CONFIG_IPW_PROMISC */
+
+#ifdef CONFIG_PM
+ /* Free the existing firmware and reset the fw_loaded
+ * flag so ipw_load() will bring in the new firmware */
+ fw_loaded = 0;
+
+ release_firmware(bootfw);
+ release_firmware(ucode);
+ release_firmware(firmware);
+#endif
+
+ priv->ieee->iw_mode = wrqu->mode;
+ ipw_adapter_restart(priv);
+
+ return err;
+}
+
+static int ipw_wx_get_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ wrqu->mode = priv->ieee->iw_mode;
+ IPW_DEBUG_WX("Get MODE -> %d\n", wrqu->mode);
+
+ return 0;
+}
+
+
+#define DEFAULT_FRAG_THRESHOLD 2342U
+#define MIN_FRAG_THRESHOLD 256U
+#define MAX_FRAG_THRESHOLD 2342U
+#define DEFAULT_RTS_THRESHOLD 2304U
+#define MIN_RTS_THRESHOLD 1U
+#define MAX_RTS_THRESHOLD 2304U
+#define DEFAULT_BEACON_INTERVAL 100U
+#define DEFAULT_SHORT_RETRY_LIMIT 7U
+#define DEFAULT_LONG_RETRY_LIMIT 4U
+
+/* Values are in microseconds */
+static const s32 timeout_duration[] = {
+ 350000,
+ 250000,
+ 75000,
+ 37000,
+ 25000,
+};
+
+static const s32 period_duration[] = {
+ 400000,
+ 700000,
+ 1000000,
+ 1000000,
+ 1000000
+};
+
+static int ipw_wx_get_range(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ struct iw_range *range = (struct iw_range *)extra;
+ u16 val;
+ int i;
+
+ wrqu->data.length = sizeof(*range);
+ memset(range, 0, sizeof(*range));
+
+ /* 54 Mb/s signaling yields roughly 27 Mb/s real throughput (802.11g) */
+ range->throughput = 27 * 1000 * 1000;
+
+ range->max_qual.qual = 100;
+ /* TODO: Find real max RSSI and stick here */
+ range->max_qual.level = 0;
+ range->max_qual.noise = 0;
+ range->max_qual.updated = 7; /* Updated all three */
+
+ range->avg_qual.qual = 70; /* > 8% missed beacons is 'bad' */
+ /* TODO: Find real 'good' to 'bad' threshold value for RSSI */
+ range->avg_qual.level = 0; /* FIXME to real average level */
+ range->avg_qual.noise = 0;
+ range->avg_qual.updated = 7; /* Updated all three */
+
+ range->num_bitrates = min(priv->rates.num_rates, (u8)IW_MAX_BITRATES);
+
+ for (i = 0; i < range->num_bitrates; i++)
+ range->bitrate[i] = (priv->rates.supported_rates[i] & 0x7F) *
+ 500000;
+
+ range->max_rts = DEFAULT_RTS_THRESHOLD;
+ range->min_frag = MIN_FRAG_THRESHOLD;
+ range->max_frag = MAX_FRAG_THRESHOLD;
+
+ range->encoding_size[0] = 5;
+ range->encoding_size[1] = 13; /* Different token sizes */
+ range->num_encoding_sizes = 2; /* Number of entries in the list */
+ range->max_encoding_tokens = WEP_KEYS; /* Max number of tokens */
+
+ /* Set the Wireless Extension versions */
+ range->we_version_compiled = WIRELESS_EXT;
+ range->we_version_source = 16;
+
+ range->num_channels = FREQ_COUNT;
+
+ val = 0;
+ for (i = 0; i < FREQ_COUNT; i++) {
+ range->freq[val].i = i + 1;
+ range->freq[val].m = ipw_frequencies[i] * 100000;
+ range->freq[val].e = 1;
+ val++;
+
+ if (val == IW_MAX_FREQUENCIES)
+ break;
+ }
+ range->num_frequency = val;
+
+ IPW_DEBUG_WX("GET Range\n");
+ return 0;
+}
+
+static int ipw_wx_set_wap(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ static const unsigned char any[] = {
+ 0xff, 0xff, 0xff, 0xff, 0xff, 0xff
+ };
+ static const unsigned char off[] = {
+ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00
+ };
+
+ if (wrqu->ap_addr.sa_family != ARPHRD_ETHER)
+ return -EINVAL;
+
+ if (!memcmp(any, wrqu->ap_addr.sa_data, ETH_ALEN) ||
+ !memcmp(off, wrqu->ap_addr.sa_data, ETH_ALEN)) {
+ /* we disable mandatory BSSID association */
+ IPW_DEBUG_WX("Setting AP BSSID to ANY\n");
+ priv->config &= ~CFG_STATIC_BSSID;
+ if (!(priv->status & (STATUS_SCANNING | STATUS_ASSOCIATED |
+ STATUS_ASSOCIATING))) {
+ IPW_DEBUG_ASSOC("Attempting to associate with new "
+ "parameters.\n");
+ ipw_associate(priv);
+ }
+
+ return 0;
+ }
+
+ priv->config |= CFG_STATIC_BSSID;
+ if (!memcmp(priv->bssid, wrqu->ap_addr.sa_data, ETH_ALEN)) {
+ IPW_DEBUG_WX("BSSID set to current BSSID.\n");
+ return 0;
+ }
+
+ IPW_DEBUG_WX("Setting mandatory BSSID to " MAC_FMT "\n",
+ MAC_ARG(wrqu->ap_addr.sa_data));
+
+ memcpy(priv->bssid, wrqu->ap_addr.sa_data, ETH_ALEN);
+
+ /* If we are currently associated, or trying to associate
+ * then see if this is a new BSSID (causing us to disassociate) */
+ if (priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ IPW_DEBUG_ASSOC("Disassociating due to BSSID change.\n");
+ ipw_disassociate(priv);
+ }
+
+ return 0;
+}
+
+static int ipw_wx_get_wap(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ /* If we are associated, trying to associate, or have a statically
+ * configured BSSID then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_BSSID ||
+ priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ wrqu->ap_addr.sa_family = ARPHRD_ETHER;
+ memcpy(wrqu->ap_addr.sa_data, &priv->bssid, ETH_ALEN);
+ } else
+ memset(wrqu->ap_addr.sa_data, 0, ETH_ALEN);
+
+ IPW_DEBUG_WX("Getting WAP BSSID: " MAC_FMT "\n",
+ MAC_ARG(wrqu->ap_addr.sa_data));
+ return 0;
+}
+
+static int ipw_wx_set_essid(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ char *essid = ""; /* ANY */
+ int length = 0;
+
+ if (wrqu->essid.flags && wrqu->essid.length) {
+ length = wrqu->essid.length - 1;
+ essid = extra;
+ }
+ if (length == 0) {
+ IPW_DEBUG_WX("Setting ESSID to ANY\n");
+ priv->config &= ~CFG_STATIC_ESSID;
+ if (!(priv->status & (STATUS_SCANNING | STATUS_ASSOCIATED |
+ STATUS_ASSOCIATING))) {
+ IPW_DEBUG_ASSOC("Attempting to associate with new "
+ "parameters.\n");
+ ipw_associate(priv);
+ }
+
+ return 0;
+ }
+
+ length = min(length, IW_ESSID_MAX_SIZE);
+
+ priv->config |= CFG_STATIC_ESSID;
+
+ if (priv->essid_len == length && !memcmp(priv->essid, extra, length)) {
+ IPW_DEBUG_WX("ESSID set to current ESSID.\n");
+ return 0;
+ }
+
+ IPW_DEBUG_WX("Setting ESSID: '%s' (%d)\n", escape_essid(essid, length),
+ length);
+
+ priv->essid_len = length;
+ memcpy(priv->essid, essid, priv->essid_len);
+
+ /* If we are currently associated, or trying to associate
+ * then see if this is a new ESSID (causing us to disassociate) */
+ if (priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ IPW_DEBUG_ASSOC("Disassociating due to ESSID change.\n");
+ ipw_disassociate(priv);
+ }
+
+ return 0;
+}
+
+static int ipw_wx_get_essid(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ /* If we are associated, trying to associate, or have a statically
+ * configured ESSID then return that; otherwise return ANY */
+ if (priv->config & CFG_STATIC_ESSID ||
+ priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ IPW_DEBUG_WX("Getting essid: '%s'\n",
+ escape_essid(priv->essid, priv->essid_len));
+ memcpy(extra, priv->essid, priv->essid_len);
+ wrqu->essid.length = priv->essid_len;
+ wrqu->essid.flags = 1; /* active */
+ } else {
+ IPW_DEBUG_WX("Getting essid: ANY\n");
+ wrqu->essid.length = 0;
+ wrqu->essid.flags = 0; /* active */
+ }
+
+ return 0;
+}
+
+static int ipw_wx_set_nick(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ IPW_DEBUG_WX("Setting nick to '%s'\n", extra);
+ if (wrqu->data.length > IW_ESSID_MAX_SIZE)
+ return -E2BIG;
+
+ wrqu->data.length = min((size_t)wrqu->data.length, sizeof(priv->nick));
+ memset(priv->nick, 0, sizeof(priv->nick));
+ memcpy(priv->nick, extra, wrqu->data.length);
+ IPW_DEBUG_TRACE("<<\n");
+ return 0;
+}
+
+
+static int ipw_wx_get_nick(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ IPW_DEBUG_WX("Getting nick\n");
+ wrqu->data.length = strlen(priv->nick) + 1;
+ memcpy(extra, priv->nick, wrqu->data.length);
+ wrqu->data.flags = 1; /* active */
+ return 0;
+}
+
+
+static int ipw_wx_set_rate(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ IPW_DEBUG_WX("0x%p, 0x%p, 0x%p\n", dev, info, wrqu);
+ return -EOPNOTSUPP;
+}
+
+static inline u32 ipw_get_max_rate(struct ipw_priv *priv)
+{
+ u32 i = 0x80000000;
+ u32 mask = priv->rates_mask;
+ /* If currently associated in B mode, restrict the maximum
+ * rate match to B rates */
+ if (priv->ieee->mode == IEEE_B)
+ mask &= IEEE80211_CCK_RATES_MASK;
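+ /* walk down from bit 31 to the highest rate bit still set in mask */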
+ while (i && !(mask & i))
+ i >>= 1;
+ switch (i) {
+ case IEEE80211_CCK_RATE_1MB_MASK: return IPW_TX_RATE_1MB;
+ case IEEE80211_CCK_RATE_2MB_MASK: return IPW_TX_RATE_2MB;
+ case IEEE80211_CCK_RATE_5MB_MASK: return IPW_TX_RATE_5MB;
+ case IEEE80211_OFDM_RATE_6MB_MASK: return IPW_TX_RATE_6MB;
+ case IEEE80211_OFDM_RATE_9MB_MASK: return IPW_TX_RATE_9MB;
+ case IEEE80211_CCK_RATE_11MB_MASK: return IPW_TX_RATE_11MB;
+ case IEEE80211_OFDM_RATE_12MB_MASK: return IPW_TX_RATE_12MB;
+ case IEEE80211_OFDM_RATE_18MB_MASK: return IPW_TX_RATE_18MB;
+ case IEEE80211_OFDM_RATE_24MB_MASK: return IPW_TX_RATE_24MB;
+ case IEEE80211_OFDM_RATE_36MB_MASK: return IPW_TX_RATE_36MB;
+ case IEEE80211_OFDM_RATE_48MB_MASK: return IPW_TX_RATE_48MB;
+ case IEEE80211_OFDM_RATE_54MB_MASK: return IPW_TX_RATE_54MB;
+ }
+
+ if (priv->ieee->mode == IEEE_B)
+ return IPW_TX_RATE_11MB;
+ else
+ return IPW_TX_RATE_54MB;
+}
+
+static u32 ipw_get_current_rate(struct ipw_priv *priv)
+{
+ u32 rate, len = sizeof(rate);
+ int err;
+
+ if (!(priv->status & STATUS_ASSOCIATED))
+ return 0;
+
+ if (priv->tx_packets > IPW_REAL_RATE_RX_PACKET_THRESHOLD) {
+ err = ipw_get_ordinal(priv, IPW_ORD_STAT_TX_CURR_RATE, &rate,
+ &len);
+ if (err) {
+ IPW_DEBUG_INFO("failed querying ordinals.\n");
+ return 0;
+ }
+ } else {
+ IPW_DEBUG_WX("Defaulting to MAX rate\n");
+ rate = ipw_get_max_rate(priv);
+ }
+
+ switch (rate) {
+ case IPW_TX_RATE_1MB: return 1000000;
+ case IPW_TX_RATE_2MB: return 2000000;
+ case IPW_TX_RATE_5MB: return 5500000;
+ case IPW_TX_RATE_6MB: return 6000000;
+ case IPW_TX_RATE_9MB: return 9000000;
+ case IPW_TX_RATE_11MB: return 11000000;
+ case IPW_TX_RATE_12MB: return 12000000;
+ case IPW_TX_RATE_18MB: return 18000000;
+ case IPW_TX_RATE_24MB: return 24000000;
+ case IPW_TX_RATE_36MB: return 36000000;
+ case IPW_TX_RATE_48MB: return 48000000;
+ case IPW_TX_RATE_54MB: return 54000000;
+ }
+
+ return 0;
+}
+
+static int ipw_wx_get_rate(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ wrqu->bitrate.value = ipw_get_current_rate(netdev_priv(dev));
+
+ IPW_DEBUG_WX("GET Rate -> %d \n", wrqu->bitrate.value);
+ return 0;
+}
+
+
+static int ipw_wx_set_rts(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ if (wrqu->rts.disabled)
+ priv->rts_threshold = DEFAULT_RTS_THRESHOLD;
+ else {
+ if (wrqu->rts.value < MIN_RTS_THRESHOLD ||
+ wrqu->rts.value > MAX_RTS_THRESHOLD)
+ return -EINVAL;
+
+ priv->rts_threshold = wrqu->rts.value;
+ }
+
+ ipw_send_rts_threshold(priv, priv->rts_threshold);
+ IPW_DEBUG_WX("SET RTS Threshold -> %d \n", priv->rts_threshold);
+ return 0;
+}
+
+static int ipw_wx_get_rts(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ wrqu->rts.value = priv->rts_threshold;
+ wrqu->rts.fixed = 0; /* no auto select */
+ wrqu->rts.disabled =
+ (wrqu->rts.value == DEFAULT_RTS_THRESHOLD);
+
+ IPW_DEBUG_WX("GET RTS Threshold -> %d \n", wrqu->rts.value);
+ return 0;
+}
+
+
+static int ipw_wx_set_txpow(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ struct ipw_tx_power tx_power;
+ int i;
+
+ if (ipw_radio_kill_sw(priv, wrqu->power.disabled))
+ return -EINPROGRESS;
+
+ if (wrqu->power.flags != IW_TXPOW_DBM)
+ return -EINVAL;
+
+ if ((wrqu->power.value > 20) ||
+ (wrqu->power.value < -12))
+ return -EINVAL;
+
+ priv->tx_power = wrqu->power.value;
+
+ memset(&tx_power, 0, sizeof(tx_power));
+
+ /* configure device for 'G' band */
+ tx_power.ieee_mode = IEEE_G;
+ tx_power.num_channels = 11;
+ for (i = 0; i < 11; i++) {
+ tx_power.channels_tx_power[i].channel_number = i + 1;
+ tx_power.channels_tx_power[i].tx_power = priv->tx_power;
+ }
+ if (ipw_send_tx_power(priv, &tx_power))
+ goto error;
+
+ /* configure device to also handle 'B' band */
+ tx_power.ieee_mode = IEEE_B;
+ if (ipw_send_tx_power(priv, &tx_power))
+ goto error;
+
+ return 0;
+
+ error:
+ return -EIO;
+}
+
+
+static int ipw_wx_get_txpow(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ wrqu->power.value = priv->tx_power;
+ wrqu->power.fixed = 1;
+ wrqu->power.flags = IW_TXPOW_DBM;
+ wrqu->power.disabled = 0;
+
+ IPW_DEBUG_WX("GET TX Power -> %d \n", wrqu->power.value);
+
+ return 0;
+}
+
+static int ipw_wx_set_frag(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ if (wrqu->frag.disabled)
+ priv->ieee->fts = DEFAULT_FRAG_THRESHOLD;
+ else {
+ if (wrqu->frag.value < MIN_FRAG_THRESHOLD ||
+ wrqu->frag.value > MAX_FRAG_THRESHOLD)
+ return -EINVAL;
+
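+ /* the fragmentation threshold must be even; clear bit 0 */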
+ priv->ieee->fts = wrqu->frag.value & ~0x1;
+ }
+
+ ipw_send_frag_threshold(priv, priv->ieee->fts);
+ IPW_DEBUG_WX("SET Frag Threshold -> %d \n", priv->ieee->fts);
+ return 0;
+}
+
+static int ipw_wx_get_frag(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ wrqu->frag.value = priv->ieee->fts;
+ wrqu->frag.fixed = 0; /* no auto select */
+ wrqu->frag.disabled =
+ (wrqu->frag.value == DEFAULT_FRAG_THRESHOLD);
+
+ IPW_DEBUG_WX("GET Frag Threshold -> %d \n", wrqu->frag.value);
+
+ return 0;
+}
+
+static int ipw_wx_set_retry(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ IPW_DEBUG_WX("0x%p, 0x%p, 0x%p\n", dev, info, wrqu);
+ return -EOPNOTSUPP;
+}
+
+
+static int ipw_wx_get_retry(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ IPW_DEBUG_WX("0x%p, 0x%p, 0x%p\n", dev, info, wrqu);
+ return -EOPNOTSUPP;
+}
+
+
+static int ipw_wx_set_scan(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ IPW_DEBUG_WX("Start scan\n");
+ if (ipw_request_scan(priv))
+ return -EIO;
+ return 0;
+}
+
+static int ipw_wx_get_scan(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_get_scan(priv->ieee, info, wrqu, extra);
+}
+
+static int ipw_wx_set_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_set_encode(priv->ieee, info, wrqu, key);
+}
+
+static int ipw_wx_get_encode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *key)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ return ieee80211_wx_get_encode(priv->ieee, info, wrqu, key);
+}
+
+static int ipw_wx_set_power(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int err;
+
+ if (wrqu->power.disabled) {
+ priv->power_mode = IPW_POWER_LEVEL(priv->power_mode);
+ err = ipw_send_power_mode(priv, IPW_POWER_MODE_CAM);
+ if (err) {
+ IPW_DEBUG_WX("failed setting power mode.\n");
+ return err;
+ }
+
+ IPW_DEBUG_WX("SET Power Management Mode -> off\n");
+
+ return 0;
+ }
+
+ switch (wrqu->power.flags & IW_POWER_MODE) {
+ case IW_POWER_ON: /* If not specified */
+ case IW_POWER_MODE: /* If set all mask */
+ case IW_POWER_ALL_R: /* If explicitly stated all */
+ break;
+ default: /* Otherwise we don't support it */
+ IPW_DEBUG_WX("SET PM Mode: %X not supported.\n",
+ wrqu->power.flags);
+ return -EOPNOTSUPP;
+ }
+
+ /* If the user hasn't specified a power management mode yet, default
+ * to BATTERY */
+ if (IPW_POWER_LEVEL(priv->power_mode) == IPW_POWER_AC)
+ priv->power_mode = IPW_POWER_ENABLED | IPW_POWER_BATTERY;
+ else
+ priv->power_mode = IPW_POWER_ENABLED | priv->power_mode;
+ err = ipw_send_power_mode(priv, IPW_POWER_LEVEL(priv->power_mode));
+ if (err) {
+ IPW_DEBUG_WX("failed setting power mode.\n");
+ return err;
+ }
+
+ IPW_DEBUG_WX("SET Power Management Mode -> 0x%02X\n",
+ priv->power_mode);
+
+ return 0;
+}
+
+static int ipw_wx_get_power(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ if (!(priv->power_mode & IPW_POWER_ENABLED)) {
+ wrqu->power.disabled = 1;
+ } else {
+ wrqu->power.disabled = 0;
+ }
+
+ IPW_DEBUG_WX("GET Power Management Mode -> %02X\n", priv->power_mode);
+
+ return 0;
+}
+
+static int ipw_wx_set_powermode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int mode = *(int *)extra;
+ int err;
+
+ if ((mode < 1) || (mode > IPW_POWER_LIMIT)) {
+ mode = IPW_POWER_AC;
+ priv->power_mode = mode;
+ } else {
+ priv->power_mode = IPW_POWER_ENABLED | mode;
+ }
+
+ if (priv->power_mode != mode) {
+ err = ipw_send_power_mode(priv, mode);
+
+ if (err) {
+ IPW_DEBUG_WX("failed setting power mode.\n");
+ return err;
+ }
+ }
+
+ return 0;
+}
+
+#define MAX_WX_STRING 80
+static int ipw_wx_get_powermode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int level = IPW_POWER_LEVEL(priv->power_mode);
+ char *p = extra;
+
+ p += snprintf(p, MAX_WX_STRING, "Power save level: %d ", level);
+
+ switch (level) {
+ case IPW_POWER_AC:
+ p += snprintf(p, MAX_WX_STRING - (p - extra), "(AC)");
+ break;
+ case IPW_POWER_BATTERY:
+ p += snprintf(p, MAX_WX_STRING - (p - extra), "(BATTERY)");
+ break;
+ default:
+ p += snprintf(p, MAX_WX_STRING - (p - extra),
+ "(Timeout %dms, Period %dms)",
+ timeout_duration[level - 1] / 1000,
+ period_duration[level - 1] / 1000);
+ }
+
+ if (!(priv->power_mode & IPW_POWER_ENABLED))
+ p += snprintf(p, MAX_WX_STRING - (p - extra), " OFF");
+
+ wrqu->data.length = p - extra + 1;
+
+ return 0;
+}
+
+static int ipw_wx_set_wireless_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int mode = *(int *)extra;
+ u8 band = 0, modulation = 0;
+
+ if (mode == 0 || mode & ~IEEE_MASK) {
+ IPW_WARNING("Attempt to set invalid wireless mode: %d\n",
+ mode);
+ return -EINVAL;
+ }
+
+ if (priv->adapter == IPW_2915ABG) {
+ priv->ieee->abg_ture = 1;
+ if (mode & BIT(IEEE_A)) {
+ band |= IEEE80211_52GHZ_BAND;
+ modulation |= IEEE80211_OFDM_MODULATION;
+ } else
+ priv->ieee->abg_ture = 0;
+ } else {
+ if (mode & BIT(IEEE_A)) {
+ IPW_WARNING("Attempt to set 2200BG into "
+ "802.11a mode\n");
+ return -EINVAL;
+ }
+
+ priv->ieee->abg_ture = 0;
+ }
+
+ if (mode & BIT(IEEE_B)) {
+ band |= IEEE80211_24GHZ_BAND;
+ modulation |= IEEE80211_CCK_MODULATION;
+ } else
+ priv->ieee->abg_ture = 0;
+
+ if (mode & BIT(IEEE_G)) {
+ band |= IEEE80211_24GHZ_BAND;
+ modulation |= IEEE80211_OFDM_MODULATION;
+ } else
+ priv->ieee->abg_ture = 0;
+
+ priv->ieee->freq_band = band;
+ priv->ieee->modulation = modulation;
+ init_supported_rates(priv, &priv->rates);
+
+ /* If we are currently associated, or trying to associate
+ * then see if this is a new configuration (causing us to
+ * disassociate) */
+ if (priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) {
+ /* The resulting association will trigger
+ * the new rates to be sent to the device */
+ IPW_DEBUG_ASSOC("Disassociating due to mode change.\n");
+ ipw_disassociate(priv);
+ } else
+ ipw_send_supported_rates(priv, &priv->rates);
+
+ IPW_DEBUG_WX("PRIV SET MODE: %c%c%c\n",
+ mode & BIT(IEEE_A) ? 'a' : '.',
+ mode & BIT(IEEE_B) ? 'b' : '.',
+ mode & BIT(IEEE_G) ? 'g' : '.');
+ return 0;
+}
+
+static int ipw_wx_get_wireless_mode(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ switch (priv->ieee->freq_band) {
+ case IEEE80211_24GHZ_BAND:
+ switch (priv->ieee->modulation) {
+ case IEEE80211_CCK_MODULATION:
+ strncpy(extra, "802.11b (2)", MAX_WX_STRING);
+ break;
+ case IEEE80211_OFDM_MODULATION:
+ strncpy(extra, "802.11g (4)", MAX_WX_STRING);
+ break;
+ default:
+ strncpy(extra, "802.11bg (6)", MAX_WX_STRING);
+ break;
+ }
+ break;
+
+ case IEEE80211_52GHZ_BAND:
+ strncpy(extra, "802.11a (1)", MAX_WX_STRING);
+ break;
+
+ default: /* Mixed Band */
+ switch (priv->ieee->modulation) {
+ case IEEE80211_CCK_MODULATION:
+ strncpy(extra, "802.11ab (3)", MAX_WX_STRING);
+ break;
+ case IEEE80211_OFDM_MODULATION:
+ strncpy(extra, "802.11ag (5)", MAX_WX_STRING);
+ break;
+ default:
+ strncpy(extra, "802.11abg (7)", MAX_WX_STRING);
+ break;
+ }
+ break;
+ }
+
+ IPW_DEBUG_WX("PRIV GET MODE: %s\n", extra);
+
+ wrqu->data.length = strlen(extra) + 1;
+
+ return 0;
+}
+
+#ifdef CONFIG_IPW_PROMISC
+static int ipw_wx_set_promisc(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ int *parms = (int *)extra;
+ int enable = (parms[0] > 0);
+
+ IPW_DEBUG_WX("SET PROMISC: %d %d\n", enable, parms[1]);
+ if (enable) {
+ if (priv->ieee->iw_mode != IW_MODE_MONITOR) {
+ priv->net_dev->type = ARPHRD_IEEE80211;
+ ipw_adapter_restart(priv);
+ }
+
+ ipw_set_channel(priv, parms[1]);
+ } else {
+ if (priv->ieee->iw_mode != IW_MODE_MONITOR)
+ return 0;
+ priv->net_dev->type = ARPHRD_ETHER;
+ ipw_adapter_restart(priv);
+ }
+ return 0;
+}
+
+
+static int ipw_wx_reset(struct net_device *dev,
+ struct iw_request_info *info,
+ union iwreq_data *wrqu, char *extra)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ IPW_DEBUG_WX("RESET\n");
+ ipw_adapter_restart(priv);
+ return 0;
+}
+#endif /* CONFIG_IPW_PROMISC */
+
+/* Rebase the WE IOCTLs to zero for the handler array */
+#define IW_IOCTL(x) [(x)-SIOCSIWCOMMIT]
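+/* e.g. IW_IOCTL(SIOCGIWNAME) expands to [SIOCGIWNAME - SIOCSIWCOMMIT],
+ * i.e. index 1, since SIOCSIWCOMMIT (0x8B00) is the first WE ioctl */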
+static iw_handler ipw_wx_handlers[] =
+{
+ IW_IOCTL(SIOCGIWNAME) = ipw_wx_get_name,
+ IW_IOCTL(SIOCSIWFREQ) = ipw_wx_set_freq,
+ IW_IOCTL(SIOCGIWFREQ) = ipw_wx_get_freq,
+ IW_IOCTL(SIOCSIWMODE) = ipw_wx_set_mode,
+ IW_IOCTL(SIOCGIWMODE) = ipw_wx_get_mode,
+ IW_IOCTL(SIOCGIWRANGE) = ipw_wx_get_range,
+ IW_IOCTL(SIOCSIWAP) = ipw_wx_set_wap,
+ IW_IOCTL(SIOCGIWAP) = ipw_wx_get_wap,
+ IW_IOCTL(SIOCSIWSCAN) = ipw_wx_set_scan,
+ IW_IOCTL(SIOCGIWSCAN) = ipw_wx_get_scan,
+ IW_IOCTL(SIOCSIWESSID) = ipw_wx_set_essid,
+ IW_IOCTL(SIOCGIWESSID) = ipw_wx_get_essid,
+ IW_IOCTL(SIOCSIWNICKN) = ipw_wx_set_nick,
+ IW_IOCTL(SIOCGIWNICKN) = ipw_wx_get_nick,
+ IW_IOCTL(SIOCSIWRATE) = ipw_wx_set_rate,
+ IW_IOCTL(SIOCGIWRATE) = ipw_wx_get_rate,
+ IW_IOCTL(SIOCSIWRTS) = ipw_wx_set_rts,
+ IW_IOCTL(SIOCGIWRTS) = ipw_wx_get_rts,
+ IW_IOCTL(SIOCSIWFRAG) = ipw_wx_set_frag,
+ IW_IOCTL(SIOCGIWFRAG) = ipw_wx_get_frag,
+ IW_IOCTL(SIOCSIWTXPOW) = ipw_wx_set_txpow,
+ IW_IOCTL(SIOCGIWTXPOW) = ipw_wx_get_txpow,
+ IW_IOCTL(SIOCSIWRETRY) = ipw_wx_set_retry,
+ IW_IOCTL(SIOCGIWRETRY) = ipw_wx_get_retry,
+ IW_IOCTL(SIOCSIWENCODE) = ipw_wx_set_encode,
+ IW_IOCTL(SIOCGIWENCODE) = ipw_wx_get_encode,
+ IW_IOCTL(SIOCSIWPOWER) = ipw_wx_set_power,
+ IW_IOCTL(SIOCGIWPOWER) = ipw_wx_get_power,
+};
+
+#define IPW_PRIV_SET_POWER SIOCIWFIRSTPRIV
+#define IPW_PRIV_GET_POWER SIOCIWFIRSTPRIV+1
+#define IPW_PRIV_SET_MODE SIOCIWFIRSTPRIV+2
+#define IPW_PRIV_GET_MODE SIOCIWFIRSTPRIV+3
+#define IPW_PRIV_SET_PROMISC SIOCIWFIRSTPRIV+4
+#define IPW_PRIV_RESET SIOCIWFIRSTPRIV+5
+
+
+static struct iw_priv_args ipw_priv_args[] = {
+ {
+ .cmd = IPW_PRIV_SET_POWER,
+ .set_args = IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 1,
+ .name = "set_power"
+ },
+ {
+ .cmd = IPW_PRIV_GET_POWER,
+ .get_args = IW_PRIV_TYPE_CHAR | IW_PRIV_SIZE_FIXED | MAX_WX_STRING,
+ .name = "get_power"
+ },
+ {
+ .cmd = IPW_PRIV_SET_MODE,
+ .set_args = IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 1,
+ .name = "set_mode"
+ },
+ {
+ .cmd = IPW_PRIV_GET_MODE,
+ .get_args = IW_PRIV_TYPE_CHAR | IW_PRIV_SIZE_FIXED | MAX_WX_STRING,
+ .name = "get_mode"
+ },
+#ifdef CONFIG_IPW_PROMISC
+ {
+ IPW_PRIV_SET_PROMISC,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 2, 0, "monitor"
+ },
+ {
+ IPW_PRIV_RESET,
+ IW_PRIV_TYPE_INT | IW_PRIV_SIZE_FIXED | 0, 0, "reset"
+ },
+#endif /* CONFIG_IPW_PROMISC */
+};
+
+static iw_handler ipw_priv_handler[] = {
+ ipw_wx_set_powermode,
+ ipw_wx_get_powermode,
+ ipw_wx_set_wireless_mode,
+ ipw_wx_get_wireless_mode,
+#ifdef CONFIG_IPW_PROMISC
+ ipw_wx_set_promisc,
+ ipw_wx_reset,
+#endif
+};
+
+static struct iw_handler_def ipw_wx_handler_def =
+{
+ .standard = ipw_wx_handlers,
+ .num_standard = ARRAY_SIZE(ipw_wx_handlers),
+ .num_private = ARRAY_SIZE(ipw_priv_handler),
+ .num_private_args = ARRAY_SIZE(ipw_priv_args),
+ .private = ipw_priv_handler,
+ .private_args = ipw_priv_args,
+};
+
+/*
+ * Get wireless statistics.
+ * Called by /proc/net/wireless
+ * Also called by SIOCGIWSTATS
+ */
+static struct iw_statistics *ipw_get_wireless_stats(struct net_device * dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ struct iw_statistics *wstats;
+ u32 rssi, quality, missed_beacons, tx_failures;/*, tx_retry;*/
+ u32 len = sizeof(u32);
+
+ u32 rate, max_rate, rx_err;
+
+ /* If we don't have a connection, the quality and level are 0 */
+ if (!(priv->status & STATUS_ASSOCIATED))
+ return (struct iw_statistics *) NULL;
+
+ wstats = &priv->wstats;
+
+/*
+ * quality = (current rate / max band rate) * 40 +
+ *           (1 - Rx CRC errors / total Rx frames) * 60
+ */
+ rate = ipw_get_current_rate(priv);
+ max_rate = (priv->assoc_request.ieee_mode == IEEE_B) ?
+ 11000000 : 54000000;
+
+ ipw_get_ordinal(priv, IPW_ORD_STAT_RX_ERR_CRC, &rx_err, &len);
+ quality = (rate * 40) / max_rate;
+ /* TODO: Use a cached copy of the Rx stats that can be periodically
+ * reset -- otherwise the quality level will turn static */
+ if (priv->ieee->stats.rx_packets + rx_err)
+ quality += 60 - (rx_err * 60) /
+ (priv->ieee->stats.rx_packets + rx_err);
+ else
+ quality += 60;
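+ /* e.g. 54Mb/s on a pure-G link (max 54Mb/s) with 50 CRC errors in
+ * 1000 received frames: 40 + (60 - 50*60/1000) = 40 + 57 = 97 */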
+
+ wstats->qual.qual = quality;
+
+ ipw_get_ordinal(priv, IPW_ORD_STAT_CURR_RSSI_RAW, &rssi, &len);
+ wstats->qual.level = priv->last_rx_rssi; /*rssi + IPW_RSSI_TO_DBM;*/
+ wstats->qual.updated = IW_QUAL_QUAL_UPDATED|IW_QUAL_LEVEL_UPDATED;
+
+ /* current fw does not support noise statistics */
+ wstats->qual.updated |= IW_QUAL_NOISE_INVALID;
+
+ if (ipw_get_ordinal(priv, IPW_ORD_STAT_MISSED_BEACONS,
+ &missed_beacons, &len))
+ goto fail_get_ordinal;
+
+ wstats->miss.beacon = missed_beacons;
+
+ if (ipw_get_ordinal(priv, IPW_ORD_STAT_TX_FAILURE,
+ &tx_failures, &len))
+ goto fail_get_ordinal;
+ wstats->discard.retries = tx_failures;
+
+/* if (ipw_get_ordinal(priv, IPW_ORD_STAT_TX_RETRY, &tx_retry, &len))
+ goto fail_get_ordinal;
+ wstats->discard.retries += tx_retry; */
+
+ return wstats;
+
+ fail_get_ordinal:
+ IPW_DEBUG_WX("failed querying ordinals.\n");
+
+ return NULL;
+}
+
+
+/* net device stuff */
+
+static inline void init_sys_config(struct ipw_sys_config *sys_config)
+{
+ memset(sys_config, 0, sizeof(struct ipw_sys_config));
+ sys_config->bt_coexistence = 1; /* We may need to look into prvStaBtConfig */
+ sys_config->answer_broadcast_ssid_probe = 0;
+ sys_config->accept_all_data_frames = 0;
+ sys_config->accept_non_directed_frames = 1;
+ sys_config->exclude_unicast_unencrypted = 0;
+ sys_config->disable_unicast_decryption = 1;
+ sys_config->exclude_multicast_unencrypted = 0;
+ sys_config->disable_multicast_decryption = 1;
+ sys_config->antenna_diversity = CFG_SYS_ANTENNA_BOTH;
+ sys_config->pass_crc_to_host = 0; /* TODO: See if 1 gives us FCS */
+ sys_config->dot11g_auto_detection = 0;
+ sys_config->enable_cts_to_self = 0;
+ sys_config->bt_coexist_collision_thr = 0;
+ sys_config->pass_noise_stats_to_host = 1;
+}
+
+static int ipw_net_open(struct net_device *dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ IPW_DEBUG_INFO("dev->open\n");
+ /* we should be verifying the device is ready to be opened */
+ if (!(priv->status & STATUS_RF_KILL_MASK) &&
+ (priv->status & STATUS_ASSOCIATED))
+ netif_start_queue(dev);
+ return 0;
+}
+
+static int ipw_net_stop(struct net_device *dev)
+{
+ IPW_DEBUG_INFO("dev->close\n");
+ netif_stop_queue(dev);
+ return 0;
+}
+
+static int ipw_net_hard_start_xmit(struct sk_buff *skb,
+ struct net_device *dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ unsigned long flags;
+ int err;
+
+ IPW_DEBUG_TX("dev->xmit(%d bytes)\n", skb->len);
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (!(priv->status & STATUS_ASSOCIATED)) {
+ IPW_DEBUG_INFO("Tx attempt while not associated.\n");
+ priv->ieee->stats.tx_carrier_errors++;
+ netif_stop_queue(dev);
+ goto fail_unlock;
+ }
+
+ err = ipw_tx_skb(priv, skb);
+ if (err) {
+ priv->ieee->stats.tx_errors++;
+ goto fail_unlock;
+ }
+
+ spin_unlock_irqrestore(&priv->lock, flags);
+ return 0;
+
+ fail_unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
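+ /* a non-zero return tells the stack the device is busy and the
+ * skb should be requeued */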
+ return 1;
+}
+
+static struct net_device_stats *ipw_net_get_stats(struct net_device *dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ return &priv->ieee->stats;
+}
+
+static void ipw_net_set_multicast_list(struct net_device *dev)
+{
+
+}
+
+static int ipw_net_set_mac_address(struct net_device *dev, void *p)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ struct sockaddr *addr = p;
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+ priv->config |= CFG_CUSTOM_MAC;
+ memcpy(priv->mac_addr, addr->sa_data, ETH_ALEN);
+ printk(KERN_INFO "%s: Setting MAC to " MAC_FMT "\n",
+ priv->net_dev->name, MAC_ARG(priv->mac_addr));
+ ipw_adapter_restart(priv);
+ return 0;
+}
+
+static void ipw_ethtool_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct ipw_priv *p = netdev_priv(dev);
+ char vers[64];
+ char date[32];
+ u32 len;
+
+ strcpy(info->driver, DRV_NAME);
+ strcpy(info->version, DRV_VERSION);
+
+ len = sizeof(vers);
+ ipw_get_ordinal(p, IPW_ORD_STAT_FW_VERSION, vers, &len);
+ len = sizeof(date);
+ ipw_get_ordinal(p, IPW_ORD_STAT_FW_DATE, date, &len);
+
+ snprintf(info->fw_version, sizeof(info->fw_version),"%s (%s)",
+ vers, date);
+ strcpy(info->bus_info, pci_name(p->pci_dev));
+ info->eedump_len = CX2_EEPROM_IMAGE_SIZE;
+}
+
+static u32 ipw_ethtool_get_link(struct net_device *dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+ return (priv->status & STATUS_ASSOCIATED) != 0;
+}
+
+static int ipw_ethtool_get_eeprom_len(struct net_device *dev)
+{
+ return CX2_EEPROM_IMAGE_SIZE;
+}
+
+static int ipw_ethtool_get_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct ipw_priv *p = netdev_priv(dev);
+
+ if (eeprom->offset + eeprom->len > CX2_EEPROM_IMAGE_SIZE)
+ return -EINVAL;
+
+ memcpy(bytes, &((u8 *)p->eeprom)[eeprom->offset], eeprom->len);
+ return 0;
+}
+
+static int ipw_ethtool_set_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct ipw_priv *p = netdev_priv(dev);
+ int i;
+
+ if (eeprom->offset + eeprom->len > CX2_EEPROM_IMAGE_SIZE)
+ return -EINVAL;
+
+ memcpy(&((u8 *)p->eeprom)[eeprom->offset], bytes, eeprom->len);
+ for (i = IPW_EEPROM_DATA;
+ i < IPW_EEPROM_DATA + CX2_EEPROM_IMAGE_SIZE;
+ i++)
+ ipw_write8(p, i, p->eeprom[i]);
+
+ return 0;
+}
+
+static struct ethtool_ops ipw_ethtool_ops = {
+ .get_link = ipw_ethtool_get_link,
+ .get_drvinfo = ipw_ethtool_get_drvinfo,
+ .get_eeprom_len = ipw_ethtool_get_eeprom_len,
+ .get_eeprom = ipw_ethtool_get_eeprom,
+ .set_eeprom = ipw_ethtool_set_eeprom,
+};
+
+static irqreturn_t ipw_isr(int irq, void *data, struct pt_regs *regs)
+{
+ struct ipw_priv *priv = data;
+ u32 inta, inta_mask;
+
+ if (!priv)
+ return IRQ_NONE;
+
+ spin_lock(&priv->lock);
+
+ if (!(priv->status & STATUS_INT_ENABLED)) {
+ /* Shared IRQ */
+ goto none;
+ }
+
+ inta = ipw_read32(priv, CX2_INTA_RW);
+ inta_mask = ipw_read32(priv, CX2_INTA_MASK_R);
+
+ if (inta == 0xFFFFFFFF) {
+ /* Hardware disappeared */
+ IPW_WARNING("IRQ INTA == 0xFFFFFFFF\n");
+ goto none;
+ }
+
+ if (!(inta & (CX2_INTA_MASK_ALL & inta_mask))) {
+ /* Shared interrupt */
+ goto none;
+ }
+
+ /* tell the device to stop sending interrupts */
+ ipw_disable_interrupts(priv);
+
+ /* ack current interrupts */
+ inta &= (CX2_INTA_MASK_ALL & inta_mask);
+ ipw_write32(priv, CX2_INTA_RW, inta);
+
+ /* Cache INTA value for our tasklet */
+ priv->isr_inta = inta;
+
+ tasklet_schedule(&priv->irq_tasklet);
+
+ spin_unlock(&priv->lock);
+
+ return IRQ_HANDLED;
+ none:
+ spin_unlock(&priv->lock);
+ return IRQ_NONE;
+}
+
+static void ipw_rf_kill(void *adapter)
+{
+ struct ipw_priv *priv = adapter;
+ unsigned long flags;
+
+ spin_lock_irqsave(&priv->lock, flags);
+
+ if (rf_kill_active(priv)) {
+ IPW_DEBUG_RF_KILL("RF Kill active, rescheduling GPIO check\n");
+ if (priv->workqueue)
+ queue_delayed_work(priv->workqueue,
+ &priv->rf_kill, 2 * HZ);
+ goto exit_unlock;
+ }
+
+ /* RF Kill is now disabled, so bring the device back up */
+
+ if (!(priv->status & STATUS_RF_KILL_MASK)) {
+ IPW_DEBUG_RF_KILL("HW RF Kill no longer active, restarting "
+ "device\n");
+ ipw_adapter_restart(priv);
+ } else
+ IPW_DEBUG_RF_KILL("HW RF Kill deactivated. SW RF Kill still "
+ "enabled\n");
+
+ exit_unlock:
+ spin_unlock_irqrestore(&priv->lock, flags);
+}
+
+static int ipw_setup_deferred_work(struct ipw_priv *priv)
+{
+ int ret = 0;
+
+#ifdef CONFIG_SOFTWARE_SUSPEND2
+ priv->workqueue = create_workqueue(DRV_NAME, 0);
+#else
+ priv->workqueue = create_workqueue(DRV_NAME);
+#endif
+ init_waitqueue_head(&priv->wait_command_queue);
+
+ INIT_WORK(&priv->associate, ipw_associate, priv);
+ INIT_WORK(&priv->disassociate, ipw_disassociate, priv);
+ INIT_WORK(&priv->rx_replenish, ipw_rx_queue_replenish, priv);
+ INIT_WORK(&priv->adapter_restart, ipw_adapter_restart, priv);
+ INIT_WORK(&priv->rf_kill, ipw_rf_kill, priv);
+ INIT_WORK(&priv->up, (void (*)(void *))ipw_up, priv);
+ INIT_WORK(&priv->down, (void (*)(void *))ipw_down, priv);
+ INIT_WORK(&priv->request_scan,
+ (void (*)(void *))ipw_request_scan, priv);
+
+ tasklet_init(&priv->irq_tasklet, (void (*)(unsigned long))
+ ipw_irq_tasklet, (unsigned long)priv);
+
+ return ret;
+}
+
+
+static void shim__set_security(struct ieee80211_device *ieee,
+ struct ieee80211_security *sec)
+{
+ struct ipw_priv *priv = ieee->priv;
+ int i;
+
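+ /* bits 0-3 of sec->flags mark which of the four WEP keys changed */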
+ for (i = 0; i < 4; i++) {
+ if (sec->flags & (1 << i)) {
+ priv->sec.key_sizes[i] = sec->key_sizes[i];
+ if (sec->key_sizes[i] == 0)
+ priv->sec.flags &= ~(1 << i);
+ else
+ memcpy(priv->sec.keys[i], sec->keys[i],
+ sec->key_sizes[i]);
+ priv->sec.flags |= (1 << i);
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+ }
+
+ if ((sec->flags & SEC_ACTIVE_KEY) &&
+ priv->sec.active_key != sec->active_key) {
+ if (sec->active_key <= 3) {
+ priv->sec.active_key = sec->active_key;
+ priv->sec.flags |= SEC_ACTIVE_KEY;
+ } else
+ priv->sec.flags &= ~SEC_ACTIVE_KEY;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ if ((sec->flags & SEC_AUTH_MODE) &&
+ (priv->sec.auth_mode != sec->auth_mode)) {
+ priv->sec.auth_mode = sec->auth_mode;
+ priv->sec.flags |= SEC_AUTH_MODE;
+ if (sec->auth_mode == WLAN_AUTH_SHARED_KEY)
+ priv->capability |= CAP_SHARED_KEY;
+ else
+ priv->capability &= ~CAP_SHARED_KEY;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ if (sec->flags & SEC_ENABLED &&
+ priv->sec.enabled != sec->enabled) {
+ priv->sec.flags |= SEC_ENABLED;
+ priv->sec.enabled = sec->enabled;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ if (sec->enabled)
+ priv->capability |= CAP_PRIVACY_ON;
+ else
+ priv->capability &= ~CAP_PRIVACY_ON;
+ }
+
+ if (sec->flags & SEC_LEVEL &&
+ priv->sec.level != sec->level) {
+ priv->sec.level = sec->level;
+ priv->sec.flags |= SEC_LEVEL;
+ priv->status |= STATUS_SECURITY_UPDATED;
+ }
+
+ /* If the privacy state has switched from what we are
+ * associated/ing with, then disassociate */
+ if ((priv->status & (STATUS_ASSOCIATED | STATUS_ASSOCIATING)) &&
+ (((priv->assoc_request.capability &
+ WLAN_CAPABILITY_PRIVACY) && !sec->enabled) ||
+ (!(priv->assoc_request.capability &
+ WLAN_CAPABILITY_PRIVACY) && sec->enabled))) {
+ IPW_DEBUG_ASSOC("Disassociating due to capability "
+ "change.\n");
+ ipw_disassociate(priv);
+ }
+}
+
+static struct ieee80211_helper_functions ipw_ieee_callbacks = {
+ .set_security = shim__set_security,
+ .card_present = NULL,
+ .cor_sreset = NULL,
+ .dev_open = NULL,
+ .dev_close = NULL,
+ .genesis_reset = NULL,
+ .set_unencrypted_filter = NULL,
+ .hw_enable = NULL,
+ .hw_config = NULL,
+ .hw_reset = NULL,
+ .hw_shutdown = NULL,
+ .reset_port = NULL,
+ .tx = NULL,
+ .schedule_reset = NULL,
+ .tx_80211 = NULL
+};
+
+static int init_supported_rates(struct ipw_priv *priv,
+ struct ipw_supported_rates *rates)
+{
+ /* TODO: Mask out rates based on priv->rates_mask */
+
+ memset(rates, 0, sizeof(*rates));
+ /* configure supported rates */
+ switch (priv->ieee->freq_band) {
+ case IEEE80211_52GHZ_BAND:
+ rates->ieee_mode = IEEE_A;
+ rates->purpose = IPW_RATE_CAPABILITIES;
+ ipw_add_ofdm_scan_rates(rates, IEEE80211_CCK_MODULATION,
+ IEEE80211_OFDM_DEFAULT_RATES_MASK);
+ break;
+
+ default: /* Mixed or 2.4Ghz */
+ rates->ieee_mode = IEEE_G;
+ rates->purpose = IPW_RATE_CAPABILITIES;
+ ipw_add_cck_scan_rates(rates, IEEE80211_CCK_MODULATION,
+ IEEE80211_CCK_DEFAULT_RATES_MASK);
+ if (priv->ieee->modulation & IEEE80211_OFDM_MODULATION) {
+ ipw_add_ofdm_scan_rates(rates, IEEE80211_CCK_MODULATION,
+ IEEE80211_OFDM_DEFAULT_RATES_MASK);
+ }
+ break;
+ }
+
+ return 0;
+}
+
+static int ipw_config(struct ipw_priv *priv)
+{
+ int i;
+ struct ipw_tx_power tx_power;
+
+ memset(&priv->sys_config, 0, sizeof(priv->sys_config));
+ memset(&tx_power, 0, sizeof(tx_power));
+
+ /* This is only called from ipw_up, which resets/reloads the firmware
+ so, we don't need to first disable the card before we configure
+ it */
+
+ /* configure device for 'G' band */
+ tx_power.ieee_mode = IEEE_G;
+ tx_power.num_channels = 11;
+ for (i = 0; i < 11; i++) {
+ tx_power.channels_tx_power[i].channel_number = i + 1;
+ tx_power.channels_tx_power[i].tx_power = priv->tx_power;
+ }
+ if (ipw_send_tx_power(priv, &tx_power))
+ goto error;
+
+ /* configure device to also handle 'B' band */
+ tx_power.ieee_mode = IEEE_B;
+ if (ipw_send_tx_power(priv, &tx_power))
+ goto error;
+
+ /* initialize adapter address */
+ if (ipw_send_adapter_address(priv, priv->net_dev->dev_addr))
+ goto error;
+
+ /* set basic system config settings */
+ init_sys_config(&priv->sys_config);
+ if (ipw_send_system_config(priv, &priv->sys_config))
+ goto error;
+
+ init_supported_rates(priv, &priv->rates);
+ if (ipw_send_supported_rates(priv, &priv->rates))
+ goto error;
+
+ /* Set request-to-send threshold */
+ if (priv->rts_threshold) {
+ if (ipw_send_rts_threshold(priv, priv->rts_threshold))
+ goto error;
+ }
+
+ if (ipw_set_random_seed(priv))
+ goto error;
+
+ /* final state transition to the RUN state */
+ if (ipw_send_host_complete(priv))
+ goto error;
+
+ /* kick off an active scan */
+ if (ipw_request_scan(priv))
+ goto error;
+
+ return 0;
+
+ error:
+ return -EIO;
+}
+
+#define MAX_HW_RESTARTS 5
+static int ipw_up(struct ipw_priv *priv)
+{
+ int rc, i;
+
+ if (priv->status & STATUS_EXIT_PENDING)
+ return -EIO;
+
+ for (i = 0; i < MAX_HW_RESTARTS; i++) {
+ /* Load the microcode, firmware, and eeprom.
+ * Also start the clocks. */
+ rc = ipw_load(priv);
+ if (rc) {
+ IPW_ERROR("Unable to load firmware: 0x%08X\n",
+ rc);
+ return rc;
+ }
+
+ ipw_init_ordinals(priv);
+ if (!(priv->config & CFG_CUSTOM_MAC))
+ eeprom_parse_mac(priv, priv->mac_addr);
+ memcpy(priv->net_dev->dev_addr, priv->mac_addr, ETH_ALEN);
+
+ if (priv->status & STATUS_RF_KILL_MASK)
+ return 0;
+
+ rc = ipw_config(priv);
+ if (!rc) {
+ IPW_DEBUG_INFO("Configured device on count %i\n", i);
+
+ netif_start_queue(priv->net_dev);
+ return 0;
+ } else {
+ IPW_DEBUG_INFO("Device configuration failed: 0x%08X\n",
+ rc);
+ }
+
+ IPW_DEBUG_INFO("Failed to config device on retry %d of %d\n",
+ i, MAX_HW_RESTARTS);
+
+ /* We had an error bringing up the hardware, so take it
+ * all the way back down so we can try again */
+ ipw_down(priv);
+ }
+
+ /* tried to restart and config the device for as long as our
+ * patience could withstand */
+ IPW_ERROR("Unable to initialize device after %d attempts.\n",
+ i);
+ return -EIO;
+}
+
+static void ipw_down(struct ipw_priv *priv)
+{
+ /* tell the device to stop sending interrupts */
+ ipw_disable_interrupts(priv);
+
+ /* Clear all bits but the RF Kill */
+ priv->status &= STATUS_RF_KILL_MASK;
+
+ netif_carrier_off(priv->net_dev);
+ netif_stop_queue(priv->net_dev);
+
+ ipw_stop_nic(priv);
+}
+
+/* Called by register_netdev() */
+static int ipw_net_init(struct net_device *dev)
+{
+ struct ipw_priv *priv = netdev_priv(dev);
+
+ if (priv->status & STATUS_RF_KILL_SW) {
+ IPW_WARNING("Radio disabled by module parameter.\n");
+ return 0;
+ } else if (rf_kill_active(priv)) {
+ IPW_WARNING("Radio Frequency Kill Switch is On:\n"
+ "Kill switch must be turned off for "
+ "wireless networking to work.\n");
+ queue_delayed_work(priv->workqueue, &priv->rf_kill, 2 * HZ);
+ return 0;
+ }
+
+ if (ipw_up(priv))
+ return -EIO;
+
+ return 0;
+}
+
+/* PCI driver stuff */
+static struct pci_device_id card_ids[] = {
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2701, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2702, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2711, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2712, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2721, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2722, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2731, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2732, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2741, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x103c, 0x2741, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2742, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2751, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2752, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2753, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2754, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2761, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x1043, 0x8086, 0x2762, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x104f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ {PCI_VENDOR_ID_INTEL, 0x4220, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, /* BG */
+ {PCI_VENDOR_ID_INTEL, 0x4223, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, /* ABG */
+ {PCI_VENDOR_ID_INTEL, 0x4224, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0}, /* ABG */
+
+ /* required last entry */
+ {0,}
+};
+
+MODULE_DEVICE_TABLE(pci, card_ids);
+
+static struct attribute *ipw_sysfs_entries[] = {
+ &dev_attr_rf_kill.attr,
+ &dev_attr_direct_dword.attr,
+ &dev_attr_indirect_byte.attr,
+ &dev_attr_indirect_dword.attr,
+ &dev_attr_mem_gpio_reg.attr,
+ &dev_attr_command_event_reg.attr,
+ &dev_attr_nic_type.attr,
+ &dev_attr_status.attr,
+ &dev_attr_cfg.attr,
+ &dev_attr_dump_errors.attr,
+ &dev_attr_dump_events.attr,
+ &dev_attr_eeprom_delay.attr,
+ &dev_attr_ucode_version.attr,
+ &dev_attr_rtc.attr,
+ NULL
+};
+
+static struct attribute_group ipw_attribute_group = {
+ .name = NULL, /* put in device directory */
+ .attrs = ipw_sysfs_entries,
+};
+
+static int ipw_pci_probe(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ int err = 0;
+ struct net_device *net_dev;
+ void *base;
+ u32 length, val;
+ struct ipw_priv *priv;
+ int band, modulation;
+
+ net_dev = alloc_etherdev(sizeof(struct ipw_priv));
+ if (net_dev == NULL) {
+ err = -ENOMEM;
+ goto out;
+ }
+
+ priv = netdev_priv(net_dev);
+ priv->net_dev = net_dev;
+ priv->pci_dev = pdev;
+#ifdef CONFIG_IPW_DEBUG
+ ipw_debug_level = debug;
+#endif
+ spin_lock_init(&priv->lock);
+
+ if (pci_enable_device(pdev)) {
+ err = -ENODEV;
+ goto out_free_netdev;
+ }
+
+ pci_set_master(pdev);
+ pci_set_drvdata(pdev, priv);
+
+ err = pci_request_regions(pdev, DRV_NAME);
+ if (err)
+ goto out_pci_disable_device;
+
+ /* We disable the RETRY_TIMEOUT register (0x41) to keep
+ * PCI Tx retries from interfering with C3 CPU state */
+ pci_read_config_dword(pdev, 0x40, &val);
+ if ((val & 0x0000ff00) != 0)
+ pci_write_config_dword(pdev, 0x40, val & 0xffff00ff);
+
+ length = pci_resource_len(pdev, 0);
+ priv->hw_len = length;
+
+ base = ioremap_nocache(pci_resource_start(pdev, 0), length);
+ if (!base) {
+ err = -ENODEV;
+ goto out_pci_release_regions;
+ }
+
+ priv->hw_base = (unsigned long)base;
+ IPW_DEBUG_INFO("pci_resource_len = 0x%08x\n", length);
+ IPW_DEBUG_INFO("pci_resource_base = %p\n", base);
+
+ err = ipw_setup_deferred_work(priv);
+ if (err) {
+ IPW_ERROR("Unable to setup deferred work\n");
+ goto out_iounmap;
+ }
+
+ /* Initialize WEP */
+ priv->ieee = ieee80211_alloc(priv->net_dev, priv);
+ if (!priv->ieee) {
+ IPW_ERROR("Unable to allocate ieee80211 data\n");
+ err = -ENOMEM;
+ goto out_destroy_workqueue;
+ }
+
+ /* Initialize module parameter values here */
+ if (ifname)
+ strncpy(net_dev->name, ifname, IFNAMSIZ);
+
+ if (associate)
+ priv->config |= CFG_ASSOCIATE;
+ else
+ IPW_DEBUG_INFO("Auto associate disabled.\n");
+
+ if (adhoc_create)
+ priv->config |= CFG_ADHOC_CREATE;
+ else
+ IPW_DEBUG_INFO("Auto adhoc creation disabled.\n");
+
+ if (disable) {
+ priv->status |= STATUS_RF_KILL_SW;
+ IPW_DEBUG_INFO("Radio disabled.\n");
+ }
+
+ if (channel != 0) {
+ priv->config |= CFG_STATIC_CHANNEL;
+ priv->channel = channel;
+ IPW_DEBUG_INFO("Bind to static channel %d\n", channel);
+ IPW_DEBUG_INFO("Bind to static channel %d\n", channel);
+ /* TODO: Validate that provided channel is in range */
+ }
+
+ switch (mode) {
+ case 1:
+ priv->ieee->iw_mode = IW_MODE_ADHOC;
+ break;
+#ifdef CONFIG_IPW_PROMISC
+ case 2:
+ priv->ieee->iw_mode = IW_MODE_MONITOR;
+ break;
+#endif
+ default:
+ case 0:
+ priv->ieee->iw_mode = IW_MODE_INFRA;
+ break;
+ }
+
+ if ((priv->pci_dev->device == 0x4223) ||
+ (priv->pci_dev->device == 0x4224)) {
+ printk(KERN_INFO DRV_NAME
+ ": Detected Intel PRO/Wireless 2915ABG Network "
+ "Connection\n");
+ priv->ieee->abg_ture = 1;
+ band = IEEE80211_52GHZ_BAND | IEEE80211_24GHZ_BAND;
+ modulation = IEEE80211_OFDM_MODULATION |
+ IEEE80211_CCK_MODULATION;
+ priv->adapter = IPW_2915ABG;
+ priv->ieee->mode = IEEE_A;
+ } else {
+ printk(KERN_INFO DRV_NAME
+ ": Detected Intel PRO/Wireless 2200BG Network "
+ "Connection\n");
+ priv->ieee->abg_ture = 0;
+ band = IEEE80211_24GHZ_BAND;
+ modulation = IEEE80211_OFDM_MODULATION |
+ IEEE80211_CCK_MODULATION;
+ priv->adapter = IPW_2200BG;
+ priv->ieee->mode = IEEE_G;
+ }
+
+ priv->ieee->freq_band = band;
+ priv->ieee->modulation = modulation;
+
+ priv->rates_mask = IEEE80211_DEFAULT_RATES_MASK;
+
+ priv->missed_beacon_threshold = IPW_MB_DISASSOCIATE_THRESHOLD_DEFAULT;
+ priv->roaming_threshold = IPW_MB_ROAMING_THRESHOLD_DEFAULT;
+
+ priv->ieee->func = &ipw_ieee_callbacks;
+ priv->ieee->tx_payload_only = 1;
+
+ priv->rts_threshold = DEFAULT_RTS_THRESHOLD;
+
+ /* If power management is turned on, default to AC mode */
+ priv->power_mode = IPW_POWER_AC;
+ priv->tx_power = IPW_DEFAULT_TX_POWER;
+
+ err = request_irq(pdev->irq, ipw_isr, SA_SHIRQ, DRV_NAME,
+ priv);
+ if (err) {
+ IPW_ERROR("Error allocating IRQ %d\n", pdev->irq);
+ goto out_free_ieee80211;
+ }
+
+ SET_MODULE_OWNER(net_dev);
+ SET_NETDEV_DEV(net_dev, &pdev->dev);
+
+ net_dev->open = ipw_net_open;
+ net_dev->stop = ipw_net_stop;
+ net_dev->init = ipw_net_init;
+ net_dev->hard_start_xmit = ipw_net_hard_start_xmit;
+ net_dev->get_stats = ipw_net_get_stats;
+ net_dev->set_multicast_list = ipw_net_set_multicast_list;
+ net_dev->set_mac_address = ipw_net_set_mac_address;
+ net_dev->get_wireless_stats = ipw_get_wireless_stats;
+ net_dev->wireless_handlers = &ipw_wx_handler_def;
+ net_dev->ethtool_ops = &ipw_ethtool_ops;
+ net_dev->irq = pdev->irq;
+ net_dev->base_addr = priv->hw_base;
+ net_dev->mem_start = pci_resource_start(pdev, 0);
+ net_dev->mem_end = net_dev->mem_start + pci_resource_len(pdev, 0) - 1;
+
+ err = sysfs_create_group(&pdev->dev.kobj, &ipw_attribute_group);
+ if (err) {
+ IPW_ERROR("failed to create sysfs device attributes\n");
+ goto out_release_irq;
+ }
+
+ err = register_netdev(net_dev);
+ if (err) {
+ IPW_ERROR("failed to register network device\n");
+ goto out_remove_group;
+ }
+
+ return 0;
+
+ out_remove_group:
+ sysfs_remove_group(&pdev->dev.kobj, &ipw_attribute_group);
+ out_release_irq:
+ free_irq(pdev->irq, priv);
+ out_free_ieee80211:
+ ieee80211_free(priv->ieee);
+ priv->ieee = NULL;
+ out_destroy_workqueue:
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
+ out_iounmap:
+ iounmap((void *)priv->hw_base);
+ out_pci_release_regions:
+ pci_release_regions(pdev);
+ out_pci_disable_device:
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ out_free_netdev:
+ free_netdev(priv->net_dev);
+ out:
+ return err;
+}
+
+static void ipw_pci_remove(struct pci_dev *pdev)
+{
+ struct ipw_priv *priv = pci_get_drvdata(pdev);
+ if (!priv)
+ return;
+
+ priv->status |= STATUS_EXIT_PENDING;
+
+ sysfs_remove_group(&pdev->dev.kobj, &ipw_attribute_group);
+
+ ipw_down(priv);
+
+ unregister_netdev(priv->net_dev);
+
+ if (priv->rxq) {
+ ipw_rx_queue_free(priv, priv->rxq);
+ priv->rxq = NULL;
+ }
+ ipw_tx_queue_free(priv);
+ ieee80211_free(priv->ieee);
+
+ /* ipw_down will ensure that there is no more pending work
+ * in the workqueue's, so we can safely remove them now. */
+ if (priv->workqueue) {
+ cancel_delayed_work(&priv->request_scan);
+ cancel_delayed_work(&priv->rf_kill);
+ destroy_workqueue(priv->workqueue);
+ priv->workqueue = NULL;
+ }
+
+ free_irq(pdev->irq, priv);
+ iounmap((void *)priv->hw_base);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ free_netdev(priv->net_dev);
+
+#ifdef CONFIG_PM
+ if (fw_loaded) {
+ release_firmware(bootfw);
+ release_firmware(ucode);
+ release_firmware(firmware);
+ fw_loaded = 0;
+ }
+#endif
+}
+
+
+#ifdef CONFIG_PM
+static int ipw_pci_suspend(struct pci_dev *pdev, u32 state)
+{
+ struct ipw_priv *priv = pci_get_drvdata(pdev);
+ struct net_device *dev = priv->net_dev;
+
+ printk(KERN_INFO "%s: Going into suspend...\n", dev->name);
+
+ /* Take down the device; powers it off, etc. */
+ ipw_down(priv);
+
+ /* Remove the PRESENT state of the device */
+ netif_device_detach(dev);
+
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10)
+ pci_save_state(pdev, priv->pm_state);
+#else
+ pci_save_state(pdev);
+#endif
+ pci_disable_device(pdev); // needed?
+ pci_set_power_state(pdev, state);
+
+ return 0;
+}
+
+static int ipw_pci_resume(struct pci_dev *pdev)
+{
+ struct ipw_priv *priv = pci_get_drvdata(pdev);
+ struct net_device *dev = priv->net_dev;
+ u32 val;
+
+ printk(KERN_INFO "%s: Coming out of suspend...\n", dev->name);
+
+ pci_set_power_state(pdev, 0);
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,10)
+ pci_restore_state(pdev, priv->pm_state);
+#else
+ pci_restore_state(pdev);
+#endif
+ /*
+ * Suspend/Resume resets the PCI configuration space, so we have to
+ * re-disable the RETRY_TIMEOUT register (0x41) to keep PCI Tx retries
+ * from interfering with C3 CPU state. pci_restore_state won't help
+ * here since it only restores the first 64 bytes pci config header.
+ */
+ pci_read_config_dword(pdev, 0x40, &val);
+ if ((val & 0x0000ff00) != 0)
+ pci_write_config_dword(pdev, 0x40, val & 0xffff00ff);
+
+ /* Set the device back into the PRESENT state; this will also wake
+ * the queue if needed */
+ netif_device_attach(dev);
+
+ /* Bring the device back up */
+ ipw_up(priv);
+
+ return 0;
+}
+#endif
+
+/* driver initialization stuff */
+static struct pci_driver ipw_driver = {
+ .name = DRV_NAME,
+ .id_table = card_ids,
+ .probe = ipw_pci_probe,
+ .remove = __devexit_p(ipw_pci_remove),
+#ifdef CONFIG_PM
+ .suspend = ipw_pci_suspend,
+ .resume = ipw_pci_resume,
+#endif
+};
+
+static int __init ipw_init(void)
+{
+ int ret;
+
+ printk(KERN_INFO DRV_NAME ": " DRV_DESCRIPTION ", " DRV_VERSION "\n");
+ printk(KERN_INFO DRV_NAME ": " DRV_COPYRIGHT "\n");
+
+ ret = pci_module_init(&ipw_driver);
+ if (ret) {
+ IPW_ERROR("Unable to initialize PCI module\n");
+ return ret;
+ }
+
+ ret = driver_create_file(&ipw_driver.driver,
+ &driver_attr_debug_level);
+ if (ret) {
+ IPW_ERROR("Unable to create driver sysfs file\n");
+ pci_unregister_driver(&ipw_driver);
+ return ret;
+ }
+
+ return ret;
+}
+
+static void __exit ipw_exit(void)
+{
+ driver_remove_file(&ipw_driver.driver, &driver_attr_debug_level);
+ pci_unregister_driver(&ipw_driver);
+}
+
+module_param(disable, int, 0444);
+MODULE_PARM_DESC(disable, "manually disable the radio (default 0 [radio on])");
+
+module_param(associate, int, 0444);
+MODULE_PARM_DESC(associate, "auto associate when scanning (default on)");
+
+module_param(adhoc_create, int, 0444);
+MODULE_PARM_DESC(adhoc_create, "auto create adhoc network (default on)");
+
+module_param(debug, int, 0444);
+MODULE_PARM_DESC(debug, "debug output mask");
+
+module_param(channel, int, 0444);
+MODULE_PARM_DESC(channel, "channel to limit associate to (default 0 [ANY])");
+
+module_param(ifname, charp, 0444);
+MODULE_PARM_DESC(ifname, "network device name (default eth%d)");
+
+module_param(mode, int, 0444);
+MODULE_PARM_DESC(mode, "network mode (0=BSS,1=IBSS,2=Monitor)");
+
+module_exit(ipw_exit);
+module_init(ipw_init);
--- /dev/null
+/******************************************************************************
+
+ Copyright(c) 2003 - 2004 Intel Corporation. All rights reserved.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ You should have received a copy of the GNU General Public License along with
+ this program; if not, write to the Free Software Foundation, Inc., 59
+ Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+
+ The full GNU General Public License is included in this distribution in the
+ file called LICENSE.
+
+ Contact Information:
+ James P. Ketrenos <ipw2100-admin@linux.intel.com>
+ Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
+
+******************************************************************************/
+
+#ifndef __ipw2200_h__
+#define __ipw2200_h__
+
+#define WEXT_USECHANNELS 1
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/config.h>
+#include <linux/init.h>
+
+#include <linux/version.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/skbuff.h>
+#include <linux/etherdevice.h>
+#include <linux/delay.h>
+#include <linux/random.h>
+
+#include <linux/firmware.h>
+#include <linux/wireless.h>
+#include <asm/io.h>
+
+#include "ieee80211.h"
+
+#define DRV_NAME "ipw2200"
+
+#include <linux/workqueue.h>
+
+#ifndef IRQ_NONE
+typedef void irqreturn_t;
+#define IRQ_NONE
+#define IRQ_HANDLED
+#define IRQ_RETVAL(x)
+#endif
+
+#if ( LINUX_VERSION_CODE < KERNEL_VERSION(2,6,5) )
+#define pci_dma_sync_single_for_cpu pci_dma_sync_single
+#define pci_dma_sync_single_for_device pci_dma_sync_single
+#endif
+
+#ifndef HAVE_FREE_NETDEV
+#define free_netdev(x) kfree(x)
+#endif
+
+#if WIRELESS_EXT < 17
+#define IW_QUAL_QUAL_UPDATED 0x1
+#define IW_QUAL_LEVEL_UPDATED 0x2
+#define IW_QUAL_NOISE_INVALID 0x40
+#endif
+
+/* Authentication and Association States */
+enum connection_manager_assoc_states
+{
+ CMAS_INIT = 0,
+ CMAS_TX_AUTH_SEQ_1,
+ CMAS_RX_AUTH_SEQ_2,
+ CMAS_AUTH_SEQ_1_PASS,
+ CMAS_AUTH_SEQ_1_FAIL,
+ CMAS_TX_AUTH_SEQ_3,
+ CMAS_RX_AUTH_SEQ_4,
+ CMAS_AUTH_SEQ_2_PASS,
+ CMAS_AUTH_SEQ_2_FAIL,
+ CMAS_AUTHENTICATED,
+ CMAS_TX_ASSOC,
+ CMAS_RX_ASSOC_RESP,
+ CMAS_ASSOCIATED,
+ CMAS_LAST
+};
+
+#define IPW_POWER_MODE_CAM 0x00 //(always on)
+#define IPW_POWER_INDEX_1 0x01
+#define IPW_POWER_INDEX_2 0x02
+#define IPW_POWER_INDEX_3 0x03
+#define IPW_POWER_INDEX_4 0x04
+#define IPW_POWER_INDEX_5 0x05
+#define IPW_POWER_AC 0x06
+#define IPW_POWER_BATTERY 0x07
+#define IPW_POWER_LIMIT 0x07
+#define IPW_POWER_MASK 0x0F
+#define IPW_POWER_ENABLED 0x10
+#define IPW_POWER_LEVEL(x) ((x) & IPW_POWER_MASK)
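+/* e.g. a power mode of (IPW_POWER_ENABLED | IPW_POWER_BATTERY) is 0x17;
+ * IPW_POWER_LEVEL(0x17) recovers the level, 0x07 (IPW_POWER_BATTERY) */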
+
+#define IPW_CMD_HOST_COMPLETE 2
+#define IPW_CMD_POWER_DOWN 4
+#define IPW_CMD_SYSTEM_CONFIG 6
+#define IPW_CMD_MULTICAST_ADDRESS 7
+#define IPW_CMD_SSID 8
+#define IPW_CMD_ADAPTER_ADDRESS 11
+#define IPW_CMD_PORT_TYPE 12
+#define IPW_CMD_RTS_THRESHOLD 15
+#define IPW_CMD_FRAG_THRESHOLD 16
+#define IPW_CMD_POWER_MODE 17
+#define IPW_CMD_WEP_KEY 18
+#define IPW_CMD_TGI_TX_KEY 19
+#define IPW_CMD_SCAN_REQUEST 20
+#define IPW_CMD_ASSOCIATE 21
+#define IPW_CMD_SUPPORTED_RATES 22
+#define IPW_CMD_SCAN_ABORT 23
+#define IPW_CMD_TX_FLUSH 24
+#define IPW_CMD_QOS_PARAMETERS 25
+#define IPW_CMD_SCAN_REQUEST_EXT 26
+#define IPW_CMD_DINO_CONFIG 30
+#define IPW_CMD_RSN_CAPABILITIES 31
+#define IPW_CMD_RX_KEY 32
+#define IPW_CMD_CARD_DISABLE 33
+#define IPW_CMD_SEED_NUMBER 34
+#define IPW_CMD_TX_POWER 35
+#define IPW_CMD_COUNTRY_INFO 36
+#define IPW_CMD_AIRONET_INFO 37
+#define IPW_CMD_AP_TX_POWER 38
+#define IPW_CMD_CCKM_INFO 39
+#define IPW_CMD_CCX_VER_INFO 40
+#define IPW_CMD_SET_CALIBRATION 41
+#define IPW_CMD_SENSITIVITY_CALIB 42
+#define IPW_CMD_RETRY_LIMIT 51
+#define IPW_CMD_IPW_PRE_POWER_DOWN 58
+#define IPW_CMD_VAP_BEACON_TEMPLATE 60
+#define IPW_CMD_VAP_DTIM_PERIOD 61
+#define IPW_CMD_EXT_SUPPORTED_RATES 62
+#define IPW_CMD_VAP_LOCAL_TX_PWR_CONSTRAINT 63
+#define IPW_CMD_VAP_QUIET_INTERVALS 64
+#define IPW_CMD_VAP_CHANNEL_SWITCH 65
+#define IPW_CMD_VAP_MANDATORY_CHANNELS 66
+#define IPW_CMD_VAP_CELL_PWR_LIMIT 67
+#define IPW_CMD_VAP_CF_PARAM_SET 68
+#define IPW_CMD_VAP_SET_BEACONING_STATE 69
+#define IPW_CMD_MEASUREMENT 80
+#define IPW_CMD_POWER_CAPABILITY 81
+#define IPW_CMD_SUPPORTED_CHANNELS 82
+#define IPW_CMD_TPC_REPORT 83
+#define IPW_CMD_WME_INFO 84
+#define IPW_CMD_PRODUCTION_COMMAND 85
+#define IPW_CMD_LINKSYS_EOU_INFO 90
+
+#define RFD_SIZE 4
+#define NUM_TFD_CHUNKS 6
+
+#define TX_QUEUE_SIZE 32
+#define RX_QUEUE_SIZE 32
+
+#define DINO_CMD_WEP_KEY 0x08
+#define DINO_CMD_TX 0x0B
+#define DCT_ANTENNA_A 0x01
+#define DCT_ANTENNA_B 0x02
+/*
+ * TX Queue Flag Definitions
+ */
+
+/* abort attempt if mgmt frame is rx'd */
+#define DCT_FLAG_ABORT_MGMT 0x01
+
+/* require CTS */
+#define DCT_FLAG_CTS_REQUIRED 0x02
+
+/* use short preamble */
+#define DCT_FLAG_SHORT_PREMBL 0x04
+
+/* RTS/CTS first */
+#define DCT_FLAG_RTS_REQD 0x08
+
+/* don't calculate duration field */
+#define DCT_FLAG_DUR_SET 0x10
+
+/* even if MAC WEP set (allows pre-encrypt) */
+#define DCT_FLAG_NO_WEP 0x20
+/* overwrite TSF field */
+#define DCT_FLAG_TSF_REQD 0x40
+
+/* ACK rx is expected to follow */
+#define DCT_FLAG_ACK_REQD 0x80
+
+#define DCT_FLAG_EXT_MODE_CCK 0x01
+#define DCT_FLAG_EXT_MODE_OFDM 0x00
+
+
+#define TX_RX_TYPE_MASK 0xFF
+#define TX_FRAME_TYPE 0x00
+#define TX_HOST_COMMAND_TYPE 0x01
+#define RX_FRAME_TYPE 0x09
+#define RX_HOST_NOTIFICATION_TYPE 0x03
+#define RX_HOST_CMD_RESPONSE_TYPE 0x04
+#define RX_TX_FRAME_RESPONSE_TYPE 0x05
+#define TFD_NEED_IRQ_MASK 0x04
+
+#define HOST_CMD_DINO_CONFIG 30
+
+#define HOST_NOTIFICATION_STATUS_ASSOCIATED 10
+#define HOST_NOTIFICATION_STATUS_AUTHENTICATE 11
+#define HOST_NOTIFICATION_STATUS_SCAN_CHANNEL_RESULT 12
+#define HOST_NOTIFICATION_STATUS_SCAN_COMPLETED 13
+#define HOST_NOTIFICATION_STATUS_FRAG_LENGTH 14
+#define HOST_NOTIFICATION_STATUS_LINK_DETERIORATION 15
+#define HOST_NOTIFICATION_DINO_CONFIG_RESPONSE 16
+#define HOST_NOTIFICATION_STATUS_BEACON_STATE 17
+#define HOST_NOTIFICATION_STATUS_TGI_TX_KEY 18
+#define HOST_NOTIFICATION_TX_STATUS 19
+#define HOST_NOTIFICATION_CALIB_KEEP_RESULTS 20
+#define HOST_NOTIFICATION_MEASUREMENT_STARTED 21
+#define HOST_NOTIFICATION_MEASUREMENT_ENDED 22
+#define HOST_NOTIFICATION_CHANNEL_SWITCHED 23
+#define HOST_NOTIFICATION_RX_DURING_QUIET_PERIOD 24
+#define HOST_NOTIFICATION_NOISE_STATS 25
+#define HOST_NOTIFICATION_S36_MEASUREMENT_ACCEPTED 30
+#define HOST_NOTIFICATION_S36_MEASUREMENT_REFUSED 31
+
+#define HOST_NOTIFICATION_STATUS_BEACON_MISSING 1
+#define IPW_MB_DISASSOCIATE_THRESHOLD_DEFAULT 24
+#define IPW_MB_ROAMING_THRESHOLD_DEFAULT 8
+#define IPW_REAL_RATE_RX_PACKET_THRESHOLD 300
+
+#define MACADRR_BYTE_LEN 6
+
+#define DCR_TYPE_AP 0x01
+#define DCR_TYPE_WLAP 0x02
+#define DCR_TYPE_MU_ESS 0x03
+#define DCR_TYPE_MU_IBSS 0x04
+#define DCR_TYPE_MU_PIBSS 0x05
+#define DCR_TYPE_SNIFFER 0x06
+#define DCR_TYPE_MU_BSS DCR_TYPE_MU_ESS
+
+/**
+ * Generic queue structure
+ *
+ * Contains common data for Rx and Tx queues
+ */
+struct clx2_queue {
+ int n_bd; /**< number of BDs in this queue */
+ int first_empty; /**< first empty entry (index) */
+ int last_used; /**< last used entry (index) */
+ u32 reg_w; /**< 'write' reg (queue head), addr in domain 1 */
+ u32 reg_r; /**< 'read' reg (queue tail), addr in domain 1 */
+ dma_addr_t dma_addr; /**< physical addr for BD's */
+ int low_mark; /**< low watermark, resume queue if free space more than this */
+ int high_mark; /**< high watermark, stop queue if free space less than this */
+} __attribute__ ((packed));
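+/* In effect, first_empty is the write (producer) index and last_used the
+ * read (consumer) index; low_mark/high_mark give hysteresis when stopping
+ * and waking the tx queue */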
+
+struct machdr32
+{
+ u8 ctrl1;
+ u8 ctrl2;
+ u16 duration; // watch out for endians!
+ u8 addr1[ MACADRR_BYTE_LEN ];
+ u8 addr2[ MACADRR_BYTE_LEN ];
+ u8 addr3[ MACADRR_BYTE_LEN ];
+ u16 seq_ctrl; // more endians!
+ u8 addr4[ MACADRR_BYTE_LEN ];
+ u16 qos_ctrl;
+} __attribute__ ((packed)) ;
+
+struct machdr30
+{
+ u8 ctrl1;
+ u8 ctrl2;
+ u16 duration; // watch out for endians!
+ u8 addr1[ MACADRR_BYTE_LEN ];
+ u8 addr2[ MACADRR_BYTE_LEN ];
+ u8 addr3[ MACADRR_BYTE_LEN ];
+ u16 seq_ctrl; // more endians!
+ u8 addr4[ MACADRR_BYTE_LEN ];
+} __attribute__ ((packed)) ;
+
+struct machdr26
+{
+ u8 ctrl1;
+ u8 ctrl2;
+ u16 duration; // watch out for endians!
+ u8 addr1[ MACADRR_BYTE_LEN ];
+ u8 addr2[ MACADRR_BYTE_LEN ];
+ u8 addr3[ MACADRR_BYTE_LEN ];
+ u16 seq_ctrl; // more endians!
+ u16 qos_ctrl;
+} __attribute__ ((packed)) ;
+
+struct machdr24
+{
+ u8 ctrl1;
+ u8 ctrl2;
+ u16 duration; // watch out for endians!
+ u8 addr1[ MACADRR_BYTE_LEN ];
+ u8 addr2[ MACADRR_BYTE_LEN ];
+ u8 addr3[ MACADRR_BYTE_LEN ];
+ u16 seq_ctrl; // more endians!
+} __attribute__ ((packed)) ;
+
+// TX TFD with 32 byte MAC Header
+struct tx_tfd_32
+{
+ struct machdr32 mchdr; // 32
+ u32 uivplaceholder[2]; // 8
+} __attribute__ ((packed)) ;
+
+// TX TFD with 30 byte MAC Header
+struct tx_tfd_30
+{
+ struct machdr30 mchdr; // 30
+ u8 reserved[2]; // 2
+ u32 uivplaceholder[2]; // 8
+} __attribute__ ((packed)) ;
+
+// tx tfd with 26 byte mac header
+struct tx_tfd_26
+{
+ struct machdr26 mchdr; // 26
+ u8 reserved1[2]; // 2
+ u32 uivplaceholder[2]; // 8
+ u8 reserved2[4]; // 4
+} __attribute__ ((packed)) ;
+
+// tx tfd with 24 byte mac header
+struct tx_tfd_24
+{
+ struct machdr24 mchdr; // 24
+ u32 uivplaceholder[2]; // 8
+ u8 reserved[8]; // 8
+} __attribute__ ((packed)) ;
+
+
+#define DCT_WEP_KEY_FIELD_LENGTH 16
+
+struct tfd_command
+{
+ u8 index;
+ u8 length;
+ u16 reserved;
+ u8 payload[0];
+} __attribute__ ((packed)) ;
+
+struct tfd_data {
+ /* Header */
+ u32 work_area_ptr;
+ u8 station_number; /* 0 for BSS */
+ u8 reserved1;
+ u16 reserved2;
+
+ /* Tx Parameters */
+ u8 cmd_id;
+ u8 seq_num;
+ u16 len;
+ u8 priority;
+ u8 tx_flags;
+ u8 tx_flags_ext;
+ u8 key_index;
+ u8 wepkey[DCT_WEP_KEY_FIELD_LENGTH];
+ u8 rate;
+ u8 antenna;
+ u16 next_packet_duration;
+ u16 next_frag_len;
+ u16 back_off_counter; /* txop */
+ u8 retrylimit;
+ u16 cwcurrent;
+ u8 reserved3;
+
+ /* 802.11 MAC Header */
+ union
+ {
+ struct tx_tfd_24 tfd_24;
+ struct tx_tfd_26 tfd_26;
+ struct tx_tfd_30 tfd_30;
+ struct tx_tfd_32 tfd_32;
+ } tfd;
+
+ /* Payload DMA info */
+ u32 num_chunks;
+ u32 chunk_ptr[NUM_TFD_CHUNKS];
+ u16 chunk_len[NUM_TFD_CHUNKS];
+} __attribute__ ((packed));
+
+struct txrx_control_flags
+{
+ u8 message_type;
+ u8 rx_seq_num;
+ u8 control_bits;
+ u8 reserved;
+} __attribute__ ((packed));
+
+#define TFD_SIZE 128
+#define TFD_CMD_IMMEDIATE_PAYLOAD_LENGTH (TFD_SIZE - sizeof(struct txrx_control_flags))
+
+struct tfd_frame
+{
+ struct txrx_control_flags control_flags;
+ union {
+ struct tfd_data data;
+ struct tfd_command cmd;
+ u8 raw[TFD_CMD_IMMEDIATE_PAYLOAD_LENGTH];
+ } u;
+} __attribute__ ((packed)) ;
+
+typedef void destructor_func(const void*);
+
+/**
+ * Tx Queue for DMA. Queue consists of circular buffer of
+ * BD's and required locking structures.
+ */
+struct clx2_tx_queue {
+ struct clx2_queue q;
+ struct tfd_frame* bd;
+ struct ieee80211_txb **txb;
+};
+
+/*
+ * RX related structures and functions
+ */
+#define RX_FREE_BUFFERS 32
+#define RX_LOW_WATERMARK 8
+
+#define SUP_RATE_11B_MAX_NUM_CHANNELS (4)
+#define SUP_RATE_11G_MAX_NUM_CHANNELS (12)
+
+// Used to pass to the driver the number of successes and failures per rate
+struct rate_histogram
+{
+ union {
+ u32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
+ u32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
+ } success;
+ union {
+ u32 b[SUP_RATE_11B_MAX_NUM_CHANNELS];
+ u32 g[SUP_RATE_11G_MAX_NUM_CHANNELS];
+ } failed;
+} __attribute__ ((packed));
+
+/* statistics command response */
+struct ipw_cmd_stats {
+ u8 cmd_id;
+ u8 seq_num;
+ u16 good_sfd;
+ u16 bad_plcp;
+ u16 wrong_bssid;
+ u16 valid_mpdu;
+ u16 bad_mac_header;
+ u16 reserved_frame_types;
+ u16 rx_ina;
+ u16 bad_crc32;
+ u16 invalid_cts;
+ u16 invalid_acks;
+ u16 long_distance_ina_fina;
+ u16 dsp_silence_unreachable;
+ u16 accumulated_rssi;
+ u16 rx_ovfl_frame_tossed;
+ u16 rssi_silence_threshold;
+ u16 rx_ovfl_frame_supplied;
+ u16 last_rx_frame_signal;
+ u16 last_rx_frame_noise;
+ u16 rx_autodetec_no_ofdm;
+ u16 rx_autodetec_no_barker;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct notif_channel_result {
+ u8 channel_num;
+ struct ipw_cmd_stats stats;
+ u8 uReserved;
+} __attribute__ ((packed));
+
+struct notif_scan_complete {
+ u8 scan_type;
+ u8 num_channels;
+ u8 status;
+ u8 reserved;
+} __attribute__ ((packed));
+
+struct notif_frag_length {
+ u16 frag_length;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct notif_beacon_state {
+ u32 state;
+ u32 number;
+} __attribute__ ((packed));
+
+struct notif_tgi_tx_key {
+ u8 key_state;
+ u8 security_type;
+ u8 station_index;
+ u8 reserved;
+} __attribute__ ((packed));
+
+struct notif_link_deterioration {
+ struct ipw_cmd_stats stats;
+ u8 rate;
+ u8 modulation;
+ struct rate_histogram histogram;
+ u8 reserved1;
+ u16 reserved2;
+} __attribute__ ((packed));
+
+struct notif_association {
+ u8 state;
+} __attribute__ ((packed));
+
+struct notif_authenticate {
+ u8 state;
+ struct machdr24 addr;
+ u16 status;
+} __attribute__ ((packed));
+
+struct temperature
+{
+ s32 measured;
+ s32 active;
+} __attribute__ ((packed));
+
+struct notif_calibration {
+ u8 data[104];
+} __attribute__ ((packed));
+
+struct ipw_rx_notification {
+ u8 reserved[8];
+ u8 subtype;
+ u8 flags;
+ u16 size;
+ union {
+ struct notif_association assoc;
+ struct notif_authenticate auth;
+ struct notif_channel_result channel_result;
+ struct notif_scan_complete scan_complete;
+ struct notif_frag_length frag_len;
+ struct notif_beacon_state beacon_state;
+ struct notif_tgi_tx_key tgi_tx_key;
+ struct notif_link_deterioration link_deterioration;
+ struct notif_calibration calibration;
+ u8 raw[0];
+ } u;
+} __attribute__ ((packed));
+
+struct ipw_rx_frame {
+ u32 reserved1;
+ u8 parent_tsf[4]; // fw_use[0] is boolean for OUR_TSF_IS_GREATER
+ u8 received_channel; // The channel this frame was received on.
+ // Note that for .11b this need not be the
+ // same channel it was sent on.
+ // Filled in by the LMAC.
+ u8 frameStatus;
+ u8 rate;
+ u8 rssi;
+ u8 agc;
+ u8 reserved2;
+ u16 signal;
+ u16 noise;
+ u8 antennaAndPhy;
+ u8 control; // control bit should be on in bg
+ u8 rtscts_rate; // rate of rts or cts (in rts cts sequence rate
+ // is identical)
+ u8 rtscts_seen; // 0x1 RTS seen ; 0x2 CTS seen
+ u16 length;
+ u8 data[0];
+} __attribute__ ((packed));
+
+struct ipw_rx_header {
+ u8 message_type;
+ u8 rx_seq_num;
+ u8 control_bits;
+ u8 reserved;
+} __attribute__ ((packed));
+
+struct ipw_rx_packet
+{
+ struct ipw_rx_header header;
+ union {
+ struct ipw_rx_frame frame;
+ struct ipw_rx_notification notification;
+ } u;
+} __attribute__ ((packed));
+
+#define IPW_RX_NOTIFICATION_SIZE (sizeof(struct ipw_rx_header) + 12)
+#define IPW_RX_FRAME_SIZE (sizeof(struct ipw_rx_header) + \
+ sizeof(struct ipw_rx_frame))
+
+struct ipw_rx_mem_buffer {
+ dma_addr_t dma_addr;
+ struct ipw_rx_buffer *rxb;
+ struct sk_buff *skb;
+ struct list_head list;
+}; /* Not transferred over network, so not __attribute__ ((packed)) */
+
+struct ipw_rx_queue {
+ struct ipw_rx_mem_buffer pool[RX_QUEUE_SIZE + RX_FREE_BUFFERS];
+ struct ipw_rx_mem_buffer *queue[RX_QUEUE_SIZE];
+ u32 processed; /* Internal index to last handled Rx packet */
+ u32 read; /* Shared index to newest available Rx buffer */
+ u32 write; /* Shared index to oldest written Rx packet */
+ u32 free_count;/* Number of pre-allocated buffers in rx_free */
+ /* Each of these lists is used as a FIFO for ipw_rx_mem_buffers */
+ struct list_head rx_free; /* Each entry owns an SKB */
+ struct list_head rx_used; /* No SKB allocated */
+}; /* Not transferred over network, so not __attribute__ ((packed)) */
+
+struct alive_command_responce {
+ u8 alive_command;
+ u8 sequence_number;
+ u16 software_revision;
+ u8 device_identifier;
+ u8 reserved1[5];
+ u16 reserved2;
+ u16 reserved3;
+ u16 clock_settle_time;
+ u16 powerup_settle_time;
+ u16 reserved4;
+ u8 time_stamp[5]; /* month, day, year, hours, minutes */
+ u8 ucode_valid;
+} __attribute__ ((packed));
+
+#define IPW_MAX_RATES 12
+
+struct ipw_rates {
+ u8 num_rates;
+ u8 rates[IPW_MAX_RATES];
+} __attribute__ ((packed));
+
+struct command_block
+{
+ unsigned int control;
+ void *source_addr;
+ void *dest_addr;
+ unsigned int status;
+} __attribute__ ((packed));
+
+#define CB_NUMBER_OF_ELEMENTS_SMALL 64
+struct fw_image_desc
+{
+ unsigned long last_cb_index;
+ unsigned long current_cb_index;
+ struct command_block cb_list[CB_NUMBER_OF_ELEMENTS_SMALL];
+ void * v_addr;
+ unsigned long p_addr;
+ unsigned long len;
+};
+
+struct ipw_sys_config
+{
+ u8 bt_coexistence;
+ u8 reserved1;
+ u8 answer_broadcast_ssid_probe;
+ u8 accept_all_data_frames;
+ u8 accept_non_directed_frames;
+ u8 exclude_unicast_unencrypted;
+ u8 disable_unicast_decryption;
+ u8 exclude_multicast_unencrypted;
+ u8 disable_multicast_decryption;
+ u8 antenna_diversity;
+ u8 pass_crc_to_host;
+ u8 dot11g_auto_detection;
+ u8 enable_cts_to_self;
+ u8 enable_multicast_filtering;
+ u8 bt_coexist_collision_thr;
+ u8 reserved2;
+ u8 accept_all_mgmt_bcpr;
+ u8 accept_all_mgtm_frames;
+ u8 pass_noise_stats_to_host;
+ u8 reserved3;
+} __attribute__ ((packed));
+
+struct ipw_multicast_addr
+{
+ u8 num_of_multicast_addresses;
+ u8 reserved[3];
+ u8 mac1[6];
+ u8 mac2[6];
+ u8 mac3[6];
+ u8 mac4[6];
+} __attribute__ ((packed));
+
+struct ipw_wep_key
+{
+ u8 cmd_id;
+ u8 seq_num;
+ u8 key_index;
+ u8 key_size;
+ u8 key[16];
+} __attribute__ ((packed));
+
+struct ipw_tgi_tx_key
+{
+ u8 key_id;
+ u8 security_type;
+ u8 station_index;
+ u8 flags;
+ u8 key[16];
+ u32 tx_counter[2];
+} __attribute__ ((packed));
+
+#define IPW_SCAN_CHANNELS 54
+
+struct ipw_scan_request
+{
+ u8 scan_type;
+ u16 dwell_time;
+ u8 channels_list[IPW_SCAN_CHANNELS];
+ u8 channels_reserved[3];
+} __attribute__ ((packed));
+
+enum {
+ IPW_SCAN_PASSIVE_TILL_FIRST_BEACON_SCAN = 0,
+ IPW_SCAN_PASSIVE_FULL_DWELL_SCAN,
+ IPW_SCAN_ACTIVE_DIRECT_SCAN,
+ IPW_SCAN_ACTIVE_BROADCAST_SCAN,
+ IPW_SCAN_ACTIVE_BROADCAST_AND_DIRECT_SCAN,
+ IPW_SCAN_TYPES
+};
+
+struct ipw_scan_request_ext
+{
+ u32 full_scan_index;
+ u8 channels_list[IPW_SCAN_CHANNELS];
+ u8 scan_type[IPW_SCAN_CHANNELS / 2];
+ u8 reserved;
+ u16 dwell_time[IPW_SCAN_TYPES];
+} __attribute__ ((packed));
+
+static inline u8 ipw_get_scan_type(struct ipw_scan_request_ext *scan, u8 index)
+{
+ if (index % 2)
+ return scan->scan_type[index / 2] & 0x0F;
+ else
+ return (scan->scan_type[index / 2] & 0xF0) >> 4;
+}
+
+static inline void ipw_set_scan_type(struct ipw_scan_request_ext *scan,
+ u8 index, u8 scan_type)
+{
+ if (index % 2)
+ scan->scan_type[index / 2] =
+ (scan->scan_type[index / 2] & 0xF0) |
+ (scan_type & 0x0F);
+ else
+ scan->scan_type[index / 2] =
+ (scan->scan_type[index / 2] & 0x0F) |
+ ((scan_type & 0x0F) << 4);
+}
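+
+/*
+ * Example: scan types are packed two per byte, high nibble first, so
+ * for channel index 5 (odd) the type lives in the low nibble of
+ * scan_type[2].  With req a struct ipw_scan_request_ext:
+ *
+ *	ipw_set_scan_type(&req, 5, IPW_SCAN_ACTIVE_DIRECT_SCAN);
+ *	ipw_get_scan_type(&req, 5);  returns IPW_SCAN_ACTIVE_DIRECT_SCAN
+ */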
+
+struct ipw_associate
+{
+ u8 channel;
+ u8 auth_type:4,
+ auth_key:4;
+ u8 assoc_type;
+ u8 reserved;
+ u16 policy_support;
+ u8 preamble_length;
+ u8 ieee_mode;
+ u8 bssid[ETH_ALEN];
+ u32 assoc_tsf_msw;
+ u32 assoc_tsf_lsw;
+ u16 capability;
+ u16 listen_interval;
+ u16 beacon_interval;
+ u8 dest[ETH_ALEN];
+ u16 atim_window;
+ u8 smr;
+ u8 reserved1;
+ u16 reserved2;
+} __attribute__ ((packed));
+
+struct ipw_supported_rates
+{
+ u8 ieee_mode;
+ u8 num_rates;
+ u8 purpose;
+ u8 reserved;
+ u8 supported_rates[IPW_MAX_RATES];
+} __attribute__ ((packed));
+
+struct ipw_rts_threshold
+{
+ u16 rts_threshold;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct ipw_frag_threshold
+{
+ u16 frag_threshold;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct ipw_retry_limit
+{
+ u8 short_retry_limit;
+ u8 long_retry_limit;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct ipw_dino_config
+{
+ u32 dino_config_addr;
+ u16 dino_config_size;
+ u8 dino_response;
+ u8 reserved;
+} __attribute__ ((packed));
+
+struct ipw_aironet_info
+{
+ u8 id;
+ u8 length;
+ u16 reserved;
+} __attribute__ ((packed));
+
+struct ipw_rx_key
+{
+ u8 station_index;
+ u8 key_type;
+ u8 key_id;
+ u8 key_flag;
+ u8 key[16];
+ u8 station_address[6];
+ u8 key_index;
+ u8 reserved;
+} __attribute__ ((packed));
+
+struct ipw_country_channel_info
+{
+ u8 first_channel;
+ u8 no_channels;
+ s8 max_tx_power;
+} __attribute__ ((packed));
+
+struct ipw_country_info
+{
+ u8 id;
+ u8 length;
+ u8 country_str[3];
+ struct ipw_country_channel_info groups[7];
+} __attribute__ ((packed));
+
+struct ipw_channel_tx_power
+{
+ u8 channel_number;
+ s8 tx_power;
+} __attribute__ ((packed));
+
+#define SCAN_INTERVAL (HZ / 10)
+#define MAX_A_CHANNELS 37
+#define MAX_B_CHANNELS 14
+
+struct ipw_tx_power
+{
+ u8 num_channels;
+ u8 ieee_mode;
+ struct ipw_channel_tx_power channels_tx_power[MAX_A_CHANNELS];
+} __attribute__ ((packed));
+
+struct ipw_qos_parameters
+{
+ u16 cw_min[4];
+ u16 cw_max[4];
+ u8 aifs[4];
+ u8 flag[4];
+ u16 tx_op_limit[4];
+} __attribute__ ((packed));
+
+struct ipw_rsn_capabilities
+{
+ u8 id;
+ u8 length;
+ u16 version;
+} __attribute__ ((packed));
+
+struct ipw_sensitivity_calib
+{
+ u16 beacon_rssi_raw;
+ u16 reserved;
+} __attribute__ ((packed));
+
+/**
+ * Host command structure.
+ *
+ * On input, the following fields should be filled:
+ * - cmd
+ * - len
+ * - status_len
+ * - param (if needed)
+ *
+ * On output,
+ * - \a status contains the status;
+ * - \a param is filled with the status parameters.
+ */
+struct ipw_cmd {
+ u32 cmd; /**< Host command */
+ u32 status; /**< Status */
+ u32 status_len; /**< How many 32 bit parameters in the status */
+ u32 len; /**< incoming parameters length, bytes */
+ /**
+ * command parameters.
+ * There should be enough space for incoming and
+ * outgoing parameters.
+ * Incoming parameters are listed first, followed by outgoing params.
+ * nParams=(len+3)/4+status_len
+ */
+ u32 param[0];
+} __attribute__ ((packed));
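+
+/*
+ * Example: a command carrying 6 bytes of input parameters with
+ * status_len == 2 needs (6 + 3) / 4 + 2 == 4 dwords in param[].
+ */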
+
+#define STATUS_FW_DOWNLOAD BIT(0) /**< fw download in progress */
+#define STATUS_HCMD_ACTIVE BIT(1) /**< host command in progress */
+#define STATUS_HCMD_DONE BIT(2) /**< host command reply received */
+#define STATUS_HCMD_TIMEOUT BIT(3) /**< host command timed out */
+#define STATUS_FW_READY BIT(4) /**< FW is ready (got INIT_DONE IRQ ) */
+#define STATUS_HOST_COMPLETE BIT(5) /**< Ready to Tx/Rx (HostComplete) */
+#define STATUS_WEP BIT(7) /**< use WEP */
+#define STATUS_ERROR BIT(8) /**< Error state. Needs restart. */
+#define STATUS_SNIF_DINO BIT(9) /**< Pass DINO header to sniffer */
+
+#define STATUS_INT_ENABLED BIT(11)
+#define STATUS_RF_KILL_HW BIT(12)
+#define STATUS_RF_KILL_SW BIT(13)
+#define STATUS_RF_KILL_MASK (STATUS_RF_KILL_HW | STATUS_RF_KILL_SW)
+#define STATUS_EXIT_PENDING BIT(14)
+
+#define STATUS_SCAN_PENDING BIT(20)
+#define STATUS_SCANNING BIT(21)
+#define STATUS_SCAN_ABORTING BIT(22)
+#define STATUS_AUTH BIT(23) /**< Authenticated */
+#define STATUS_ASSOCIATING BIT(24)
+#define STATUS_ASSOCIATED BIT(25) /**< Associated */
+#define STATUS_DISASSOCIATING BIT(26)
+
+#define STATUS_INDIRECT_BYTE BIT(27) /* sysfs entry configured for access */
+#define STATUS_INDIRECT_DWORD BIT(28) /* sysfs entry configured for access */
+#define STATUS_DIRECT_DWORD BIT(29) /* sysfs entry configured for access */
+
+#define STATUS_SECURITY_UPDATED BIT(30) /* Security sync needed */
+
+#define CFG_STATIC_CHANNEL BIT(0) /* Restrict assoc. to single channel */
+#define CFG_STATIC_ESSID BIT(1) /* Restrict assoc. to single SSID */
+#define CFG_STATIC_BSSID BIT(2) /* Restrict assoc. to single BSSID */
+#define CFG_CUSTOM_MAC BIT(3)
+#define CFG_PREAMBLE BIT(4)
+/* free bit */
+#define CFG_ASSOCIATE BIT(6)
+#define CFG_FIXED_RATE BIT(7)
+#define CFG_ADHOC_CREATE BIT(8)
+
+#define CAP_SHARED_KEY BIT(0) /* Off = OPEN */
+#define CAP_PRIVACY_ON BIT(1) /* Off = No privacy */
+
+#define MAX_STATIONS 32
+#define IPW_INVALID_STATION (0xff)
+
+struct ipw_station_entry {
+ u8 mac_addr[ETH_ALEN];
+ u8 reserved;
+ u8 support_mode;
+};
+
+struct ipw_priv {
+ /* ieee device used by generic ieee processing code */
+ struct ieee80211_device *ieee;
+ struct ieee80211_security sec;
+
+ /* spinlock */
+ spinlock_t lock;
+
+ /* basic pci-network driver stuff */
+ struct pci_dev *pci_dev;
+ struct net_device *net_dev;
+
+ /* PCI hardware address support */
+ unsigned long hw_base; /* (virt) */
+ unsigned long hw_len;
+
+ struct fw_image_desc sram_desc;
+
+ /* result of ucode download */
+ struct alive_command_responce dino_alive;
+
+ wait_queue_head_t wait_command_queue;
+
+ /* Rx and Tx DMA processing queues */
+ struct ipw_rx_queue *rxq;
+ struct clx2_tx_queue txq_cmd;
+ struct clx2_tx_queue txq[4];
+ unsigned long status;
+ unsigned long config;
+ unsigned long capability;
+
+ u8 last_rx_rate;
+ u8 last_rx_rssi;
+ u32 port_type;
+ int rx_bufs_min; /**< minimum number of bufs in Rx queue */
+ int rx_pend_max; /**< maximum pending buffers for one IRQ */
+ u32 hcmd_seq; /**< sequence number for hcmd */
+ u32 missed_beacon_threshold;
+ u32 roaming_threshold;
+
+ struct ipw_associate assoc_request;
+
+ unsigned long ts_scan_abort;
+ struct ipw_supported_rates rates;
+ struct ipw_rates phy[3]; /**< PHY restrictions, per band */
+ struct ipw_rates supp; /**< software defined */
+ struct ipw_rates extended; /**< use for corresp. IE, AP only */
+
+ struct notif_link_deterioration last_link_deterioration; /**< for statistics */
+ struct ipw_cmd* hcmd; /**< host command currently executed */
+
+ wait_queue_head_t hcmd_wq; /**< host command waits for execution */
+ u32 tsf_bcn[2]; /**< TSF from latest beacon */
+
+ struct notif_calibration calib; /**< last calibration */
+
+ /* ordinal interface with firmware */
+ u32 table0_addr;
+ u32 table0_len;
+ u32 table1_addr;
+ u32 table1_len;
+ u32 table2_addr;
+ u32 table2_len;
+
+ /* context information */
+ u8 essid[IW_ESSID_MAX_SIZE];
+ u8 essid_len;
+ u8 nick[IW_ESSID_MAX_SIZE];
+ u16 rates_mask;
+ u8 channel;
+ struct ipw_sys_config sys_config;
+ u32 power_mode;
+ u8 bssid[ETH_ALEN];
+ u16 rts_threshold;
+ u8 mac_addr[ETH_ALEN];
+ u8 num_stations;
+ u8 stations[MAX_STATIONS][ETH_ALEN];
+
+
+ /* Statistics and counters reset with each association */
+ u32 missed_beacons;
+ u32 tx_packets;
+
+ /* eeprom */
+ u8 eeprom[0x100]; /* 256 bytes of eeprom */
+ int eeprom_delay;
+
+ struct iw_statistics wstats;
+
+ struct workqueue_struct *workqueue;
+
+ struct work_struct associate;
+ struct work_struct disassociate;
+ struct work_struct rx_replenish;
+ struct work_struct request_scan;
+ struct work_struct adapter_restart;
+ struct work_struct rf_kill;
+ struct work_struct up;
+ struct work_struct down;
+ struct tasklet_struct irq_tasklet;
+
+
+#define IPW_2200BG 1
+#define IPW_2915ABG 2
+ u8 adapter;
+
+#define IPW_DEFAULT_TX_POWER 0x14
+ u8 tx_power;
+
+#ifdef CONFIG_PM
+ u32 pm_state[16];
+#endif
+
+ /* network state */
+
+ /* Used to pass the current INTA value from ISR to Tasklet */
+ u32 isr_inta;
+
+ /* debugging info */
+ u32 indirect_dword;
+ u32 direct_dword;
+ u32 indirect_byte;
+}; /*ipw_priv */
+
+
+/* debug macros */
+
+#ifdef CONFIG_IPW_DEBUG
+#define IPW_DEBUG(level, fmt, args...) \
+do { if (ipw_debug_level & (level)) \
+ printk(KERN_DEBUG DRV_NAME": %c %s " fmt, \
+ in_interrupt() ? 'I' : 'U', __FUNCTION__, ## args); } while (0)
+#else
+#define IPW_DEBUG(level, fmt, args...) do {} while (0)
+#endif /* CONFIG_IPW_DEBUG */
+
+/*
+ * To use the debug system;
+ *
+ * If you are defining a new debug classification, simply add it to the #define
+ * list here in the form of:
+ *
+ * #define IPW_DL_xxxx VALUE
+ *
+ * shifting value to the left one bit from the previous entry. xxxx should be
+ * the name of the classification (for example, WEP)
+ *
+ * You then need to either add a IPW_xxxx_DEBUG() macro definition for your
+ * classification, or use IPW_DEBUG(IPW_DL_xxxx, ...) whenever you want
+ * to send output to that classification.
+ *
+ * To add your debug level to the list of levels seen when you perform
+ *
+ * % cat /proc/net/ipw/debug_level
+ *
+ * you simply need to add your entry to the ipw_debug_levels array.
+ *
+ * If you do not see debug_level in /proc/net/ipw then you do not have
+ * CONFIG_IPW_DEBUG defined in your kernel configuration
+ *
+ */
+
+#define IPW_DL_ERROR BIT(0)
+#define IPW_DL_WARNING BIT(1)
+#define IPW_DL_INFO BIT(2)
+#define IPW_DL_WX BIT(3)
+#define IPW_DL_STATUS BIT(4) /* referenced by IPW_DEBUG_STATUS below */
+#define IPW_DL_HOST_COMMAND BIT(5)
+#define IPW_DL_STATE BIT(6)
+
+#define IPW_DL_NOTIF BIT(10)
+#define IPW_DL_SCAN BIT(11)
+#define IPW_DL_ASSOC BIT(12)
+#define IPW_DL_DROP BIT(13)
+#define IPW_DL_IOCTL BIT(14)
+
+#define IPW_DL_MANAGE BIT(15)
+#define IPW_DL_FW BIT(16)
+#define IPW_DL_RF_KILL BIT(17)
+
+
+#define IPW_DL_ORD BIT(20)
+
+#define IPW_DL_FRAG BIT(21)
+#define IPW_DL_WEP BIT(22)
+#define IPW_DL_TX BIT(23)
+#define IPW_DL_RX BIT(24)
+#define IPW_DL_ISR BIT(25)
+#define IPW_DL_FW_INFO BIT(26)
+#define IPW_DL_IO BIT(27)
+#define IPW_DL_TRACE BIT(28)
+
+
+#define IPW_ERROR(f, a...) printk(KERN_ERR DRV_NAME ": " f, ## a)
+#define IPW_WARNING(f, a...) printk(KERN_WARNING DRV_NAME ": " f, ## a)
+#define IPW_DEBUG_INFO(f, a...) IPW_DEBUG(IPW_DL_INFO, f, ## a)
+
+#define IPW_DEBUG_WX(f, a...) IPW_DEBUG(IPW_DL_WX, f, ## a)
+#define IPW_DEBUG_SCAN(f, a...) IPW_DEBUG(IPW_DL_SCAN, f, ## a)
+#define IPW_DEBUG_STATUS(f, a...) IPW_DEBUG(IPW_DL_STATUS, f, ## a)
+#define IPW_DEBUG_TRACE(f, a...) IPW_DEBUG(IPW_DL_TRACE, f, ## a)
+#define IPW_DEBUG_RX(f, a...) IPW_DEBUG(IPW_DL_RX, f, ## a)
+#define IPW_DEBUG_TX(f, a...) IPW_DEBUG(IPW_DL_TX, f, ## a)
+#define IPW_DEBUG_ISR(f, a...) IPW_DEBUG(IPW_DL_ISR, f, ## a)
+#define IPW_DEBUG_MANAGEMENT(f, a...) IPW_DEBUG(IPW_DL_MANAGE, f, ## a)
+#define IPW_DEBUG_WEP(f, a...) IPW_DEBUG(IPW_DL_WEP, f, ## a)
+#define IPW_DEBUG_HC(f, a...) IPW_DEBUG(IPW_DL_HOST_COMMAND, f, ## a)
+#define IPW_DEBUG_FRAG(f, a...) IPW_DEBUG(IPW_DL_FRAG, f, ## a)
+#define IPW_DEBUG_FW(f, a...) IPW_DEBUG(IPW_DL_FW, f, ## a)
+#define IPW_DEBUG_RF_KILL(f, a...) IPW_DEBUG(IPW_DL_RF_KILL, f, ## a)
+#define IPW_DEBUG_DROP(f, a...) IPW_DEBUG(IPW_DL_DROP, f, ## a)
+#define IPW_DEBUG_IO(f, a...) IPW_DEBUG(IPW_DL_IO, f, ## a)
+#define IPW_DEBUG_ORD(f, a...) IPW_DEBUG(IPW_DL_ORD, f, ## a)
+#define IPW_DEBUG_FW_INFO(f, a...) IPW_DEBUG(IPW_DL_FW_INFO, f, ## a)
+#define IPW_DEBUG_NOTIF(f, a...) IPW_DEBUG(IPW_DL_NOTIF, f, ## a)
+#define IPW_DEBUG_STATE(f, a...) IPW_DEBUG(IPW_DL_STATE | IPW_DL_ASSOC | IPW_DL_INFO, f, ## a)
+#define IPW_DEBUG_ASSOC(f, a...) IPW_DEBUG(IPW_DL_ASSOC | IPW_DL_INFO, f, ## a)
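+
+/*
+ * Example (illustrative; assumes the module exposes a debug level
+ * parameter): enable WEP and scan output with
+ * IPW_DL_WEP | IPW_DL_SCAN == 0x00400000 | 0x00000800 == 0x00400800.
+ */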
+#include <linux/ctype.h>
+
+/*
+* Register bit definitions
+*/
+
+/* Dino control registers bits */
+
+#define DINO_ENABLE_SYSTEM 0x80
+#define DINO_ENABLE_CS 0x40
+#define DINO_RXFIFO_DATA 0x01
+#define DINO_CONTROL_REG 0x00200000
+
+#define CX2_INTA_RW 0x00000008
+#define CX2_INTA_MASK_R 0x0000000C
+#define CX2_INDIRECT_ADDR 0x00000010
+#define CX2_INDIRECT_DATA 0x00000014
+#define CX2_AUTOINC_ADDR 0x00000018
+#define CX2_AUTOINC_DATA 0x0000001C
+#define CX2_RESET_REG 0x00000020
+#define CX2_GP_CNTRL_RW 0x00000024
+
+#define CX2_READ_INT_REGISTER 0xFF4
+
+#define CX2_GP_CNTRL_BIT_INIT_DONE 0x00000004
+
+#define CX2_REGISTER_DOMAIN1_END 0x00001000
+#define CX2_SRAM_READ_INT_REGISTER 0x00000ff4
+
+#define CX2_SHARED_LOWER_BOUND 0x00000200
+#define CX2_INTERRUPT_AREA_LOWER_BOUND 0x00000f80
+
+#define CX2_NIC_SRAM_LOWER_BOUND 0x00000000
+#define CX2_NIC_SRAM_UPPER_BOUND 0x00030000
+
+#define CX2_BIT_INT_HOST_SRAM_READ_INT_REGISTER (1 << 29)
+#define CX2_GP_CNTRL_BIT_CLOCK_READY 0x00000001
+#define CX2_GP_CNTRL_BIT_HOST_ALLOWS_STANDBY 0x00000002
+
+/*
+ * RESET Register Bit Indexes
+ */
+#define CBD_RESET_REG_PRINCETON_RESET 0x00000001 /* Bit 0 (LSB) */
+#define CX2_RESET_REG_SW_RESET 0x00000080 /* Bit 7 */
+#define CX2_RESET_REG_MASTER_DISABLED 0x00000100 /* Bit 8 */
+#define CX2_RESET_REG_STOP_MASTER 0x00000200 /* Bit 9 */
+#define CX2_ARC_KESHET_CONFIG 0x08000000 /* Bit 27 */
+#define CX2_START_STANDBY 0x00000004 /* Bit 2 */
+
+#define CX2_CSR_CIS_UPPER_BOUND 0x00000200
+#define CX2_DOMAIN_0_END 0x1000
+#define CLX_MEM_BAR_SIZE 0x1000
+
+#define CX2_BASEBAND_CONTROL_STATUS 0x00200000
+#define CX2_BASEBAND_TX_FIFO_WRITE 0x00200004
+#define CX2_BASEBAND_RX_FIFO_READ 0x00200004
+#define CX2_BASEBAND_CONTROL_STORE 0x00200010
+
+#define CX2_INTERNAL_CMD_EVENT 0x00300004
+#define CX2_BASEBAND_POWER_DOWN 0x00000001
+
+#define CX2_MEM_HALT_AND_RESET 0x003000e0
+
+/** \defgroup bits_halt_reset MEM_HALT_AND_RESET register bits */
+#define CX2_BIT_HALT_RESET_ON 0x80000000
+#define CX2_BIT_HALT_RESET_OFF 0x00000000
+
+#define CB_LAST_VALID 0x20000000
+#define CB_INT_ENABLED 0x40000000
+#define CB_VALID 0x80000000
+#define CB_SRC_LE 0x08000000
+#define CB_DEST_LE 0x04000000
+#define CB_SRC_AUTOINC 0x00800000
+#define CB_SRC_IO_GATED 0x00400000
+#define CB_DEST_AUTOINC 0x00080000
+#define CB_SRC_SIZE_LONG 0x00200000
+#define CB_DEST_SIZE_LONG 0x00020000
+
+
+/* DMA DEFINES */
+
+#define DMA_CONTROL_SMALL_CB_CONST_VALUE 0x00540000
+#define DMA_CB_STOP_AND_ABORT 0x00000C00
+#define DMA_CB_START 0x00000100
+
+
+#define CX2_SHARED_SRAM_SIZE 0x00030000
+#define CX2_SHARED_SRAM_DMA_CONTROL 0x00027000
+#define CB_MAX_LENGTH 0x1FFF
+
+#define CX2_HOST_EEPROM_DATA_SRAM_SIZE 0xA18
+#define CX2_EEPROM_IMAGE_SIZE 0x100
+
+
+/* DMA defs */
+#define CX2_DMA_I_CURRENT_CB 0x003000D0
+#define CX2_DMA_O_CURRENT_CB 0x003000D4
+#define CX2_DMA_I_DMA_CONTROL 0x003000A4
+#define CX2_DMA_I_CB_BASE 0x003000A0
+
+#define CX2_TX_CMD_QUEUE_BD_BASE (0x00000200)
+#define CX2_TX_CMD_QUEUE_BD_SIZE (0x00000204)
+#define CX2_TX_QUEUE_0_BD_BASE (0x00000208)
+#define CX2_TX_QUEUE_0_BD_SIZE (0x0000020C)
+#define CX2_TX_QUEUE_1_BD_BASE (0x00000210)
+#define CX2_TX_QUEUE_1_BD_SIZE (0x00000214)
+#define CX2_TX_QUEUE_2_BD_BASE (0x00000218)
+#define CX2_TX_QUEUE_2_BD_SIZE (0x0000021C)
+#define CX2_TX_QUEUE_3_BD_BASE (0x00000220)
+#define CX2_TX_QUEUE_3_BD_SIZE (0x00000224)
+#define CX2_RX_BD_BASE (0x00000240)
+#define CX2_RX_BD_SIZE (0x00000244)
+#define CX2_RFDS_TABLE_LOWER (0x00000500)
+
+#define CX2_TX_CMD_QUEUE_READ_INDEX (0x00000280)
+#define CX2_TX_QUEUE_0_READ_INDEX (0x00000284)
+#define CX2_TX_QUEUE_1_READ_INDEX (0x00000288)
+#define CX2_TX_QUEUE_2_READ_INDEX (0x0000028C)
+#define CX2_TX_QUEUE_3_READ_INDEX (0x00000290)
+#define CX2_RX_READ_INDEX (0x000002A0)
+
+#define CX2_TX_CMD_QUEUE_WRITE_INDEX (0x00000F80)
+#define CX2_TX_QUEUE_0_WRITE_INDEX (0x00000F84)
+#define CX2_TX_QUEUE_1_WRITE_INDEX (0x00000F88)
+#define CX2_TX_QUEUE_2_WRITE_INDEX (0x00000F8C)
+#define CX2_TX_QUEUE_3_WRITE_INDEX (0x00000F90)
+#define CX2_RX_WRITE_INDEX (0x00000FA0)
+
+/*
+ * EEPROM Related Definitions
+ */
+
+#define IPW_EEPROM_DATA_SRAM_ADDRESS (CX2_SHARED_LOWER_BOUND + 0x814)
+#define IPW_EEPROM_DATA_SRAM_SIZE (CX2_SHARED_LOWER_BOUND + 0x818)
+#define IPW_EEPROM_LOAD_DISABLE (CX2_SHARED_LOWER_BOUND + 0x81C)
+#define IPW_EEPROM_DATA (CX2_SHARED_LOWER_BOUND + 0x820)
+#define IPW_EEPROM_UPPER_ADDRESS (CX2_SHARED_LOWER_BOUND + 0x9E0)
+
+#define IPW_STATION_TABLE_LOWER (CX2_SHARED_LOWER_BOUND + 0xA0C)
+#define IPW_STATION_TABLE_UPPER (CX2_SHARED_LOWER_BOUND + 0xB0C)
+#define IPW_REQUEST_ATIM (CX2_SHARED_LOWER_BOUND + 0xB0C)
+#define IPW_ATIM_SENT (CX2_SHARED_LOWER_BOUND + 0xB10)
+#define IPW_WHO_IS_AWAKE (CX2_SHARED_LOWER_BOUND + 0xB14)
+#define IPW_DURING_ATIM_WINDOW (CX2_SHARED_LOWER_BOUND + 0xB18)
+
+
+#define MSB 1
+#define LSB 0
+#define WORD_TO_BYTE(_word) ((_word) * sizeof(u16))
+
+#define GET_EEPROM_ADDR(_wordoffset,_byteoffset) \
+ ( WORD_TO_BYTE(_wordoffset) + (_byteoffset) )
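+
+/*
+ * Example: EEPROM_MAC_ADDRESS below is GET_EEPROM_ADDR(0x21, LSB),
+ * i.e. word 0x21 * 2 + 0 == byte offset 0x42 into the EEPROM image.
+ */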
+
+/* EEPROM access by BYTE */
+#define EEPROM_PME_CAPABILITY (GET_EEPROM_ADDR(0x09,MSB)) /* 1 byte */
+#define EEPROM_MAC_ADDRESS (GET_EEPROM_ADDR(0x21,LSB)) /* 6 byte */
+#define EEPROM_VERSION (GET_EEPROM_ADDR(0x24,MSB)) /* 1 byte */
+#define EEPROM_NIC_TYPE (GET_EEPROM_ADDR(0x25,LSB)) /* 1 byte */
+#define EEPROM_SKU_CAPABILITY (GET_EEPROM_ADDR(0x25,MSB)) /* 1 byte */
+#define EEPROM_COUNTRY_CODE (GET_EEPROM_ADDR(0x26,LSB)) /* 3 bytes */
+#define EEPROM_IBSS_CHANNELS_BG (GET_EEPROM_ADDR(0x28,LSB)) /* 2 bytes */
+#define EEPROM_IBSS_CHANNELS_A (GET_EEPROM_ADDR(0x29,MSB)) /* 5 bytes */
+#define EEPROM_BSS_CHANNELS_BG (GET_EEPROM_ADDR(0x2c,LSB)) /* 2 bytes */
+#define EEPROM_HW_VERSION (GET_EEPROM_ADDR(0x72,LSB)) /* 2 bytes */
+
+/* NIC type as found in the one byte EEPROM_NIC_TYPE offset*/
+#define EEPROM_NIC_TYPE_STANDARD 0
+#define EEPROM_NIC_TYPE_DELL 1
+#define EEPROM_NIC_TYPE_FUJITSU 2
+#define EEPROM_NIC_TYPE_IBM 3
+#define EEPROM_NIC_TYPE_HP 4
+
+#define FW_MEM_REG_LOWER_BOUND 0x00300000
+#define FW_MEM_REG_EEPROM_ACCESS (FW_MEM_REG_LOWER_BOUND + 0x40)
+
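+/*
+ * Bit-banged serial (Microwire-style) EEPROM interface: SK is the
+ * clock, CS the chip select, DI data into the EEPROM and DO data out
+ * of it.  EEPROM_CMD_READ (binary 10) is the serial read opcode.
+ */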
+#define EEPROM_BIT_SK __BIT(0)
+#define EEPROM_BIT_CS __BIT(1)
+#define EEPROM_BIT_DI __BIT(2)
+#define EEPROM_BIT_DO __BIT(4)
+
+#define EEPROM_CMD_READ 0x2
+
+/* Defines a single bit by bit number (0-31) */
+#define __BIT(x) (1UL << (x))
+
+/* Interrupts masks */
+#define CX2_INTA_NONE 0x00000000
+
+#define CX2_INTA_BIT_RX_TRANSFER 0x00000002
+#define CX2_INTA_BIT_STATUS_CHANGE 0x00000010
+#define CX2_INTA_BIT_BEACON_PERIOD_EXPIRED 0x00000020
+
+/* INTA bits for CF */
+#define CX2_INTA_BIT_TX_CMD_QUEUE 0x00000800
+#define CX2_INTA_BIT_TX_QUEUE_1 0x00001000
+#define CX2_INTA_BIT_TX_QUEUE_2 0x00002000
+#define CX2_INTA_BIT_TX_QUEUE_3 0x00004000
+#define CX2_INTA_BIT_TX_QUEUE_4 0x00008000
+
+#define CX2_INTA_BIT_SLAVE_MODE_HOST_CMD_DONE 0x00010000
+
+#define CX2_INTA_BIT_PREPARE_FOR_POWER_DOWN 0x00100000
+#define CX2_INTA_BIT_POWER_DOWN 0x00200000
+
+#define CX2_INTA_BIT_FW_INITIALIZATION_DONE 0x01000000
+#define CX2_INTA_BIT_FW_CARD_DISABLE_PHY_OFF_DONE 0x02000000
+#define CX2_INTA_BIT_RF_KILL_DONE 0x04000000
+#define CX2_INTA_BIT_FATAL_ERROR 0x40000000
+#define CX2_INTA_BIT_PARITY_ERROR 0x80000000
+
+/* Interrupts enabled at init time. */
+#define CX2_INTA_MASK_ALL \
+ (CX2_INTA_BIT_TX_QUEUE_1 | \
+ CX2_INTA_BIT_TX_QUEUE_2 | \
+ CX2_INTA_BIT_TX_QUEUE_3 | \
+ CX2_INTA_BIT_TX_QUEUE_4 | \
+ CX2_INTA_BIT_TX_CMD_QUEUE | \
+ CX2_INTA_BIT_RX_TRANSFER | \
+ CX2_INTA_BIT_FATAL_ERROR | \
+ CX2_INTA_BIT_PARITY_ERROR | \
+ CX2_INTA_BIT_STATUS_CHANGE | \
+ CX2_INTA_BIT_FW_INITIALIZATION_DONE | \
+ CX2_INTA_BIT_BEACON_PERIOD_EXPIRED | \
+ CX2_INTA_BIT_SLAVE_MODE_HOST_CMD_DONE | \
+ CX2_INTA_BIT_PREPARE_FOR_POWER_DOWN | \
+ CX2_INTA_BIT_POWER_DOWN | \
+ CX2_INTA_BIT_RF_KILL_DONE )
+
+#define IPWSTATUS_ERROR_LOG (CX2_SHARED_LOWER_BOUND + 0x410)
+#define IPW_EVENT_LOG (CX2_SHARED_LOWER_BOUND + 0x414)
+
+/* FW event log definitions */
+#define EVENT_ELEM_SIZE (3 * sizeof(u32))
+#define EVENT_START_OFFSET (1 * sizeof(u32) + 2 * sizeof(u16))
+
+/* FW error log definitions */
+#define ERROR_ELEM_SIZE (7 * sizeof(u32))
+#define ERROR_START_OFFSET (1 * sizeof(u32))
+
+enum {
+ IPW_FW_ERROR_OK = 0,
+ IPW_FW_ERROR_FAIL,
+ IPW_FW_ERROR_MEMORY_UNDERFLOW,
+ IPW_FW_ERROR_MEMORY_OVERFLOW,
+ IPW_FW_ERROR_BAD_PARAM,
+ IPW_FW_ERROR_BAD_CHECKSUM,
+ IPW_FW_ERROR_NMI_INTERRUPT,
+ IPW_FW_ERROR_BAD_DATABASE,
+ IPW_FW_ERROR_ALLOC_FAIL,
+ IPW_FW_ERROR_DMA_UNDERRUN,
+ IPW_FW_ERROR_DMA_STATUS,
+ IPW_FW_ERROR_DINOSTATUS_ERROR,
+ IPW_FW_ERROR_EEPROMSTATUS_ERROR,
+ IPW_FW_ERROR_SYSASSERT,
+ IPW_FW_ERROR_FATAL_ERROR
+};
+
+#define AUTH_OPEN 0
+#define AUTH_SHARED_KEY 1
+#define AUTH_IGNORE 3
+
+#define HC_ASSOCIATE 0
+#define HC_REASSOCIATE 1
+#define HC_DISASSOCIATE 2
+#define HC_IBSS_START 3
+#define HC_IBSS_RECONF 4
+#define HC_DISASSOC_QUIET 5
+
+#define IPW_RATE_CAPABILITIES 1
+#define IPW_RATE_CONNECT 0
+
+
+/*
+ * Rate values and masks
+ */
+#define IPW_TX_RATE_1MB 0x0A
+#define IPW_TX_RATE_2MB 0x14
+#define IPW_TX_RATE_5MB 0x37
+#define IPW_TX_RATE_6MB 0x0D
+#define IPW_TX_RATE_9MB 0x0F
+#define IPW_TX_RATE_11MB 0x6E
+#define IPW_TX_RATE_12MB 0x05
+#define IPW_TX_RATE_18MB 0x07
+#define IPW_TX_RATE_24MB 0x09
+#define IPW_TX_RATE_36MB 0x0B
+#define IPW_TX_RATE_48MB 0x01
+#define IPW_TX_RATE_54MB 0x03
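+
+/*
+ * CCK rates are encoded in units of 100 kb/s (0x0A == 10 -> 1 Mb/s,
+ * 0x37 == 55 -> 5.5 Mb/s, 0x6E == 110 -> 11 Mb/s); the OFDM values
+ * are 802.11a/g PLCP rate codes (0x0D -> 6 Mb/s ... 0x03 -> 54 Mb/s).
+ */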
+
+#define IPW_ORD_TABLE_ID_MASK 0x0000FF00
+#define IPW_ORD_TABLE_VALUE_MASK 0x000000FF
+
+#define IPW_ORD_TABLE_0_MASK 0x0000F000
+#define IPW_ORD_TABLE_1_MASK 0x0000F100
+#define IPW_ORD_TABLE_2_MASK 0x0000F200
+#define IPW_ORD_TABLE_3_MASK 0x0000F300
+#define IPW_ORD_TABLE_4_MASK 0x0000F400
+#define IPW_ORD_TABLE_5_MASK 0x0000F500
+#define IPW_ORD_TABLE_6_MASK 0x0000F600
+#define IPW_ORD_TABLE_7_MASK 0x0000F700
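+
+/*
+ * An ordinal encodes its table in bits 8-15 and its entry in bits
+ * 0-7: IPW_ORD_STAT_TX_CURR_RATE below is 0x0000F000 + 1 == 0xF001,
+ * i.e. entry 1 of table 0.
+ */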
+
+/*
+ * Table 0 Entries (all entries are 32 bits)
+ */
+enum {
+ IPW_ORD_STAT_TX_CURR_RATE = IPW_ORD_TABLE_0_MASK + 1,
+ IPW_ORD_STAT_FRAG_TRESHOLD,
+ IPW_ORD_STAT_RTS_THRESHOLD,
+ IPW_ORD_STAT_TX_HOST_REQUESTS,
+ IPW_ORD_STAT_TX_HOST_COMPLETE,
+ IPW_ORD_STAT_TX_DIR_DATA,
+ IPW_ORD_STAT_TX_DIR_DATA_B_1,
+ IPW_ORD_STAT_TX_DIR_DATA_B_2,
+ IPW_ORD_STAT_TX_DIR_DATA_B_5_5,
+ IPW_ORD_STAT_TX_DIR_DATA_B_11,
+ /* Hole: ordinals 11 through 18 are unused */
+
+ IPW_ORD_STAT_TX_DIR_DATA_G_1 = IPW_ORD_TABLE_0_MASK + 19,
+ IPW_ORD_STAT_TX_DIR_DATA_G_2,
+ IPW_ORD_STAT_TX_DIR_DATA_G_5_5,
+ IPW_ORD_STAT_TX_DIR_DATA_G_6,
+ IPW_ORD_STAT_TX_DIR_DATA_G_9,
+ IPW_ORD_STAT_TX_DIR_DATA_G_11,
+ IPW_ORD_STAT_TX_DIR_DATA_G_12,
+ IPW_ORD_STAT_TX_DIR_DATA_G_18,
+ IPW_ORD_STAT_TX_DIR_DATA_G_24,
+ IPW_ORD_STAT_TX_DIR_DATA_G_36,
+ IPW_ORD_STAT_TX_DIR_DATA_G_48,
+ IPW_ORD_STAT_TX_DIR_DATA_G_54,
+ IPW_ORD_STAT_TX_NON_DIR_DATA,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_B_1,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_B_2,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_B_5_5,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_B_11,
+ /* Hole: ordinals 36 through 43 are unused */
+
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_1 = IPW_ORD_TABLE_0_MASK + 44,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_2,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_5_5,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_6,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_9,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_11,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_12,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_18,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_24,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_36,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_48,
+ IPW_ORD_STAT_TX_NON_DIR_DATA_G_54,
+ IPW_ORD_STAT_TX_RETRY,
+ IPW_ORD_STAT_TX_FAILURE,
+ IPW_ORD_STAT_RX_ERR_CRC,
+ IPW_ORD_STAT_RX_ERR_ICV,
+ IPW_ORD_STAT_RX_NO_BUFFER,
+ IPW_ORD_STAT_FULL_SCANS,
+ IPW_ORD_STAT_PARTIAL_SCANS,
+ IPW_ORD_STAT_TGH_ABORTED_SCANS,
+ IPW_ORD_STAT_TX_TOTAL_BYTES,
+ IPW_ORD_STAT_CURR_RSSI_RAW,
+ IPW_ORD_STAT_RX_BEACON,
+ IPW_ORD_STAT_MISSED_BEACONS,
+ IPW_ORD_TABLE_0_LAST
+};
+
+#define IPW_RSSI_TO_DBM 112
+
+/* Table 1 Entries
+ */
+enum {
+ IPW_ORD_TABLE_1_LAST = IPW_ORD_TABLE_1_MASK | 1,
+};
+
+/*
+ * Table 2 Entries
+ *
+ * FW_VERSION: 16 byte string
+ * FW_DATE: 16 byte string (only 14 bytes used)
+ * UCODE_VERSION: 4 byte version code
+ * UCODE_DATE: 5 byte date code
+ * ADAPTER_MAC: 6 byte MAC address
+ * RTC: 4 byte clock
+ */
+enum {
+ IPW_ORD_STAT_FW_VERSION = IPW_ORD_TABLE_2_MASK | 1,
+ IPW_ORD_STAT_FW_DATE,
+ IPW_ORD_STAT_UCODE_VERSION,
+ IPW_ORD_STAT_UCODE_DATE,
+ IPW_ORD_STAT_ADAPTER_MAC,
+ IPW_ORD_STAT_RTC,
+ IPW_ORD_TABLE_2_LAST
+};
+
+/* Table 3 */
+enum {
+ IPW_ORD_STAT_TX_PACKET = IPW_ORD_TABLE_3_MASK | 0,
+ IPW_ORD_STAT_TX_PACKET_FAILURE,
+ IPW_ORD_STAT_TX_PACKET_SUCCESS,
+ IPW_ORD_STAT_TX_PACKET_ABORTED,
+ IPW_ORD_TABLE_3_LAST
+};
+
+/* Table 4 */
+enum {
+ IPW_ORD_TABLE_4_LAST = IPW_ORD_TABLE_4_MASK
+};
+
+/* Table 5 */
+enum {
+ IPW_ORD_STAT_AVAILABLE_AP_COUNT = IPW_ORD_TABLE_5_MASK,
+ IPW_ORD_STAT_AP_ASSNS,
+ IPW_ORD_STAT_ROAM,
+ IPW_ORD_STAT_ROAM_CAUSE_MISSED_BEACONS,
+ IPW_ORD_STAT_ROAM_CAUSE_UNASSOC,
+ IPW_ORD_STAT_ROAM_CAUSE_RSSI,
+ IPW_ORD_STAT_ROAM_CAUSE_LINK_QUALITY,
+ IPW_ORD_STAT_ROAM_CAUSE_AP_LOAD_BALANCE,
+ IPW_ORD_STAT_ROAM_CAUSE_AP_NO_TX,
+ IPW_ORD_STAT_LINK_UP,
+ IPW_ORD_STAT_LINK_DOWN,
+ IPW_ORD_ANTENNA_DIVERSITY,
+ IPW_ORD_CURR_FREQ,
+ IPW_ORD_TABLE_5_LAST
+};
+
+/* Table 6 */
+enum {
+ IPW_ORD_COUNTRY_CODE = IPW_ORD_TABLE_6_MASK,
+ IPW_ORD_CURR_BSSID,
+ IPW_ORD_CURR_SSID,
+ IPW_ORD_TABLE_6_LAST
+};
+
+/* Table 7 */
+enum {
+ IPW_ORD_STAT_PERCENT_MISSED_BEACONS = IPW_ORD_TABLE_7_MASK,
+ IPW_ORD_STAT_PERCENT_TX_RETRIES,
+ IPW_ORD_STAT_PERCENT_LINK_QUALITY,
+ IPW_ORD_STAT_CURR_RSSI_DBM,
+ IPW_ORD_TABLE_7_LAST
+};
+
+#define IPW_ORDINALS_TABLE_LOWER (CX2_SHARED_LOWER_BOUND + 0x500)
+#define IPW_ORDINALS_TABLE_0 (CX2_SHARED_LOWER_BOUND + 0x180)
+#define IPW_ORDINALS_TABLE_1 (CX2_SHARED_LOWER_BOUND + 0x184)
+#define IPW_ORDINALS_TABLE_2 (CX2_SHARED_LOWER_BOUND + 0x188)
+#define IPW_MEM_FIXED_OVERRIDE (CX2_SHARED_LOWER_BOUND + 0x41C)
+
+struct ipw_fixed_rate {
+ u16 tx_rates;
+ u16 reserved;
+} __attribute__ ((packed));
+
+#define CX2_INDIRECT_ADDR_MASK (~0x3ul)
+
+struct host_cmd {
+ u8 cmd;
+ u8 len;
+ u16 reserved;
+ u32 param[TFD_CMD_IMMEDIATE_PAYLOAD_LENGTH];
+} __attribute__ ((packed));
+
+#define CFG_BT_COEXISTENCE_MIN 0x00
+#define CFG_BT_COEXISTENCE_DEFER 0x02
+#define CFG_BT_COEXISTENCE_KILL 0x04
+#define CFG_BT_COEXISTENCE_WME_OVER_BT 0x08
+#define CFG_BT_COEXISTENCE_OOB 0x10
+#define CFG_BT_COEXISTENCE_MAX 0xFF
+#define CFG_BT_COEXISTENCE_DEF 0x80 /* read Bt from EEPROM*/
+
+#define CFG_CTS_TO_ITSELF_ENABLED_MIN 0x0
+#define CFG_CTS_TO_ITSELF_ENABLED_MAX 0x1
+#define CFG_CTS_TO_ITSELF_ENABLED_DEF CFG_CTS_TO_ITSELF_ENABLED_MIN
+
+#define CFG_SYS_ANTENNA_BOTH 0x000
+#define CFG_SYS_ANTENNA_A 0x001
+#define CFG_SYS_ANTENNA_B 0x003
+
+/*
+ * The definitions below were lifted off the ipw2100 driver, which only
+ * supports 'b' mode, so these are probably not exactly correct.
+ *
+ * Somebody fix these!!
+ */
+#define REG_MIN_CHANNEL 0
+#define REG_MAX_CHANNEL 14
+
+#define REG_CHANNEL_MASK 0x00003FFF
+#define IPW_IBSS_11B_DEFAULT_MASK 0x87ff
+
+static const long ipw_frequencies[] = {
+ 2412, 2417, 2422, 2427,
+ 2432, 2437, 2442, 2447,
+ 2452, 2457, 2462, 2467,
+ 2472, 2484
+};
+
+#define FREQ_COUNT ARRAY_SIZE(ipw_frequencies)
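+
+/* ipw_frequencies[i] is the center frequency in MHz of channel i + 1;
+ * channel 14 (2484 MHz) is the Japan-only 11b channel. */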
+
+#define IPW_MAX_CONFIG_RETRIES 10
+
+static inline u32 frame_hdr_len(struct ieee80211_hdr *hdr)
+{
+ u32 retval;
+ u16 fc;
+
+ retval = sizeof(struct ieee80211_hdr);
+ fc = le16_to_cpu(hdr->frame_ctl);
+
+ /*
+ * Function ToDS FromDS
+ * IBSS 0 0
+ * To AP 1 0
+ * From AP 0 1
+ * WDS (bridge) 1 1
+ *
+ * Only WDS frames use Address4 among them. --YZ
+ */
+ if (!(fc & IEEE80211_FCTL_TODS) || !(fc & IEEE80211_FCTL_FROMDS))
+ retval -= ETH_ALEN;
+
+ return retval;
+}
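+
+/*
+ * Example: for a station-to-AP data frame (ToDS=1, FromDS=0) there is
+ * no Address4, so frame_hdr_len() returns
+ * sizeof(struct ieee80211_hdr) - ETH_ALEN.
+ */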
+
+#endif /* __ipw2200_h__ */
--- /dev/null
+/*
+ * File: pci-acpi.c
+ * Purpose: Provide PCI support in ACPI
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/module.h>
+#include <acpi/acpi.h>
+#include <acpi/acnamesp.h>
+#include <acpi/acresrc.h>
+#include <acpi/acpi_bus.h>
+
+#include <linux/pci-acpi.h>
+
+static u32 ctrlset_buf[3] = {0, 0, 0};
+static u32 global_ctrlsets = 0;
+u8 OSC_UUID[16] = {0x5B, 0x4D, 0xDB, 0x33, 0xF7, 0x1F, 0x1C, 0x40, 0x96, 0x57, 0x74, 0x41, 0xC0, 0x3D, 0xD7, 0x66};
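+
+/*
+ * ctrlset_buf holds the three dwords passed to _OSC as its
+ * capabilities buffer: [OSC_QUERY_TYPE] the query flag,
+ * [OSC_SUPPORT_TYPE] the OS support bits and [OSC_CONTROL_TYPE] the
+ * requested control bits; hence the 12-byte buffer length used below.
+ */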
+
+static acpi_status
+acpi_query_osc (
+ acpi_handle handle,
+ u32 level,
+ void *context,
+ void **retval )
+{
+ acpi_status status;
+ struct acpi_object_list input;
+ union acpi_object in_params[4];
+ struct acpi_buffer output;
+ union acpi_object out_obj;
+ u32 osc_dw0;
+
+ /* Setting up output buffer */
+ output.length = sizeof(out_obj) + 3*sizeof(u32);
+ output.pointer = &out_obj;
+
+ /* Setting up input parameters */
+ input.count = 4;
+ input.pointer = in_params;
+ in_params[0].type = ACPI_TYPE_BUFFER;
+ in_params[0].buffer.length = 16;
+ in_params[0].buffer.pointer = OSC_UUID;
+ in_params[1].type = ACPI_TYPE_INTEGER;
+ in_params[1].integer.value = 1;
+ in_params[2].type = ACPI_TYPE_INTEGER;
+ in_params[2].integer.value = 3;
+ in_params[3].type = ACPI_TYPE_BUFFER;
+ in_params[3].buffer.length = 12;
+ in_params[3].buffer.pointer = (u8 *)context;
+
+ status = acpi_evaluate_object(handle, "_OSC", &input, &output);
+ if (ACPI_FAILURE (status)) {
+ printk(KERN_DEBUG
+ "Evaluate _OSC Set fails. Status = 0x%04x\n", status);
+ return status;
+ }
+ if (out_obj.type != ACPI_TYPE_BUFFER) {
+ printk(KERN_DEBUG
+ "Evaluate _OSC returns wrong type\n");
+ return AE_TYPE;
+ }
+ osc_dw0 = *((u32 *) out_obj.buffer.pointer);
+ if (osc_dw0) {
+ if (osc_dw0 & OSC_REQUEST_ERROR)
+ printk(KERN_DEBUG "_OSC request fails\n");
+ if (osc_dw0 & OSC_INVALID_UUID_ERROR)
+ printk(KERN_DEBUG "_OSC invalid UUID\n");
+ if (osc_dw0 & OSC_INVALID_REVISION_ERROR)
+ printk(KERN_DEBUG "_OSC invalid revision\n");
+ if (osc_dw0 & OSC_CAPABILITIES_MASK_ERROR) {
+ /* Update Global Control Set */
+ global_ctrlsets = *((u32 *)(out_obj.buffer.pointer+8));
+ return AE_OK;
+ }
+ return AE_ERROR;
+ }
+
+ /* Update Global Control Set */
+ global_ctrlsets = *((u32 *)(out_obj.buffer.pointer + 8));
+ return AE_OK;
+}
+
+
+static acpi_status
+acpi_run_osc (
+ acpi_handle handle,
+ u32 level,
+ void *context,
+ void **retval )
+{
+ acpi_status status;
+ struct acpi_object_list input;
+ union acpi_object in_params[4];
+ struct acpi_buffer output;
+ union acpi_object out_obj;
+ u32 osc_dw0;
+
+ /* Setting up output buffer */
+ output.length = sizeof(out_obj) + 3*sizeof(u32);
+ output.pointer = &out_obj;
+
+ /* Setting up input parameters */
+ input.count = 4;
+ input.pointer = in_params;
+ in_params[0].type = ACPI_TYPE_BUFFER;
+ in_params[0].buffer.length = 16;
+ in_params[0].buffer.pointer = OSC_UUID;
+ in_params[1].type = ACPI_TYPE_INTEGER;
+ in_params[1].integer.value = 1;
+ in_params[2].type = ACPI_TYPE_INTEGER;
+ in_params[2].integer.value = 3;
+ in_params[3].type = ACPI_TYPE_BUFFER;
+ in_params[3].buffer.length = 12;
+ in_params[3].buffer.pointer = (u8 *)context;
+
+ status = acpi_evaluate_object(handle, "_OSC", &input, &output);
+ if (ACPI_FAILURE (status)) {
+ printk(KERN_DEBUG
+ "Evaluate _OSC Set fails. Status = 0x%04x\n", status);
+ return status;
+ }
+ if (out_obj.type != ACPI_TYPE_BUFFER) {
+ printk(KERN_DEBUG
+ "Evaluate _OSC returns wrong type\n");
+ return AE_TYPE;
+ }
+ osc_dw0 = *((u32 *) out_obj.buffer.pointer);
+ if (osc_dw0) {
+ if (osc_dw0 & OSC_REQUEST_ERROR)
+ printk(KERN_DEBUG "_OSC request fails\n");
+ if (osc_dw0 & OSC_INVALID_UUID_ERROR)
+ printk(KERN_DEBUG "_OSC invalid UUID\n");
+ if (osc_dw0 & OSC_INVALID_REVISION_ERROR)
+ printk(KERN_DEBUG "_OSC invalid revision\n");
+ if (osc_dw0 & OSC_CAPABILITIES_MASK_ERROR) {
+ printk(KERN_DEBUG "_OSC FW not grant req. control\n");
+ return AE_SUPPORT;
+ }
+ return AE_ERROR;
+ }
+ return AE_OK;
+}
+
+/**
+ * pci_osc_support_set - register OS support to Firmware
+ * @flags: OS support bits
+ *
+ * Update the OS support fields and do an _OSC query to obtain an
+ * update from firmware on the supported control bits.
+ **/
+acpi_status pci_osc_support_set(u32 flags)
+{
+ u32 temp;
+
+ if (!(flags & OSC_SUPPORT_MASKS)) {
+ return AE_TYPE;
+ }
+ ctrlset_buf[OSC_SUPPORT_TYPE] |= (flags & OSC_SUPPORT_MASKS);
+
+ /* do _OSC query for all possible controls */
+ temp = ctrlset_buf[OSC_CONTROL_TYPE];
+ ctrlset_buf[OSC_QUERY_TYPE] = OSC_QUERY_ENABLE;
+ ctrlset_buf[OSC_CONTROL_TYPE] = OSC_CONTROL_MASKS;
+ acpi_get_devices ( PCI_ROOT_HID_STRING,
+ acpi_query_osc,
+ ctrlset_buf,
+ NULL );
+ ctrlset_buf[OSC_QUERY_TYPE] = !OSC_QUERY_ENABLE;
+ ctrlset_buf[OSC_CONTROL_TYPE] = temp;
+ return AE_OK;
+}
+EXPORT_SYMBOL(pci_osc_support_set);
+
+/**
+ * pci_osc_control_set - commit requested control to Firmware
+ * @flags: driver's requested control bits
+ *
+ * Attempt to take control from Firmware on requested control bits.
+ **/
+acpi_status pci_osc_control_set(u32 flags)
+{
+ acpi_status status;
+ u32 ctrlset;
+
+ ctrlset = (flags & OSC_CONTROL_MASKS);
+ if (!ctrlset) {
+ return AE_TYPE;
+ }
+ if (ctrlset_buf[OSC_SUPPORT_TYPE] &&
+ ((global_ctrlsets & ctrlset) != ctrlset)) {
+ return AE_SUPPORT;
+ }
+ ctrlset_buf[OSC_CONTROL_TYPE] |= ctrlset;
+ status = acpi_get_devices ( PCI_ROOT_HID_STRING,
+ acpi_run_osc,
+ ctrlset_buf,
+ NULL );
+ if (ACPI_FAILURE (status)) {
+ ctrlset_buf[OSC_CONTROL_TYPE] &= ~ctrlset;
+ }
+
+ return status;
+}
+EXPORT_SYMBOL(pci_osc_control_set);
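+
+/*
+ * Illustrative call sequence for a port driver (the capability names
+ * are assumed to come from <linux/pci-acpi.h>):
+ *
+ *	pci_osc_support_set(OSC_EXT_PCI_CONFIG_SUPPORT);
+ *	if (pci_osc_control_set(OSC_PCI_EXPRESS_NATIVE_HP_CONTROL) != AE_OK)
+ *		fall back to firmware-owned hotplug;
+ */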
--- /dev/null
+/*
+ * Driver for the Cirrus PD6729 PCI-PCMCIA bridge.
+ *
+ * Based on the i82092.c driver.
+ *
+ * This software may be used and distributed according to the terms of
+ * the GNU General Public License, incorporated herein by reference.
+ */
+
+#include <linux/kernel.h>
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/workqueue.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/cs.h>
+
+#include <asm/system.h>
+#include <asm/io.h>
+
+#include "pd6729.h"
+#include "i82365.h"
+#include "cirrus.h"
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Driver for the Cirrus PD6729 PCI-PCMCIA bridge");
+MODULE_AUTHOR("Jun Komuro <komurojun@mbn.nifty.com>");
+
+#define MAX_SOCKETS 2
+
+/*
+ * Simple helper functions.
+ * External clock time, in nanoseconds: 120 ns = 8.33 MHz.
+ */
+#define to_cycles(ns) ((ns)/120)
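+/* e.g. to_cycles(300) == 2: a 300 ns cycle time rounds down to two
+ * 120 ns external clock cycles */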
+
+static spinlock_t port_lock = SPIN_LOCK_UNLOCKED;
+
+/* basic value read/write functions */
+
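+/*
+ * The PD6729 exposes its ExCA-style register set through an
+ * index/data pair: the register number is written to io_base and the
+ * value is read or written at io_base + 1.  The second socket's
+ * registers sit at an offset of 0x40.
+ */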
+static unsigned char indirect_read(struct pd6729_socket *socket, unsigned short reg)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg += socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+
+ return val;
+}
+
+static unsigned short indirect_read16(struct pd6729_socket *socket, unsigned short reg)
+{
+ unsigned long port;
+ unsigned short tmp;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ tmp = inb(port + 1);
+ reg++;
+ outb(reg, port);
+ tmp = tmp | (inb(port + 1) << 8);
+ spin_unlock_irqrestore(&port_lock, flags);
+
+ return tmp;
+}
+
+static void indirect_write(struct pd6729_socket *socket, unsigned short reg, unsigned char value)
+{
+ unsigned long port;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ outb(value, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_setbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ val |= mask;
+ outb(reg, port);
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_resetbit(struct pd6729_socket *socket, unsigned short reg, unsigned char mask)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+ outb(reg, port);
+ val = inb(port + 1);
+ val &= ~mask;
+ outb(reg, port);
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+static void indirect_write16(struct pd6729_socket *socket, unsigned short reg, unsigned short value)
+{
+ unsigned long port;
+ unsigned char val;
+ unsigned long flags;
+
+ spin_lock_irqsave(&port_lock, flags);
+ reg = reg + socket->number * 0x40;
+ port = socket->io_base;
+
+ outb(reg, port);
+ val = value & 255;
+ outb(val, port + 1);
+
+ reg++;
+
+ outb(reg, port);
+ val = value >> 8;
+ outb(val, port + 1);
+ spin_unlock_irqrestore(&port_lock, flags);
+}
+
+/* Interrupt handler functionality */
+
+static irqreturn_t pd6729_interrupt(int irq, void *dev, struct pt_regs *regs)
+{
+ struct pd6729_socket *socket = (struct pd6729_socket *)dev;
+ int i;
+ int loopcount = 0;
+ int handled = 0;
+ unsigned int events, active = 0;
+
+ while (1) {
+ loopcount++;
+ if (loopcount > 20) {
+ printk(KERN_ERR "pd6729: infinite eventloop in interrupt\n");
+ break;
+ }
+
+ active = 0;
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ unsigned int csc;
+
+ /* card status change register */
+ csc = indirect_read(&socket[i], I365_CSC);
+ if (csc == 0) /* no events on this socket */
+ continue;
+
+ handled = 1;
+ events = 0;
+
+ if (csc & I365_CSC_DETECT) {
+ events |= SS_DETECT;
+ dprintk("Card detected in socket %i!\n", i);
+ }
+
+ if (indirect_read(&socket[i], I365_INTCTL) & I365_PC_IOCARD) {
+ /* For IO/CARDS, bit 0 means "read the card" */
+ events |= (csc & I365_CSC_STSCHG) ? SS_STSCHG : 0;
+ } else {
+ /* Check for battery/ready events */
+ events |= (csc & I365_CSC_BVD1) ? SS_BATDEAD : 0;
+ events |= (csc & I365_CSC_BVD2) ? SS_BATWARN : 0;
+ events |= (csc & I365_CSC_READY) ? SS_READY : 0;
+ }
+
+ if (events) {
+ pcmcia_parse_events(&socket[i].socket, events);
+ }
+ active |= events;
+ }
+
+ if (active == 0) /* no more events to handle */
+ break;
+ }
+ return IRQ_RETVAL(handled);
+}
+
+/* socket functions */
+
+static void set_bridge_state(struct pd6729_socket *socket)
+{
+ indirect_write(socket, I365_GBLCTL, 0x00);
+ indirect_write(socket, I365_GENCTL, 0x00);
+
+ indirect_setbit(socket, I365_INTCTL, 0x08);
+}
+
+static int pd6729_get_status(struct pcmcia_socket *sock, u_int *value)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned int status;
+ unsigned int data;
+ struct pd6729_socket *t;
+
+ /* Interface Status Register */
+ status = indirect_read(socket, I365_STATUS);
+ *value = 0;
+
+ if ((status & I365_CS_DETECT) == I365_CS_DETECT) {
+ *value |= SS_DETECT;
+ }
+
+ /*
+ * IO cards have a different meaning of bits 0,1
+ * Also notice the inverse-logic on the bits
+ */
+ if (indirect_read(socket, I365_INTCTL) & I365_PC_IOCARD) {
+ /* IO card */
+ if (!(status & I365_CS_STSCHG))
+ *value |= SS_STSCHG;
+ } else {
+ /* non I/O card */
+ if (!(status & I365_CS_BVD1))
+ *value |= SS_BATDEAD;
+ if (!(status & I365_CS_BVD2))
+ *value |= SS_BATWARN;
+ }
+
+ if (status & I365_CS_WRPROT)
+ *value |= SS_WRPROT; /* card is write protected */
+
+ if (status & I365_CS_READY)
+ *value |= SS_READY; /* card is not busy */
+
+ if (status & I365_CS_POWERON)
+ *value |= SS_POWERON; /* power is applied to the card */
+
+ t = (socket->number) ? socket : socket + 1;
+ indirect_write(t, PD67_EXT_INDEX, PD67_EXTERN_DATA);
+ data = indirect_read16(t, PD67_EXT_DATA);
+ *value |= (data & PD67_EXD_VS1(socket->number)) ? 0 : SS_3VCARD;
+
+ return 0;
+}
+
+
+static int pd6729_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char reg, vcc, vpp;
+
+ state->flags = 0;
+ state->Vcc = 0;
+ state->Vpp = 0;
+ state->io_irq = 0;
+ state->csc_mask = 0;
+
+ /*
+ * First the power status of the socket
+ * PCTRL - Power Control Register
+ */
+ reg = indirect_read(socket, I365_POWER);
+
+ if (reg & I365_PWR_AUTO)
+ state->flags |= SS_PWR_AUTO; /* Automatic Power Switch */
+
+ if (reg & I365_PWR_OUT)
+ state->flags |= SS_OUTPUT_ENA; /* Output signals are enabled */
+
+ vcc = reg & I365_VCC_MASK; vpp = reg & I365_VPP1_MASK;
+
+ if (reg & I365_VCC_5V) {
+ state->Vcc = (indirect_read(socket, PD67_MISC_CTL_1) &
+ PD67_MC1_VCC_3V) ? 33 : 50;
+
+ if (vpp == I365_VPP1_5V) {
+ if (state->Vcc == 50)
+ state->Vpp = 50;
+ else
+ state->Vpp = 33;
+ }
+ if (vpp == I365_VPP1_12V)
+ state->Vpp = 120;
+ }
+
+ /*
+ * Now the IO card, RESET flags and IO interrupt
+ * IGENC, Interrupt and General Control
+ */
+ reg = indirect_read(socket, I365_INTCTL);
+
+ if ((reg & I365_PC_RESET) == 0)
+ state->flags |= SS_RESET;
+ if (reg & I365_PC_IOCARD)
+ state->flags |= SS_IOCARD; /* This is an IO card */
+
+ /* Set the IRQ number */
+ state->io_irq = socket->socket.pci_irq;
+
+ /*
+ * Card status change
+ * CSCICR, Card Status Change Interrupt Configuration
+ */
+ reg = indirect_read(socket, I365_CSCINT);
+
+ if (reg & I365_CSC_DETECT)
+ state->csc_mask |= SS_DETECT; /* Card detect is enabled */
+
+ if (state->flags & SS_IOCARD) {/* IO Cards behave different */
+ if (reg & I365_CSC_STSCHG)
+ state->csc_mask |= SS_STSCHG;
+ } else {
+ if (reg & I365_CSC_BVD1)
+ state->csc_mask |= SS_BATDEAD;
+ if (reg & I365_CSC_BVD2)
+ state->csc_mask |= SS_BATWARN;
+ if (reg & I365_CSC_READY)
+ state->csc_mask |= SS_READY;
+ }
+
+ return 0;
+}
+
+static int pd6729_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char reg;
+
+ /* First, set the global controller options */
+
+ set_bridge_state(socket);
+
+ /* Values for the IGENC register */
+
+ reg = 0;
+ /* The reset bit has "inverse" logic */
+ if (!(state->flags & SS_RESET))
+ reg = reg | I365_PC_RESET;
+ if (state->flags & SS_IOCARD)
+ reg = reg | I365_PC_IOCARD;
+
+ /* IGENC, Interrupt and General Control Register */
+ indirect_write(socket, I365_INTCTL, reg);
+
+ /* Power registers */
+
+ reg = I365_PWR_NORESET; /* default: disable resetdrv on resume */
+
+ if (state->flags & SS_PWR_AUTO) {
+ dprintk("Auto power\n");
+ reg |= I365_PWR_AUTO; /* automatic power mngmnt */
+ }
+ if (state->flags & SS_OUTPUT_ENA) {
+ dprintk("Power Enabled\n");
+ reg |= I365_PWR_OUT; /* enable power */
+ }
+
+ switch (state->Vcc) {
+ case 0:
+ break;
+ case 33:
+ dprintk("setting voltage to Vcc to 3.3V on socket %i\n",
+ socket->number);
+ reg |= I365_VCC_5V;
+ indirect_setbit(socket, PD67_MISC_CTL_1, PD67_MC1_VCC_3V);
+ break;
+ case 50:
+ dprintk("setting voltage to Vcc to 5V on socket %i\n",
+ socket->number);
+ reg |= I365_VCC_5V;
+ indirect_resetbit(socket, PD67_MISC_CTL_1, PD67_MC1_VCC_3V);
+ break;
+ default:
+ dprintk("pd6729: pd6729_set_socket called with invalid VCC power value: %i\n",
+ state->Vcc);
+ return -EINVAL;
+ }
+
+ switch (state->Vpp) {
+ case 0:
+ dprintk("not setting Vpp on socket %i\n", socket->number);
+ break;
+ case 33:
+ case 50:
+ dprintk("setting Vpp to Vcc for socket %i\n", socket->number);
+ reg |= I365_VPP1_5V;
+ break;
+ case 120:
+ dprintk("setting Vpp to 12.0\n");
+ reg |= I365_VPP1_12V;
+ break;
+ default:
+ dprintk("pd6729: pd6729_set_socket called with invalid VPP power value: %i\n",
+ state->Vpp);
+ return -EINVAL;
+ }
+
+ /* only write if changed */
+ if (reg != indirect_read(socket, I365_POWER))
+ indirect_write(socket, I365_POWER, reg);
+
+ /* Now, specify that all interrupts are to be done as PCI interrupts */
+ indirect_write(socket, PD67_EXT_INDEX, PD67_EXT_CTL_1);
+ indirect_write(socket, PD67_EXT_DATA, PD67_EC1_INV_MGMT_IRQ | PD67_EC1_INV_CARD_IRQ);
+
+ /* Enable specific interrupt events */
+
+ reg = 0x00;
+ if (state->csc_mask & SS_DETECT) {
+ reg |= I365_CSC_DETECT;
+ }
+ if (state->flags & SS_IOCARD) {
+ if (state->csc_mask & SS_STSCHG)
+ reg |= I365_CSC_STSCHG;
+ } else {
+ if (state->csc_mask & SS_BATDEAD)
+ reg |= I365_CSC_BVD1;
+ if (state->csc_mask & SS_BATWARN)
+ reg |= I365_CSC_BVD2;
+ if (state->csc_mask & SS_READY)
+ reg |= I365_CSC_READY;
+ }
+ reg |= 0x30; /* management IRQ: PCI INTA# = "irq 3" */
+ indirect_write(socket, I365_CSCINT, reg);
+
+ reg = indirect_read(socket, I365_INTCTL);
+ reg |= 0x03; /* card IRQ: PCI INTA# = "irq 3" */
+ indirect_write(socket, I365_INTCTL, reg);
+
+ /* now clear the (probably bogus) pending stuff by doing a dummy read */
+ (void)indirect_read(socket, I365_CSC);
+
+ return 0;
+}
+
+static int pd6729_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *io)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned char map, ioctl;
+
+ map = io->map;
+
+ /* Check error conditions */
+ if (map > 1) {
+ dprintk("pd6729_set_io_map with invalid map");
+ return -EINVAL;
+ }
+
+ /* Turn off the window before changing anything */
+ if (indirect_read(socket, I365_ADDRWIN) & I365_ENA_IO(map))
+ indirect_resetbit(socket, I365_ADDRWIN, I365_ENA_IO(map));
+
+/* dprintk("set_io_map: Setting range to %x - %x\n", io->start, io->stop);*/
+
+ /* write the new values */
+ indirect_write16(socket, I365_IO(map)+I365_W_START, io->start);
+ indirect_write16(socket, I365_IO(map)+I365_W_STOP, io->stop);
+
+ ioctl = indirect_read(socket, I365_IOCTL) & ~I365_IOCTL_MASK(map);
+
+ if (io->flags & MAP_0WS) ioctl |= I365_IOCTL_0WS(map);
+ if (io->flags & MAP_16BIT) ioctl |= I365_IOCTL_16BIT(map);
+ if (io->flags & MAP_AUTOSZ) ioctl |= I365_IOCTL_IOCS16(map);
+
+ indirect_write(socket, I365_IOCTL, ioctl);
+
+ /* Turn the window back on if needed */
+ if (io->flags & MAP_ACTIVE)
+ indirect_setbit(socket, I365_ADDRWIN, I365_ENA_IO(map));
+
+ return 0;
+}
+
+static int pd6729_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *mem)
+{
+ struct pd6729_socket *socket = container_of(sock, struct pd6729_socket, socket);
+ unsigned short base, i;
+ unsigned char map;
+
+ map = mem->map;
+ if (map > 4) {
+ printk("pd6729_set_mem_map: invalid map");
+ return -EINVAL;
+ }
+
+ if ((mem->res->start > mem->res->end) || (mem->speed > 1000)) {
+ printk("pd6729_set_mem_map: invalid address / speed");
+ /* printk("invalid mem map for socket %i : %lx to %lx with a start of %x\n",
+ sock, mem->res->start, mem->res->end, mem->card_start); */
+ return -EINVAL;
+ }
+
+ /* Turn off the window before changing anything */
+ if (indirect_read(socket, I365_ADDRWIN) & I365_ENA_MEM(map))
+ indirect_resetbit(socket, I365_ADDRWIN, I365_ENA_MEM(map));
+
+ /* write the start address */
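+ /*
+ * Windows are 4KB-granular: address bits 23:12 are written here,
+ * bits 31:24 go into the PD67_MEM_PAGE(map) extension register
+ * below ("Take care of high byte").
+ */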
+ base = I365_MEM(map);
+ i = (mem->res->start >> 12) & 0x0fff;
+ if (mem->flags & MAP_16BIT)
+ i |= I365_MEM_16BIT;
+ if (mem->flags & MAP_0WS)
+ i |= I365_MEM_0WS;
+ indirect_write16(socket, base + I365_W_START, i);
+
+ /* write the stop address */
+
+ i = (mem->res->end >> 12) & 0x0fff;
+ switch (to_cycles(mem->speed)) {
+ case 0:
+ break;
+ case 1:
+ i |= I365_MEM_WS0;
+ break;
+ case 2:
+ i |= I365_MEM_WS1;
+ break;
+ default:
+ i |= I365_MEM_WS1 | I365_MEM_WS0;
+ break;
+ }
+
+ indirect_write16(socket, base + I365_W_STOP, i);
+
+ /* Take care of high byte */
+ indirect_write(socket, PD67_EXT_INDEX, PD67_MEM_PAGE(map));
+ indirect_write(socket, PD67_EXT_DATA, mem->res->start >> 24);
+
+ /* card start */
+
+ i = ((mem->card_start - mem->res->start) >> 12) & 0x3fff;
+ if (mem->flags & MAP_WRPROT)
+ i |= I365_MEM_WRPROT;
+ if (mem->flags & MAP_ATTRIB) {
+/* dprintk("requesting attribute memory for socket %i\n",
+ socket->number);*/
+ i |= I365_MEM_REG;
+ } else {
+/* dprintk("requesting normal memory for socket %i\n",
+ socket->number);*/
+ }
+ indirect_write16(socket, base + I365_W_OFF, i);
+
+ /* Enable the window if necessary */
+ if (mem->flags & MAP_ACTIVE)
+ indirect_setbit(socket, I365_ADDRWIN, I365_ENA_MEM(map));
+
+ return 0;
+}
+
+static int pd6729_suspend(struct pcmcia_socket *sock)
+{
+ return pd6729_set_socket(sock, &dead_socket);
+}
+
+static int pd6729_init(struct pcmcia_socket *sock)
+{
+ int i;
+ struct resource res = { .end = 0x0fff };
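+ /* pccard_io_map fields, in order: map, flags, speed, start, stop */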
+ pccard_io_map io = { 0, 0, 0, 0, 1 };
+ pccard_mem_map mem = { .res = &res, };
+
+ pd6729_set_socket(sock, &dead_socket);
+ for (i = 0; i < 2; i++) {
+ io.map = i;
+ pd6729_set_io_map(sock, &io);
+ }
+ for (i = 0; i < 5; i++) {
+ mem.map = i;
+ pd6729_set_mem_map(sock, &mem);
+ }
+
+ return 0;
+}
+
+
+/* the pccard structure and its functions */
+static struct pccard_operations pd6729_operations = {
+ .init = pd6729_init,
+ .suspend = pd6729_suspend,
+ .get_status = pd6729_get_status,
+ .get_socket = pd6729_get_socket,
+ .set_socket = pd6729_set_socket,
+ .set_io_map = pd6729_set_io_map,
+ .set_mem_map = pd6729_set_mem_map,
+};
+
+static int __devinit pd6729_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+ int i, j, ret;
+ u8 configbyte;
+ struct pd6729_socket *socket;
+
+ socket = kmalloc(sizeof(struct pd6729_socket) * MAX_SOCKETS, GFP_KERNEL);
+ if (!socket)
+ return -ENOMEM;
+
+ memset(socket, 0, sizeof(struct pd6729_socket) * MAX_SOCKETS);
+
+ if ((ret = pci_enable_device(dev)))
+ goto err_out_free_mem;
+
+ printk(KERN_INFO "pd6729: Cirrus PD6729 PCI to PCMCIA Bridge at 0x%lx on irq %d\n",
+ pci_resource_start(dev, 0), dev->irq);
+ printk(KERN_INFO "pd6729: configured as a %d socket device.\n", MAX_SOCKETS);
+ /*
+ * Since we have no memory BARs, some firmware may not have
+ * enabled PCI_COMMAND_MEMORY for us, yet the device needs it.
+ */
+ pci_read_config_byte(dev, PCI_COMMAND, &configbyte);
+ if (!(configbyte & PCI_COMMAND_MEMORY)) {
+ printk(KERN_DEBUG "pd6729: Enabling PCI_COMMAND_MEMORY.\n");
+ configbyte |= PCI_COMMAND_MEMORY;
+ pci_write_config_byte(dev, PCI_COMMAND, configbyte);
+ }
+
+ ret = pci_request_regions(dev, "pd6729");
+ if (ret) {
+ printk(KERN_INFO "pd6729: pci request region failed.\n");
+ goto err_out_disable;
+ }
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ socket[i].io_base = pci_resource_start(dev, 0);
+ socket[i].socket.features |= SS_CAP_PCCARD;
+ socket[i].socket.map_size = 0x1000;
+ socket[i].socket.irq_mask = 0;
+ socket[i].socket.pci_irq = dev->irq;
+ socket[i].socket.owner = THIS_MODULE;
+
+ socket[i].number = i;
+
+ socket[i].socket.ops = &pd6729_operations;
+ socket[i].socket.dev.dev = &dev->dev;
+ socket[i].socket.driver_data = &socket[i];
+ }
+
+ pci_set_drvdata(dev, socket);
+
+ /* Register the interrupt handler */
+ if ((ret = request_irq(dev->irq, pd6729_interrupt, SA_SHIRQ, "pd6729", socket))) {
+ printk(KERN_ERR "pd6729: Failed to register irq %d, aborting\n", dev->irq);
+ goto err_out_free_res;
+ }
+
+ for (i = 0; i < MAX_SOCKETS; i++) {
+ ret = pcmcia_register_socket(&socket[i].socket);
+ if (ret) {
+ printk(KERN_INFO "pd6729: pcmcia_register_socket failed.\n");
+ for (j = 0; j < i ; j++)
+ pcmcia_unregister_socket(&socket[j].socket);
+ goto err_out_free_res2;
+ }
+ }
+
+ return 0;
+
+ err_out_free_res2:
+ free_irq(dev->irq, socket);
+ err_out_free_res:
+ pci_release_regions(dev);
+ err_out_disable:
+ pci_disable_device(dev);
+
+ err_out_free_mem:
+ kfree(socket);
+ return ret;
+}
+
+static void __devexit pd6729_pci_remove(struct pci_dev *dev)
+{
+ int i;
+ struct pd6729_socket *socket = pci_get_drvdata(dev);
+
+ for (i = 0; i < MAX_SOCKETS; i++)
+ pcmcia_unregister_socket(&socket[i].socket);
+
+ free_irq(dev->irq, socket);
+ pci_release_regions(dev);
+ pci_disable_device(dev);
+
+ kfree(socket);
+}
+
+static int pd6729_socket_suspend(struct pci_dev *dev, u32 state)
+{
+ return pcmcia_socket_dev_suspend(&dev->dev, state);
+}
+
+static int pd6729_socket_resume(struct pci_dev *dev)
+{
+ return pcmcia_socket_dev_resume(&dev->dev);
+}
+
+static struct pci_device_id pd6729_pci_ids[] = {
+ {
+ .vendor = PCI_VENDOR_ID_CIRRUS,
+ .device = PCI_DEVICE_ID_CIRRUS_6729,
+ .subvendor = PCI_ANY_ID,
+ .subdevice = PCI_ANY_ID,
+ },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, pd6729_pci_ids);
+
+static struct pci_driver pd6729_pci_drv = {
+ .name = "pd6729",
+ .id_table = pd6729_pci_ids,
+ .probe = pd6729_pci_probe,
+ .remove = __devexit_p(pd6729_pci_remove),
+ .suspend = pd6729_socket_suspend,
+ .resume = pd6729_socket_resume,
+};
+
+static int pd6729_module_init(void)
+{
+ return pci_module_init(&pd6729_pci_drv);
+}
+
+static void pd6729_module_exit(void)
+{
+ pci_unregister_driver(&pd6729_pci_drv);
+}
+
+module_init(pd6729_module_init);
+module_exit(pd6729_module_exit);
--- /dev/null
+/*======================================================================
+
+ Device driver for the PCMCIA control functionality of PXA2xx
+ microprocessors.
+
+ The contents of this file may be used under the
+ terms of the GNU Public License version 2 (the "GPL")
+
+ (c) Ian Molton (spyro@f2s.com) 2003
+ (c) Stefan Eletzhofer (stefan.eletzhofer@inquant.de) 2003,4
+
+ derived from sa11xx_base.c
+
+ Portions created by John G. Dorsey are
+ Copyright (C) 1999 John G. Dorsey.
+
+ ======================================================================*/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/config.h>
+#include <linux/cpufreq.h>
+#include <linux/ioport.h>
+#include <linux/kernel.h>
+#include <linux/spinlock.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/system.h>
+#include <asm/arch/pxa-regs.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/ss.h>
+#include <pcmcia/bulkmem.h>
+#include <pcmcia/cistpl.h>
+
+#include "cs_internal.h"
+#include "soc_common.h"
+#include "pxa2xx_base.h"
+
+
+#define MCXX_SETUP_MASK (0x7f)
+#define MCXX_ASST_MASK (0x1f)
+#define MCXX_HOLD_MASK (0x3f)
+#define MCXX_SETUP_SHIFT (0)
+#define MCXX_ASST_SHIFT (7)
+#define MCXX_HOLD_SHIFT (14)
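+/*
+ * The three fields above pack into each MCxx register as:
+ * bits 6:0 setup, bits 11:7 assert, bits 19:14 hold.
+ */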
+
+static inline u_int pxa2xx_mcxx_hold(u_int pcmcia_cycle_ns,
+ u_int mem_clk_10khz)
+{
+ u_int code = pcmcia_cycle_ns * mem_clk_10khz;
+ return (code / 300000) + ((code % 300000) ? 1 : 0) - 1;
+}
+
+static inline u_int pxa2xx_mcxx_asst(u_int pcmcia_cycle_ns,
+ u_int mem_clk_10khz)
+{
+ u_int code = pcmcia_cycle_ns * mem_clk_10khz;
+ return (code / 300000) + ((code % 300000) ? 1 : 0) - 1;
+}
+
+static inline u_int pxa2xx_mcxx_setup(u_int pcmcia_cycle_ns,
+ u_int mem_clk_10khz)
+{
+ u_int code = pcmcia_cycle_ns * mem_clk_10khz;
+ return (code / 100000) + ((code % 100000) ? 1 : 0) - 1;
+}
+
+/* This function returns the (approximate) command assertion period, in
+ * nanoseconds, for a given CPU clock frequency and MCXX_ASST value:
+ */
+static inline u_int pxa2xx_pcmcia_cmd_time(u_int mem_clk_10khz,
+ u_int pcmcia_mcxx_asst)
+{
+ return (300000 * (pcmcia_mcxx_asst + 1) / mem_clk_10khz);
+}
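+/*
+ * Worked example (editor's illustration, assuming a 99.53MHz memory
+ * clock, i.e. mem_clk_10khz == 9953): pxa2xx_mcxx_asst(300, 9953)
+ * computes code = 300 * 9953 = 2985900, giving (2985900 / 300000)
+ * + 1 - 1 = 9. Feeding that back, pxa2xx_pcmcia_cmd_time(9953, 9)
+ * returns 300000 * 10 / 9953 ~= 301ns, i.e. the requested 300ns
+ * rounded up to a whole number of memory clocks.
+ */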
+
+static int pxa2xx_pcmcia_set_mcmem( int sock, int speed, int clock )
+{
+ MCMEM(sock) = ((pxa2xx_mcxx_setup(speed, clock)
+ & MCXX_SETUP_MASK) << MCXX_SETUP_SHIFT)
+ | ((pxa2xx_mcxx_asst(speed, clock)
+ & MCXX_ASST_MASK) << MCXX_ASST_SHIFT)
+ | ((pxa2xx_mcxx_hold(speed, clock)
+ & MCXX_HOLD_MASK) << MCXX_HOLD_SHIFT);
+
+ return 0;
+}
+
+static int pxa2xx_pcmcia_set_mcio( int sock, int speed, int clock )
+{
+ MCIO(sock) = ((pxa2xx_mcxx_setup(speed, clock)
+ & MCXX_SETUP_MASK) << MCXX_SETUP_SHIFT)
+ | ((pxa2xx_mcxx_asst(speed, clock)
+ & MCXX_ASST_MASK) << MCXX_ASST_SHIFT)
+ | ((pxa2xx_mcxx_hold(speed, clock)
+ & MCXX_HOLD_MASK) << MCXX_HOLD_SHIFT);
+
+ return 0;
+}
+
+static int pxa2xx_pcmcia_set_mcatt( int sock, int speed, int clock )
+{
+ MCATT(sock) = ((pxa2xx_mcxx_setup(speed, clock)
+ & MCXX_SETUP_MASK) << MCXX_SETUP_SHIFT)
+ | ((pxa2xx_mcxx_asst(speed, clock)
+ & MCXX_ASST_MASK) << MCXX_ASST_SHIFT)
+ | ((pxa2xx_mcxx_hold(speed, clock)
+ & MCXX_HOLD_MASK) << MCXX_HOLD_SHIFT);
+
+ return 0;
+}
+
+static int pxa2xx_pcmcia_set_mcxx(struct soc_pcmcia_socket *skt, unsigned int clk)
+{
+ struct soc_pcmcia_timing timing;
+ int sock = skt->nr;
+
+ soc_common_pcmcia_get_timing(skt, &timing);
+
+ pxa2xx_pcmcia_set_mcmem(sock, timing.mem, clk);
+ pxa2xx_pcmcia_set_mcatt(sock, timing.attr, clk);
+ pxa2xx_pcmcia_set_mcio(sock, timing.io, clk);
+
+ return 0;
+}
+
+static int pxa2xx_pcmcia_set_timing(struct soc_pcmcia_socket *skt)
+{
+ unsigned int clk = get_memclk_frequency_10khz();
+ return pxa2xx_pcmcia_set_mcxx(skt, clk);
+}
+
+#ifdef CONFIG_CPU_FREQ
+
+static int
+pxa2xx_pcmcia_frequency_change(struct soc_pcmcia_socket *skt,
+ unsigned long val,
+ struct cpufreq_freqs *freqs)
+{
+#warning "it's not clear if this is right since the core CPU (N) clock has no effect on the memory (L) clock"
+ switch (val) {
+ case CPUFREQ_PRECHANGE:
+ if (freqs->new > freqs->old) {
+ debug(skt, 2, "new frequency %u.%uMHz > %u.%uMHz, "
+ "pre-updating\n",
+ freqs->new / 1000, (freqs->new / 100) % 10,
+ freqs->old / 1000, (freqs->old / 100) % 10);
+ pxa2xx_pcmcia_set_mcxx(skt, freqs->new);
+ }
+ break;
+
+ case CPUFREQ_POSTCHANGE:
+ if (freqs->new < freqs->old) {
+ debug(skt, 2, "new frequency %u.%uMHz < %u.%uMHz, "
+ "post-updating\n",
+ freqs->new / 1000, (freqs->new / 100) % 10,
+ freqs->old / 1000, (freqs->old / 100) % 10);
+ pxa2xx_pcmcia_set_mcxx(skt, freqs->new);
+ }
+ break;
+ }
+ return 0;
+}
+#endif
+
+int pxa2xx_drv_pcmcia_probe(struct device *dev)
+{
+ int ret;
+ struct pcmcia_low_level *ops;
+ int first, nr;
+
+ if (!dev || !dev->platform_data)
+ return -ENODEV;
+
+ ops = (struct pcmcia_low_level *)dev->platform_data;
+ first = ops->first;
+ nr = ops->nr;
+
+ /* Provide our PXA2xx specific timing routines. */
+ ops->set_timing = pxa2xx_pcmcia_set_timing;
+#ifdef CONFIG_CPU_FREQ
+ ops->frequency_change = pxa2xx_pcmcia_frequency_change;
+#endif
+
+ ret = soc_common_drv_pcmcia_probe(dev, ops, first, nr);
+
+ if (ret == 0) {
+ /*
+ * We have at least one socket, so set MECR:CIT
+ * (Card Is There)
+ */
+ MECR |= MECR_CIT;
+
+ /* Set MECR:NOS (Number Of Sockets) */
+ if (nr > 1)
+ MECR |= MECR_NOS;
+ else
+ MECR &= ~MECR_NOS;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(pxa2xx_drv_pcmcia_probe);
+
+static int pxa2xx_drv_pcmcia_suspend(struct device *dev, u32 state, u32 level)
+{
+ int ret = 0;
+ if (level == SUSPEND_SAVE_STATE)
+ ret = pcmcia_socket_dev_suspend(dev, state);
+ return ret;
+}
+
+static int pxa2xx_drv_pcmcia_resume(struct device *dev, u32 level)
+{
+ int ret = 0;
+ if (level == RESUME_RESTORE_STATE)
+ {
+ struct pcmcia_low_level *ops = dev->platform_data;
+ int nr = ops ? ops->nr : 0;
+
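+ /* restore MECR: CIT if any socket is present, NOS if more than one */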
+ MECR = nr > 1 ? MECR_CIT | MECR_NOS : (nr > 0 ? MECR_CIT : 0);
+ ret = pcmcia_socket_dev_resume(dev);
+ }
+ return ret;
+}
+
+static struct device_driver pxa2xx_pcmcia_driver = {
+ .probe = pxa2xx_drv_pcmcia_probe,
+ .remove = soc_common_drv_pcmcia_remove,
+ .suspend = pxa2xx_drv_pcmcia_suspend,
+ .resume = pxa2xx_drv_pcmcia_resume,
+ .name = "pxa2xx-pcmcia",
+ .bus = &platform_bus_type,
+};
+
+static int __init pxa2xx_pcmcia_init(void)
+{
+ return driver_register(&pxa2xx_pcmcia_driver);
+}
+
+static void __exit pxa2xx_pcmcia_exit(void)
+{
+ driver_unregister(&pxa2xx_pcmcia_driver);
+}
+
+module_init(pxa2xx_pcmcia_init);
+module_exit(pxa2xx_pcmcia_exit);
+
+MODULE_AUTHOR("Stefan Eletzhofer <stefan.eletzhofer@inquant.de> and Ian Molton <spyro@f2s.com>");
+MODULE_DESCRIPTION("Linux PCMCIA Card Services: PXA2xx core socket driver");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/pcmcia/pxa2xx_lubbock.c
+ *
+ * Author: George Davis
+ * Created: Jan 10, 2002
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * Originally based upon linux/drivers/pcmcia/sa1100_neponset.c
+ *
+ * Lubbock PCMCIA specific routines.
+ *
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/delay.h>
+
+#include <asm/hardware.h>
+#include <asm/hardware/sa1111.h>
+#include <asm/mach-types.h>
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/lubbock.h>
+
+#include "sa1111_generic.h"
+
+static int
+lubbock_pcmcia_hw_init(struct soc_pcmcia_socket *skt)
+{
+ /*
+ * Setup default state of GPIO outputs
+ * before we enable them as outputs.
+ */
+ GPSR(GPIO48_nPOE) =
+ GPIO_bit(GPIO48_nPOE) |
+ GPIO_bit(GPIO49_nPWE) |
+ GPIO_bit(GPIO50_nPIOR) |
+ GPIO_bit(GPIO51_nPIOW) |
+ GPIO_bit(GPIO52_nPCE_1) |
+ GPIO_bit(GPIO53_nPCE_2);
+
+ pxa_gpio_mode(GPIO48_nPOE_MD);
+ pxa_gpio_mode(GPIO49_nPWE_MD);
+ pxa_gpio_mode(GPIO50_nPIOR_MD);
+ pxa_gpio_mode(GPIO51_nPIOW_MD);
+ pxa_gpio_mode(GPIO52_nPCE_1_MD);
+ pxa_gpio_mode(GPIO53_nPCE_2_MD);
+ pxa_gpio_mode(GPIO54_pSKTSEL_MD);
+ pxa_gpio_mode(GPIO55_nPREG_MD);
+ pxa_gpio_mode(GPIO56_nPWAIT_MD);
+ pxa_gpio_mode(GPIO57_nIOIS16_MD);
+
+ return sa1111_pcmcia_hw_init(skt);
+}
+
+static int
+lubbock_pcmcia_configure_socket(struct soc_pcmcia_socket *skt,
+ const socket_state_t *state)
+{
+ unsigned int pa_dwr_mask, pa_dwr_set, misc_mask, misc_set;
+ int ret = 0;
+
+ pa_dwr_mask = pa_dwr_set = misc_mask = misc_set = 0;
+
+ /* Lubbock uses the Maxim MAX1602, with the following connections:
+ *
+ * Socket 0 (PCMCIA):
+ * MAX1602 Lubbock Register
+ * Pin Signal
+ * ----- ------- ----------------------
+ * A0VPP S0_PWR0 SA-1111 GPIO A<0>
+ * A1VPP S0_PWR1 SA-1111 GPIO A<1>
+ * A0VCC S0_PWR2 SA-1111 GPIO A<2>
+ * A1VCC S0_PWR3 SA-1111 GPIO A<3>
+ * VX VCC
+ * VY +3.3V
+ * 12IN +12V
+ * CODE +3.3V Cirrus Code, CODE = High (VY)
+ *
+ * Socket 1 (CF):
+ * MAX1602 Lubbock Register
+ * Pin Signal
+ * ----- ------- ----------------------
+ * A0VPP GND VPP is not connected
+ * A1VPP GND VPP is not connected
+ * A0VCC S1_PWR0 MISC_WR<14>
+ * A1VCC S1_PWR1 MISC_WR<15>
+ * VX VCC
+ * VY +3.3V
+ * 12IN GND VPP is not connected
+ * CODE +3.3V Cirrus Code, CODE = High (VY)
+ *
+ */
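+ /*
+ * So, for socket 0, Vcc and Vpp are driven through SA-1111 GPIO
+ * bits A<3:0> (via pa_dwr_* below), while socket 1 uses MISC_WR
+ * bits 14/15 and has no Vpp supply at all.
+ */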
+
+ again:
+ switch (skt->nr) {
+ case 0:
+ pa_dwr_mask = GPIO_A0 | GPIO_A1 | GPIO_A2 | GPIO_A3;
+
+ switch (state->Vcc) {
+ case 0: /* Hi-Z */
+ break;
+
+ case 33: /* VY */
+ pa_dwr_set |= GPIO_A3;
+ break;
+
+ case 50: /* VX */
+ pa_dwr_set |= GPIO_A2;
+ break;
+
+ default:
+ printk(KERN_ERR "%s(): unrecognized Vcc %u\n",
+ __FUNCTION__, state->Vcc);
+ ret = -1;
+ }
+
+ switch (state->Vpp) {
+ case 0: /* Hi-Z */
+ break;
+
+ case 120: /* 12IN */
+ pa_dwr_set |= GPIO_A1;
+ break;
+
+ default: /* VCC */
+ if (state->Vpp == state->Vcc)
+ pa_dwr_set |= GPIO_A0;
+ else {
+ printk(KERN_ERR "%s(): unrecognized Vpp %u\n",
+ __FUNCTION__, state->Vpp);
+ ret = -1;
+ break;
+ }
+ }
+ break;
+
+ case 1:
+ misc_mask = (1 << 15) | (1 << 14);
+
+ switch (state->Vcc) {
+ case 0: /* Hi-Z */
+ break;
+
+ case 33: /* VY */
+ misc_set |= 1 << 15;
+ break;
+
+ case 50: /* VX */
+ misc_set |= 1 << 14;
+ break;
+
+ default:
+ printk(KERN_ERR "%s(): unrecognized Vcc %u\n",
+ __FUNCTION__, state->Vcc);
+ ret = -1;
+ break;
+ }
+
+ if (state->Vpp != state->Vcc && state->Vpp != 0) {
+ printk(KERN_ERR "%s(): CF slot cannot support Vpp %u\n",
+ __FUNCTION__, state->Vpp);
+ ret = -1;
+ break;
+ }
+ break;
+
+ default:
+ ret = -1;
+ }
+
+ if (ret == 0)
+ ret = sa1111_pcmcia_configure_socket(skt, state);
+
+ if (ret == 0) {
+ lubbock_set_misc_wr(misc_mask, misc_set);
+ sa1111_set_io(SA1111_DEV(skt->dev), pa_dwr_mask, pa_dwr_set);
+ }
+
+#if 1
+ if (ret == 0 && state->Vcc == 33) {
+ struct pcmcia_state new_state;
+
+ /*
+ * HACK ALERT:
+ * We can't sense the voltage properly on Lubbock before
+ * actually applying some power to the socket (catch 22).
+ * Resense the socket Voltage Sense pins after applying
+ * socket power.
+ *
+ * Note: It takes about 2.5ms for the MAX1602 VCC output
+ * to rise.
+ */
+ mdelay(3);
+
+ sa1111_pcmcia_socket_state(skt, &new_state);
+
+ if (!new_state.vs_3v && !new_state.vs_Xv) {
+ /*
+ * Neither 3.3V nor X.XV sensed: switch to 5V
+ * and reconfigure the socket accordingly.
+ */
+ lubbock_set_misc_wr(misc_mask, 0);
+ sa1111_set_io(SA1111_DEV(skt->dev), pa_dwr_mask, 0);
+
+ /*
+ * It takes about 100ms to turn off Vcc.
+ */
+ mdelay(100);
+
+ /*
+ * We need to hack around the const qualifier as
+ * well to keep this ugly workaround localized and
+ * not force it on the rest of the code. Barf bags
+ * available in the seat pocket in front of you!
+ */
+ ((socket_state_t *)state)->Vcc = 50;
+ ((socket_state_t *)state)->Vpp = 50;
+ goto again;
+ }
+ }
+#endif
+
+ return ret;
+}
+
+static struct pcmcia_low_level lubbock_pcmcia_ops = {
+ .owner = THIS_MODULE,
+ .hw_init = lubbock_pcmcia_hw_init,
+ .hw_shutdown = sa1111_pcmcia_hw_shutdown,
+ .socket_state = sa1111_pcmcia_socket_state,
+ .configure_socket = lubbock_pcmcia_configure_socket,
+ .socket_init = sa1111_pcmcia_socket_init,
+ .socket_suspend = sa1111_pcmcia_socket_suspend,
+ .first = 0,
+ .nr = 2,
+};
+
+#include "pxa2xx_base.h"
+
+int __init pcmcia_lubbock_init(struct sa1111_dev *sadev)
+{
+ int ret = -ENODEV;
+
+ if (machine_is_lubbock()) {
+ /*
+ * Set GPIO_A<3:0> to be outputs for the MAX1602
+ * power controller, and switch to standby mode.
+ */
+ sa1111_set_io_dir(sadev, GPIO_A0|GPIO_A1|GPIO_A2|GPIO_A3, 0, 0);
+ sa1111_set_io(sadev, GPIO_A0|GPIO_A1|GPIO_A2|GPIO_A3, 0);
+ sa1111_set_sleep_io(sadev, GPIO_A0|GPIO_A1|GPIO_A2|GPIO_A3, 0);
+
+ /* Set CF Socket 1 power to standby mode. */
+ lubbock_set_misc_wr((1 << 15) | (1 << 14), 0);
+
+ sadev->dev.platform_data = &lubbock_pcmcia_ops;
+ ret = pxa2xx_drv_pcmcia_probe(&sadev->dev);
+ }
+
+ return ret;
+}
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/pcmcia/pxa2xx_mainstone.c
+ *
+ * Mainstone PCMCIA specific routines.
+ *
+ * Created: May 12, 2004
+ * Author: Nicolas Pitre
+ * Copyright: MontaVista Software Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/device.h>
+
+#include <pcmcia/ss.h>
+
+#include <asm/hardware.h>
+#include <asm/irq.h>
+
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/mainstone.h>
+
+#include "soc_common.h"
+
+
+static struct pcmcia_irqs irqs[] = {
+ { 0, MAINSTONE_S0_CD_IRQ, "PCMCIA0 CD" },
+ { 1, MAINSTONE_S1_CD_IRQ, "PCMCIA1 CD" },
+ { 0, MAINSTONE_S0_STSCHG_IRQ, "PCMCIA0 STSCHG" },
+ { 1, MAINSTONE_S1_STSCHG_IRQ, "PCMCIA1 STSCHG" },
+};
+
+static int mst_pcmcia_hw_init(struct soc_pcmcia_socket *skt)
+{
+ /*
+ * Setup default state of GPIO outputs
+ * before we enable them as outputs.
+ */
+ GPSR(GPIO48_nPOE) =
+ GPIO_bit(GPIO48_nPOE) |
+ GPIO_bit(GPIO49_nPWE) |
+ GPIO_bit(GPIO50_nPIOR) |
+ GPIO_bit(GPIO51_nPIOW) |
+ GPIO_bit(GPIO85_nPCE_1) |
+ GPIO_bit(GPIO54_nPCE_2);
+
+ pxa_gpio_mode(GPIO48_nPOE_MD);
+ pxa_gpio_mode(GPIO49_nPWE_MD);
+ pxa_gpio_mode(GPIO50_nPIOR_MD);
+ pxa_gpio_mode(GPIO51_nPIOW_MD);
+ pxa_gpio_mode(GPIO85_nPCE_1_MD);
+ pxa_gpio_mode(GPIO54_nPCE_2_MD);
+ pxa_gpio_mode(GPIO79_pSKTSEL_MD);
+ pxa_gpio_mode(GPIO55_nPREG_MD);
+ pxa_gpio_mode(GPIO56_nPWAIT_MD);
+ pxa_gpio_mode(GPIO57_nIOIS16_MD);
+
+ skt->irq = (skt->nr == 0) ? MAINSTONE_S0_IRQ : MAINSTONE_S1_IRQ;
+ return soc_pcmcia_request_irqs(skt, irqs, ARRAY_SIZE(irqs));
+}
+
+static void mst_pcmcia_hw_shutdown(struct soc_pcmcia_socket *skt)
+{
+ soc_pcmcia_free_irqs(skt, irqs, ARRAY_SIZE(irqs));
+}
+
+static unsigned long mst_pcmcia_status[2];
+
+static void mst_pcmcia_socket_state(struct soc_pcmcia_socket *skt,
+ struct pcmcia_state *state)
+{
+ unsigned long status, flip;
+
+ status = (skt->nr == 0) ? MST_PCMCIA0 : MST_PCMCIA1;
+ flip = (status ^ mst_pcmcia_status[skt->nr]) & MST_PCMCIA_nSTSCHG_BVD1;
+
+ /*
+ * Workaround for STSCHG, which can't be deasserted:
+ * we disable/enable the corresponding IRQ as needed
+ * to avoid IRQ lockups.
+ */
+ if (flip) {
+ mst_pcmcia_status[skt->nr] = status;
+ if (status & MST_PCMCIA_nSTSCHG_BVD1)
+ enable_irq( (skt->nr == 0) ? MAINSTONE_S0_STSCHG_IRQ
+ : MAINSTONE_S1_STSCHG_IRQ );
+ else
+ disable_irq( (skt->nr == 0) ? MAINSTONE_S0_STSCHG_IRQ
+ : MAINSTONE_S1_STSCHG_IRQ );
+ }
+
+ state->detect = (status & MST_PCMCIA_nCD) ? 0 : 1;
+ state->ready = (status & MST_PCMCIA_nIRQ) ? 1 : 0;
+ state->bvd1 = (status & MST_PCMCIA_nSTSCHG_BVD1) ? 1 : 0;
+ state->bvd2 = (status & MST_PCMCIA_nSPKR_BVD2) ? 1 : 0;
+ state->vs_3v = (status & MST_PCMCIA_nVS1) ? 0 : 1;
+ state->vs_Xv = (status & MST_PCMCIA_nVS2) ? 0 : 1;
+ state->wrprot = 0; /* not available */
+}
+
+static int mst_pcmcia_configure_socket(struct soc_pcmcia_socket *skt,
+ const socket_state_t *state)
+{
+ unsigned long power = 0;
+ int ret = 0;
+
+ switch (state->Vcc) {
+ case 0: power |= MST_PCMCIA_PWR_VCC_0; break;
+ case 33: power |= MST_PCMCIA_PWR_VCC_33; break;
+ case 50: power |= MST_PCMCIA_PWR_VCC_50; break;
+ default:
+ printk(KERN_ERR "%s(): bad Vcc %u\n",
+ __FUNCTION__, state->Vcc);
+ ret = -1;
+ }
+
+ switch (state->Vpp) {
+ case 0: power |= MST_PCMCIA_PWR_VPP_0; break;
+ case 120: power |= MST_PCMCIA_PWR_VPP_120; break;
+ default:
+ if(state->Vpp == state->Vcc) {
+ power |= MST_PCMCIA_PWR_VPP_VCC;
+ } else {
+ printk(KERN_ERR "%s(): bad Vpp %u\n",
+ __FUNCTION__, state->Vpp);
+ ret = -1;
+ }
+ }
+
+ if (state->flags & SS_RESET)
+ power |= MST_PCMCIA_RESET;
+
+ switch (skt->nr) {
+ case 0: MST_PCMCIA0 = power; break;
+ case 1: MST_PCMCIA1 = power; break;
+ default: ret = -1;
+ }
+
+ return ret;
+}
+
+static void mst_pcmcia_socket_init(struct soc_pcmcia_socket *skt)
+{
+}
+
+static void mst_pcmcia_socket_suspend(struct soc_pcmcia_socket *skt)
+{
+}
+
+static struct pcmcia_low_level mst_pcmcia_ops = {
+ .owner = THIS_MODULE,
+ .hw_init = mst_pcmcia_hw_init,
+ .hw_shutdown = mst_pcmcia_hw_shutdown,
+ .socket_state = mst_pcmcia_socket_state,
+ .configure_socket = mst_pcmcia_configure_socket,
+ .socket_init = mst_pcmcia_socket_init,
+ .socket_suspend = mst_pcmcia_socket_suspend,
+ .nr = 2,
+};
+
+static struct platform_device *mst_pcmcia_device;
+
+static int __init mst_pcmcia_init(void)
+{
+ int ret;
+
+ mst_pcmcia_device = kmalloc(sizeof(*mst_pcmcia_device), GFP_KERNEL);
+ if (!mst_pcmcia_device)
+ return -ENOMEM;
+ memset(mst_pcmcia_device, 0, sizeof(*mst_pcmcia_device));
+ mst_pcmcia_device->name = "pxa2xx-pcmcia";
+ mst_pcmcia_device->dev.platform_data = &mst_pcmcia_ops;
+
+ ret = platform_device_register(mst_pcmcia_device);
+ if (ret)
+ kfree(mst_pcmcia_device);
+
+ return ret;
+}
+
+static void __exit mst_pcmcia_exit(void)
+{
+ /*
+ * This call is supposed to free our mst_pcmcia_device.
+ * Unfortunately, platform_device doesn't have a free method,
+ * and we can't assume it is free of any references at this
+ * point, so we can't free it either.
+ */
+ platform_device_unregister(mst_pcmcia_device);
+}
+
+module_init(mst_pcmcia_init);
+module_exit(mst_pcmcia_exit);
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*======================================================================
+
+ Common support code for the PCMCIA control functionality of
+ integrated SOCs like the SA-11x0 and PXA2xx microprocessors.
+
+ The contents of this file are subject to the Mozilla Public
+ License Version 1.1 (the "License"); you may not use this file
+ except in compliance with the License. You may obtain a copy of
+ the License at http://www.mozilla.org/MPL/
+
+ Software distributed under the License is distributed on an "AS
+ IS" basis, WITHOUT WARRANTY OF ANY KIND, either express or
+ implied. See the License for the specific language governing
+ rights and limitations under the License.
+
+ The initial developer of the original code is John G. Dorsey
+ <john+@cs.cmu.edu>. Portions created by John G. Dorsey are
+ Copyright (C) 1999 John G. Dorsey. All Rights Reserved.
+
+ Alternatively, the contents of this file may be used under the
+ terms of the GNU Public License version 2 (the "GPL"), in which
+ case the provisions of the GPL are applicable instead of the
+ above. If you wish to allow the use of your version of this file
+ only under the terms of the GPL and not to allow others to use
+ your version of this file under the MPL, indicate your decision
+ by deleting the provisions above and replace them with the notice
+ and other provisions required by the GPL. If you do not delete
+ the provisions above, a recipient may use your version of this
+ file under either the MPL or the GPL.
+
+======================================================================*/
+
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/timer.h>
+#include <linux/mm.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/cpufreq.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/system.h>
+
+#include "soc_common.h"
+
+/* FIXME: platform dependent resource declaration has to move out of this file */
+#ifdef CONFIG_ARCH_PXA
+#include <asm/arch/pxa-regs.h>
+#endif
+
+#ifdef DEBUG
+
+static int pc_debug;
+module_param(pc_debug, int, 0644);
+
+void soc_pcmcia_debug(struct soc_pcmcia_socket *skt, const char *func,
+ int lvl, const char *fmt, ...)
+{
+ va_list args;
+ if (pc_debug > lvl) {
+ printk(KERN_DEBUG "skt%u: %s: ", skt->nr, func);
+ va_start(args, fmt);
+ vprintk(fmt, args);
+ va_end(args);
+ }
+}
+
+#endif
+
+#define to_soc_pcmcia_socket(x) container_of(x, struct soc_pcmcia_socket, socket)
+
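+/*
+ * Return the slowest (largest) access time configured over all windows,
+ * falling back to dflt when none is set.
+ */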
+static unsigned short
+calc_speed(unsigned short *spds, int num, unsigned short dflt)
+{
+ unsigned short speed = 0;
+ int i;
+
+ for (i = 0; i < num; i++)
+ if (speed < spds[i])
+ speed = spds[i];
+ if (speed == 0)
+ speed = dflt;
+
+ return speed;
+}
+
+void soc_common_pcmcia_get_timing(struct soc_pcmcia_socket *skt, struct soc_pcmcia_timing *timing)
+{
+ timing->io = calc_speed(skt->spd_io, MAX_IO_WIN, SOC_PCMCIA_IO_ACCESS);
+ timing->mem = calc_speed(skt->spd_mem, MAX_WIN, SOC_PCMCIA_3V_MEM_ACCESS);
+ timing->attr = calc_speed(skt->spd_attr, MAX_WIN, SOC_PCMCIA_3V_MEM_ACCESS);
+}
+EXPORT_SYMBOL(soc_common_pcmcia_get_timing);
+
+static unsigned int soc_common_pcmcia_skt_state(struct soc_pcmcia_socket *skt)
+{
+ struct pcmcia_state state;
+ unsigned int stat;
+
+ memset(&state, 0, sizeof(struct pcmcia_state));
+
+ skt->ops->socket_state(skt, &state);
+
+ stat = state.detect ? SS_DETECT : 0;
+ stat |= state.ready ? SS_READY : 0;
+ stat |= state.wrprot ? SS_WRPROT : 0;
+ stat |= state.vs_3v ? SS_3VCARD : 0;
+ stat |= state.vs_Xv ? SS_XVCARD : 0;
+
+ /* The power status of individual sockets is not available
+ * explicitly from the hardware, so we just remember the state
+ * and regurgitate it upon request:
+ */
+ stat |= skt->cs_state.Vcc ? SS_POWERON : 0;
+
+ if (skt->cs_state.flags & SS_IOCARD)
+ stat |= state.bvd1 ? SS_STSCHG : 0;
+ else {
+ if (state.bvd1 == 0)
+ stat |= SS_BATDEAD;
+ else if (state.bvd2 == 0)
+ stat |= SS_BATWARN;
+ }
+ return stat;
+}
+
+/*
+ * soc_common_pcmcia_config_skt
+ * ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ *
+ * Convert PCMCIA socket state to our socket configure structure.
+ */
+static int
+soc_common_pcmcia_config_skt(struct soc_pcmcia_socket *skt, socket_state_t *state)
+{
+ int ret;
+
+ ret = skt->ops->configure_socket(skt, state);
+ if (ret == 0) {
+ /*
+ * This really needs a better solution. The IRQ
+ * may or may not be claimed by the driver.
+ */
+ if (skt->irq_state != 1 && state->io_irq) {
+ skt->irq_state = 1;
+ set_irq_type(skt->irq, IRQT_FALLING);
+ } else if (skt->irq_state == 1 && state->io_irq == 0) {
+ skt->irq_state = 0;
+ set_irq_type(skt->irq, IRQT_NOEDGE);
+ }
+
+ skt->cs_state = *state;
+ }
+
+ if (ret < 0)
+ printk(KERN_ERR "soc_common_pcmcia: unable to configure "
+ "socket %d\n", skt->nr);
+
+ return ret;
+}
+
+/* soc_common_pcmcia_sock_init()
+ * ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ *
+ * (Re-)Initialise the socket, turning on status interrupts
+ * and PCMCIA bus. This must wait for power to stabilise
+ * so that the card status signals report correctly.
+ *
+ * Returns: 0
+ */
+static int soc_common_pcmcia_sock_init(struct pcmcia_socket *sock)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+
+ debug(skt, 2, "initializing socket\n");
+
+ skt->ops->socket_init(skt);
+ return 0;
+}
+
+
+/*
+ * soc_common_pcmcia_suspend()
+ * ^^^^^^^^^^^^^^^^^^^^^^^^^^^
+ *
+ * Remove power on the socket, disable IRQs from the card.
+ * Turn off status interrupts, and disable the PCMCIA bus.
+ *
+ * Returns: 0
+ */
+static int soc_common_pcmcia_suspend(struct pcmcia_socket *sock)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+ int ret;
+
+ debug(skt, 2, "suspending socket\n");
+
+ ret = soc_common_pcmcia_config_skt(skt, &dead_socket);
+ if (ret == 0)
+ skt->ops->socket_suspend(skt);
+
+ return ret;
+}
+
+static spinlock_t status_lock = SPIN_LOCK_UNLOCKED;
+
+static void soc_common_check_status(struct soc_pcmcia_socket *skt)
+{
+ unsigned int events;
+
+ debug(skt, 4, "entering PCMCIA monitoring thread\n");
+
+ do {
+ unsigned int status;
+ unsigned long flags;
+
+ status = soc_common_pcmcia_skt_state(skt);
+
+ spin_lock_irqsave(&status_lock, flags);
+ events = (status ^ skt->status) & skt->cs_state.csc_mask;
+ skt->status = status;
+ spin_unlock_irqrestore(&status_lock, flags);
+
+ debug(skt, 4, "events: %s%s%s%s%s%s\n",
+ events == 0 ? "<NONE>" : "",
+ events & SS_DETECT ? "DETECT " : "",
+ events & SS_READY ? "READY " : "",
+ events & SS_BATDEAD ? "BATDEAD " : "",
+ events & SS_BATWARN ? "BATWARN " : "",
+ events & SS_STSCHG ? "STSCHG " : "");
+
+ if (events)
+ pcmcia_parse_events(&skt->socket, events);
+ } while (events);
+}
+
+/* Let's poll for events in addition to IRQs, since IRQs alone are unreliable... */
+static void soc_common_pcmcia_poll_event(unsigned long dummy)
+{
+ struct soc_pcmcia_socket *skt = (struct soc_pcmcia_socket *)dummy;
+ debug(skt, 4, "polling for events\n");
+
+ mod_timer(&skt->poll_timer, jiffies + SOC_PCMCIA_POLL_PERIOD);
+
+ soc_common_check_status(skt);
+}
+
+
+/*
+ * Service routine for socket driver interrupts (requested by the
+ * low-level PCMCIA init() operation via soc_common_pcmcia_thread()).
+ * The actual interrupt-servicing work is performed by
+ * soc_common_pcmcia_thread(), largely because the Card Services event-
+ * handling code performs scheduling operations which cannot be
+ * executed from within an interrupt context.
+ */
+static irqreturn_t soc_common_pcmcia_interrupt(int irq, void *dev, struct pt_regs *regs)
+{
+ struct soc_pcmcia_socket *skt = dev;
+
+ debug(skt, 3, "servicing IRQ %d\n", irq);
+
+ soc_common_check_status(skt);
+
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * Implements the get_status() operation for the in-kernel PCMCIA
+ * service (formerly SS_GetStatus in Card Services). Essentially just
+ * fills in bits in `status' according to internal driver state or
+ * the value of the voltage detect chipselect register.
+ *
+ * As a debugging note, during card startup, the PCMCIA core issues
+ * three set_socket() commands in a row: the first with RESET deasserted,
+ * the second with RESET asserted, and the last with RESET deasserted
+ * again. Following the third set_socket(), a get_status() command will
+ * be issued. The kernel is looking for the SS_READY flag (see
+ * setup_socket(), reset_socket(), and unreset_socket() in cs.c).
+ *
+ * Returns: 0
+ */
+static int
+soc_common_pcmcia_get_status(struct pcmcia_socket *sock, unsigned int *status)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+
+ skt->status = soc_common_pcmcia_skt_state(skt);
+ *status = skt->status;
+
+ return 0;
+}
+
+
+/*
+ * Implements the get_socket() operation for the in-kernel PCMCIA
+ * service (formerly SS_GetSocket in Card Services). Not a very
+ * exciting routine.
+ *
+ * Returns: 0
+ */
+static int
+soc_common_pcmcia_get_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+
+ debug(skt, 2, "\n");
+
+ *state = skt->cs_state;
+
+ return 0;
+}
+
+/*
+ * Implements the set_socket() operation for the in-kernel PCMCIA
+ * service (formerly SS_SetSocket in Card Services). We more or
+ * less punt all of this work and let the kernel handle the details
+ * of power configuration, reset, &c. We also record the value of
+ * `state' in order to regurgitate it to the PCMCIA core later.
+ *
+ * Returns: 0
+ */
+static int
+soc_common_pcmcia_set_socket(struct pcmcia_socket *sock, socket_state_t *state)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+
+ debug(skt, 2, "mask: %s%s%s%s%s%sflags: %s%s%s%s%s%sVcc %d Vpp %d irq %d\n",
+ (state->csc_mask==0)?"<NONE> ":"",
+ (state->csc_mask&SS_DETECT)?"DETECT ":"",
+ (state->csc_mask&SS_READY)?"READY ":"",
+ (state->csc_mask&SS_BATDEAD)?"BATDEAD ":"",
+ (state->csc_mask&SS_BATWARN)?"BATWARN ":"",
+ (state->csc_mask&SS_STSCHG)?"STSCHG ":"",
+ (state->flags==0)?"<NONE> ":"",
+ (state->flags&SS_PWR_AUTO)?"PWR_AUTO ":"",
+ (state->flags&SS_IOCARD)?"IOCARD ":"",
+ (state->flags&SS_RESET)?"RESET ":"",
+ (state->flags&SS_SPKR_ENA)?"SPKR_ENA ":"",
+ (state->flags&SS_OUTPUT_ENA)?"OUTPUT_ENA ":"",
+ state->Vcc, state->Vpp, state->io_irq);
+
+ return soc_common_pcmcia_config_skt(skt, state);
+}
+
+
+/*
+ * Implements the set_io_map() operation for the in-kernel PCMCIA
+ * service (formerly SS_SetIOMap in Card Services). We configure
+ * the map speed as requested, but override the address ranges
+ * supplied by Card Services.
+ *
+ * Returns: 0 on success, -1 on error
+ */
+static int
+soc_common_pcmcia_set_io_map(struct pcmcia_socket *sock, struct pccard_io_map *map)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+ unsigned short speed = map->speed;
+
+ debug(skt, 2, "map %u speed %u start 0x%08x stop 0x%08x\n",
+ map->map, map->speed, map->start, map->stop);
+ debug(skt, 2, "flags: %s%s%s%s%s%s%s%s\n",
+ (map->flags==0)?"<NONE>":"",
+ (map->flags&MAP_ACTIVE)?"ACTIVE ":"",
+ (map->flags&MAP_16BIT)?"16BIT ":"",
+ (map->flags&MAP_AUTOSZ)?"AUTOSZ ":"",
+ (map->flags&MAP_0WS)?"0WS ":"",
+ (map->flags&MAP_WRPROT)?"WRPROT ":"",
+ (map->flags&MAP_USE_WAIT)?"USE_WAIT ":"",
+ (map->flags&MAP_PREFETCH)?"PREFETCH ":"");
+
+ if (map->map >= MAX_IO_WIN) {
+ printk(KERN_ERR "%s(): map (%d) out of range\n", __FUNCTION__,
+ map->map);
+ return -1;
+ }
+
+ if (map->flags & MAP_ACTIVE) {
+ if (speed == 0)
+ speed = SOC_PCMCIA_IO_ACCESS;
+ } else {
+ speed = 0;
+ }
+
+ skt->spd_io[map->map] = speed;
+ skt->ops->set_timing(skt);
+
+ if (map->stop == 1)
+ map->stop = PAGE_SIZE-1;
+
+ map->stop -= map->start;
+ map->stop += (unsigned long)skt->virt_io;
+ map->start = (unsigned long)skt->virt_io;
+
+ return 0;
+}
+
+
+/*
+ * Implements the set_mem_map() operation for the in-kernel PCMCIA
+ * service (formerly SS_SetMemMap in Card Services). We configure
+ * the map speed as requested, but override the address ranges
+ * supplied by Card Services.
+ *
+ * Returns: 0 on success, -1 on error
+ */
+static int
+soc_common_pcmcia_set_mem_map(struct pcmcia_socket *sock, struct pccard_mem_map *map)
+{
+ struct soc_pcmcia_socket *skt = to_soc_pcmcia_socket(sock);
+ struct resource *res;
+ unsigned short speed = map->speed;
+
+ debug(skt, 2, "map %u speed %u card_start %08x\n",
+ map->map, map->speed, map->card_start);
+ debug(skt, 2, "flags: %s%s%s%s%s%s%s%s\n",
+ (map->flags==0)?"<NONE>":"",
+ (map->flags&MAP_ACTIVE)?"ACTIVE ":"",
+ (map->flags&MAP_16BIT)?"16BIT ":"",
+ (map->flags&MAP_AUTOSZ)?"AUTOSZ ":"",
+ (map->flags&MAP_0WS)?"0WS ":"",
+ (map->flags&MAP_WRPROT)?"WRPROT ":"",
+ (map->flags&MAP_ATTRIB)?"ATTRIB ":"",
+ (map->flags&MAP_USE_WAIT)?"USE_WAIT ":"");
+
+ if (map->map >= MAX_WIN)
+ return -EINVAL;
+
+ if (map->flags & MAP_ACTIVE) {
+ if (speed == 0)
+ speed = 300;
+ } else {
+ speed = 0;
+ }
+
+ if (map->flags & MAP_ATTRIB) {
+ res = &skt->res_attr;
+ skt->spd_attr[map->map] = speed;
+ skt->spd_mem[map->map] = 0;
+ } else {
+ res = &skt->res_mem;
+ skt->spd_attr[map->map] = 0;
+ skt->spd_mem[map->map] = speed;
+ }
+
+ skt->ops->set_timing(skt);
+
+ map->static_start = res->start + map->card_start;
+
+ return 0;
+}
+
+struct bittbl {
+ unsigned int mask;
+ const char *name;
+};
+
+static struct bittbl status_bits[] = {
+ { SS_WRPROT, "SS_WRPROT" },
+ { SS_BATDEAD, "SS_BATDEAD" },
+ { SS_BATWARN, "SS_BATWARN" },
+ { SS_READY, "SS_READY" },
+ { SS_DETECT, "SS_DETECT" },
+ { SS_POWERON, "SS_POWERON" },
+ { SS_STSCHG, "SS_STSCHG" },
+ { SS_3VCARD, "SS_3VCARD" },
+ { SS_XVCARD, "SS_XVCARD" },
+};
+
+static struct bittbl conf_bits[] = {
+ { SS_PWR_AUTO, "SS_PWR_AUTO" },
+ { SS_IOCARD, "SS_IOCARD" },
+ { SS_RESET, "SS_RESET" },
+ { SS_DMA_MODE, "SS_DMA_MODE" },
+ { SS_SPKR_ENA, "SS_SPKR_ENA" },
+ { SS_OUTPUT_ENA, "SS_OUTPUT_ENA" },
+};
+
+static void
+dump_bits(char **p, const char *prefix, unsigned int val, struct bittbl *bits, int sz)
+{
+ char *b = *p;
+ int i;
+
+ b += sprintf(b, "%-9s:", prefix);
+ for (i = 0; i < sz; i++)
+ if (val & bits[i].mask)
+ b += sprintf(b, " %s", bits[i].name);
+ *b++ = '\n';
+ *p = b;
+}
+
+/*
+ * Implements the /sys/class/pcmcia_socket/??/status file.
+ *
+ * Returns: the number of characters added to the buffer
+ */
+static ssize_t show_status(struct class_device *class_dev, char *buf)
+{
+ struct soc_pcmcia_socket *skt =
+ container_of(class_dev, struct soc_pcmcia_socket, socket.dev);
+ char *p = buf;
+
+ p+=sprintf(p, "slot : %d\n", skt->nr);
+
+ dump_bits(&p, "status", skt->status,
+ status_bits, ARRAY_SIZE(status_bits));
+ dump_bits(&p, "csc_mask", skt->cs_state.csc_mask,
+ status_bits, ARRAY_SIZE(status_bits));
+ dump_bits(&p, "cs_flags", skt->cs_state.flags,
+ conf_bits, ARRAY_SIZE(conf_bits));
+
+ p+=sprintf(p, "Vcc : %d\n", skt->cs_state.Vcc);
+ p+=sprintf(p, "Vpp : %d\n", skt->cs_state.Vpp);
+ p+=sprintf(p, "IRQ : %d (%d)\n", skt->cs_state.io_irq, skt->irq);
+ if (skt->ops->show_timing)
+ p+=skt->ops->show_timing(skt, p);
+
+ return p-buf;
+}
+static CLASS_DEVICE_ATTR(status, S_IRUGO, show_status, NULL);
+
+
+static struct pccard_operations soc_common_pcmcia_operations = {
+ .init = soc_common_pcmcia_sock_init,
+ .suspend = soc_common_pcmcia_suspend,
+ .get_status = soc_common_pcmcia_get_status,
+ .get_socket = soc_common_pcmcia_get_socket,
+ .set_socket = soc_common_pcmcia_set_socket,
+ .set_io_map = soc_common_pcmcia_set_io_map,
+ .set_mem_map = soc_common_pcmcia_set_mem_map,
+};
+
+
+int soc_pcmcia_request_irqs(struct soc_pcmcia_socket *skt,
+ struct pcmcia_irqs *irqs, int nr)
+{
+ int i, res = 0;
+
+ for (i = 0; i < nr; i++) {
+ if (irqs[i].sock != skt->nr)
+ continue;
+ res = request_irq(irqs[i].irq, soc_common_pcmcia_interrupt,
+ SA_INTERRUPT, irqs[i].str, skt);
+ if (res)
+ break;
+ set_irq_type(irqs[i].irq, IRQT_NOEDGE);
+ }
+
+ if (res) {
+ printk(KERN_ERR "PCMCIA: request for IRQ%d failed (%d)\n",
+ irqs[i].irq, res);
+
+ while (i--)
+ if (irqs[i].sock == skt->nr)
+ free_irq(irqs[i].irq, skt);
+ }
+ return res;
+}
+EXPORT_SYMBOL(soc_pcmcia_request_irqs);
+
+void soc_pcmcia_free_irqs(struct soc_pcmcia_socket *skt,
+ struct pcmcia_irqs *irqs, int nr)
+{
+ int i;
+
+ for (i = 0; i < nr; i++)
+ if (irqs[i].sock == skt->nr)
+ free_irq(irqs[i].irq, skt);
+}
+EXPORT_SYMBOL(soc_pcmcia_free_irqs);
+
+void soc_pcmcia_disable_irqs(struct soc_pcmcia_socket *skt,
+ struct pcmcia_irqs *irqs, int nr)
+{
+ int i;
+
+ for (i = 0; i < nr; i++)
+ if (irqs[i].sock == skt->nr)
+ set_irq_type(irqs[i].irq, IRQT_NOEDGE);
+}
+EXPORT_SYMBOL(soc_pcmcia_disable_irqs);
+
+void soc_pcmcia_enable_irqs(struct soc_pcmcia_socket *skt,
+ struct pcmcia_irqs *irqs, int nr)
+{
+ int i;
+
+ for (i = 0; i < nr; i++)
+ if (irqs[i].sock == skt->nr) {
+ set_irq_type(irqs[i].irq, IRQT_RISING);
+ set_irq_type(irqs[i].irq, IRQT_BOTHEDGE);
+ }
+}
+EXPORT_SYMBOL(soc_pcmcia_enable_irqs);
+
+
+LIST_HEAD(soc_pcmcia_sockets);
+DECLARE_MUTEX(soc_pcmcia_sockets_lock);
+
+static const char *skt_names[] = {
+ "PCMCIA socket 0",
+ "PCMCIA socket 1",
+};
+
+struct skt_dev_info {
+ int nskt;
+ struct soc_pcmcia_socket skt[0];
+};
+
+#define SKT_DEV_INFO_SIZE(n) \
+ (sizeof(struct skt_dev_info) + (n)*sizeof(struct soc_pcmcia_socket))
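+/*
+ * skt[] above is a zero-length trailing array, so a single allocation of
+ * SKT_DEV_INFO_SIZE(n) holds the header and all n per-socket structs.
+ */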
+
+#ifdef CONFIG_CPU_FREQ
+static int
+soc_pcmcia_notifier(struct notifier_block *nb, unsigned long val, void *data)
+{
+ struct soc_pcmcia_socket *skt;
+ struct cpufreq_freqs *freqs = data;
+ int ret = 0;
+
+ down(&soc_pcmcia_sockets_lock);
+ list_for_each_entry(skt, &soc_pcmcia_sockets, node)
+ if (skt->ops->frequency_change)
+ ret += skt->ops->frequency_change(skt, val, freqs);
+ up(&soc_pcmcia_sockets_lock);
+
+ return ret;
+}
+
+static struct notifier_block soc_pcmcia_notifier_block = {
+ .notifier_call = soc_pcmcia_notifier
+};
+
+static int soc_pcmcia_cpufreq_register(void)
+{
+ int ret;
+
+ ret = cpufreq_register_notifier(&soc_pcmcia_notifier_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
+ if (ret < 0)
+ printk(KERN_ERR "Unable to register CPU frequency change "
+ "notifier for PCMCIA (%d)\n", ret);
+ return ret;
+}
+
+static void soc_pcmcia_cpufreq_unregister(void)
+{
+ cpufreq_unregister_notifier(&soc_pcmcia_notifier_block, CPUFREQ_TRANSITION_NOTIFIER);
+}
+
+#else
+#define soc_pcmcia_cpufreq_register()
+#define soc_pcmcia_cpufreq_unregister()
+#endif
+
+int soc_common_drv_pcmcia_probe(struct device *dev, struct pcmcia_low_level *ops, int first, int nr)
+{
+ struct skt_dev_info *sinfo;
+ struct soc_pcmcia_socket *skt;
+ int ret, i;
+
+ down(&soc_pcmcia_sockets_lock);
+
+ sinfo = kmalloc(SKT_DEV_INFO_SIZE(nr), GFP_KERNEL);
+ if (!sinfo) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ memset(sinfo, 0, SKT_DEV_INFO_SIZE(nr));
+ sinfo->nskt = nr;
+
+ /*
+ * Initialise the per-socket structure.
+ */
+ for (i = 0; i < nr; i++) {
+ skt = &sinfo->skt[i];
+
+ skt->socket.ops = &soc_common_pcmcia_operations;
+ skt->socket.owner = ops->owner;
+ skt->socket.dev.dev = dev;
+
+ init_timer(&skt->poll_timer);
+ skt->poll_timer.function = soc_common_pcmcia_poll_event;
+ skt->poll_timer.data = (unsigned long)skt;
+ skt->poll_timer.expires = jiffies + SOC_PCMCIA_POLL_PERIOD;
+
+ skt->nr = first + i;
+ skt->irq = NO_IRQ;
+ skt->dev = dev;
+ skt->ops = ops;
+
+ skt->res_skt.start = _PCMCIA(skt->nr);
+ skt->res_skt.end = _PCMCIA(skt->nr) + PCMCIASp - 1;
+ skt->res_skt.name = skt_names[skt->nr];
+ skt->res_skt.flags = IORESOURCE_MEM;
+
+ ret = request_resource(&iomem_resource, &skt->res_skt);
+ if (ret)
+ goto out_err_1;
+
+ skt->res_io.start = _PCMCIAIO(skt->nr);
+ skt->res_io.end = _PCMCIAIO(skt->nr) + PCMCIAIOSp - 1;
+ skt->res_io.name = "io";
+ skt->res_io.flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+
+ ret = request_resource(&skt->res_skt, &skt->res_io);
+ if (ret)
+ goto out_err_2;
+
+ skt->res_mem.start = _PCMCIAMem(skt->nr);
+ skt->res_mem.end = _PCMCIAMem(skt->nr) + PCMCIAMemSp - 1;
+ skt->res_mem.name = "memory";
+ skt->res_mem.flags = IORESOURCE_MEM;
+
+ ret = request_resource(&skt->res_skt, &skt->res_mem);
+ if (ret)
+ goto out_err_3;
+
+ skt->res_attr.start = _PCMCIAAttr(skt->nr);
+ skt->res_attr.end = _PCMCIAAttr(skt->nr) + PCMCIAAttrSp - 1;
+ skt->res_attr.name = "attribute";
+ skt->res_attr.flags = IORESOURCE_MEM;
+
+ ret = request_resource(&skt->res_skt, &skt->res_attr);
+ if (ret)
+ goto out_err_4;
+
+ skt->virt_io = ioremap(skt->res_io.start, 0x10000);
+ if (skt->virt_io == NULL) {
+ ret = -ENOMEM;
+ goto out_err_5;
+ }
+
+ if (list_empty(&soc_pcmcia_sockets))
+ soc_pcmcia_cpufreq_register();
+
+ list_add(&skt->node, &soc_pcmcia_sockets);
+
+ /*
+ * We initialize default socket timing here, because
+ * we are not guaranteed to see a SetIOMap operation at
+ * runtime.
+ */
+ ops->set_timing(skt);
+
+ ret = ops->hw_init(skt);
+ if (ret)
+ goto out_err_6;
+
+ skt->socket.features = SS_CAP_STATIC_MAP|SS_CAP_PCCARD;
+ skt->socket.irq_mask = 0;
+ skt->socket.map_size = PAGE_SIZE;
+ skt->socket.pci_irq = skt->irq;
+ skt->socket.io_offset = (unsigned long)skt->virt_io;
+
+ skt->status = soc_common_pcmcia_skt_state(skt);
+
+ ret = pcmcia_register_socket(&skt->socket);
+ if (ret)
+ goto out_err_7;
+
+ WARN_ON(skt->socket.sock != i);
+
+ add_timer(&skt->poll_timer);
+
+ class_device_create_file(&skt->socket.dev, &class_device_attr_status);
+ }
+
+ dev_set_drvdata(dev, sinfo);
+ ret = 0;
+ goto out;
+
+ do {
+ skt = &sinfo->skt[i];
+
+ del_timer_sync(&skt->poll_timer);
+ pcmcia_unregister_socket(&skt->socket);
+
+ out_err_7:
+ flush_scheduled_work();
+
+ ops->hw_shutdown(skt);
+ out_err_6:
+ list_del(&skt->node);
+ iounmap(skt->virt_io);
+ out_err_5:
+ release_resource(&skt->res_attr);
+ out_err_4:
+ release_resource(&skt->res_mem);
+ out_err_3:
+ release_resource(&skt->res_io);
+ out_err_2:
+ release_resource(&skt->res_skt);
+ out_err_1:
+ i--;
+ } while (i >= 0); /* include socket 0 in the unwind */
+
+ kfree(sinfo);
+
+ out:
+ up(&soc_pcmcia_sockets_lock);
+ return ret;
+}
+
+int soc_common_drv_pcmcia_remove(struct device *dev)
+{
+ struct skt_dev_info *sinfo = dev_get_drvdata(dev);
+ int i;
+
+ dev_set_drvdata(dev, NULL);
+
+ down(&soc_pcmcia_sockets_lock);
+ for (i = 0; i < sinfo->nskt; i++) {
+ struct soc_pcmcia_socket *skt = &sinfo->skt[i];
+
+ del_timer_sync(&skt->poll_timer);
+
+ pcmcia_unregister_socket(&skt->socket);
+
+ flush_scheduled_work();
+
+ skt->ops->hw_shutdown(skt);
+
+ soc_common_pcmcia_config_skt(skt, &dead_socket);
+
+ list_del(&skt->node);
+ iounmap(skt->virt_io);
+ skt->virt_io = NULL;
+ release_resource(&skt->res_attr);
+ release_resource(&skt->res_mem);
+ release_resource(&skt->res_io);
+ release_resource(&skt->res_skt);
+ }
+ if (list_empty(&soc_pcmcia_sockets))
+ soc_pcmcia_cpufreq_unregister();
+
+ up(&soc_pcmcia_sockets_lock);
+
+ kfree(sinfo);
+
+ return 0;
+}
--- /dev/null
+/*
+ *
+ * linux/drivers/s390/net/ctcdbug.h ($Revision: 1.4 $)
+ *
+ * CTC / ESCON network driver - s390 debug facility (dbf) support.
+ *
+ * Copyright 2000,2003 IBM Corporation
+ *
+ * Author(s): Original Code written by
+ * Peter Tiedemann (ptiedem@de.ibm.com)
+ *
+ * $Revision: 1.4 $ $Date: 2004/10/15 09:26:58 $
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+
+#include <asm/debug.h>
+/**
+ * Debug Facility stuff
+ */
+#define CTC_DBF_SETUP_NAME "ctc_setup"
+#define CTC_DBF_SETUP_LEN 16
+#define CTC_DBF_SETUP_INDEX 3
+#define CTC_DBF_SETUP_NR_AREAS 1
+#define CTC_DBF_SETUP_LEVEL 3
+
+#define CTC_DBF_DATA_NAME "ctc_data"
+#define CTC_DBF_DATA_LEN 128
+#define CTC_DBF_DATA_INDEX 3
+#define CTC_DBF_DATA_NR_AREAS 1
+#define CTC_DBF_DATA_LEVEL 2
+
+#define CTC_DBF_TRACE_NAME "ctc_trace"
+#define CTC_DBF_TRACE_LEN 16
+#define CTC_DBF_TRACE_INDEX 2
+#define CTC_DBF_TRACE_NR_AREAS 2
+#define CTC_DBF_TRACE_LEVEL 3
+
+#define DBF_TEXT(name,level,text) \
+ do { \
+ debug_text_event(ctc_dbf_##name,level,text); \
+ } while (0)
+
+#define DBF_HEX(name,level,addr,len) \
+ do { \
+ debug_event(ctc_dbf_##name,level,(void*)(addr),len); \
+ } while (0)
+
+DECLARE_PER_CPU(char[256], ctc_dbf_txt_buf);
+extern debug_info_t *ctc_dbf_setup;
+extern debug_info_t *ctc_dbf_data;
+extern debug_info_t *ctc_dbf_trace;
+
+
+#define DBF_TEXT_(name,level,text...) \
+ do { \
+ char* ctc_dbf_txt_buf = get_cpu_var(ctc_dbf_txt_buf); \
+ sprintf(ctc_dbf_txt_buf, text); \
+ debug_text_event(ctc_dbf_##name,level,ctc_dbf_txt_buf); \
+ put_cpu_var(ctc_dbf_txt_buf); \
+ } while (0)
+
+/* note: this always logs to the trace area, whatever 'name' is passed */
+#define DBF_SPRINTF(name,level,text...) \
+ do { \
+ debug_sprintf_event(ctc_dbf_trace, level, ##text ); \
+ } while (0)
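+/*
+ * Example usage of the macros above (illustrative only):
+ *
+ * DBF_TEXT(setup, 2, "probe");
+ * DBF_TEXT_(trace, 3, "irq%d", irq);
+ * DBF_HEX(data, 3, skb->data, 16);
+ */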
+
+
+int ctc_register_dbf_views(void);
+
+void ctc_unregister_dbf_views(void);
+
+/**
+ * some more debug stuff
+ */
+
+#define HEXDUMP16(importance,header,ptr) \
+PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \
+ "%02x %02x %02x %02x %02x %02x %02x %02x\n", \
+ *(((char*)ptr)),*(((char*)ptr)+1),*(((char*)ptr)+2), \
+ *(((char*)ptr)+3),*(((char*)ptr)+4),*(((char*)ptr)+5), \
+ *(((char*)ptr)+6),*(((char*)ptr)+7),*(((char*)ptr)+8), \
+ *(((char*)ptr)+9),*(((char*)ptr)+10),*(((char*)ptr)+11), \
+ *(((char*)ptr)+12),*(((char*)ptr)+13), \
+ *(((char*)ptr)+14),*(((char*)ptr)+15)); \
+PRINT_##importance(header "%02x %02x %02x %02x %02x %02x %02x %02x " \
+ "%02x %02x %02x %02x %02x %02x %02x %02x\n", \
+ *(((char*)ptr)+16),*(((char*)ptr)+17), \
+ *(((char*)ptr)+18),*(((char*)ptr)+19), \
+ *(((char*)ptr)+20),*(((char*)ptr)+21), \
+ *(((char*)ptr)+22),*(((char*)ptr)+23), \
+ *(((char*)ptr)+24),*(((char*)ptr)+25), \
+ *(((char*)ptr)+26),*(((char*)ptr)+27), \
+ *(((char*)ptr)+28),*(((char*)ptr)+29), \
+ *(((char*)ptr)+30),*(((char*)ptr)+31));
+
+static inline void
+hex_dump(unsigned char *buf, size_t len)
+{
+ size_t i;
+
+ for (i = 0; i < len; i++) {
+ if (i && !(i % 16))
+ printk("\n");
+ printk("%02x ", *(buf + i));
+ }
+ printk("\n");
+}
+
--- /dev/null
+/*
+ 3w-9xxx.c -- 3ware 9000 Storage Controller device driver for Linux.
+
+ Written By: Adam Radford <linuxraid@amcc.com>
+
+ Copyright (C) 2004 Applied Micro Circuits Corporation.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; version 2 of the License.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ NO WARRANTY
+ THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ solely responsible for determining the appropriateness of using and
+ distributing the Program and assumes all risks associated with its
+ exercise of rights under this Agreement, including but not limited to
+ the risks and costs of program errors, damage to or loss of data,
+ programs or equipment, and unavailability or interruption of operations.
+
+ DISCLAIMER OF LIABILITY
+ NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ Bugs/Comments/Suggestions should be mailed to:
+ linuxraid@amcc.com
+
+ For more information, go to:
+ http://www.amcc.com
+
+ Note: This version of the driver does not contain a bundled firmware
+ image.
+
+ History
+ -------
+ 2.26.02.000 - Driver cleanup for kernel submission.
+ 2.26.02.001 - Replace schedule_timeout() calls with msleep().
+*/
+
+#include <linux/module.h>
+#include <linux/reboot.h>
+#include <linux/spinlock.h>
+#include <linux/interrupt.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/time.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_cmnd.h>
+#include "3w-9xxx.h"
+
+/* Globals */
+#define TWA_DRIVER_VERSION "2.26.02.001"
+static TW_Device_Extension *twa_device_extension_list[TW_MAX_SLOT];
+static unsigned int twa_device_extension_count;
+static int twa_major = -1;
+extern struct timezone sys_tz;
+
+/* Module parameters */
+MODULE_AUTHOR ("AMCC");
+MODULE_DESCRIPTION ("3ware 9000 Storage Controller Linux Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(TWA_DRIVER_VERSION);
+
+/* Function prototypes */
+static void twa_aen_queue_event(TW_Device_Extension *tw_dev, TW_Command_Apache_Header *header);
+static int twa_aen_read_queue(TW_Device_Extension *tw_dev, int request_id);
+static char *twa_aen_severity_lookup(unsigned char severity_code);
+static void twa_aen_sync_time(TW_Device_Extension *tw_dev, int request_id);
+static int twa_chrdev_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg);
+static int twa_chrdev_open(struct inode *inode, struct file *file);
+static int twa_fill_sense(TW_Device_Extension *tw_dev, int request_id, int copy_sense, int print_host);
+static void twa_free_request_id(TW_Device_Extension *tw_dev,int request_id);
+static void twa_get_request_id(TW_Device_Extension *tw_dev, int *request_id);
+static int twa_initconnection(TW_Device_Extension *tw_dev, int message_credits,
+ u32 set_features, unsigned short current_fw_srl,
+ unsigned short current_fw_arch_id,
+ unsigned short current_fw_branch,
+ unsigned short current_fw_build,
+ unsigned short *fw_on_ctlr_srl,
+ unsigned short *fw_on_ctlr_arch_id,
+ unsigned short *fw_on_ctlr_branch,
+ unsigned short *fw_on_ctlr_build,
+ u32 *init_connect_result);
+static void twa_load_sgl(TW_Command_Full *full_command_packet, int request_id, dma_addr_t dma_handle, int length);
+static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds);
+static int twa_poll_status_gone(TW_Device_Extension *tw_dev, u32 flag, int seconds);
+static int twa_post_command_packet(TW_Device_Extension *tw_dev, int request_id, char internal);
+static int twa_reset_device_extension(TW_Device_Extension *tw_dev);
+static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset);
+static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Apache *sglistarg);
+static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id);
+static char *twa_string_lookup(twa_message_type *table, unsigned int aen_code);
+static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id);
+
+/* Functions */
+
+/* Show some statistics about the card */
+static ssize_t twa_show_stats(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *host = class_to_shost(class_dev);
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+ unsigned long flags = 0;
+ ssize_t len;
+
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ len = snprintf(buf, PAGE_SIZE, "Driver version: %s\n"
+ "Current commands posted: %4d\n"
+ "Max commands posted: %4d\n"
+ "Current pending commands: %4d\n"
+ "Max pending commands: %4d\n"
+ "Last sgl length: %4d\n"
+ "Max sgl length: %4d\n"
+ "Last sector count: %4d\n"
+ "Max sector count: %4d\n"
+ "SCSI Host Resets: %4d\n"
+ "SCSI Aborts/Timeouts: %4d\n"
+ "AEN's: %4d\n",
+ TWA_DRIVER_VERSION,
+ tw_dev->posted_request_count,
+ tw_dev->max_posted_request_count,
+ tw_dev->pending_request_count,
+ tw_dev->max_pending_request_count,
+ tw_dev->sgl_entries,
+ tw_dev->max_sgl_entries,
+ tw_dev->sector_count,
+ tw_dev->max_sector_count,
+ tw_dev->num_resets,
+ tw_dev->num_aborts,
+ tw_dev->aen_count);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ return len;
+} /* End twa_show_stats() */
+
+/* This function will set a device's queue depth */
+static ssize_t twa_store_queue_depth(struct device *dev, const char *buf, size_t count)
+{
+ int queue_depth;
+ struct scsi_device *sdev = to_scsi_device(dev);
+
+ queue_depth = simple_strtoul(buf, NULL, 0);
+ if (queue_depth > TW_Q_LENGTH-2)
+ return -EINVAL;
+ scsi_adjust_queue_depth(sdev, MSG_ORDERED_TAG, queue_depth);
+
+ return count;
+} /* End twa_store_queue_depth() */
+
+/* Create sysfs 'queue_depth' entry */
+static struct device_attribute twa_queue_depth_attr = {
+ .attr = {
+ .name = "queue_depth",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = twa_store_queue_depth
+};
+
+/* Device attributes initializer */
+static struct device_attribute *twa_dev_attrs[] = {
+ &twa_queue_depth_attr,
+ NULL,
+};
+
+/* Create sysfs 'stats' entry */
+static struct class_device_attribute twa_host_stats_attr = {
+ .attr = {
+ .name = "stats",
+ .mode = S_IRUGO,
+ },
+ .show = twa_show_stats
+};
+
+/* Host attributes initializer */
+static struct class_device_attribute *twa_host_attrs[] = {
+ &twa_host_stats_attr,
+ NULL,
+};
+
+/* File operations struct for character device */
+static struct file_operations twa_fops = {
+ .owner = THIS_MODULE,
+ .ioctl = twa_chrdev_ioctl,
+ .open = twa_chrdev_open,
+ .release = NULL
+};
+
+/* This function will complete an aen request from the isr */
+static int twa_aen_complete(TW_Device_Extension *tw_dev, int request_id)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Command_Apache_Header *header;
+ unsigned short aen;
+ int retval = 1;
+
+ header = (TW_Command_Apache_Header *)tw_dev->generic_buffer_virt[request_id];
+ tw_dev->posted_request_count--;
+ aen = header->status_block.error;
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ command_packet = &full_command_packet->command.oldcommand;
+
+ /* First check for internal completion of set param for time sync */
+ if (TW_OP_OUT(command_packet->opcode__sgloffset) == TW_OP_SET_PARAM) {
+ /* Keep reading the queue in case there are more aen's */
+ if (twa_aen_read_queue(tw_dev, request_id))
+ goto out2;
+ else {
+ retval = 0;
+ goto out;
+ }
+ }
+
+ switch (aen) {
+ case TW_AEN_QUEUE_EMPTY:
+ /* Quit reading the queue if this is the last one */
+ break;
+ case TW_AEN_SYNC_TIME_WITH_HOST:
+ twa_aen_sync_time(tw_dev, request_id);
+ retval = 0;
+ goto out;
+ default:
+ twa_aen_queue_event(tw_dev, header);
+
+ /* If there are more aen's, keep reading the queue */
+ if (twa_aen_read_queue(tw_dev, request_id))
+ goto out2;
+ else {
+ retval = 0;
+ goto out;
+ }
+ }
+ retval = 0;
+out2:
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ clear_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags);
+out:
+ return retval;
+} /* End twa_aen_complete() */
+
+/* This function will drain the aen queue */
+static int twa_aen_drain_queue(TW_Device_Extension *tw_dev, int no_check_reset)
+{
+ int request_id = 0;
+ char cdb[TW_MAX_CDB_LEN];
+ TW_SG_Apache sglist[1];
+ int finished = 0, count = 0;
+ TW_Command_Full *full_command_packet;
+ TW_Command_Apache_Header *header;
+ unsigned short aen;
+ int first_reset = 0, queue = 0, retval = 1;
+
+ if (no_check_reset)
+ first_reset = 0;
+ else
+ first_reset = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+
+ /* Initialize cdb */
+ memset(&cdb, 0, TW_MAX_CDB_LEN);
+ cdb[0] = REQUEST_SENSE; /* opcode */
+ cdb[4] = TW_ALLOCATION_LENGTH; /* allocation length */
+
+ /* Initialize sglist */
+ memset(&sglist, 0, sizeof(TW_SG_Apache));
+ sglist[0].length = TW_SECTOR_SIZE;
+ sglist[0].address = tw_dev->generic_buffer_phys[request_id];
+
+ if (sglist[0].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1, "Found unaligned address during AEN drain");
+ goto out;
+ }
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ do {
+ /* Send command to the board */
+ if (twa_scsiop_execute_scsi(tw_dev, request_id, cdb, 1, sglist)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2, "Error posting request sense");
+ goto out;
+ }
+
+ /* Now poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x3, "No valid response while draining AEN queue");
+ tw_dev->posted_request_count--;
+ goto out;
+ }
+
+ tw_dev->posted_request_count--;
+ header = (TW_Command_Apache_Header *)tw_dev->generic_buffer_virt[request_id];
+ aen = header->status_block.error;
+ queue = 0;
+ count++;
+
+ switch (aen) {
+ case TW_AEN_QUEUE_EMPTY:
+ if (first_reset != 1)
+ goto out;
+ else
+ finished = 1;
+ break;
+ case TW_AEN_SOFT_RESET:
+ if (first_reset == 0)
+ first_reset = 1;
+ else
+ queue = 1;
+ break;
+ case TW_AEN_SYNC_TIME_WITH_HOST:
+ break;
+ default:
+ queue = 1;
+ }
+
+ /* Now queue the event info */
+ if (queue)
+ twa_aen_queue_event(tw_dev, header);
+ } while ((finished == 0) && (count < TW_MAX_AEN_DRAIN));
+
+ if (count == TW_MAX_AEN_DRAIN)
+ goto out;
+
+ retval = 0;
+out:
+ tw_dev->state[request_id] = TW_S_INITIAL;
+ return retval;
+} /* End twa_aen_drain_queue() */
+
+/* This function will queue an event */
+static void twa_aen_queue_event(TW_Device_Extension *tw_dev, TW_Command_Apache_Header *header)
+{
+ u32 local_time;
+ struct timeval time;
+ TW_Event *event;
+ unsigned short aen;
+ char host[16];
+
+ tw_dev->aen_count++;
+
+ /* Fill out event info */
+ event = tw_dev->event_queue[tw_dev->error_index];
+
+ /* Check for clobber: the slot being reused may still hold an event
+ that userspace has not yet retrieved */
+ host[0] = '\0';
+ if (tw_dev->host) {
+ sprintf(host, " scsi%d:", tw_dev->host->host_no);
+ if (event->retrieved == TW_AEN_NOT_RETRIEVED)
+ tw_dev->aen_clobber = 1;
+ }
+
+ aen = header->status_block.error;
+ memset(event, 0, sizeof(TW_Event));
+
+ event->severity = TW_SEV_OUT(header->status_block.severity__reserved);
+ do_gettimeofday(&time);
+ local_time = (u32)(time.tv_sec - (sys_tz.tz_minuteswest * 60));
+ event->time_stamp_sec = local_time;
+ event->aen_code = aen;
+ event->retrieved = TW_AEN_NOT_RETRIEVED;
+ event->sequence_id = tw_dev->error_sequence_id;
+ tw_dev->error_sequence_id++;
+
+ header->err_specific_desc[sizeof(header->err_specific_desc) - 1] = '\0';
+ event->parameter_len = strlen(header->err_specific_desc);
+ memcpy(event->parameter_data, header->err_specific_desc, event->parameter_len);
+ if (event->severity != TW_AEN_SEVERITY_DEBUG)
+ printk(KERN_WARNING "3w-9xxx:%s AEN: %s (0x%02X:0x%04X): %s:%s.\n",
+ host,
+ twa_aen_severity_lookup(TW_SEV_OUT(header->status_block.severity__reserved)),
+ TW_MESSAGE_SOURCE_CONTROLLER_EVENT, aen,
+ twa_string_lookup(twa_aen_table, aen),
+ header->err_specific_desc);
+ else
+ tw_dev->aen_count--;
+
+ if ((tw_dev->error_index + 1) == TW_Q_LENGTH)
+ tw_dev->event_queue_wrapped = 1;
+ tw_dev->error_index = (tw_dev->error_index + 1) % TW_Q_LENGTH;
+} /* End twa_aen_queue_event() */
+
+/* This function will read the aen queue from the isr */
+static int twa_aen_read_queue(TW_Device_Extension *tw_dev, int request_id)
+{
+ char cdb[TW_MAX_CDB_LEN];
+ TW_SG_Apache sglist[1];
+ TW_Command_Full *full_command_packet;
+ int retval = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+
+ /* Initialize cdb */
+ memset(&cdb, 0, TW_MAX_CDB_LEN);
+ cdb[0] = REQUEST_SENSE; /* opcode */
+ cdb[4] = TW_ALLOCATION_LENGTH; /* allocation length */
+
+ /* Initialize sglist */
+ memset(&sglist, 0, sizeof(TW_SG_Apache));
+ sglist[0].length = TW_SECTOR_SIZE;
+ sglist[0].address = tw_dev->generic_buffer_phys[request_id];
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ /* Now post the command packet */
+ if (twa_scsiop_execute_scsi(tw_dev, request_id, cdb, 1, sglist)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x4, "Post failed while reading AEN queue");
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_aen_read_queue() */
+
+/* This function will look up an AEN severity string */
+static char *twa_aen_severity_lookup(unsigned char severity_code)
+{
+ char *retval = NULL;
+
+ if ((severity_code < (unsigned char) TW_AEN_SEVERITY_ERROR) ||
+ (severity_code > (unsigned char) TW_AEN_SEVERITY_DEBUG))
+ goto out;
+
+ retval = twa_aen_severity_table[severity_code];
+out:
+ return retval;
+} /* End twa_aen_severity_lookup() */
+
+/* This function will sync firmware time with the host time */
+static void twa_aen_sync_time(TW_Device_Extension *tw_dev, int request_id)
+{
+ u32 schedulertime;
+ struct timeval utc;
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Param_Apache *param;
+ u32 local_time;
+
+ /* Fill out the command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ command_packet = &full_command_packet->command.oldcommand;
+ command_packet->opcode__sgloffset = TW_OPSGL_IN(2, TW_OP_SET_PARAM);
+ command_packet->request_id = request_id;
+ command_packet->byte8_offset.param.sgl[0].address = tw_dev->generic_buffer_phys[request_id];
+ command_packet->byte8_offset.param.sgl[0].length = TW_SECTOR_SIZE;
+ command_packet->size = TW_COMMAND_SIZE;
+ command_packet->byte6_offset.parameter_count = 1;
+
+ /* Setup the param */
+ param = (TW_Param_Apache *)tw_dev->generic_buffer_virt[request_id];
+ memset(param, 0, TW_SECTOR_SIZE);
+ param->table_id = TW_TIMEKEEP_TABLE | 0x8000; /* Controller time keep table */
+ param->parameter_id = 0x3; /* SchedulerTime */
+ param->parameter_size_bytes = 4;
+
+ /* Convert system time in UTC to local time, then reduce it to
+ seconds since last Sunday 12:00AM. The Unix epoch began on a
+ Thursday, so shifting by three days (3 * 86400) before taking the
+ remainder modulo one week (604800 seconds) makes the count start
+ on Sunday. */
+ do_gettimeofday(&utc);
+ local_time = (u32)(utc.tv_sec - (sys_tz.tz_minuteswest * 60));
+ schedulertime = local_time - (3 * 86400);
+ schedulertime = schedulertime % 604800;
+
+ memcpy(param->data, &schedulertime, sizeof(u32));
+
+ /* Mark internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ /* Now post the command */
+ twa_post_command_packet(tw_dev, request_id, 1);
+} /* End twa_aen_sync_time() */
+
+/* This function will allocate memory and check if it is correctly
+ aligned; 'which' selects the command packet (0) or generic buffer (1)
+ pool */
+static int twa_allocate_memory(TW_Device_Extension *tw_dev, int size, int which)
+{
+ int i;
+ dma_addr_t dma_handle;
+ unsigned long *cpu_addr;
+ int retval = 1;
+
+ cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, &dma_handle);
+ if (!cpu_addr) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x5, "Memory allocation failed");
+ goto out;
+ }
+
+ if ((unsigned long)cpu_addr % (TW_ALIGNMENT_9000)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x6, "Failed to allocate correctly aligned memory");
+ pci_free_consistent(tw_dev->tw_pci_dev, size*TW_Q_LENGTH, cpu_addr, dma_handle);
+ goto out;
+ }
+
+ memset(cpu_addr, 0, size*TW_Q_LENGTH);
+
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ switch(which) {
+ case 0:
+ tw_dev->command_packet_phys[i] = dma_handle+(i*size);
+ tw_dev->command_packet_virt[i] = (TW_Command_Full *)((unsigned char *)cpu_addr + (i*size));
+ break;
+ case 1:
+ tw_dev->generic_buffer_phys[i] = dma_handle+(i*size);
+ tw_dev->generic_buffer_virt[i] = (unsigned long *)((unsigned char *)cpu_addr + (i*size));
+ break;
+ }
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_allocate_memory() */
+
+/* This function will check the status register for unexpected bits */
+static int twa_check_bits(u32 status_reg_value)
+{
+ int retval = 1;
+
+ if ((status_reg_value & TW_STATUS_EXPECTED_BITS) != TW_STATUS_EXPECTED_BITS)
+ goto out;
+ if ((status_reg_value & TW_STATUS_UNEXPECTED_BITS) != 0)
+ goto out;
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_check_bits() */
+
+/* This function will check the srl and decide if we are compatible */
+static int twa_check_srl(TW_Device_Extension *tw_dev, int *flashed)
+{
+ int retval = 1;
+ unsigned short fw_on_ctlr_srl = 0, fw_on_ctlr_arch_id = 0;
+ unsigned short fw_on_ctlr_branch = 0, fw_on_ctlr_build = 0;
+ u32 init_connect_result = 0;
+
+ if (twa_initconnection(tw_dev, TW_INIT_MESSAGE_CREDITS,
+ TW_EXTENDED_INIT_CONNECT, TW_CURRENT_FW_SRL,
+ TW_9000_ARCH_ID, TW_CURRENT_FW_BRANCH,
+ TW_CURRENT_FW_BUILD, &fw_on_ctlr_srl,
+ &fw_on_ctlr_arch_id, &fw_on_ctlr_branch,
+ &fw_on_ctlr_build, &init_connect_result)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x7, "Initconnection failed while checking SRL");
+ goto out;
+ }
+
+ tw_dev->working_srl = TW_CURRENT_FW_SRL;
+ tw_dev->working_branch = TW_CURRENT_FW_BRANCH;
+ tw_dev->working_build = TW_CURRENT_FW_BUILD;
+
+ /* Try base mode compatibility */
+ if (!(init_connect_result & TW_CTLR_FW_COMPATIBLE)) {
+ if (twa_initconnection(tw_dev, TW_INIT_MESSAGE_CREDITS,
+ TW_EXTENDED_INIT_CONNECT,
+ TW_BASE_FW_SRL, TW_9000_ARCH_ID,
+ TW_BASE_FW_BRANCH, TW_BASE_FW_BUILD,
+ &fw_on_ctlr_srl, &fw_on_ctlr_arch_id,
+ &fw_on_ctlr_branch, &fw_on_ctlr_build,
+ &init_connect_result)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xa, "Initconnection (base mode) failed while checking SRL");
+ goto out;
+ }
+ if (!(init_connect_result & TW_CTLR_FW_COMPATIBLE)) {
+ if (TW_CURRENT_FW_SRL > fw_on_ctlr_srl) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x32, "Firmware and driver incompatibility: please upgrade firmware");
+ } else {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x33, "Firmware and driver incompatibility: please upgrade driver");
+ }
+ goto out;
+ }
+ tw_dev->working_srl = TW_BASE_FW_SRL;
+ tw_dev->working_branch = TW_BASE_FW_BRANCH;
+ tw_dev->working_build = TW_BASE_FW_BUILD;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_check_srl() */
+
+/* This function handles ioctl for the character device */
+static int twa_chrdev_ioctl(struct inode *inode, struct file *file, unsigned int cmd, unsigned long arg)
+{
+ long timeout;
+ unsigned long *cpu_addr, data_buffer_length_adjusted = 0, flags = 0;
+ dma_addr_t dma_handle;
+ int request_id = 0;
+ unsigned int sequence_id = 0;
+ unsigned char event_index, start_index;
+ TW_Ioctl_Driver_Command driver_command;
+ TW_Ioctl_Buf_Apache *tw_ioctl;
+ TW_Lock *tw_lock;
+ TW_Command_Full *full_command_packet;
+ TW_Compatibility_Info *tw_compat_info;
+ TW_Event *event;
+ struct timeval current_time;
+ u32 current_time_ms;
+ TW_Device_Extension *tw_dev = twa_device_extension_list[iminor(inode)];
+ int retval = TW_IOCTL_ERROR_OS_EFAULT;
+ void __user *argp = (void __user *)arg;
+
+ /* Only let one of these through at a time */
+ if (down_interruptible(&tw_dev->ioctl_sem)) {
+ retval = TW_IOCTL_ERROR_OS_EINTR;
+ goto out;
+ }
+
+ /* First copy down the driver command */
+ if (copy_from_user(&driver_command, argp, sizeof(TW_Ioctl_Driver_Command)))
+ goto out2;
+
+ /* Check data buffer size */
+ if (driver_command.buffer_length > TW_MAX_SECTORS * 512) {
+ retval = TW_IOCTL_ERROR_OS_EINVAL;
+ goto out2;
+ }
+
+ /* Hardware can only do transfers in multiples of 512 bytes */
+ data_buffer_length_adjusted = (driver_command.buffer_length + 511) & ~511;
+
+ /* Now allocate ioctl buf memory */
+ cpu_addr = pci_alloc_consistent(tw_dev->tw_pci_dev, data_buffer_length_adjusted+sizeof(TW_Ioctl_Buf_Apache) - 1, &dma_handle);
+ if (!cpu_addr) {
+ retval = TW_IOCTL_ERROR_OS_ENOMEM;
+ goto out2;
+ }
+
+ tw_ioctl = (TW_Ioctl_Buf_Apache *)cpu_addr;
+
+ /* Now copy down the entire ioctl */
+ if (copy_from_user(tw_ioctl, argp, driver_command.buffer_length + sizeof(TW_Ioctl_Buf_Apache) - 1))
+ goto out3;
+
+ /* See which ioctl we are doing */
+ switch (cmd) {
+ case TW_IOCTL_FIRMWARE_PASS_THROUGH:
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ twa_get_request_id(tw_dev, &request_id);
+
+ /* Flag internal command */
+ tw_dev->srb[request_id] = NULL;
+
+ /* Flag chrdev ioctl */
+ tw_dev->chrdev_request_id = request_id;
+
+ full_command_packet = &tw_ioctl->firmware_command;
+
+ /* Load request id and sglist for both command types */
+ twa_load_sgl(full_command_packet, request_id, dma_handle, data_buffer_length_adjusted);
+
+ memcpy(tw_dev->command_packet_virt[request_id], &(tw_ioctl->firmware_command), sizeof(TW_Command_Full));
+
+ /* Now post the command packet to the controller */
+ twa_post_command_packet(tw_dev, request_id, 1);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+
+ timeout = TW_IOCTL_CHRDEV_TIMEOUT*HZ;
+
+ /* Now wait for command to complete */
+ timeout = wait_event_interruptible_timeout(tw_dev->ioctl_wqueue, tw_dev->chrdev_request_id == TW_IOCTL_CHRDEV_FREE, timeout);
+
+ /* Check if we timed out, got a signal, or didn't get
+ an interrupt */
+ if ((timeout <= 0) && (tw_dev->chrdev_request_id != TW_IOCTL_CHRDEV_FREE)) {
+ /* Now we need to reset the board */
+ if (timeout == TW_IOCTL_ERROR_OS_ERESTARTSYS) {
+ retval = timeout;
+ } else {
+ printk(KERN_WARNING "3w-9xxx: scsi%d: WARNING: (0x%02X:0x%04X): Character ioctl (0x%x) timed out, resetting card.\n",
+ tw_dev->host->host_no, TW_DRIVER, 0xc,
+ cmd);
+ retval = TW_IOCTL_ERROR_OS_EIO;
+ }
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ tw_dev->posted_request_count--;
+ twa_reset_device_extension(tw_dev);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ goto out3;
+ }
+
+ /* Now copy in the command packet response */
+ memcpy(&(tw_ioctl->firmware_command), tw_dev->command_packet_virt[request_id], sizeof(TW_Command_Full));
+
+ /* Now complete the io */
+ spin_lock_irqsave(tw_dev->host->host_lock, flags);
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ spin_unlock_irqrestore(tw_dev->host->host_lock, flags);
+ break;
+ case TW_IOCTL_GET_COMPATIBILITY_INFO:
+ tw_ioctl->driver_command.status = 0;
+ /* Copy compatibility struct into ioctl data buffer */
+ tw_compat_info = (TW_Compatibility_Info *)tw_ioctl->data_buffer;
+ strncpy(tw_compat_info->driver_version, TWA_DRIVER_VERSION, strlen(TWA_DRIVER_VERSION));
+ tw_compat_info->working_srl = tw_dev->working_srl;
+ tw_compat_info->working_branch = tw_dev->working_branch;
+ tw_compat_info->working_build = tw_dev->working_build;
+ break;
+ case TW_IOCTL_GET_LAST_EVENT:
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ } else
+ tw_ioctl->driver_command.status = 0;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ tw_ioctl->driver_command.status = 0;
+ }
+ event_index = (tw_dev->error_index - 1 + TW_Q_LENGTH) % TW_Q_LENGTH;
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_FIRST_EVENT:
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ } else
+ tw_ioctl->driver_command.status = 0;
+ event_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ tw_ioctl->driver_command.status = 0;
+ event_index = 0;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_NEXT_EVENT:
+ event = (TW_Event *)tw_ioctl->data_buffer;
+ sequence_id = event->sequence_id;
+ tw_ioctl->driver_command.status = 0;
+
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ }
+ start_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ start_index = 0;
+ }
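+ /* Locate the event that follows the given sequence_id in the
+ circular queue */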
+ event_index = (start_index + sequence_id - tw_dev->event_queue[start_index]->sequence_id + 1) % TW_Q_LENGTH;
+
+ if (!(tw_dev->event_queue[event_index]->sequence_id > sequence_id)) {
+ if (tw_ioctl->driver_command.status == TW_IOCTL_ERROR_STATUS_AEN_CLOBBER)
+ tw_dev->aen_clobber = 1;
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_PREVIOUS_EVENT:
+ event = (TW_Event *)tw_ioctl->data_buffer;
+ sequence_id = event->sequence_id;
+ tw_ioctl->driver_command.status = 0;
+
+ if (tw_dev->event_queue_wrapped) {
+ if (tw_dev->aen_clobber) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_AEN_CLOBBER;
+ tw_dev->aen_clobber = 0;
+ }
+ start_index = tw_dev->error_index;
+ } else {
+ if (!tw_dev->error_index) {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ start_index = 0;
+ }
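+ /* Locate the event that precedes the given sequence_id in the
+ circular queue */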
+ event_index = (start_index + sequence_id - tw_dev->event_queue[start_index]->sequence_id - 1) % TW_Q_LENGTH;
+
+ if (!(tw_dev->event_queue[event_index]->sequence_id < sequence_id)) {
+ if (tw_ioctl->driver_command.status == TW_IOCTL_ERROR_STATUS_AEN_CLOBBER)
+ tw_dev->aen_clobber = 1;
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS;
+ break;
+ }
+ memcpy(tw_ioctl->data_buffer, tw_dev->event_queue[event_index], sizeof(TW_Event));
+ tw_dev->event_queue[event_index]->retrieved = TW_AEN_RETRIEVED;
+ break;
+ case TW_IOCTL_GET_LOCK:
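+ /* The ioctl lock is advisory and time-based: it expires once
+ ioctl_msec passes, and force_flag takes it unconditionally */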
+ tw_lock = (TW_Lock *)tw_ioctl->data_buffer;
+ do_gettimeofday(&current_time);
+ current_time_ms = (current_time.tv_sec * 1000) + (current_time.tv_usec / 1000);
+
+ if ((tw_lock->force_flag == 1) || (tw_dev->ioctl_sem_lock == 0) || (current_time_ms >= tw_dev->ioctl_msec)) {
+ tw_dev->ioctl_sem_lock = 1;
+ tw_dev->ioctl_msec = current_time_ms + tw_lock->timeout_msec;
+ tw_ioctl->driver_command.status = 0;
+ tw_lock->time_remaining_msec = tw_lock->timeout_msec;
+ } else {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_LOCKED;
+ tw_lock->time_remaining_msec = tw_dev->ioctl_msec - current_time_ms;
+ }
+ break;
+ case TW_IOCTL_RELEASE_LOCK:
+ if (tw_dev->ioctl_sem_lock == 1) {
+ tw_dev->ioctl_sem_lock = 0;
+ tw_ioctl->driver_command.status = 0;
+ } else {
+ tw_ioctl->driver_command.status = TW_IOCTL_ERROR_STATUS_NOT_LOCKED;
+ }
+ break;
+ default:
+ retval = TW_IOCTL_ERROR_OS_ENOTTY;
+ goto out3;
+ }
+
+ /* Now copy the entire response to userspace */
+ if (copy_to_user(argp, tw_ioctl, sizeof(TW_Ioctl_Buf_Apache) + driver_command.buffer_length - 1) == 0)
+ retval = 0;
+out3:
+ /* Now free ioctl buf memory */
+ pci_free_consistent(tw_dev->tw_pci_dev, data_buffer_length_adjusted+sizeof(TW_Ioctl_Buf_Apache) - 1, cpu_addr, dma_handle);
+out2:
+ up(&tw_dev->ioctl_sem);
+out:
+ return retval;
+} /* End twa_chrdev_ioctl() */
+
+/* This function handles open for the character device; it only checks
+ that the minor number maps to a probed controller */
+static int twa_chrdev_open(struct inode *inode, struct file *file)
+{
+ unsigned int minor_number;
+ int retval = TW_IOCTL_ERROR_OS_ENODEV;
+
+ minor_number = iminor(inode);
+ if (minor_number >= twa_device_extension_count)
+ goto out;
+ retval = 0;
+out:
+ return retval;
+} /* End twa_chrdev_open() */
+
+/* This function will print readable messages from status register errors */
+static int twa_decode_bits(TW_Device_Extension *tw_dev, u32 status_reg_value)
+{
+ int retval = 1;
+
+ /* Check for various error conditions and handle them appropriately */
+ if (status_reg_value & TW_STATUS_PCI_PARITY_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xc, "PCI Parity Error: clearing");
+ writel(TW_CONTROL_CLEAR_PARITY_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_PCI_ABORT) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xd, "PCI Abort: clearing");
+ writel(TW_CONTROL_CLEAR_PCI_ABORT, TW_CONTROL_REG_ADDR(tw_dev));
+ pci_write_config_word(tw_dev->tw_pci_dev, PCI_STATUS, TW_PCI_CLEAR_PCI_ABORT);
+ }
+
+ if (status_reg_value & TW_STATUS_QUEUE_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xe, "Controller Queue Error: clearing");
+ writel(TW_CONTROL_CLEAR_QUEUE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_SBUF_WRITE_ERROR) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0xf, "SBUF Write Error: clearing");
+ writel(TW_CONTROL_CLEAR_SBUF_WRITE_ERROR, TW_CONTROL_REG_ADDR(tw_dev));
+ }
+
+ if (status_reg_value & TW_STATUS_MICROCONTROLLER_ERROR) {
+ if (tw_dev->reset_print == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x10, "Microcontroller Error: clearing");
+ tw_dev->reset_print = 1;
+ }
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_decode_bits() */
+
+/* This function will empty the response queue */
+static int twa_empty_response_queue(TW_Device_Extension *tw_dev)
+{
+ u32 status_reg_value, response_que_value;
+ int count = 0, retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ while (((status_reg_value & TW_STATUS_RESPONSE_QUEUE_EMPTY) == 0) && (count < TW_MAX_RESPONSE_DRAIN)) {
+ response_que_value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ count++;
+ }
+ if (count == TW_MAX_RESPONSE_DRAIN)
+ goto out;
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_empty_response_queue() */
+
+/* This function passes sense keys from firmware to scsi layer */
+static int twa_fill_sense(TW_Device_Extension *tw_dev, int request_id, int copy_sense, int print_host)
+{
+ TW_Command_Full *full_command_packet;
+ unsigned short error;
+ int retval = 1;
+
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ /* Don't print error for Logical unit not supported during rollcall */
+ error = full_command_packet->header.status_block.error;
+ if ((error != TW_ERROR_LOGICAL_UNIT_NOT_SUPPORTED) && (error != TW_ERROR_UNIT_OFFLINE)) {
+ if (print_host)
+ printk(KERN_WARNING "3w-9xxx: scsi%d: ERROR: (0x%02X:0x%04X): %s:%s.\n",
+ tw_dev->host->host_no,
+ TW_MESSAGE_SOURCE_CONTROLLER_ERROR,
+ full_command_packet->header.status_block.error,
+ twa_string_lookup(twa_error_table,
+ full_command_packet->header.status_block.error),
+ full_command_packet->header.err_specific_desc);
+ else
+ printk(KERN_WARNING "3w-9xxx: ERROR: (0x%02X:0x%04X): %s:%s.\n",
+ TW_MESSAGE_SOURCE_CONTROLLER_ERROR,
+ full_command_packet->header.status_block.error,
+ twa_string_lookup(twa_error_table,
+ full_command_packet->header.status_block.error),
+ full_command_packet->header.err_specific_desc);
+ }
+
+ if (copy_sense) {
+ memcpy(tw_dev->srb[request_id]->sense_buffer, full_command_packet->header.sense_data, TW_SENSE_DATA_LENGTH);
+ tw_dev->srb[request_id]->result = (full_command_packet->command.newcommand.status << 1);
+ retval = TW_ISR_DONT_RESULT;
+ goto out;
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_fill_sense() */
+
+/* This function will free up device extension resources */
+static void twa_free_device_extension(TW_Device_Extension *tw_dev)
+{
+ if (tw_dev->command_packet_virt[0])
+ pci_free_consistent(tw_dev->tw_pci_dev,
+ sizeof(TW_Command_Full)*TW_Q_LENGTH,
+ tw_dev->command_packet_virt[0],
+ tw_dev->command_packet_phys[0]);
+
+ if (tw_dev->generic_buffer_virt[0])
+ pci_free_consistent(tw_dev->tw_pci_dev,
+ TW_SECTOR_SIZE*TW_Q_LENGTH,
+ tw_dev->generic_buffer_virt[0],
+ tw_dev->generic_buffer_phys[0]);
+
+ if (tw_dev->event_queue[0])
+ kfree(tw_dev->event_queue[0]);
+} /* End twa_free_device_extension() */
+
+/* This function will free a request id by returning it to the tail of
+ the circular free queue */
+static void twa_free_request_id(TW_Device_Extension *tw_dev, int request_id)
+{
+ tw_dev->free_queue[tw_dev->free_tail] = request_id;
+ tw_dev->state[request_id] = TW_S_FINISHED;
+ tw_dev->free_tail = (tw_dev->free_tail + 1) % TW_Q_LENGTH;
+} /* End twa_free_request_id() */
+
+/* This function will get parameter table entries from the firmware */
+static void *twa_get_param(TW_Device_Extension *tw_dev, int request_id, int table_id, int parameter_id, int parameter_size_bytes)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Param_Apache *param;
+ unsigned long param_value;
+ void *retval = NULL;
+
+ /* Setup the command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ command_packet = &full_command_packet->command.oldcommand;
+
+ command_packet->opcode__sgloffset = TW_OPSGL_IN(2, TW_OP_GET_PARAM);
+ command_packet->size = TW_COMMAND_SIZE;
+ command_packet->request_id = request_id;
+ command_packet->byte6_offset.block_count = 1;
+
+ /* Now setup the param */
+ param = (TW_Param_Apache *)tw_dev->generic_buffer_virt[request_id];
+ memset(param, 0, TW_SECTOR_SIZE);
+ param->table_id = table_id | 0x8000;
+ param->parameter_id = parameter_id;
+ param->parameter_size_bytes = parameter_size_bytes;
+ param_value = tw_dev->generic_buffer_phys[request_id];
+
+ command_packet->byte8_offset.param.sgl[0].address = param_value;
+ command_packet->byte8_offset.param.sgl[0].length = TW_SECTOR_SIZE;
+
+ /* Post the command packet to the board */
+ twa_post_command_packet(tw_dev, request_id, 1);
+
+ /* Poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30))
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x13, "No valid response during get param")
+ else
+ retval = (void *)&(param->data[0]);
+
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_INITIAL;
+
+ return retval;
+} /* End twa_get_param() */
+
+/* This function will assign an available request id */
+static void twa_get_request_id(TW_Device_Extension *tw_dev, int *request_id)
+{
+ *request_id = tw_dev->free_queue[tw_dev->free_head];
+ tw_dev->free_head = (tw_dev->free_head + 1) % TW_Q_LENGTH;
+ tw_dev->state[*request_id] = TW_S_STARTED;
+} /* End twa_get_request_id() */
+
+/* This function will send an initconnection command to the controller */
+static int twa_initconnection(TW_Device_Extension *tw_dev, int message_credits,
+ u32 set_features, unsigned short current_fw_srl,
+ unsigned short current_fw_arch_id,
+ unsigned short current_fw_branch,
+ unsigned short current_fw_build,
+ unsigned short *fw_on_ctlr_srl,
+ unsigned short *fw_on_ctlr_arch_id,
+ unsigned short *fw_on_ctlr_branch,
+ unsigned short *fw_on_ctlr_build,
+ u32 *init_connect_result)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Initconnect *tw_initconnect;
+ int request_id = 0, retval = 1;
+
+ /* Initialize InitConnection command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ memset(full_command_packet, 0, sizeof(TW_Command_Full));
+ full_command_packet->header.header_desc.size_header = 128;
+
+ tw_initconnect = (TW_Initconnect *)&full_command_packet->command.oldcommand;
+ tw_initconnect->opcode__reserved = TW_OPRES_IN(0, TW_OP_INIT_CONNECTION);
+ tw_initconnect->request_id = request_id;
+ tw_initconnect->message_credits = message_credits;
+ tw_initconnect->features = set_features;
+#if BITS_PER_LONG > 32
+ /* Turn on 64-bit sgl support */
+ tw_initconnect->features |= 1;
+#endif
+
+ if (set_features & TW_EXTENDED_INIT_CONNECT) {
+ tw_initconnect->size = TW_INIT_COMMAND_PACKET_SIZE_EXTENDED;
+ tw_initconnect->fw_srl = current_fw_srl;
+ tw_initconnect->fw_arch_id = current_fw_arch_id;
+ tw_initconnect->fw_branch = current_fw_branch;
+ tw_initconnect->fw_build = current_fw_build;
+ } else
+ tw_initconnect->size = TW_INIT_COMMAND_PACKET_SIZE;
+
+ /* Send command packet to the board */
+ twa_post_command_packet(tw_dev, request_id, 1);
+
+ /* Poll for completion */
+ if (twa_poll_response(tw_dev, request_id, 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x15, "No valid response during init connection");
+ } else {
+ if (set_features & TW_EXTENDED_INIT_CONNECT) {
+ *fw_on_ctlr_srl = tw_initconnect->fw_srl;
+ *fw_on_ctlr_arch_id = tw_initconnect->fw_arch_id;
+ *fw_on_ctlr_branch = tw_initconnect->fw_branch;
+ *fw_on_ctlr_build = tw_initconnect->fw_build;
+ *init_connect_result = tw_initconnect->result;
+ }
+ retval = 0;
+ }
+
+ tw_dev->posted_request_count--;
+ tw_dev->state[request_id] = TW_S_INITIAL;
+
+ return retval;
+} /* End twa_initconnection() */
+
+/* This function will initialize the fields of a device extension */
+static int twa_initialize_device_extension(TW_Device_Extension *tw_dev)
+{
+ int i, retval = 1;
+
+ /* Initialize command packet buffers */
+ if (twa_allocate_memory(tw_dev, sizeof(TW_Command_Full), 0)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x16, "Command packet memory allocation failed");
+ goto out;
+ }
+
+ /* Initialize generic buffer */
+ if (twa_allocate_memory(tw_dev, TW_SECTOR_SIZE, 1)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x17, "Generic memory allocation failed");
+ goto out;
+ }
+
+ /* Allocate event info space */
+ tw_dev->event_queue[0] = kmalloc(sizeof(TW_Event) * TW_Q_LENGTH, GFP_KERNEL);
+ if (!tw_dev->event_queue[0]) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x18, "Event info memory allocation failed");
+ goto out;
+ }
+
+ memset(tw_dev->event_queue[0], 0, sizeof(TW_Event) * TW_Q_LENGTH);
+
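+ /* Carve the single allocation above into TW_Q_LENGTH per-slot event
+ entries, and seed the free queue and request states */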
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ tw_dev->event_queue[i] = (TW_Event *)((unsigned char *)tw_dev->event_queue[0] + (i * sizeof(TW_Event)));
+ tw_dev->free_queue[i] = i;
+ tw_dev->state[i] = TW_S_INITIAL;
+ }
+
+ tw_dev->pending_head = TW_Q_START;
+ tw_dev->pending_tail = TW_Q_START;
+ tw_dev->free_head = TW_Q_START;
+ tw_dev->free_tail = TW_Q_START;
+ tw_dev->error_sequence_id = 1;
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+
+ init_MUTEX(&tw_dev->ioctl_sem);
+ init_waitqueue_head(&tw_dev->ioctl_wqueue);
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_initialize_device_extension() */
+
+/* This function is the interrupt service routine */
+static irqreturn_t twa_interrupt(int irq, void *dev_instance, struct pt_regs *regs)
+{
+ int request_id, error = 0;
+ u32 status_reg_value;
+ TW_Response_Queue response_que;
+ TW_Command_Full *full_command_packet;
+ TW_Command *command_packet;
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)dev_instance;
+ int handled = 0;
+
+ /* Get the per adapter lock */
+ spin_lock(tw_dev->host->host_lock);
+
+ /* See if the interrupt matches this instance */
+ if (tw_dev->tw_pci_dev->irq == (unsigned int)irq) {
+
+ handled = 1;
+
+ /* Read the registers */
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ /* Check if this is our interrupt, otherwise bail */
+ if (!(status_reg_value & TW_STATUS_VALID_INTERRUPT))
+ goto twa_interrupt_bail;
+
+ /* Check controller for errors */
+ if (twa_check_bits(status_reg_value)) {
+ if (twa_decode_bits(tw_dev, status_reg_value)) {
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+
+ /* Handle host interrupt */
+ if (status_reg_value & TW_STATUS_HOST_INTERRUPT)
+ TW_CLEAR_HOST_INTERRUPT(tw_dev);
+
+ /* Handle attention interrupt */
+ if (status_reg_value & TW_STATUS_ATTENTION_INTERRUPT) {
+ TW_CLEAR_ATTENTION_INTERRUPT(tw_dev);
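+ /* Only start an AEN read if another attention loop isn't running */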
+ if (!(test_and_set_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags))) {
+ twa_get_request_id(tw_dev, &request_id);
+
+ error = twa_aen_read_queue(tw_dev, request_id);
+ if (error) {
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ clear_bit(TW_IN_ATTENTION_LOOP, &tw_dev->flags);
+ }
+ }
+ }
+
+ /* Handle command interrupt */
+ if (status_reg_value & TW_STATUS_COMMAND_INTERRUPT) {
+ TW_MASK_COMMAND_INTERRUPT(tw_dev);
+ /* Drain as many pending commands as we can */
+ while (tw_dev->pending_request_count > 0) {
+ request_id = tw_dev->pending_queue[tw_dev->pending_head];
+ if (tw_dev->state[request_id] != TW_S_PENDING) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x19, "Found request id that wasn't pending");
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ if (twa_post_command_packet(tw_dev, request_id, 1)==0) {
+ tw_dev->pending_head = (tw_dev->pending_head + 1) % TW_Q_LENGTH;
+ tw_dev->pending_request_count--;
+ } else {
+ /* If we get here, we will continue re-posting on the next command interrupt */
+ break;
+ }
+ }
+ }
+
+ /* Handle response interrupt */
+ if (status_reg_value & TW_STATUS_RESPONSE_INTERRUPT) {
+
+ /* Drain the response queue from the board */
+ while ((status_reg_value & TW_STATUS_RESPONSE_QUEUE_EMPTY) == 0) {
+ /* Complete the response */
+ response_que.value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ request_id = TW_RESID_OUT(response_que.response_id);
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ error = 0;
+ command_packet = &full_command_packet->command.oldcommand;
+ /* Check for command packet errors */
+ if (full_command_packet->command.newcommand.status != 0) {
+ if (tw_dev->srb[request_id] != 0) {
+ error = twa_fill_sense(tw_dev, request_id, 1, 1);
+ } else {
+ /* Skip ioctl error prints */
+ if (request_id != tw_dev->chrdev_request_id) {
+ error = twa_fill_sense(tw_dev, request_id, 0, 1);
+ }
+ }
+ }
+
+ /* Check for correct state */
+ if (tw_dev->state[request_id] != TW_S_POSTED) {
+ if (tw_dev->srb[request_id] != 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1a, "Received a request id that wasn't posted");
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+
+ /* Check for internal command completion */
+ if (tw_dev->srb[request_id] == 0) {
+ if (request_id != tw_dev->chrdev_request_id) {
+ if (twa_aen_complete(tw_dev, request_id))
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1b, "Error completing AEN during attention interrupt");
+ } else {
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+ wake_up(&tw_dev->ioctl_wqueue);
+ }
+ } else {
+ twa_scsiop_execute_scsi_complete(tw_dev, request_id);
+ /* If no error, the command was a success */
+ if (error == 0) {
+ tw_dev->srb[request_id]->result = (DID_OK << 16);
+ }
+
+ /* If error, command failed */
+ if (error == 1) {
+ /* Ask for a host reset */
+ tw_dev->srb[request_id]->result = (DID_OK << 16) | (CHECK_CONDITION << 1);
+ }
+
+ /* Now complete the io */
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ tw_dev->posted_request_count--;
+ tw_dev->srb[request_id]->scsi_done(tw_dev->srb[request_id]);
+ twa_unmap_scsi_data(tw_dev, request_id);
+ }
+
+ /* Check for valid status after each drain */
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ if (twa_check_bits(status_reg_value)) {
+ if (twa_decode_bits(tw_dev, status_reg_value)) {
+ TW_CLEAR_ALL_INTERRUPTS(tw_dev);
+ goto twa_interrupt_bail;
+ }
+ }
+ }
+ }
+ }
+twa_interrupt_bail:
+ spin_unlock(tw_dev->host->host_lock);
+ return IRQ_RETVAL(handled);
+} /* End twa_interrupt() */
+
+/* This function will load the request id and various sgls for ioctls */
+static void twa_load_sgl(TW_Command_Full *full_command_packet, int request_id, dma_addr_t dma_handle, int length)
+{
+ TW_Command *oldcommand;
+ TW_Command_Apache *newcommand;
+ TW_SG_Entry *sgl;
+
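+ /* The ioctl data buffer follows the TW_Ioctl_Buf_Apache header in
+ the same DMA allocation, so point the sgl just past the header */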
+ if (TW_OP_OUT(full_command_packet->command.newcommand.opcode__reserved) == TW_OP_EXECUTE_SCSI) {
+ newcommand = &full_command_packet->command.newcommand;
+ newcommand->request_id = request_id;
+ newcommand->sg_list[0].address = dma_handle + sizeof(TW_Ioctl_Buf_Apache) - 1;
+ newcommand->sg_list[0].length = length;
+ } else {
+ oldcommand = &full_command_packet->command.oldcommand;
+ oldcommand->request_id = request_id;
+
+ if (TW_SGL_OUT(oldcommand->opcode__sgloffset)) {
+ /* Load the sg list */
+ sgl = (TW_SG_Entry *)((u32 *)oldcommand+TW_SGL_OUT(oldcommand->opcode__sgloffset));
+ sgl->address = dma_handle + sizeof(TW_Ioctl_Buf_Apache) - 1;
+ sgl->length = length;
+ }
+ }
+} /* End twa_load_sgl() */
+
+/* This function will perform a pci-dma mapping for a scatter gather list */
+static int twa_map_scsi_sg_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ int use_sg;
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+ int retval = 0;
+
+ if (cmd->use_sg == 0)
+ goto out;
+
+ use_sg = pci_map_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
+
+ if (use_sg == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1c, "Failed to map scatter gather list");
+ goto out;
+ }
+
+ cmd->SCp.phase = TW_PHASE_SGLIST;
+ cmd->SCp.have_data_in = use_sg;
+ retval = use_sg;
+out:
+ return retval;
+} /* End twa_map_scsi_sg_data() */
+
+/* This function will perform a pci-dma map for a single buffer */
+static dma_addr_t twa_map_scsi_single_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ dma_addr_t mapping;
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+ dma_addr_t retval = 0;
+
+ if (cmd->request_bufflen == 0) {
+ retval = 0;
+ goto out;
+ }
+
+ mapping = pci_map_single(pdev, cmd->request_buffer, cmd->request_bufflen, DMA_BIDIRECTIONAL);
+
+ if (mapping == 0) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1d, "Failed to map page");
+ goto out;
+ }
+
+ cmd->SCp.phase = TW_PHASE_SINGLE;
+ cmd->SCp.have_data_in = mapping;
+ retval = mapping;
+out:
+ return retval;
+} /* End twa_map_scsi_single_data() */
+
+/* This function will poll for a request's response interrupt */
+static int twa_poll_response(TW_Device_Extension *tw_dev, int request_id, int seconds)
+{
+ int retval = 1, found = 0, response_request_id;
+ TW_Response_Queue response_queue;
+ TW_Command_Full *full_command_packet = tw_dev->command_packet_virt[request_id];
+
+ if (twa_poll_status_gone(tw_dev, TW_STATUS_RESPONSE_QUEUE_EMPTY, seconds) == 0) {
+ response_queue.value = readl(TW_RESPONSE_QUEUE_REG_ADDR(tw_dev));
+ response_request_id = TW_RESID_OUT(response_queue.response_id);
+ if (request_id != response_request_id) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1e, "Found unexpected request id while polling for response");
+ goto out;
+ }
+ if (TW_OP_OUT(full_command_packet->command.newcommand.opcode__reserved) == TW_OP_EXECUTE_SCSI) {
+ if (full_command_packet->command.newcommand.status != 0) {
+ /* bad response */
+ twa_fill_sense(tw_dev, request_id, 0, 0);
+ goto out;
+ }
+ found = 1;
+ } else {
+ if (full_command_packet->command.oldcommand.status != 0) {
+ /* bad response */
+ twa_fill_sense(tw_dev, request_id, 0, 0);
+ goto out;
+ }
+ found = 1;
+ }
+ }
+
+ if (found)
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_response() */
+
+/* This function will poll the status register for a flag */
+static int twa_poll_status(TW_Device_Extension *tw_dev, u32 flag, int seconds)
+{
+ u32 status_reg_value;
+ unsigned long before;
+ int retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ before = jiffies;
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ while ((status_reg_value & flag) != flag) {
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ if (time_after(jiffies, before + HZ * seconds))
+ goto out;
+
+ msleep(50);
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_status() */
+
+/* This function will poll the status register for disappearance of a flag */
+static int twa_poll_status_gone(TW_Device_Extension *tw_dev, u32 flag, int seconds)
+{
+ u32 status_reg_value;
+ unsigned long before;
+ int retval = 1;
+
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ before = jiffies;
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ while ((status_reg_value & flag) != 0) {
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
+ if (time_after(jiffies, before + HZ * seconds))
+ goto out;
+
+ msleep(50);
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_poll_status_gone() */
+
+/* This function will attempt to post a command packet to the board */
+static int twa_post_command_packet(TW_Device_Extension *tw_dev, int request_id, char internal)
+{
+ u32 status_reg_value;
+ unsigned long command_que_value;
+ int retval = 1;
+
+ command_que_value = tw_dev->command_packet_phys[request_id];
+ status_reg_value = readl(TW_STATUS_REG_ADDR(tw_dev));
+
+ if (twa_check_bits(status_reg_value))
+ twa_decode_bits(tw_dev, status_reg_value);
+
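+ /* Pend the command if the controller queue is full, or if other
+ commands are already pending, to preserve posting order */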
+ if (((tw_dev->pending_request_count > 0) && (tw_dev->state[request_id] != TW_S_PENDING)) || (status_reg_value & TW_STATUS_COMMAND_QUEUE_FULL)) {
+
+ /* Only pend internal driver commands */
+ if (!internal) {
+ retval = SCSI_MLQUEUE_HOST_BUSY;
+ goto out;
+ }
+
+ /* Couldn't post the command packet, so we do it later */
+ if (tw_dev->state[request_id] != TW_S_PENDING) {
+ tw_dev->state[request_id] = TW_S_PENDING;
+ tw_dev->pending_request_count++;
+ if (tw_dev->pending_request_count > tw_dev->max_pending_request_count) {
+ tw_dev->max_pending_request_count = tw_dev->pending_request_count;
+ }
+ tw_dev->pending_queue[tw_dev->pending_tail] = request_id;
+ tw_dev->pending_tail = (tw_dev->pending_tail + 1) % TW_Q_LENGTH;
+ }
+ TW_UNMASK_COMMAND_INTERRUPT(tw_dev);
+ goto out;
+ } else {
+ /* We successfully posted the command packet */
+#if BITS_PER_LONG > 32
+ writeq(TW_COMMAND_OFFSET + command_que_value, TW_COMMAND_QUEUE_REG_ADDR(tw_dev));
+#else
+ writel(TW_COMMAND_OFFSET + command_que_value, TW_COMMAND_QUEUE_REG_ADDR(tw_dev));
+#endif
+ tw_dev->state[request_id] = TW_S_POSTED;
+ tw_dev->posted_request_count++;
+ if (tw_dev->posted_request_count > tw_dev->max_posted_request_count) {
+ tw_dev->max_posted_request_count = tw_dev->posted_request_count;
+ }
+ }
+ retval = 0;
+out:
+ return retval;
+} /* End twa_post_command_packet() */
+
+/* This function will reset a device extension */
+static int twa_reset_device_extension(TW_Device_Extension *tw_dev)
+{
+ int i = 0;
+ int retval = 1;
+
+ /* Abort all requests that are in progress */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ if ((tw_dev->state[i] != TW_S_FINISHED) &&
+ (tw_dev->state[i] != TW_S_INITIAL) &&
+ (tw_dev->state[i] != TW_S_COMPLETED)) {
+ if (tw_dev->srb[i]) {
+ tw_dev->srb[i]->result = (DID_RESET << 16);
+ tw_dev->srb[i]->scsi_done(tw_dev->srb[i]);
+ twa_unmap_scsi_data(tw_dev, i);
+ }
+ }
+ }
+
+ /* Reset queues and counts */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ tw_dev->free_queue[i] = i;
+ tw_dev->state[i] = TW_S_INITIAL;
+ }
+ tw_dev->free_head = TW_Q_START;
+ tw_dev->free_tail = TW_Q_START;
+ tw_dev->posted_request_count = 0;
+ tw_dev->pending_request_count = 0;
+ tw_dev->pending_head = TW_Q_START;
+ tw_dev->pending_tail = TW_Q_START;
+ tw_dev->reset_print = 0;
+ tw_dev->chrdev_request_id = TW_IOCTL_CHRDEV_FREE;
+ tw_dev->flags = 0;
+
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ if (twa_reset_sequence(tw_dev, 1))
+ goto out;
+
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+
+ retval = 0;
+out:
+ return retval;
+} /* End twa_reset_device_extension() */
+
+/* This function will reset a controller */
+static int twa_reset_sequence(TW_Device_Extension *tw_dev, int soft_reset)
+{
+ int tries = 0, retval = 1, flashed = 0, do_soft_reset = soft_reset;
+
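+ /* Retry the full sequence up to TW_MAX_RESET_TRIES, escalating to a
+ soft reset after any failed step */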
+ while (tries < TW_MAX_RESET_TRIES) {
+ if (do_soft_reset)
+ TW_SOFT_RESET(tw_dev);
+
+ /* Make sure controller is in a good state */
+ if (twa_poll_status(tw_dev, TW_STATUS_MICROCONTROLLER_READY | (do_soft_reset == 1 ? TW_STATUS_ATTENTION_INTERRUPT : 0), 30)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x1f, "Microcontroller not ready during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ /* Empty response queue */
+ if (twa_empty_response_queue(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x20, "Response queue empty failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ flashed = 0;
+
+ /* Check for compatibility/flash */
+ if (twa_check_srl(tw_dev, &flashed)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x21, "Compatibility check failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ } else {
+ if (flashed) {
+ tries++;
+ continue;
+ }
+ }
+
+ /* Drain the AEN queue */
+ if (twa_aen_drain_queue(tw_dev, soft_reset)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x22, "AEN drain failed during reset sequence");
+ do_soft_reset = 1;
+ tries++;
+ continue;
+ }
+
+ /* If we got here, controller is in a good state */
+ retval = 0;
+ goto out;
+ }
+out:
+ return retval;
+} /* End twa_reset_sequence() */
+
+/* This function returns unit geometry in cylinders/heads/sectors */
+static int twa_scsi_biosparam(struct scsi_device *sdev, struct block_device *bdev, sector_t capacity, int geom[])
+{
+ int heads, sectors, cylinders;
+ TW_Device_Extension *tw_dev;
+
+ tw_dev = (TW_Device_Extension *)sdev->host->hostdata;
+
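+ /* 0x200000 sectors is 1GB at 512 bytes/sector; larger units get the
+ extended 255/63 translation, smaller ones 64/32 */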
+ if (capacity >= 0x200000) {
+ heads = 255;
+ sectors = 63;
+ } else {
+ heads = 64;
+ sectors = 32;
+ }
+
+ /* sector_div() divides its first argument in place and returns the
+ remainder, so the quotient left in capacity is the cylinder count */
+ sector_div(capacity, heads * sectors);
+ cylinders = capacity;
+
+ geom[0] = heads;
+ geom[1] = sectors;
+ geom[2] = cylinders;
+
+ return 0;
+} /* End twa_scsi_biosparam() */
+
+/* This is the new scsi eh abort function */
+static int twa_scsi_eh_abort(struct scsi_cmnd *SCpnt)
+{
+ int i;
+ TW_Device_Extension *tw_dev = NULL;
+ int retval = FAILED;
+
+ tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ spin_unlock_irq(tw_dev->host->host_lock);
+
+ tw_dev->num_aborts++;
+
+ /* If we find any I/Os in progress, we have to reset the card */
+ for (i = 0; i < TW_Q_LENGTH; i++) {
+ if ((tw_dev->state[i] != TW_S_FINISHED) && (tw_dev->state[i] != TW_S_INITIAL)) {
+ printk(KERN_WARNING "3w-9xxx: scsi%d: WARNING: (0x%02X:0x%04X): Unit #%d: Command (0x%x) timed out, resetting card.\n",
+ tw_dev->host->host_no, TW_DRIVER, 0x2c,
+ SCpnt->device->id, SCpnt->cmnd[0]);
+ if (twa_reset_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2a, "Controller reset failed during scsi abort");
+ goto out;
+ }
+ break;
+ }
+ }
+ retval = SUCCESS;
+out:
+ spin_lock_irq(tw_dev->host->host_lock);
+ return retval;
+} /* End twa_scsi_eh_abort() */
+
+/* This is the new scsi eh reset function */
+static int twa_scsi_eh_reset(struct scsi_cmnd *SCpnt)
+{
+ TW_Device_Extension *tw_dev = NULL;
+ int retval = FAILED;
+
+ tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ spin_unlock_irq(tw_dev->host->host_lock);
+
+ tw_dev->num_resets++;
+
+ printk(KERN_WARNING "3w-9xxx: scsi%d: SCSI host reset started.\n", tw_dev->host->host_no);
+
+ /* Now reset the card and some of the device extension data */
+ if (twa_reset_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2b, "Controller reset failed during scsi host reset");
+ goto out;
+ }
+ printk(KERN_WARNING "3w-9xxx: scsi%d: SCSI host reset succeeded.\n", tw_dev->host->host_no);
+ retval = SUCCESS;
+out:
+ spin_lock_irq(tw_dev->host->host_lock);
+ return retval;
+} /* End twa_scsi_eh_reset() */
+
+/* This is the main scsi queue function to handle scsi opcodes */
+static int twa_scsi_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+{
+ int request_id, retval;
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)SCpnt->device->host->hostdata;
+
+ /* Save done function into scsi_cmnd struct */
+ SCpnt->scsi_done = done;
+
+ /* Get a free request id */
+ twa_get_request_id(tw_dev, &request_id);
+
+ /* Save the scsi command for use by the ISR */
+ tw_dev->srb[request_id] = SCpnt;
+
+ /* Initialize phase to zero */
+ SCpnt->SCp.phase = TW_PHASE_INITIAL;
+
+ retval = twa_scsiop_execute_scsi(tw_dev, request_id, NULL, 0, NULL);
+ switch (retval) {
+ case SCSI_MLQUEUE_HOST_BUSY:
+ twa_free_request_id(tw_dev, request_id);
+ break;
+ case 1:
+ tw_dev->state[request_id] = TW_S_COMPLETED;
+ twa_free_request_id(tw_dev, request_id);
+ SCpnt->result = (DID_ERROR << 16);
+ done(SCpnt);
+ retval = 0;
+ }
+
+ return retval;
+} /* End twa_scsi_queue() */
+
+/* This function hands scsi cdb's to the firmware */
+static int twa_scsiop_execute_scsi(TW_Device_Extension *tw_dev, int request_id, char *cdb, int use_sg, TW_SG_Apache *sglistarg)
+{
+ TW_Command_Full *full_command_packet;
+ TW_Command_Apache *command_packet;
+ u32 num_sectors = 0x0;
+ int i, sg_count;
+ struct scsi_cmnd *srb = NULL;
+ struct scatterlist *sglist = NULL;
+ u32 buffaddr = 0x0;
+ int retval = 1;
+
+ if (tw_dev->srb[request_id]) {
+ if (tw_dev->srb[request_id]->request_buffer) {
+ sglist = (struct scatterlist *)tw_dev->srb[request_id]->request_buffer;
+ }
+ srb = tw_dev->srb[request_id];
+ }
+
+ /* Initialize command packet */
+ full_command_packet = tw_dev->command_packet_virt[request_id];
+ full_command_packet->header.header_desc.size_header = 128;
+ full_command_packet->header.status_block.error = 0;
+ full_command_packet->header.status_block.severity__reserved = 0;
+
+ command_packet = &full_command_packet->command.newcommand;
+ command_packet->status = 0;
+ command_packet->opcode__reserved = TW_OPRES_IN(0, TW_OP_EXECUTE_SCSI);
+
+ /* We forced 16-byte cdb use earlier */
+ if (!cdb)
+ memcpy(command_packet->cdb, srb->cmnd, TW_MAX_CDB_LEN);
+ else
+ memcpy(command_packet->cdb, cdb, TW_MAX_CDB_LEN);
+
+ if (srb)
+ command_packet->unit = srb->device->id;
+ else
+ command_packet->unit = 0;
+
+ command_packet->request_id = request_id;
+ command_packet->sgl_offset = 16;
+
+ if (!sglistarg) {
+ /* Map sglist from scsi layer to cmd packet */
+ if (tw_dev->srb[request_id]->use_sg == 0) {
+ if (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH) {
+ command_packet->sg_list[0].address = tw_dev->generic_buffer_phys[request_id];
+ command_packet->sg_list[0].length = TW_MIN_SGL_LENGTH;
+ } else {
+ buffaddr = twa_map_scsi_single_data(tw_dev, request_id);
+ if (buffaddr == 0)
+ goto out;
+
+ command_packet->sg_list[0].address = buffaddr;
+ command_packet->sg_list[0].length = tw_dev->srb[request_id]->request_bufflen;
+ }
+ command_packet->sgl_entries = 1;
+
+ if (command_packet->sg_list[0].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2d, "Found unaligned address during execute scsi");
+ goto out;
+ }
+ }
+
+ if (tw_dev->srb[request_id]->use_sg > 0) {
+ sg_count = twa_map_scsi_sg_data(tw_dev, request_id);
+ if (sg_count == 0)
+ goto out;
+
+ for (i = 0; i < sg_count; i++) {
+ command_packet->sg_list[i].address = sg_dma_address(&sglist[i]);
+ command_packet->sg_list[i].length = sg_dma_len(&sglist[i]);
+ if (command_packet->sg_list[i].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2e, "Found unaligned sgl address during execute scsi");
+ goto out;
+ }
+ }
+ command_packet->sgl_entries = tw_dev->srb[request_id]->use_sg;
+ }
+ } else {
+ /* Internal cdb post */
+ for (i = 0; i < use_sg; i++) {
+ command_packet->sg_list[i].address = sglistarg[i].address;
+ command_packet->sg_list[i].length = sglistarg[i].length;
+ if (command_packet->sg_list[i].address & TW_ALIGNMENT_9000_SGL) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x2f, "Found unaligned sgl address during internal post");
+ goto out;
+ }
+ }
+ command_packet->sgl_entries = use_sg;
+ }
+
+ if (srb) {
+ if (srb->cmnd[0] == READ_6 || srb->cmnd[0] == WRITE_6)
+ num_sectors = (u32)srb->cmnd[4];
+
+ if (srb->cmnd[0] == READ_10 || srb->cmnd[0] == WRITE_10)
+ num_sectors = (u32)srb->cmnd[8] | ((u32)srb->cmnd[7] << 8);
+ }
+
+ /* Update sector statistic */
+ tw_dev->sector_count = num_sectors;
+ if (tw_dev->sector_count > tw_dev->max_sector_count)
+ tw_dev->max_sector_count = tw_dev->sector_count;
+
+ /* Update SG statistics */
+ if (srb) {
+ tw_dev->sgl_entries = tw_dev->srb[request_id]->use_sg;
+ if (tw_dev->sgl_entries > tw_dev->max_sgl_entries)
+ tw_dev->max_sgl_entries = tw_dev->sgl_entries;
+ }
+
+ /* Now post the command to the board */
+ if (srb) {
+ retval = twa_post_command_packet(tw_dev, request_id, 0);
+ } else {
+ twa_post_command_packet(tw_dev, request_id, 1);
+ retval = 0;
+ }
+out:
+ return retval;
+} /* End twa_scsiop_execute_scsi() */
+
+/* This function completes an execute scsi operation */
+static void twa_scsiop_execute_scsi_complete(TW_Device_Extension *tw_dev, int request_id)
+{
+ /* Copy the response back from the generic buffer if the request
+ buffer was too small to be used for the transfer directly */
+ if ((tw_dev->srb[request_id]->request_buffer) && (tw_dev->srb[request_id]->request_bufflen < TW_MIN_SGL_LENGTH)) {
+ memcpy(tw_dev->srb[request_id]->request_buffer,
+ tw_dev->generic_buffer_virt[request_id],
+ tw_dev->srb[request_id]->request_bufflen);
+ }
+} /* End twa_scsiop_execute_scsi_complete() */
+
+/* This function tells the controller to shut down */
+static void __twa_shutdown(TW_Device_Extension *tw_dev)
+{
+ /* Disable interrupts */
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ printk(KERN_WARNING "3w-9xxx: Shutting down host %d.\n", tw_dev->host->host_no);
+
+ /* Tell the card we are shutting down */
+ if (twa_initconnection(tw_dev, 1, 0, 0, 0, 0, 0, NULL, NULL, NULL, NULL, NULL)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x31, "Connection shutdown failed");
+ } else {
+ printk(KERN_WARNING "3w-9xxx: Shutdown complete.\n");
+ }
+
+ /* Clear all interrupts just before exit */
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+} /* End __twa_shutdown() */
+
+/* Wrapper for __twa_shutdown */
+static void twa_shutdown(struct device *dev)
+{
+ struct Scsi_Host *host = pci_get_drvdata(to_pci_dev(dev));
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ __twa_shutdown(tw_dev);
+} /* End twa_shutdown() */
+
+/* This function will look up a string */
+static char *twa_string_lookup(twa_message_type *table, unsigned int code)
+{
+ int index;
+
+ for (index = 0; ((code != table[index].code) &&
+ (table[index].text != (char *)0)); index++);
+ return(table[index].text);
+} /* End twa_string_lookup() */
+
+/* This function will perform a pci-dma unmap */
+static void twa_unmap_scsi_data(TW_Device_Extension *tw_dev, int request_id)
+{
+ struct scsi_cmnd *cmd = tw_dev->srb[request_id];
+ struct pci_dev *pdev = tw_dev->tw_pci_dev;
+
+ switch(cmd->SCp.phase) {
+ case TW_PHASE_SINGLE:
+ pci_unmap_single(pdev, cmd->SCp.have_data_in, cmd->request_bufflen, DMA_BIDIRECTIONAL);
+ break;
+ case TW_PHASE_SGLIST:
+ pci_unmap_sg(pdev, cmd->request_buffer, cmd->use_sg, DMA_BIDIRECTIONAL);
+ break;
+ }
+} /* End twa_unmap_scsi_data() */
+
+/* scsi_host_template initializer */
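+/* Note: can_queue below is TW_Q_LENGTH-2, which appears to hold two
+   request ids back for the driver's own commands (e.g. AEN retrieval
+   and character-device ioctls) rather than exposing the full queue to
+   the SCSI midlayer. */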
+static struct scsi_host_template driver_template = {
+ .module = THIS_MODULE,
+ .name = "3ware 9000 Storage Controller",
+ .queuecommand = twa_scsi_queue,
+ .eh_abort_handler = twa_scsi_eh_abort,
+ .eh_host_reset_handler = twa_scsi_eh_reset,
+ .bios_param = twa_scsi_biosparam,
+ .can_queue = TW_Q_LENGTH-2,
+ .this_id = -1,
+ .sg_tablesize = TW_APACHE_MAX_SGL_LENGTH,
+ .max_sectors = TW_MAX_SECTORS,
+ .cmd_per_lun = TW_MAX_CMDS_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = twa_host_attrs,
+ .sdev_attrs = twa_dev_attrs,
+ .emulated = 1
+};
+
+/* This function will probe and initialize a card */
+static int __devinit twa_probe(struct pci_dev *pdev, const struct pci_device_id *dev_id)
+{
+ struct Scsi_Host *host = NULL;
+ TW_Device_Extension *tw_dev;
+ u32 mem_addr;
+ int retval = -ENODEV;
+
+ retval = pci_enable_device(pdev);
+ if (retval) {
+ TW_PRINTK(host, TW_DRIVER, 0x34, "Failed to enable pci device");
+ goto out_disable_device;
+ }
+
+ pci_set_master(pdev);
+
+ retval = pci_set_dma_mask(pdev, TW_DMA_MASK);
+ if (retval) {
+ TW_PRINTK(host, TW_DRIVER, 0x23, "Failed to set dma mask");
+ goto out_disable_device;
+ }
+
+ host = scsi_host_alloc(&driver_template, sizeof(TW_Device_Extension));
+ if (!host) {
+ TW_PRINTK(host, TW_DRIVER, 0x24, "Failed to allocate memory for device extension");
+ retval = -ENOMEM;
+ goto out_disable_device;
+ }
+ tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ memset(tw_dev, 0, sizeof(TW_Device_Extension));
+
+ /* Save values to device extension */
+ tw_dev->host = host;
+ tw_dev->tw_pci_dev = pdev;
+
+ if (twa_initialize_device_extension(tw_dev)) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x25, "Failed to initialize device extension");
+ goto out_free_device_extension;
+ }
+
+ /* Request IO regions */
+ retval = pci_request_regions(pdev, "3w-9xxx");
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x26, "Failed to get mem region");
+ goto out_free_device_extension;
+ }
+
+ mem_addr = pci_resource_start(pdev, 1);
+
+	/* Save base address; the register offsets used by this driver all fall within the first page, so only PAGE_SIZE is mapped */
+ tw_dev->base_addr = ioremap(mem_addr, PAGE_SIZE);
+ if (!tw_dev->base_addr) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x35, "Failed to ioremap");
+ goto out_release_mem_region;
+ }
+
+ /* Disable interrupts on the card */
+ TW_DISABLE_INTERRUPTS(tw_dev);
+
+ /* Initialize the card */
+ if (twa_reset_sequence(tw_dev, 0))
+ goto out_release_mem_region;
+
+ /* Set host specific parameters */
+ host->max_id = TW_MAX_UNITS;
+ host->max_cmd_len = TW_MAX_CDB_LEN;
+
+	/* LUNs and channels aren't supported by the adapter */
+ host->max_lun = 0;
+ host->max_channel = 0;
+
+ /* Register the card with the kernel SCSI layer */
+ retval = scsi_add_host(host, &pdev->dev);
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x27, "scsi add host failed");
+ goto out_release_mem_region;
+ }
+
+ pci_set_drvdata(pdev, host);
+
+ printk(KERN_WARNING "3w-9xxx: scsi%d: Found a 3ware 9000 Storage Controller at 0x%x, IRQ: %d.\n",
+ host->host_no, mem_addr, pdev->irq);
+ printk(KERN_WARNING "3w-9xxx: scsi%d: Firmware %s, BIOS %s, Ports: %d.\n",
+ host->host_no,
+ (char *)twa_get_param(tw_dev, 0, TW_VERSION_TABLE,
+ TW_PARAM_FWVER, TW_PARAM_FWVER_LENGTH),
+ (char *)twa_get_param(tw_dev, 1, TW_VERSION_TABLE,
+ TW_PARAM_BIOSVER, TW_PARAM_BIOSVER_LENGTH),
+ *(int *)twa_get_param(tw_dev, 2, TW_INFORMATION_TABLE,
+ TW_PARAM_PORTCOUNT, TW_PARAM_PORTCOUNT_LENGTH));
+
+ /* Now setup the interrupt handler */
+ retval = request_irq(pdev->irq, twa_interrupt, SA_SHIRQ, "3w-9xxx", tw_dev);
+ if (retval) {
+ TW_PRINTK(tw_dev->host, TW_DRIVER, 0x30, "Error requesting IRQ");
+ goto out_remove_host;
+ }
+
+ twa_device_extension_list[twa_device_extension_count] = tw_dev;
+ twa_device_extension_count++;
+
+ /* Re-enable interrupts on the card */
+ TW_ENABLE_AND_CLEAR_INTERRUPTS(tw_dev);
+
+ /* Finally, scan the host */
+ scsi_scan_host(host);
+
+ if (twa_major == -1) {
+ if ((twa_major = register_chrdev (0, "twa", &twa_fops)) < 0)
+ TW_PRINTK(host, TW_DRIVER, 0x29, "Failed to register character device");
+ }
+ return 0;
+
+out_remove_host:
+ scsi_remove_host(host);
+out_release_mem_region:
+ pci_release_regions(pdev);
+out_free_device_extension:
+ twa_free_device_extension(tw_dev);
+ scsi_host_put(host);
+out_disable_device:
+ pci_disable_device(pdev);
+
+ return retval;
+} /* End twa_probe() */
+
+/* This function is called to remove a device */
+static void twa_remove(struct pci_dev *pdev)
+{
+ struct Scsi_Host *host = pci_get_drvdata(pdev);
+ TW_Device_Extension *tw_dev = (TW_Device_Extension *)host->hostdata;
+
+ scsi_remove_host(tw_dev->host);
+
+ __twa_shutdown(tw_dev);
+
+ /* Free up the IRQ */
+ free_irq(tw_dev->tw_pci_dev->irq, tw_dev);
+
+ /* Free up the mem region */
+ pci_release_regions(pdev);
+
+ /* Free up device extension resources */
+ twa_free_device_extension(tw_dev);
+
+ /* Unregister character device */
+ if (twa_major >= 0) {
+ unregister_chrdev(twa_major, "twa");
+ twa_major = -1;
+ }
+
+ scsi_host_put(tw_dev->host);
+ pci_disable_device(pdev);
+ twa_device_extension_count--;
+} /* End twa_remove() */
+
+/* PCI Devices supported by this driver */
+static struct pci_device_id twa_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_3WARE, PCI_DEVICE_ID_3WARE_9000,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0},
+ { }
+};
+MODULE_DEVICE_TABLE(pci, twa_pci_tbl);
+
+/* pci_driver initializer */
+static struct pci_driver twa_driver = {
+ .name = "3w-9xxx",
+ .id_table = twa_pci_tbl,
+ .probe = twa_probe,
+ .remove = twa_remove,
+ .driver = {
+ .shutdown = twa_shutdown
+ }
+};
+
+/* This function is called on driver initialization */
+static int __init twa_init(void)
+{
+ printk(KERN_WARNING "3ware 9000 Storage Controller device driver for Linux v%s.\n", TWA_DRIVER_VERSION);
+
+ return pci_module_init(&twa_driver);
+} /* End twa_init() */
+
+/* This function is called on driver exit */
+static void __exit twa_exit(void)
+{
+ pci_unregister_driver(&twa_driver);
+} /* End twa_exit() */
+
+module_init(twa_init);
+module_exit(twa_exit);
+
--- /dev/null
+/*
+ 3w-9xxx.h -- 3ware 9000 Storage Controller device driver for Linux.
+
+ Written By: Adam Radford <linuxraid@amcc.com>
+
+ Copyright (C) 2004 Applied Micro Circuits Corporation.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; version 2 of the License.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ NO WARRANTY
+ THE PROGRAM IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR
+ CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT
+ LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Each Recipient is
+ solely responsible for determining the appropriateness of using and
+ distributing the Program and assumes all risks associated with its
+ exercise of rights under this Agreement, including but not limited to
+ the risks and costs of program errors, damage to or loss of data,
+ programs or equipment, and unavailability or interruption of operations.
+
+ DISCLAIMER OF LIABILITY
+ NEITHER RECIPIENT NOR ANY CONTRIBUTORS SHALL HAVE ANY LIABILITY FOR ANY
+ DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND
+ ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR
+ TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE
+ USE OR DISTRIBUTION OF THE PROGRAM OR THE EXERCISE OF ANY RIGHTS GRANTED
+ HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+ Bugs/Comments/Suggestions should be mailed to:
+ linuxraid@amcc.com
+
+ For more information, go to:
+ http://www.amcc.com
+*/
+
+#ifndef _3W_9XXX_H
+#define _3W_9XXX_H
+
+/* AEN string type */
+typedef struct TAG_twa_message_type {
+ unsigned int code;
+ char* text;
+} twa_message_type;
+
+/* AEN strings */
+static twa_message_type twa_aen_table[] = {
+ {0x0000, "AEN queue empty"},
+ {0x0001, "Controller reset occurred"},
+ {0x0002, "Degraded unit detected"},
+ {0x0003, "Controller error occurred"},
+ {0x0004, "Background rebuild failed"},
+ {0x0005, "Background rebuild done"},
+ {0x0006, "Incomplete unit detected"},
+ {0x0007, "Background initialize done"},
+ {0x0008, "Unclean shutdown detected"},
+ {0x0009, "Drive timeout detected"},
+ {0x000A, "Drive error detected"},
+ {0x000B, "Rebuild started"},
+ {0x000C, "Background initialize started"},
+ {0x000D, "Entire logical unit was deleted"},
+ {0x000E, "Background initialize failed"},
+ {0x000F, "SMART attribute exceeded threshold"},
+ {0x0010, "Power supply reported AC under range"},
+ {0x0011, "Power supply reported DC out of range"},
+ {0x0012, "Power supply reported a malfunction"},
+ {0x0013, "Power supply predicted malfunction"},
+ {0x0014, "Battery charge is below threshold"},
+ {0x0015, "Fan speed is below threshold"},
+ {0x0016, "Temperature sensor is above threshold"},
+ {0x0017, "Power supply was removed"},
+ {0x0018, "Power supply was inserted"},
+ {0x0019, "Drive was removed from a bay"},
+ {0x001A, "Drive was inserted into a bay"},
+ {0x001B, "Drive bay cover door was opened"},
+ {0x001C, "Drive bay cover door was closed"},
+ {0x001D, "Product case was opened"},
+ {0x0020, "Prepare for shutdown (power-off)"},
+ {0x0021, "Downgrade UDMA mode to lower speed"},
+ {0x0022, "Upgrade UDMA mode to higher speed"},
+ {0x0023, "Sector repair completed"},
+ {0x0024, "Sbuf memory test failed"},
+ {0x0025, "Error flushing cached write data to array"},
+ {0x0026, "Drive reported data ECC error"},
+ {0x0027, "DCB has checksum error"},
+ {0x0028, "DCB version is unsupported"},
+ {0x0029, "Background verify started"},
+ {0x002A, "Background verify failed"},
+ {0x002B, "Background verify done"},
+ {0x002C, "Bad sector overwritten during rebuild"},
+ {0x002D, "Background rebuild error on source drive"},
+ {0x002E, "Replace failed because replacement drive too small"},
+ {0x002F, "Verify failed because array was never initialized"},
+ {0x0030, "Unsupported ATA drive"},
+ {0x0031, "Synchronize host/controller time"},
+ {0x0032, "Spare capacity is inadequate for some units"},
+ {0x0033, "Background migration started"},
+ {0x0034, "Background migration failed"},
+ {0x0035, "Background migration done"},
+ {0x0036, "Verify detected and fixed data/parity mismatch"},
+ {0x0037, "SO-DIMM incompatible"},
+ {0x0038, "SO-DIMM not detected"},
+ {0x0039, "Corrected Sbuf ECC error"},
+ {0x003A, "Drive power on reset detected"},
+ {0x003B, "Background rebuild paused"},
+ {0x003C, "Background initialize paused"},
+ {0x003D, "Background verify paused"},
+ {0x003E, "Background migration paused"},
+ {0x003F, "Corrupt flash file system detected"},
+ {0x0040, "Flash file system repaired"},
+ {0x0041, "Unit number assignments were lost"},
+ {0x0042, "Error during read of primary DCB"},
+ {0x0043, "Latent error found in backup DCB"},
+ {0x00FC, "Recovered/finished array membership update"},
+ {0x00FD, "Handler lockup"},
+ {0x00FE, "Retrying PCI transfer"},
+ {0x00FF, "AEN queue is full"},
+ {0xFFFFFFFF, (char*) 0}
+};
+
+/* AEN severity table */
+static char *twa_aen_severity_table[] =
+{
+ "None", "ERROR", "WARNING", "INFO", "DEBUG", (char*) 0
+};
+
+/* Error strings */
+static twa_message_type twa_error_table[] = {
+ {0x0100, "SGL entry contains zero data"},
+ {0x0101, "Invalid command opcode"},
+ {0x0102, "SGL entry has unaligned address"},
+ {0x0103, "SGL size does not match command"},
+ {0x0104, "SGL entry has illegal length"},
+ {0x0105, "Command packet is not aligned"},
+ {0x0106, "Invalid request ID"},
+ {0x0107, "Duplicate request ID"},
+ {0x0108, "ID not locked"},
+ {0x0109, "LBA out of range"},
+ {0x010A, "Logical unit not supported"},
+ {0x010B, "Parameter table does not exist"},
+ {0x010C, "Parameter index does not exist"},
+ {0x010D, "Invalid field in CDB"},
+ {0x010E, "Specified port has invalid drive"},
+ {0x010F, "Parameter item size mismatch"},
+ {0x0110, "Failed memory allocation"},
+ {0x0111, "Memory request too large"},
+ {0x0112, "Out of memory segments"},
+ {0x0113, "Invalid address to deallocate"},
+ {0x0114, "Out of memory"},
+ {0x0115, "Out of heap"},
+ {0x0120, "Double degrade"},
+ {0x0121, "Drive not degraded"},
+ {0x0122, "Reconstruct error"},
+ {0x0123, "Replace not accepted"},
+ {0x0124, "Replace drive capacity too small"},
+ {0x0125, "Sector count not allowed"},
+ {0x0126, "No spares left"},
+ {0x0127, "Reconstruct error"},
+ {0x0128, "Unit is offline"},
+ {0x0129, "Cannot update status to DCB"},
+ {0x0130, "Invalid stripe handle"},
+ {0x0131, "Handle that was not locked"},
+ {0x0132, "Handle that was not empty"},
+ {0x0133, "Handle has different owner"},
+ {0x0140, "IPR has parent"},
+ {0x0150, "Illegal Pbuf address alignment"},
+ {0x0151, "Illegal Pbuf transfer length"},
+ {0x0152, "Illegal Sbuf address alignment"},
+ {0x0153, "Illegal Sbuf transfer length"},
+ {0x0160, "Command packet too large"},
+ {0x0161, "SGL exceeds maximum length"},
+ {0x0162, "SGL has too many entries"},
+ {0x0170, "Insufficient resources for rebuilder"},
+ {0x0171, "Verify error (data != parity)"},
+ {0x0180, "Requested segment not in directory of this DCB"},
+ {0x0181, "DCB segment has unsupported version"},
+ {0x0182, "DCB segment has checksum error"},
+ {0x0183, "DCB support (settings) segment invalid"},
+ {0x0184, "DCB UDB (unit descriptor block) segment invalid"},
+ {0x0185, "DCB GUID (globally unique identifier) segment invalid"},
+ {0x01A0, "Could not clear Sbuf"},
+ {0x01C0, "Flash identify failed"},
+ {0x01C1, "Flash out of bounds"},
+ {0x01C2, "Flash verify error"},
+ {0x01C3, "Flash file object not found"},
+ {0x01C4, "Flash file already present"},
+ {0x01C5, "Flash file system full"},
+ {0x01C6, "Flash file not present"},
+ {0x01C7, "Flash file size error"},
+ {0x01C8, "Bad flash file checksum"},
+ {0x01CA, "Corrupt flash file system detected"},
+ {0x01D0, "Invalid field in parameter list"},
+ {0x01D1, "Parameter list length error"},
+ {0x01D2, "Parameter item is not changeable"},
+ {0x01D3, "Parameter item is not saveable"},
+ {0x0200, "UDMA CRC error"},
+ {0x0201, "Internal CRC error"},
+ {0x0202, "Data ECC error"},
+ {0x0203, "ADP level 1 error"},
+ {0x0204, "Port timeout"},
+ {0x0205, "Drive power on reset"},
+ {0x0206, "ADP level 2 error"},
+ {0x0207, "Soft reset failed"},
+ {0x0208, "Drive not ready"},
+ {0x0209, "Unclassified port error"},
+ {0x020A, "Drive aborted command"},
+ {0x0210, "Internal CRC error"},
+ {0x0211, "PCI abort error"},
+ {0x0212, "PCI parity error"},
+ {0x0213, "Port handler error"},
+ {0x0214, "Token interrupt count error"},
+ {0x0215, "Timeout waiting for PCI transfer"},
+ {0x0216, "Corrected buffer ECC"},
+ {0x0217, "Uncorrected buffer ECC"},
+ {0x0230, "Unsupported command during flash recovery"},
+ {0x0231, "Next image buffer expected"},
+ {0x0232, "Binary image architecture incompatible"},
+ {0x0233, "Binary image has no signature"},
+ {0x0234, "Binary image has bad checksum"},
+ {0x0235, "Image downloaded overflowed buffer"},
+ {0x0240, "I2C device not found"},
+ {0x0241, "I2C transaction aborted"},
+ {0x0242, "SO-DIMM parameter(s) incompatible using defaults"},
+ {0x0243, "SO-DIMM unsupported"},
+ {0x0248, "SPI transfer status error"},
+ {0x0249, "SPI transfer timeout error"},
+ {0x0250, "Invalid unit descriptor size in CreateUnit"},
+ {0x0251, "Unit descriptor size exceeds data buffer in CreateUnit"},
+ {0x0252, "Invalid value in CreateUnit descriptor"},
+ {0x0253, "Inadequate disk space to support descriptor in CreateUnit"},
+ {0x0254, "Unable to create data channel for this unit descriptor"},
+ {0x0255, "CreateUnit descriptor specifies a drive already in use"},
+ {0x0256, "Unable to write configuration to all disks during CreateUnit"},
+ {0x0257, "CreateUnit does not support this descriptor version"},
+ {0x0258, "Invalid subunit for RAID 0 or 5 in CreateUnit"},
+ {0x0259, "Too many descriptors in CreateUnit"},
+ {0x025A, "Invalid configuration specified in CreateUnit descriptor"},
+ {0x025B, "Invalid LBA offset specified in CreateUnit descriptor"},
+ {0x025C, "Invalid stripelet size specified in CreateUnit descriptor"},
+ {0x0260, "SMART attribute exceeded threshold"},
+ {0xFFFFFFFF, (char*) 0}
+};
+
+/* Control register bit definitions */
+#define TW_CONTROL_CLEAR_HOST_INTERRUPT 0x00080000
+#define TW_CONTROL_CLEAR_ATTENTION_INTERRUPT 0x00040000
+#define TW_CONTROL_MASK_COMMAND_INTERRUPT 0x00020000
+#define TW_CONTROL_MASK_RESPONSE_INTERRUPT 0x00010000
+#define TW_CONTROL_UNMASK_COMMAND_INTERRUPT 0x00008000
+#define TW_CONTROL_UNMASK_RESPONSE_INTERRUPT 0x00004000
+#define TW_CONTROL_CLEAR_ERROR_STATUS 0x00000200
+#define TW_CONTROL_ISSUE_SOFT_RESET 0x00000100
+#define TW_CONTROL_ENABLE_INTERRUPTS 0x00000080
+#define TW_CONTROL_DISABLE_INTERRUPTS 0x00000040
+#define TW_CONTROL_ISSUE_HOST_INTERRUPT 0x00000020
+#define TW_CONTROL_CLEAR_PARITY_ERROR 0x00800000
+#define TW_CONTROL_CLEAR_QUEUE_ERROR 0x00400000
+#define TW_CONTROL_CLEAR_PCI_ABORT 0x00100000
+#define TW_CONTROL_CLEAR_SBUF_WRITE_ERROR 0x00000008
+
+/* Status register bit definitions */
+#define TW_STATUS_MAJOR_VERSION_MASK 0xF0000000
+#define TW_STATUS_MINOR_VERSION_MASK 0x0F000000
+#define TW_STATUS_PCI_PARITY_ERROR 0x00800000
+#define TW_STATUS_QUEUE_ERROR 0x00400000
+#define TW_STATUS_MICROCONTROLLER_ERROR 0x00200000
+#define TW_STATUS_PCI_ABORT 0x00100000
+#define TW_STATUS_HOST_INTERRUPT 0x00080000
+#define TW_STATUS_ATTENTION_INTERRUPT 0x00040000
+#define TW_STATUS_COMMAND_INTERRUPT 0x00020000
+#define TW_STATUS_RESPONSE_INTERRUPT 0x00010000
+#define TW_STATUS_COMMAND_QUEUE_FULL 0x00008000
+#define TW_STATUS_RESPONSE_QUEUE_EMPTY 0x00004000
+#define TW_STATUS_MICROCONTROLLER_READY 0x00002000
+#define TW_STATUS_COMMAND_QUEUE_EMPTY 0x00001000
+#define TW_STATUS_EXPECTED_BITS 0x00002000
+#define TW_STATUS_UNEXPECTED_BITS 0x00F00008
+#define TW_STATUS_SBUF_WRITE_ERROR 0x00000008
+#define TW_STATUS_VALID_INTERRUPT 0x00DF0008
+
+/* Response queue bit definitions */
+#define TW_RESPONSE_ID_MASK 0x00000FF0
+
+/* PCI related defines */
+#define TW_DEVICE_NAME "3w-9xxx"
+#define TW_NUMDEVICES 1
+#define TW_PCI_CLEAR_PARITY_ERRORS 0xc100
+#define TW_PCI_CLEAR_PCI_ABORT 0x2000
+
+/* Command packet opcodes used by the driver */
+#define TW_OP_INIT_CONNECTION 0x1
+#define TW_OP_GET_PARAM 0x12
+#define TW_OP_SET_PARAM 0x13
+#define TW_OP_EXECUTE_SCSI 0x10
+#define TW_OP_DOWNLOAD_FIRMWARE 0x16
+#define TW_OP_RESET 0x1C
+
+/* Asynchronous Event Notification (AEN) codes used by the driver */
+#define TW_AEN_QUEUE_EMPTY 0x0000
+#define TW_AEN_SOFT_RESET 0x0001
+#define TW_AEN_SYNC_TIME_WITH_HOST 0x031
+#define TW_AEN_SEVERITY_ERROR 0x1
+#define TW_AEN_SEVERITY_DEBUG 0x4
+#define TW_AEN_NOT_RETRIEVED 0x1
+#define TW_AEN_RETRIEVED 0x2
+
+/* Command state defines */
+#define TW_S_INITIAL 0x1 /* Initial state */
+#define TW_S_STARTED 0x2 /* Id in use */
+#define TW_S_POSTED 0x4 /* Posted to the controller */
+#define TW_S_PENDING 0x8 /* Waiting to be posted in isr */
+#define TW_S_COMPLETED 0x10 /* Completed by isr */
+#define TW_S_FINISHED 0x20 /* I/O completely done */
+
+/* Compatibility defines */
+#define TW_9000_ARCH_ID 0x5
+#define TW_CURRENT_FW_SRL 24
+#define TW_CURRENT_FW_BUILD 5
+#define TW_CURRENT_FW_BRANCH 1
+
+/* Phase defines */
+#define TW_PHASE_INITIAL 0
+#define TW_PHASE_SINGLE 1
+#define TW_PHASE_SGLIST 2
+
+/* Misc defines */
+#define TW_SECTOR_SIZE 512
+#define TW_ALIGNMENT_9000 4 /* 4 bytes */
+#define TW_ALIGNMENT_9000_SGL 0x3
+#define TW_MAX_UNITS 16
+#define TW_INIT_MESSAGE_CREDITS 0x100
+#define TW_INIT_COMMAND_PACKET_SIZE 0x3
+#define TW_INIT_COMMAND_PACKET_SIZE_EXTENDED 0x6
+#define TW_EXTENDED_INIT_CONNECT 0x2
+#define TW_BUNDLED_FW_SAFE_TO_FLASH 0x4
+#define TW_CTLR_FW_RECOMMENDS_FLASH 0x8
+#define TW_CTLR_FW_COMPATIBLE 0x2
+#define TW_BASE_FW_SRL 0x17
+#define TW_BASE_FW_BRANCH 0
+#define TW_BASE_FW_BUILD 1
+#if BITS_PER_LONG > 32
+#define TW_APACHE_MAX_SGL_LENGTH 72
+#define TW_ESCALADE_MAX_SGL_LENGTH 41
+#define TW_APACHE_CMD_PKT_SIZE 5
+#else
+#define TW_APACHE_MAX_SGL_LENGTH 109
+#define TW_ESCALADE_MAX_SGL_LENGTH 62
+#define TW_APACHE_CMD_PKT_SIZE 4
+#endif
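+/* Note: on 64-bit builds each SG entry carries an 8-byte address (the
+   TW_SG_Entry/TW_SG_Apache structures below use unsigned long), so
+   fewer entries fit in the fixed-size command packet, hence the
+   smaller SGL limits in the 64-bit branch above. */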
+#define TW_ATA_PASS_SGL_MAX 60
+#define TW_Q_LENGTH 256
+#define TW_Q_START 0
+#define TW_MAX_SLOT 32
+#define TW_MAX_RESET_TRIES 2
+#define TW_MAX_CMDS_PER_LUN 254
+#define TW_MAX_RESPONSE_DRAIN 256
+#define TW_MAX_AEN_DRAIN 40
+#define TW_IN_IOCTL 2
+#define TW_IN_CHRDEV_IOCTL 3
+#define TW_IN_ATTENTION_LOOP 4
+#define TW_MAX_SECTORS 256
+#define TW_AEN_WAIT_TIME 1000
+#define TW_IOCTL_WAIT_TIME (1 * HZ) /* 1 second */
+#define TW_MAX_CDB_LEN 16
+#define TW_ISR_DONT_COMPLETE 2
+#define TW_ISR_DONT_RESULT 3
+#define TW_IOCTL_CHRDEV_TIMEOUT 60 /* 60 seconds */
+#define TW_IOCTL_CHRDEV_FREE -1
+#define TW_COMMAND_OFFSET 128 /* 128 bytes */
+#define TW_VERSION_TABLE 0x0402
+#define TW_TIMEKEEP_TABLE 0x040A
+#define TW_INFORMATION_TABLE 0x0403
+#define TW_PARAM_FWVER 3
+#define TW_PARAM_FWVER_LENGTH 16
+#define TW_PARAM_BIOSVER 4
+#define TW_PARAM_BIOSVER_LENGTH 16
+#define TW_PARAM_PORTCOUNT 3
+#define TW_PARAM_PORTCOUNT_LENGTH 1
+#define TW_MIN_SGL_LENGTH 0x200 /* 512 bytes */
+#define TW_MAX_SENSE_LENGTH 256
+#define TW_EVENT_SOURCE_AEN 0x1000
+#define TW_EVENT_SOURCE_COMMAND 0x1001
+#define TW_EVENT_SOURCE_PCHIP 0x1002
+#define TW_EVENT_SOURCE_DRIVER 0x1003
+#define TW_IOCTL_GET_COMPATIBILITY_INFO 0x101
+#define TW_IOCTL_GET_LAST_EVENT 0x102
+#define TW_IOCTL_GET_FIRST_EVENT 0x103
+#define TW_IOCTL_GET_NEXT_EVENT 0x104
+#define TW_IOCTL_GET_PREVIOUS_EVENT 0x105
+#define TW_IOCTL_GET_LOCK 0x106
+#define TW_IOCTL_RELEASE_LOCK 0x107
+#define TW_IOCTL_FIRMWARE_PASS_THROUGH 0x108
+#define TW_IOCTL_ERROR_STATUS_NOT_LOCKED 0x1001 // Not locked
+#define TW_IOCTL_ERROR_STATUS_LOCKED 0x1002 // Already locked
+#define TW_IOCTL_ERROR_STATUS_NO_MORE_EVENTS 0x1003 // No more events
+#define TW_IOCTL_ERROR_STATUS_AEN_CLOBBER 0x1004 // AEN clobber occurred
+#define TW_IOCTL_ERROR_OS_EFAULT -EFAULT // Bad address
+#define TW_IOCTL_ERROR_OS_EINTR -EINTR // Interrupted system call
+#define TW_IOCTL_ERROR_OS_EINVAL -EINVAL // Invalid argument
+#define TW_IOCTL_ERROR_OS_ENOMEM -ENOMEM // Out of memory
+#define TW_IOCTL_ERROR_OS_ERESTARTSYS -ERESTARTSYS // Restart system call
+#define TW_IOCTL_ERROR_OS_EIO -EIO // I/O error
+#define TW_IOCTL_ERROR_OS_ENOTTY -ENOTTY // Not a typewriter
+#define TW_IOCTL_ERROR_OS_ENODEV -ENODEV // No such device
+#define TW_ALLOCATION_LENGTH 128
+#define TW_SENSE_DATA_LENGTH 18
+#define TW_STATUS_CHECK_CONDITION 2
+#define TW_ERROR_LOGICAL_UNIT_NOT_SUPPORTED 0x10a
+#define TW_ERROR_UNIT_OFFLINE 0x128
+#define TW_MESSAGE_SOURCE_CONTROLLER_ERROR 3
+#define TW_MESSAGE_SOURCE_CONTROLLER_EVENT 4
+#define TW_MESSAGE_SOURCE_LINUX_DRIVER 6
+#define TW_DRIVER TW_MESSAGE_SOURCE_LINUX_DRIVER
+#define TW_MESSAGE_SOURCE_LINUX_OS 9
+#define TW_OS TW_MESSAGE_SOURCE_LINUX_OS
+#if BITS_PER_LONG > 32
+#define TW_COMMAND_SIZE 5
+#define TW_DMA_MASK DMA_64BIT_MASK
+#else
+#define TW_COMMAND_SIZE 4
+#define TW_DMA_MASK DMA_32BIT_MASK
+#endif
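+/* TW_DMA_MASK is what twa_probe() passes to pci_set_dma_mask(), so
+   64-bit kernels request full 64-bit DMA addressing while 32-bit
+   kernels stay with 32-bit addressing. */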
+#ifndef PCI_DEVICE_ID_3WARE_9000
+#define PCI_DEVICE_ID_3WARE_9000 0x1002
+#endif
+
+/* Bitmask macros to eliminate bitfields */
+
+/* opcode: 5, reserved: 3 */
+#define TW_OPRES_IN(x,y) ((x << 5) | (y & 0x1f))
+#define TW_OP_OUT(x) (x & 0x1f)
+
+/* opcode: 5, sgloffset: 3 */
+#define TW_OPSGL_IN(x,y) ((x << 5) | (y & 0x1f))
+#define TW_SGL_OUT(x) ((x >> 5) & 0x7)
+
+/* severity: 3, reserved: 5 */
+#define TW_SEV_OUT(x) (x & 0x7)
+
+/* reserved_1: 4, response_id: 8, reserved_2: 20 */
+#define TW_RESID_OUT(x) ((x >> 4) & 0xff)
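+
+/* Example (illustrative): TW_OPRES_IN(0, TW_OP_EXECUTE_SCSI) yields
+   0x10, i.e. the 5-bit opcode in bits 0-4 with a zero reserved field
+   in bits 5-7; TW_OP_OUT() recovers the opcode by masking the low
+   five bits. */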
+
+/* Macros */
+#define TW_CONTROL_REG_ADDR(x) (x->base_addr)
+#define TW_STATUS_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0x4)
+#if BITS_PER_LONG > 32
+#define TW_COMMAND_QUEUE_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0x20)
+#else
+#define TW_COMMAND_QUEUE_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0x8)
+#endif
+#define TW_RESPONSE_QUEUE_REG_ADDR(x) ((unsigned char __iomem *)x->base_addr + 0xC)
+#define TW_CLEAR_ALL_INTERRUPTS(x) (writel(TW_STATUS_VALID_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_CLEAR_ATTENTION_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_ATTENTION_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_CLEAR_HOST_INTERRUPT(x) (writel(TW_CONTROL_CLEAR_HOST_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_DISABLE_INTERRUPTS(x) (writel(TW_CONTROL_DISABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
+#define TW_ENABLE_AND_CLEAR_INTERRUPTS(x) (writel(TW_CONTROL_CLEAR_ATTENTION_INTERRUPT | TW_CONTROL_UNMASK_RESPONSE_INTERRUPT | TW_CONTROL_ENABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
+#define TW_MASK_COMMAND_INTERRUPT(x) (writel(TW_CONTROL_MASK_COMMAND_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_UNMASK_COMMAND_INTERRUPT(x) (writel(TW_CONTROL_UNMASK_COMMAND_INTERRUPT, TW_CONTROL_REG_ADDR(x)))
+#define TW_SOFT_RESET(x) (writel(TW_CONTROL_ISSUE_SOFT_RESET | \
+ TW_CONTROL_CLEAR_HOST_INTERRUPT | \
+ TW_CONTROL_CLEAR_ATTENTION_INTERRUPT | \
+ TW_CONTROL_MASK_COMMAND_INTERRUPT | \
+ TW_CONTROL_MASK_RESPONSE_INTERRUPT | \
+ TW_CONTROL_CLEAR_ERROR_STATUS | \
+ TW_CONTROL_DISABLE_INTERRUPTS, TW_CONTROL_REG_ADDR(x)))
+#define TW_PRINTK(h,a,b,c) { \
+if (h) \
+printk(KERN_WARNING "3w-9xxx: scsi%d: ERROR: (0x%02X:0x%04X): %s.\n",h->host_no,a,b,c); \
+else \
+printk(KERN_WARNING "3w-9xxx: ERROR: (0x%02X:0x%04X): %s.\n",a,b,c); \
+}
+
+#pragma pack(1)
+
+/* Scatter Gather List Entry */
+typedef struct TAG_TW_SG_Entry {
+ unsigned long address;
+ u32 length;
+} TW_SG_Entry;
+
+/* Command Packet */
+typedef struct TW_Command {
+ unsigned char opcode__sgloffset;
+ unsigned char size;
+ unsigned char request_id;
+ unsigned char unit__hostid;
+ /* Second DWORD */
+ unsigned char status;
+ unsigned char flags;
+ union {
+ unsigned short block_count;
+ unsigned short parameter_count;
+ } byte6_offset;
+ union {
+ struct {
+ u32 lba;
+ TW_SG_Entry sgl[TW_ESCALADE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ u32 padding[2]; /* pad to 512 bytes */
+#else
+ u32 padding;
+#endif
+ } io;
+ struct {
+ TW_SG_Entry sgl[TW_ESCALADE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ u32 padding[3];
+#else
+ u32 padding[2];
+#endif
+ } param;
+ } byte8_offset;
+} TW_Command;
+
+/* Scatter gather element for 9000+ controllers */
+typedef struct TAG_TW_SG_Apache {
+ unsigned long address;
+ u32 length;
+} TW_SG_Apache;
+
+/* Command Packet for 9000+ controllers */
+typedef struct TAG_TW_Command_Apache {
+ unsigned char opcode__reserved;
+ unsigned char unit;
+ unsigned short request_id;
+ unsigned char status;
+ unsigned char sgl_offset;
+ unsigned short sgl_entries;
+ unsigned char cdb[16];
+ TW_SG_Apache sg_list[TW_APACHE_MAX_SGL_LENGTH];
+#if BITS_PER_LONG > 32
+ unsigned char padding[8];
+#endif
+} TW_Command_Apache;
+
+/* New command packet header */
+typedef struct TAG_TW_Command_Apache_Header {
+ unsigned char sense_data[TW_SENSE_DATA_LENGTH];
+ struct {
+ char reserved[4];
+ unsigned short error;
+ unsigned char padding;
+ unsigned char severity__reserved;
+ } status_block;
+ unsigned char err_specific_desc[98];
+ struct {
+ unsigned char size_header;
+ unsigned short reserved;
+ unsigned char size_sense;
+ } header_desc;
+} TW_Command_Apache_Header;
+
+/* This struct is a union of the 2 command packets */
+typedef struct TAG_TW_Command_Full {
+ TW_Command_Apache_Header header;
+ union {
+ TW_Command oldcommand;
+ TW_Command_Apache newcommand;
+ } command;
+} TW_Command_Full;
+
+/* Initconnection structure */
+typedef struct TAG_TW_Initconnect {
+ unsigned char opcode__reserved;
+ unsigned char size;
+ unsigned char request_id;
+ unsigned char res2;
+ unsigned char status;
+ unsigned char flags;
+ unsigned short message_credits;
+ u32 features;
+ unsigned short fw_srl;
+ unsigned short fw_arch_id;
+ unsigned short fw_branch;
+ unsigned short fw_build;
+ u32 result;
+} TW_Initconnect;
+
+/* Event info structure */
+typedef struct TAG_TW_Event
+{
+ unsigned int sequence_id;
+ unsigned int time_stamp_sec;
+ unsigned short aen_code;
+ unsigned char severity;
+ unsigned char retrieved;
+ unsigned char repeat_count;
+ unsigned char parameter_len;
+ unsigned char parameter_data[98];
+} TW_Event;
+
+typedef struct TAG_TW_Ioctl_Driver_Command {
+ unsigned int control_code;
+ unsigned int status;
+ unsigned int unique_id;
+ unsigned int sequence_id;
+ unsigned int os_specific;
+ unsigned int buffer_length;
+} TW_Ioctl_Driver_Command;
+
+typedef struct TAG_TW_Ioctl_Apache {
+ TW_Ioctl_Driver_Command driver_command;
+ char padding[488];
+ TW_Command_Full firmware_command;
+ char data_buffer[1];
+} TW_Ioctl_Buf_Apache;
+
+/* Lock structure for ioctl get/release lock */
+typedef struct TAG_TW_Lock {
+ unsigned long timeout_msec;
+ unsigned long time_remaining_msec;
+ unsigned long force_flag;
+} TW_Lock;
+
+/* GetParam descriptor */
+typedef struct {
+ unsigned short table_id;
+ unsigned short parameter_id;
+ unsigned short parameter_size_bytes;
+ unsigned short actual_parameter_size_bytes;
+ unsigned char data[1];
+} TW_Param_Apache, *PTW_Param_Apache;
+
+/* Response queue */
+typedef union TAG_TW_Response_Queue {
+ u32 response_id;
+ u32 value;
+} TW_Response_Queue;
+
+typedef struct TAG_TW_Info {
+ char *buffer;
+ int length;
+ int offset;
+ int position;
+} TW_Info;
+
+/* Compatibility information structure */
+typedef struct TAG_TW_Compatibility_Info
+{
+ char driver_version[32];
+ unsigned short working_srl;
+ unsigned short working_branch;
+ unsigned short working_build;
+} TW_Compatibility_Info;
+
+typedef struct TAG_TW_Device_Extension {
+ u32 __iomem *base_addr;
+ unsigned long *generic_buffer_virt[TW_Q_LENGTH];
+ unsigned long generic_buffer_phys[TW_Q_LENGTH];
+ TW_Command_Full *command_packet_virt[TW_Q_LENGTH];
+ unsigned long command_packet_phys[TW_Q_LENGTH];
+ struct pci_dev *tw_pci_dev;
+ struct scsi_cmnd *srb[TW_Q_LENGTH];
+ unsigned char free_queue[TW_Q_LENGTH];
+ unsigned char free_head;
+ unsigned char free_tail;
+ unsigned char pending_queue[TW_Q_LENGTH];
+ unsigned char pending_head;
+ unsigned char pending_tail;
+ int state[TW_Q_LENGTH];
+ unsigned int posted_request_count;
+ unsigned int max_posted_request_count;
+ unsigned int pending_request_count;
+ unsigned int max_pending_request_count;
+ unsigned int max_sgl_entries;
+ unsigned int sgl_entries;
+ unsigned int num_aborts;
+ unsigned int num_resets;
+ unsigned int sector_count;
+ unsigned int max_sector_count;
+ unsigned int aen_count;
+ struct Scsi_Host *host;
+ long flags;
+ int reset_print;
+ TW_Event *event_queue[TW_Q_LENGTH];
+ unsigned char error_index;
+ unsigned char event_queue_wrapped;
+ unsigned int error_sequence_id;
+ int ioctl_sem_lock;
+ u32 ioctl_msec;
+ int chrdev_request_id;
+ wait_queue_head_t ioctl_wqueue;
+ struct semaphore ioctl_sem;
+ char aen_clobber;
+ unsigned short working_srl;
+ unsigned short working_branch;
+ unsigned short working_build;
+} TW_Device_Extension;
+
+#pragma pack()
+
+#endif /* _3W_9XXX_H */
+
--- /dev/null
+/*
+ * ahci.c - AHCI SATA support
+ *
+ * Copyright 2004 Red Hat, Inc.
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ * Version 1.0 of the AHCI specification:
+ * http://www.intel.com/technology/serialata/pdf/rev1_0.pdf
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+#include <asm/io.h>
+
+#define DRV_NAME "ahci"
+#define DRV_VERSION "1.00"
+
+
+enum {
+ AHCI_PCI_BAR = 5,
+ AHCI_MAX_SG = 168, /* hardware max is 64K */
+ AHCI_DMA_BOUNDARY = 0xffffffff,
+ AHCI_USE_CLUSTERING = 0,
+ AHCI_CMD_SLOT_SZ = 32 * 32,
+ AHCI_RX_FIS_SZ = 256,
+ AHCI_CMD_TBL_HDR = 0x80,
+ AHCI_CMD_TBL_SZ = AHCI_CMD_TBL_HDR + (AHCI_MAX_SG * 16),
+ AHCI_PORT_PRIV_DMA_SZ = AHCI_CMD_SLOT_SZ + AHCI_CMD_TBL_SZ +
+ AHCI_RX_FIS_SZ,
+ AHCI_IRQ_ON_SG = (1 << 31),
+ AHCI_CMD_ATAPI = (1 << 5),
+ AHCI_CMD_WRITE = (1 << 6),
+
+ RX_FIS_D2H_REG = 0x40, /* offset of D2H Register FIS data */
+
+ board_ahci = 0,
+
+ /* global controller registers */
+ HOST_CAP = 0x00, /* host capabilities */
+ HOST_CTL = 0x04, /* global host control */
+ HOST_IRQ_STAT = 0x08, /* interrupt status */
+ HOST_PORTS_IMPL = 0x0c, /* bitmap of implemented ports */
+ HOST_VERSION = 0x10, /* AHCI spec. version compliance */
+
+ /* HOST_CTL bits */
+ HOST_RESET = (1 << 0), /* reset controller; self-clear */
+ HOST_IRQ_EN = (1 << 1), /* global IRQ enable */
+ HOST_AHCI_EN = (1 << 31), /* AHCI enabled */
+
+ /* HOST_CAP bits */
+ HOST_CAP_64 = (1 << 31), /* PCI DAC (64-bit DMA) support */
+
+ /* registers for each SATA port */
+ PORT_LST_ADDR = 0x00, /* command list DMA addr */
+ PORT_LST_ADDR_HI = 0x04, /* command list DMA addr hi */
+ PORT_FIS_ADDR = 0x08, /* FIS rx buf addr */
+ PORT_FIS_ADDR_HI = 0x0c, /* FIS rx buf addr hi */
+ PORT_IRQ_STAT = 0x10, /* interrupt status */
+ PORT_IRQ_MASK = 0x14, /* interrupt enable/disable mask */
+ PORT_CMD = 0x18, /* port command */
+ PORT_TFDATA = 0x20, /* taskfile data */
+ PORT_SIG = 0x24, /* device TF signature */
+ PORT_CMD_ISSUE = 0x38, /* command issue */
+ PORT_SCR = 0x28, /* SATA phy register block */
+ PORT_SCR_STAT = 0x28, /* SATA phy register: SStatus */
+ PORT_SCR_CTL = 0x2c, /* SATA phy register: SControl */
+ PORT_SCR_ERR = 0x30, /* SATA phy register: SError */
+ PORT_SCR_ACT = 0x34, /* SATA phy register: SActive */
+
+ /* PORT_IRQ_{STAT,MASK} bits */
+ PORT_IRQ_COLD_PRES = (1 << 31), /* cold presence detect */
+ PORT_IRQ_TF_ERR = (1 << 30), /* task file error */
+ PORT_IRQ_HBUS_ERR = (1 << 29), /* host bus fatal error */
+ PORT_IRQ_HBUS_DATA_ERR = (1 << 28), /* host bus data error */
+ PORT_IRQ_IF_ERR = (1 << 27), /* interface fatal error */
+ PORT_IRQ_IF_NONFATAL = (1 << 26), /* interface non-fatal error */
+ PORT_IRQ_OVERFLOW = (1 << 24), /* xfer exhausted available S/G */
+ PORT_IRQ_BAD_PMP = (1 << 23), /* incorrect port multiplier */
+
+ PORT_IRQ_PHYRDY = (1 << 22), /* PhyRdy changed */
+ PORT_IRQ_DEV_ILCK = (1 << 7), /* device interlock */
+ PORT_IRQ_CONNECT = (1 << 6), /* port connect change status */
+ PORT_IRQ_SG_DONE = (1 << 5), /* descriptor processed */
+ PORT_IRQ_UNK_FIS = (1 << 4), /* unknown FIS rx'd */
+ PORT_IRQ_SDB_FIS = (1 << 3), /* Set Device Bits FIS rx'd */
+ PORT_IRQ_DMAS_FIS = (1 << 2), /* DMA Setup FIS rx'd */
+ PORT_IRQ_PIOS_FIS = (1 << 1), /* PIO Setup FIS rx'd */
+ PORT_IRQ_D2H_REG_FIS = (1 << 0), /* D2H Register FIS rx'd */
+
+ PORT_IRQ_FATAL = PORT_IRQ_TF_ERR |
+ PORT_IRQ_HBUS_ERR |
+ PORT_IRQ_HBUS_DATA_ERR |
+ PORT_IRQ_IF_ERR,
+ DEF_PORT_IRQ = PORT_IRQ_FATAL | PORT_IRQ_PHYRDY |
+ PORT_IRQ_CONNECT | PORT_IRQ_SG_DONE |
+ PORT_IRQ_UNK_FIS | PORT_IRQ_SDB_FIS |
+ PORT_IRQ_DMAS_FIS | PORT_IRQ_PIOS_FIS |
+ PORT_IRQ_D2H_REG_FIS,
+
+ /* PORT_CMD bits */
+ PORT_CMD_LIST_ON = (1 << 15), /* cmd list DMA engine running */
+ PORT_CMD_FIS_ON = (1 << 14), /* FIS DMA engine running */
+ PORT_CMD_FIS_RX = (1 << 4), /* Enable FIS receive DMA engine */
+ PORT_CMD_POWER_ON = (1 << 2), /* Power up device */
+ PORT_CMD_SPIN_UP = (1 << 1), /* Spin up device */
+ PORT_CMD_START = (1 << 0), /* Enable port DMA engine */
+
+ PORT_CMD_ICC_ACTIVE = (0x1 << 28), /* Put i/f in active state */
+ PORT_CMD_ICC_PARTIAL = (0x2 << 28), /* Put i/f in partial state */
+ PORT_CMD_ICC_SLUMBER = (0x6 << 28), /* Put i/f in slumber state */
+};
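+
+/* Per-port register blocks live at offset 0x100 + (port * 0x80) from
+ * the ABAR base (see ahci_port_base() below); the PORT_* offsets above
+ * are relative to each port's block.
+ */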
+
+struct ahci_cmd_hdr {
+ u32 opts;
+ u32 status;
+ u32 tbl_addr;
+ u32 tbl_addr_hi;
+ u32 reserved[4];
+};
+
+struct ahci_sg {
+ u32 addr;
+ u32 addr_hi;
+ u32 reserved;
+ u32 flags_size;
+};
+
+struct ahci_host_priv {
+ unsigned long flags;
+ u32 cap; /* cache of HOST_CAP register */
+ u32 port_map; /* cache of HOST_PORTS_IMPL reg */
+};
+
+struct ahci_port_priv {
+ struct ahci_cmd_hdr *cmd_slot;
+ dma_addr_t cmd_slot_dma;
+ void *cmd_tbl;
+ dma_addr_t cmd_tbl_dma;
+ struct ahci_sg *cmd_tbl_sg;
+ void *rx_fis;
+ dma_addr_t rx_fis_dma;
+};
+
+static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg);
+static void ahci_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+static int ahci_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static int ahci_qc_issue(struct ata_queued_cmd *qc);
+static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs);
+static void ahci_phy_reset(struct ata_port *ap);
+static void ahci_irq_clear(struct ata_port *ap);
+static void ahci_eng_timeout(struct ata_port *ap);
+static int ahci_port_start(struct ata_port *ap);
+static void ahci_port_stop(struct ata_port *ap);
+static void ahci_host_stop(struct ata_host_set *host_set);
+static void ahci_qc_prep(struct ata_queued_cmd *qc);
+static u8 ahci_check_status(struct ata_port *ap);
+static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc);
+
+static Scsi_Host_Template ahci_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .ioctl = ata_scsi_ioctl,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = AHCI_MAX_SG,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ .use_clustering = AHCI_USE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = AHCI_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations ahci_ops = {
+ .port_disable = ata_port_disable,
+
+ .check_status = ahci_check_status,
+ .dev_select = ata_noop_dev_select,
+
+ .phy_reset = ahci_phy_reset,
+
+ .qc_prep = ahci_qc_prep,
+ .qc_issue = ahci_qc_issue,
+
+ .eng_timeout = ahci_eng_timeout,
+
+ .irq_handler = ahci_interrupt,
+ .irq_clear = ahci_irq_clear,
+
+ .scr_read = ahci_scr_read,
+ .scr_write = ahci_scr_write,
+
+ .port_start = ahci_port_start,
+ .port_stop = ahci_port_stop,
+ .host_stop = ahci_host_stop,
+};
+
+static struct ata_port_info ahci_port_info[] = {
+ /* board_ahci */
+ {
+ .sht = &ahci_sht,
+ .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ ATA_FLAG_SATA_RESET | ATA_FLAG_MMIO |
+ ATA_FLAG_PIO_DMA,
+ .pio_mask = 0x03, /* pio3-4 */
+ .udma_mask = 0x7f, /* udma0-6 ; FIXME */
+ .port_ops = &ahci_ops,
+ },
+};
+
+static struct pci_device_id ahci_pci_tbl[] = {
+ { PCI_VENDOR_ID_INTEL, 0x2652, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH6 */
+ { PCI_VENDOR_ID_INTEL, 0x2653, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH6M */
+ { PCI_VENDOR_ID_INTEL, 0x27c1, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7 */
+ { PCI_VENDOR_ID_INTEL, 0x27c5, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_ahci }, /* ICH7M */
+ { } /* terminate list */
+};
+
+
+static struct pci_driver ahci_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = ahci_pci_tbl,
+ .probe = ahci_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+
+static inline unsigned long ahci_port_base_ul (unsigned long base, unsigned int port)
+{
+ return base + 0x100 + (port * 0x80);
+}
+
+static inline void *ahci_port_base (void *base, unsigned int port)
+{
+ return (void *) ahci_port_base_ul((unsigned long)base, port);
+}
+
+static void ahci_host_stop(struct ata_host_set *host_set)
+{
+ struct ahci_host_priv *hpriv = host_set->private_data;
+ kfree(hpriv);
+}
+
+static int ahci_port_start(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct ahci_host_priv *hpriv = ap->host_set->private_data;
+ struct ahci_port_priv *pp;
+ int rc;
+ void *mem, *mmio = ap->host_set->mmio_base;
+ void *port_mmio = ahci_port_base(mmio, ap->port_no);
+ dma_addr_t mem_dma;
+
+ rc = ata_port_start(ap);
+ if (rc)
+ return rc;
+
+ pp = kmalloc(sizeof(*pp), GFP_KERNEL);
+ if (!pp) {
+ rc = -ENOMEM;
+ goto err_out;
+ }
+ memset(pp, 0, sizeof(*pp));
+
+ mem = dma_alloc_coherent(dev, AHCI_PORT_PRIV_DMA_SZ, &mem_dma, GFP_KERNEL);
+ if (!mem) {
+ rc = -ENOMEM;
+ goto err_out_kfree;
+ }
+ memset(mem, 0, AHCI_PORT_PRIV_DMA_SZ);
+
+ /*
+ * First item in chunk of DMA memory: 32-slot command table,
+ * 32 bytes each in size
+ */
+ pp->cmd_slot = mem;
+ pp->cmd_slot_dma = mem_dma;
+
+ mem += AHCI_CMD_SLOT_SZ;
+ mem_dma += AHCI_CMD_SLOT_SZ;
+
+ /*
+ * Second item: Received-FIS area
+ */
+ pp->rx_fis = mem;
+ pp->rx_fis_dma = mem_dma;
+
+ mem += AHCI_RX_FIS_SZ;
+ mem_dma += AHCI_RX_FIS_SZ;
+
+ /*
+ * Third item: data area for storing a single command
+ * and its scatter-gather table
+ */
+ pp->cmd_tbl = mem;
+ pp->cmd_tbl_dma = mem_dma;
+
+ pp->cmd_tbl_sg = mem + AHCI_CMD_TBL_HDR;
+
+ ap->private_data = pp;
+
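+	/* Program the command list and FIS base addresses.  The
+	 * (x >> 16) >> 16 idiom below extracts the high 32 bits without
+	 * shifting by 32, which would be undefined (and provoke a
+	 * compiler warning) when dma_addr_t is only 32 bits wide.
+	 */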
+ if (hpriv->cap & HOST_CAP_64)
+ writel((pp->cmd_slot_dma >> 16) >> 16, port_mmio + PORT_LST_ADDR_HI);
+ writel(pp->cmd_slot_dma & 0xffffffff, port_mmio + PORT_LST_ADDR);
+ readl(port_mmio + PORT_LST_ADDR); /* flush */
+
+ if (hpriv->cap & HOST_CAP_64)
+ writel((pp->rx_fis_dma >> 16) >> 16, port_mmio + PORT_FIS_ADDR_HI);
+ writel(pp->rx_fis_dma & 0xffffffff, port_mmio + PORT_FIS_ADDR);
+ readl(port_mmio + PORT_FIS_ADDR); /* flush */
+
+ writel(PORT_CMD_ICC_ACTIVE | PORT_CMD_FIS_RX |
+ PORT_CMD_POWER_ON | PORT_CMD_SPIN_UP |
+ PORT_CMD_START, port_mmio + PORT_CMD);
+ readl(port_mmio + PORT_CMD); /* flush */
+
+ return 0;
+
+err_out_kfree:
+ kfree(pp);
+err_out:
+ ata_port_stop(ap);
+ return rc;
+}
+
+
+static void ahci_port_stop(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct ahci_port_priv *pp = ap->private_data;
+ void *mmio = ap->host_set->mmio_base;
+ void *port_mmio = ahci_port_base(mmio, ap->port_no);
+ u32 tmp;
+
+ tmp = readl(port_mmio + PORT_CMD);
+ tmp &= ~(PORT_CMD_START | PORT_CMD_FIS_RX);
+ writel(tmp, port_mmio + PORT_CMD);
+ readl(port_mmio + PORT_CMD); /* flush */
+
+ /* spec says 500 msecs for each PORT_CMD_{START,FIS_RX} bit, so
+ * this is slightly incorrect.
+ */
+ msleep(500);
+
+ ap->private_data = NULL;
+ dma_free_coherent(dev, AHCI_PORT_PRIV_DMA_SZ,
+ pp->cmd_slot, pp->cmd_slot_dma);
+ kfree(pp);
+ ata_port_stop(ap);
+}
+
+static u32 ahci_scr_read (struct ata_port *ap, unsigned int sc_reg_in)
+{
+ unsigned int sc_reg;
+
+ switch (sc_reg_in) {
+ case SCR_STATUS: sc_reg = 0; break;
+ case SCR_CONTROL: sc_reg = 1; break;
+ case SCR_ERROR: sc_reg = 2; break;
+ case SCR_ACTIVE: sc_reg = 3; break;
+ default:
+ return 0xffffffffU;
+ }
+
+ return readl((void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+}
+
+
+static void ahci_scr_write (struct ata_port *ap, unsigned int sc_reg_in,
+ u32 val)
+{
+ unsigned int sc_reg;
+
+ switch (sc_reg_in) {
+ case SCR_STATUS: sc_reg = 0; break;
+ case SCR_CONTROL: sc_reg = 1; break;
+ case SCR_ERROR: sc_reg = 2; break;
+ case SCR_ACTIVE: sc_reg = 3; break;
+ default:
+ return;
+ }
+
+ writel(val, (void *) ap->ioaddr.scr_addr + (sc_reg * 4));
+}
+
+static void ahci_phy_reset(struct ata_port *ap)
+{
+ void __iomem *port_mmio = (void __iomem *) ap->ioaddr.cmd_addr;
+ struct ata_taskfile tf;
+ struct ata_device *dev = &ap->device[0];
+ u32 tmp;
+
+ __sata_phy_reset(ap);
+
+ if (ap->flags & ATA_FLAG_PORT_DISABLED)
+ return;
+
+ tmp = readl(port_mmio + PORT_SIG);
+ tf.lbah = (tmp >> 24) & 0xff;
+ tf.lbam = (tmp >> 16) & 0xff;
+ tf.lbal = (tmp >> 8) & 0xff;
+ tf.nsect = (tmp) & 0xff;
+
+ dev->class = ata_dev_classify(&tf);
+ if (!ata_dev_present(dev))
+ ata_port_disable(ap);
+}
+
+static u8 ahci_check_status(struct ata_port *ap)
+{
+ void *mmio = (void *) ap->ioaddr.cmd_addr;
+
+ return readl(mmio + PORT_TFDATA) & 0xFF;
+}
+
+static void ahci_fill_sg(struct ata_queued_cmd *qc)
+{
+ struct ahci_port_priv *pp = qc->ap->private_data;
+ unsigned int i;
+
+ VPRINTK("ENTER\n");
+
+	/*
+	 * Next, the S/G list.  Each entry splits the 64-bit DMA address
+	 * into two 32-bit words; flags_size holds the byte count minus
+	 * one, since the hardware uses a zero-based count.
+	 */
+ for (i = 0; i < qc->n_elem; i++) {
+ u32 sg_len;
+ dma_addr_t addr;
+
+ addr = sg_dma_address(&qc->sg[i]);
+ sg_len = sg_dma_len(&qc->sg[i]);
+
+ pp->cmd_tbl_sg[i].addr = cpu_to_le32(addr & 0xffffffff);
+ pp->cmd_tbl_sg[i].addr_hi = cpu_to_le32((addr >> 16) >> 16);
+ pp->cmd_tbl_sg[i].flags_size = cpu_to_le32(sg_len - 1);
+ }
+}
+
+static void ahci_qc_prep(struct ata_queued_cmd *qc)
+{
+ struct ahci_port_priv *pp = qc->ap->private_data;
+ u32 opts;
+ const u32 cmd_fis_len = 5; /* five dwords */
+
+ /*
+ * Fill in command slot information (currently only one slot,
+	 * slot 0, is used since we don't do queueing)
+ */
+
+ opts = (qc->n_elem << 16) | cmd_fis_len;
+ if (qc->tf.flags & ATA_TFLAG_WRITE)
+ opts |= AHCI_CMD_WRITE;
+
+ switch (qc->tf.protocol) {
+ case ATA_PROT_ATAPI:
+ case ATA_PROT_ATAPI_NODATA:
+ case ATA_PROT_ATAPI_DMA:
+ opts |= AHCI_CMD_ATAPI;
+ break;
+
+ default:
+ /* do nothing */
+ break;
+ }
+
+ pp->cmd_slot[0].opts = cpu_to_le32(opts);
+ pp->cmd_slot[0].status = 0;
+ pp->cmd_slot[0].tbl_addr = cpu_to_le32(pp->cmd_tbl_dma & 0xffffffff);
+ pp->cmd_slot[0].tbl_addr_hi = cpu_to_le32((pp->cmd_tbl_dma >> 16) >> 16);
+
+ /*
+ * Fill in command table information. First, the header,
+ * a SATA Register - Host to Device command FIS.
+ */
+ ata_tf_to_fis(&qc->tf, pp->cmd_tbl, 0);
+
+ if (!(qc->flags & ATA_QCFLAG_DMAMAP))
+ return;
+
+ ahci_fill_sg(qc);
+}
+
+static inline void ahci_dma_complete (struct ata_port *ap,
+ struct ata_queued_cmd *qc,
+ int have_err)
+{
+ /* get drive status; clear intr; complete txn */
+ ata_qc_complete(ata_qc_from_tag(ap, ap->active_tag),
+ have_err ? ATA_ERR : 0);
+}
+
+static void ahci_intr_error(struct ata_port *ap, u32 irq_stat)
+{
+ void *mmio = ap->host_set->mmio_base;
+ void *port_mmio = ahci_port_base(mmio, ap->port_no);
+ u32 tmp;
+ int work;
+
+ /* stop DMA */
+ tmp = readl(port_mmio + PORT_CMD);
+ tmp &= PORT_CMD_START | PORT_CMD_FIS_RX;
+ writel(tmp, port_mmio + PORT_CMD);
+
+ /* wait for engine to stop. TODO: this could be
+ * as long as 500 msec
+ */
+ work = 1000;
+ while (work-- > 0) {
+ tmp = readl(port_mmio + PORT_CMD);
+ if ((tmp & PORT_CMD_LIST_ON) == 0)
+ break;
+ udelay(10);
+ }
+
+ /* clear SATA phy error, if any */
+ tmp = readl(port_mmio + PORT_SCR_ERR);
+ writel(tmp, port_mmio + PORT_SCR_ERR);
+
+	/* if DRQ/BSY is set, the device needs to be reset, so issue a
+	 * COMRESET: writing DET=1 to SControl (0x301) starts the reset,
+	 * and writing DET=0 (0x300) releases it
+	 */
+ tmp = readl(port_mmio + PORT_TFDATA);
+ if (tmp & (ATA_BUSY | ATA_DRQ)) {
+ writel(0x301, port_mmio + PORT_SCR_CTL);
+ readl(port_mmio + PORT_SCR_CTL); /* flush */
+ udelay(10);
+ writel(0x300, port_mmio + PORT_SCR_CTL);
+ readl(port_mmio + PORT_SCR_CTL); /* flush */
+ }
+
+ /* re-start DMA */
+ tmp = readl(port_mmio + PORT_CMD);
+ tmp |= PORT_CMD_START | PORT_CMD_FIS_RX;
+ writel(tmp, port_mmio + PORT_CMD);
+ readl(port_mmio + PORT_CMD); /* flush */
+
+	printk(KERN_WARNING "ata%u: error occurred, port reset\n", ap->id);
+}
+
+static void ahci_eng_timeout(struct ata_port *ap)
+{
+ void *mmio = ap->host_set->mmio_base;
+ void *port_mmio = ahci_port_base(mmio, ap->port_no);
+ struct ata_queued_cmd *qc;
+
+ DPRINTK("ENTER\n");
+
+ ahci_intr_error(ap, readl(port_mmio + PORT_IRQ_STAT));
+
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (!qc) {
+ printk(KERN_ERR "ata%u: BUG: timeout without command\n",
+ ap->id);
+ } else {
+ /* hack alert! We cannot use the supplied completion
+ * function from inside the ->eh_strategy_handler() thread.
+ * libata is the only user of ->eh_strategy_handler() in
+ * any kernel, so the default scsi_done() assumes it is
+ * not being called from the SCSI EH.
+ */
+ qc->scsidone = scsi_finish_command;
+ ata_qc_complete(qc, ATA_ERR);
+ }
+
+}
+
+static inline int ahci_host_intr(struct ata_port *ap, struct ata_queued_cmd *qc)
+{
+ void *mmio = ap->host_set->mmio_base;
+ void *port_mmio = ahci_port_base(mmio, ap->port_no);
+ u32 status, serr, ci;
+
+ serr = readl(port_mmio + PORT_SCR_ERR);
+ writel(serr, port_mmio + PORT_SCR_ERR);
+
+ status = readl(port_mmio + PORT_IRQ_STAT);
+ writel(status, port_mmio + PORT_IRQ_STAT);
+
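+	/* only command slot 0 is ever used, so the command is complete
+	 * once the controller clears bit 0 of the Command Issue register */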
+ ci = readl(port_mmio + PORT_CMD_ISSUE);
+ if (likely((ci & 0x1) == 0)) {
+ if (qc) {
+ ata_qc_complete(qc, 0);
+ qc = NULL;
+ }
+ }
+
+ if (status & PORT_IRQ_FATAL) {
+ ahci_intr_error(ap, status);
+ if (qc)
+ ata_qc_complete(qc, ATA_ERR);
+ }
+
+ return 1;
+}
+
+static void ahci_irq_clear(struct ata_port *ap)
+{
+ /* TODO */
+}
+
+static irqreturn_t ahci_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ata_host_set *host_set = dev_instance;
+ struct ahci_host_priv *hpriv;
+ unsigned int i, handled = 0;
+ void *mmio;
+ u32 irq_stat, irq_ack = 0;
+
+ VPRINTK("ENTER\n");
+
+ hpriv = host_set->private_data;
+ mmio = host_set->mmio_base;
+
+ /* sigh. 0xffffffff is a valid return from h/w */
+ irq_stat = readl(mmio + HOST_IRQ_STAT);
+ irq_stat &= hpriv->port_map;
+ if (!irq_stat)
+ return IRQ_NONE;
+
+ spin_lock(&host_set->lock);
+
+ for (i = 0; i < host_set->n_ports; i++) {
+ struct ata_port *ap;
+ u32 tmp;
+
+ VPRINTK("port %u\n", i);
+ ap = host_set->ports[i];
+ tmp = irq_stat & (1 << i);
+ if (tmp && ap) {
+ struct ata_queued_cmd *qc;
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (ahci_host_intr(ap, qc))
+ irq_ack |= (1 << i);
+ }
+ }
+
+ if (irq_ack) {
+ writel(irq_ack, mmio + HOST_IRQ_STAT);
+ handled = 1;
+ }
+
+ spin_unlock(&host_set->lock);
+
+ VPRINTK("EXIT\n");
+
+ return IRQ_RETVAL(handled);
+}
+
+static int ahci_qc_issue(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ void *port_mmio = (void *) ap->ioaddr.cmd_addr;
+
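+	/* only command slot 0 is used (no command queueing), so setting
+	 * bit 0 in SActive and then in Command Issue dispatches the
+	 * single prepared command */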
+ writel(1, port_mmio + PORT_SCR_ACT);
+ readl(port_mmio + PORT_SCR_ACT); /* flush */
+
+ writel(1, port_mmio + PORT_CMD_ISSUE);
+ readl(port_mmio + PORT_CMD_ISSUE); /* flush */
+
+ return 0;
+}
+
+static void ahci_setup_port(struct ata_ioports *port, unsigned long base,
+ unsigned int port_idx)
+{
+ VPRINTK("ENTER, base==0x%lx, port_idx %u\n", base, port_idx);
+ base = ahci_port_base_ul(base, port_idx);
+ VPRINTK("base now==0x%lx\n", base);
+
+ port->cmd_addr = base;
+ port->scr_addr = base + PORT_SCR;
+
+ VPRINTK("EXIT\n");
+}
+
+static int ahci_host_init(struct ata_probe_ent *probe_ent)
+{
+ struct ahci_host_priv *hpriv = probe_ent->private_data;
+ struct pci_dev *pdev = to_pci_dev(probe_ent->dev);
+ void __iomem *mmio = probe_ent->mmio_base;
+ u32 tmp, cap_save;
+ u16 tmp16;
+ unsigned int i, j, using_dac;
+ int rc;
+ void __iomem *port_mmio;
+
+ cap_save = readl(mmio + HOST_CAP);
+ cap_save &= ( (1<<28) | (1<<17) );
+ cap_save |= (1 << 27);
+
+ /* global controller reset */
+ tmp = readl(mmio + HOST_CTL);
+ if ((tmp & HOST_RESET) == 0) {
+ writel(tmp | HOST_RESET, mmio + HOST_CTL);
+ readl(mmio + HOST_CTL); /* flush */
+ }
+
+ /* reset must complete within 1 second, or
+ * the hardware should be considered fried.
+ */
+ ssleep(1);
+
+ tmp = readl(mmio + HOST_CTL);
+ if (tmp & HOST_RESET) {
+ printk(KERN_ERR DRV_NAME "(%s): controller reset failed (0x%x)\n",
+ pci_name(pdev), tmp);
+ return -EIO;
+ }
+
+ writel(HOST_AHCI_EN, mmio + HOST_CTL);
+ (void) readl(mmio + HOST_CTL); /* flush */
+ writel(cap_save, mmio + HOST_CAP);
+ writel(0xf, mmio + HOST_PORTS_IMPL);
+ (void) readl(mmio + HOST_PORTS_IMPL); /* flush */
+
+ pci_read_config_word(pdev, 0x92, &tmp16);
+ tmp16 |= 0xf;
+ pci_write_config_word(pdev, 0x92, tmp16);
+
+ hpriv->cap = readl(mmio + HOST_CAP);
+ hpriv->port_map = readl(mmio + HOST_PORTS_IMPL);
+ probe_ent->n_ports = (hpriv->cap & 0x1f) + 1;
+
+ VPRINTK("cap 0x%x port_map 0x%x n_ports %d\n",
+ hpriv->cap, hpriv->port_map, probe_ent->n_ports);
+
+ using_dac = hpriv->cap & HOST_CAP_64;
+ if (using_dac &&
+ !pci_set_dma_mask(pdev, 0xffffffffffffffffULL)) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffffffffffULL);
+ if (rc) {
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): 64-bit DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ }
+
+ hpriv->flags |= HOST_CAP_64;
+ } else {
+ rc = pci_set_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): 32-bit DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ rc = pci_set_consistent_dma_mask(pdev, 0xffffffffULL);
+ if (rc) {
+ printk(KERN_ERR DRV_NAME "(%s): 32-bit consistent DMA enable failed\n",
+ pci_name(pdev));
+ return rc;
+ }
+ }
+
+ for (i = 0; i < probe_ent->n_ports; i++) {
+#if 0 /* BIOSen initialize this incorrectly */
+ if (!(hpriv->port_map & (1 << i)))
+ continue;
+#endif
+
+ port_mmio = ahci_port_base(mmio, i);
+ VPRINTK("mmio %p port_mmio %p\n", mmio, port_mmio);
+
+ ahci_setup_port(&probe_ent->port[i],
+ (unsigned long) mmio, i);
+
+ /* make sure port is not active */
+ tmp = readl(port_mmio + PORT_CMD);
+ VPRINTK("PORT_CMD 0x%x\n", tmp);
+ if (tmp & (PORT_CMD_LIST_ON | PORT_CMD_FIS_ON |
+ PORT_CMD_FIS_RX | PORT_CMD_START)) {
+ tmp &= ~(PORT_CMD_LIST_ON | PORT_CMD_FIS_ON |
+ PORT_CMD_FIS_RX | PORT_CMD_START);
+ writel(tmp, port_mmio + PORT_CMD);
+ readl(port_mmio + PORT_CMD); /* flush */
+
+ /* spec says 500 msecs for each bit, so
+ * this is slightly incorrect.
+ */
+ msleep(500);
+ }
+
+ writel(PORT_CMD_SPIN_UP, port_mmio + PORT_CMD);
+
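+		/* wait up to ~1 second (100 x 10 msec) for PHY
+		 * communication: an SStatus DET field of 0x3 means a
+		 * device is present and Phy communication is established */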
+ j = 0;
+ while (j < 100) {
+ msleep(10);
+ tmp = readl(port_mmio + PORT_SCR_STAT);
+ if ((tmp & 0xf) == 0x3)
+ break;
+ j++;
+ }
+
+ tmp = readl(port_mmio + PORT_SCR_ERR);
+ VPRINTK("PORT_SCR_ERR 0x%x\n", tmp);
+ writel(tmp, port_mmio + PORT_SCR_ERR);
+
+ /* ack any pending irq events for this port */
+ tmp = readl(port_mmio + PORT_IRQ_STAT);
+ VPRINTK("PORT_IRQ_STAT 0x%x\n", tmp);
+ if (tmp)
+ writel(tmp, port_mmio + PORT_IRQ_STAT);
+
+ writel(1 << i, mmio + HOST_IRQ_STAT);
+
+ /* set irq mask (enables interrupts) */
+ writel(DEF_PORT_IRQ, port_mmio + PORT_IRQ_MASK);
+ }
+
+ tmp = readl(mmio + HOST_CTL);
+ VPRINTK("HOST_CTL 0x%x\n", tmp);
+ writel(tmp | HOST_IRQ_EN, mmio + HOST_CTL);
+ tmp = readl(mmio + HOST_CTL);
+ VPRINTK("HOST_CTL 0x%x\n", tmp);
+
+ pci_set_master(pdev);
+
+ return 0;
+}
+
+/* move to PCI layer, integrate w/ MSI stuff */
+static void pci_enable_intx(struct pci_dev *pdev)
+{
+ u16 pci_command;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
+ if (pci_command & PCI_COMMAND_INTX_DISABLE) {
+ pci_command &= ~PCI_COMMAND_INTX_DISABLE;
+ pci_write_config_word(pdev, PCI_COMMAND, pci_command);
+ }
+}
+
+static void ahci_print_info(struct ata_probe_ent *probe_ent)
+{
+ struct ahci_host_priv *hpriv = probe_ent->private_data;
+ struct pci_dev *pdev = to_pci_dev(probe_ent->dev);
+ void *mmio = probe_ent->mmio_base;
+ u32 vers, cap, impl, speed;
+ const char *speed_s;
+ u16 cc;
+ const char *scc_s;
+
+ vers = readl(mmio + HOST_VERSION);
+ cap = hpriv->cap;
+ impl = hpriv->port_map;
+
+ speed = (cap >> 20) & 0xf;
+ if (speed == 1)
+ speed_s = "1.5";
+ else if (speed == 2)
+ speed_s = "3";
+ else
+ speed_s = "?";
+
+ pci_read_config_word(pdev, 0x0a, &cc);
+ if (cc == 0x0101)
+ scc_s = "IDE";
+ else if (cc == 0x0106)
+ scc_s = "SATA";
+ else if (cc == 0x0104)
+ scc_s = "RAID";
+ else
+ scc_s = "unknown";
+
+ printk(KERN_INFO DRV_NAME "(%s) AHCI %02x%02x.%02x%02x "
+ "%u slots %u ports %s Gbps 0x%x impl %s mode\n"
+ ,
+ pci_name(pdev),
+
+ (vers >> 24) & 0xff,
+ (vers >> 16) & 0xff,
+ (vers >> 8) & 0xff,
+ vers & 0xff,
+
+ ((cap >> 8) & 0x1f) + 1,
+ (cap & 0x1f) + 1,
+ speed_s,
+ impl,
+ scc_s);
+
+ printk(KERN_INFO DRV_NAME "(%s) flags: "
+ "%s%s%s%s%s%s"
+ "%s%s%s%s%s%s%s\n"
+ ,
+ pci_name(pdev),
+
+ cap & (1 << 31) ? "64bit " : "",
+ cap & (1 << 30) ? "ncq " : "",
+ cap & (1 << 28) ? "ilck " : "",
+ cap & (1 << 27) ? "stag " : "",
+ cap & (1 << 26) ? "pm " : "",
+ cap & (1 << 25) ? "led " : "",
+
+ cap & (1 << 24) ? "clo " : "",
+ cap & (1 << 19) ? "nz " : "",
+ cap & (1 << 18) ? "only " : "",
+ cap & (1 << 17) ? "pmp " : "",
+ cap & (1 << 15) ? "pio " : "",
+ cap & (1 << 14) ? "slum " : "",
+ cap & (1 << 13) ? "part " : ""
+ );
+}
+
+static int ahci_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int printed_version;
+ struct ata_probe_ent *probe_ent = NULL;
+ struct ahci_host_priv *hpriv;
+ unsigned long base;
+ void *mmio_base;
+ unsigned int board_idx = (unsigned int) ent->driver_data;
+ int rc;
+
+ VPRINTK("ENTER\n");
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+ pci_enable_intx(pdev);
+
+ probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
+ if (probe_ent == NULL) {
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ memset(probe_ent, 0, sizeof(*probe_ent));
+ probe_ent->dev = pci_dev_to_dev(pdev);
+ INIT_LIST_HEAD(&probe_ent->node);
+
+ mmio_base = ioremap(pci_resource_start(pdev, AHCI_PCI_BAR),
+ pci_resource_len(pdev, AHCI_PCI_BAR));
+ if (mmio_base == NULL) {
+ rc = -ENOMEM;
+ goto err_out_free_ent;
+ }
+ base = (unsigned long) mmio_base;
+
+ hpriv = kmalloc(sizeof(*hpriv), GFP_KERNEL);
+ if (!hpriv) {
+ rc = -ENOMEM;
+ goto err_out_iounmap;
+ }
+ memset(hpriv, 0, sizeof(*hpriv));
+
+ probe_ent->sht = ahci_port_info[board_idx].sht;
+ probe_ent->host_flags = ahci_port_info[board_idx].host_flags;
+ probe_ent->pio_mask = ahci_port_info[board_idx].pio_mask;
+ probe_ent->udma_mask = ahci_port_info[board_idx].udma_mask;
+ probe_ent->port_ops = ahci_port_info[board_idx].port_ops;
+
+ probe_ent->irq = pdev->irq;
+ probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->mmio_base = mmio_base;
+ probe_ent->private_data = hpriv;
+
+ /* initialize adapter */
+ rc = ahci_host_init(probe_ent);
+ if (rc)
+ goto err_out_hpriv;
+
+ ahci_print_info(probe_ent);
+
+ /* FIXME: check ata_device_add return value */
+ ata_device_add(probe_ent);
+ kfree(probe_ent);
+
+ return 0;
+
+err_out_hpriv:
+ kfree(hpriv);
+err_out_iounmap:
+ iounmap(mmio_base);
+err_out_free_ent:
+ kfree(probe_ent);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
+
+static int __init ahci_init(void)
+{
+ return pci_module_init(&ahci_pci_driver);
+}
+
+
+static void __exit ahci_exit(void)
+{
+ pci_unregister_driver(&ahci_pci_driver);
+}
+
+
+MODULE_AUTHOR("Jeff Garzik");
+MODULE_DESCRIPTION("AHCI SATA low-level driver");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, ahci_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+module_init(ahci_init);
+module_exit(ahci_exit);
--- /dev/null
+/*
+ * ipr.c -- driver for IBM Power Linux RAID adapters
+ *
+ * Written By: Brian King <brking@us.ibm.com>, IBM Corporation
+ *
+ * Copyright (C) 2003, 2004 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+/*
+ * Notes:
+ *
+ * This driver is used to control the following SCSI adapters:
+ *
+ * IBM iSeries: 5702, 5703, 2780, 5709, 570A, 570B
+ *
+ * IBM pSeries: PCI-X Dual Channel Ultra 320 SCSI RAID Adapter
+ * PCI-X Dual Channel Ultra 320 SCSI Adapter
+ * PCI-X Dual Channel Ultra 320 SCSI RAID Enablement Card
+ * Embedded SCSI adapter on p615 and p655 systems
+ *
+ * Supported Hardware Features:
+ * - Ultra 320 SCSI controller
+ * - PCI-X host interface
+ * - Embedded PowerPC RISC Processor and Hardware XOR DMA Engine
+ * - Non-Volatile Write Cache
+ * - Supports attachment of non-RAID disks, tape, and optical devices
+ * - RAID Levels 0, 5, 10
+ * - Hot spare
+ * - Background Parity Checking
+ * - Background Data Scrubbing
+ * - Ability to increase the capacity of an existing RAID 5 disk array
+ * by adding disks
+ *
+ * Driver Features:
+ * - Tagged command queuing
+ * - Adapter microcode download
+ * - PCI hot plug
+ * - SCSI device hot plug
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/fs.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/ioport.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/wait.h>
+#include <linux/spinlock.h>
+#include <linux/sched.h>
+#include <linux/interrupt.h>
+#include <linux/blkdev.h>
+#include <linux/firmware.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/processor.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+#include <scsi/scsi_tcq.h>
+#include <scsi/scsi_eh.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_request.h>
+#include "ipr.h"
+
+/*
+ * Global Data
+ */
+static struct list_head ipr_ioa_head = LIST_HEAD_INIT(ipr_ioa_head);
+static unsigned int ipr_log_level = IPR_DEFAULT_LOG_LEVEL;
+static unsigned int ipr_max_speed = 1;
+static int ipr_testmode = 0;
+static spinlock_t ipr_driver_lock = SPIN_LOCK_UNLOCKED;
+
+/* This table describes the differences between DMA controller chips */
+static const struct ipr_chip_cfg_t ipr_chip_cfg[] = {
+ { /* Gemstone and Citrine */
+ .mailbox = 0x0042C,
+ .cache_line_size = 0x20,
+ {
+ .set_interrupt_mask_reg = 0x0022C,
+ .clr_interrupt_mask_reg = 0x00230,
+ .sense_interrupt_mask_reg = 0x0022C,
+ .clr_interrupt_reg = 0x00228,
+ .sense_interrupt_reg = 0x00224,
+ .ioarrin_reg = 0x00404,
+ .sense_uproc_interrupt_reg = 0x00214,
+ .set_uproc_interrupt_reg = 0x00214,
+ .clr_uproc_interrupt_reg = 0x00218
+ }
+ },
+ { /* Snipe */
+ .mailbox = 0x0052C,
+ .cache_line_size = 0x20,
+ {
+ .set_interrupt_mask_reg = 0x00288,
+ .clr_interrupt_mask_reg = 0x0028C,
+ .sense_interrupt_mask_reg = 0x00288,
+ .clr_interrupt_reg = 0x00284,
+ .sense_interrupt_reg = 0x00280,
+ .ioarrin_reg = 0x00504,
+ .sense_uproc_interrupt_reg = 0x00290,
+ .set_uproc_interrupt_reg = 0x00290,
+ .clr_uproc_interrupt_reg = 0x00294
+ }
+ },
+};
+
+static int ipr_max_bus_speeds [] = {
+ IPR_80MBs_SCSI_RATE, IPR_U160_SCSI_RATE, IPR_U320_SCSI_RATE
+};
+
+MODULE_AUTHOR("Brian King <brking@us.ibm.com>");
+MODULE_DESCRIPTION("IBM Power RAID SCSI Adapter Driver");
+module_param_named(max_speed, ipr_max_speed, uint, 0);
+MODULE_PARM_DESC(max_speed, "Maximum bus speed (0-2). Default: 1=U160. Speeds: 0=80 MB/s, 1=U160, 2=U320");
+module_param_named(log_level, ipr_log_level, uint, 0);
+MODULE_PARM_DESC(log_level, "Set to 0 - 4 for increasing verbosity of device driver");
+module_param_named(testmode, ipr_testmode, int, 0);
+MODULE_PARM_DESC(testmode, "DANGEROUS!!! Allows unsupported configurations");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(IPR_DRIVER_VERSION);
+
+static const char *ipr_gpdd_dev_end_states[] = {
+ "Command complete",
+ "Terminated by host",
+ "Terminated by device reset",
+ "Terminated by bus reset",
+ "Unknown",
+ "Command not started"
+};
+
+static const char *ipr_gpdd_dev_bus_phases[] = {
+ "Bus free",
+ "Arbitration",
+ "Selection",
+ "Message out",
+ "Command",
+ "Message in",
+ "Data out",
+ "Data in",
+ "Status",
+ "Reselection",
+ "Unknown"
+};
+
+/* A constant array of IOASCs/URCs/Error Messages */
+static const
+struct ipr_error_table_t ipr_error_table[] = {
+ {0x00000000, 1, 1,
+ "8155: An unknown error was received"},
+ {0x00330000, 0, 0,
+ "Soft underlength error"},
+ {0x005A0000, 0, 0,
+ "Command to be cancelled not found"},
+ {0x00808000, 0, 0,
+ "Qualified success"},
+ {0x01080000, 1, 1,
+ "FFFE: Soft device bus error recovered by the IOA"},
+ {0x01170600, 0, 1,
+ "FFF9: Device sector reassign successful"},
+ {0x01170900, 0, 1,
+ "FFF7: Media error recovered by device rewrite procedures"},
+ {0x01180200, 0, 1,
+ "7001: IOA sector reassignment successful"},
+ {0x01180500, 0, 1,
+ "FFF9: Soft media error. Sector reassignment recommended"},
+ {0x01180600, 0, 1,
+ "FFF7: Media error recovered by IOA rewrite procedures"},
+ {0x01418000, 0, 1,
+ "FF3D: Soft PCI bus error recovered by the IOA"},
+ {0x01440000, 1, 1,
+ "FFF6: Device hardware error recovered by the IOA"},
+ {0x01448100, 0, 1,
+ "FFF6: Device hardware error recovered by the device"},
+ {0x01448200, 1, 1,
+ "FF3D: Soft IOA error recovered by the IOA"},
+ {0x01448300, 0, 1,
+ "FFFA: Undefined device response recovered by the IOA"},
+ {0x014A0000, 1, 1,
+ "FFF6: Device bus error, message or command phase"},
+ {0x015D0000, 0, 1,
+ "FFF6: Failure prediction threshold exceeded"},
+ {0x015D9200, 0, 1,
+ "8009: Impending cache battery pack failure"},
+ {0x02040400, 0, 0,
+ "34FF: Disk device format in progress"},
+ {0x023F0000, 0, 0,
+ "Synchronization required"},
+ {0x024E0000, 0, 0,
+ "No ready, IOA shutdown"},
+ {0x025A0000, 0, 0,
+ "Not ready, IOA has been shutdown"},
+ {0x02670100, 0, 1,
+ "3020: Storage subsystem configuration error"},
+ {0x03110B00, 0, 0,
+ "FFF5: Medium error, data unreadable, recommend reassign"},
+ {0x03110C00, 0, 0,
+ "7000: Medium error, data unreadable, do not reassign"},
+ {0x03310000, 0, 1,
+ "FFF3: Disk media format bad"},
+ {0x04050000, 0, 1,
+ "3002: Addressed device failed to respond to selection"},
+ {0x04080000, 1, 1,
+ "3100: Device bus error"},
+ {0x04080100, 0, 1,
+ "3109: IOA timed out a device command"},
+ {0x04088000, 0, 0,
+ "3120: SCSI bus is not operational"},
+ {0x04118000, 0, 1,
+ "9000: IOA reserved area data check"},
+ {0x04118100, 0, 1,
+ "9001: IOA reserved area invalid data pattern"},
+ {0x04118200, 0, 1,
+ "9002: IOA reserved area LRC error"},
+ {0x04320000, 0, 1,
+ "102E: Out of alternate sectors for disk storage"},
+ {0x04330000, 1, 1,
+ "FFF4: Data transfer underlength error"},
+ {0x04338000, 1, 1,
+ "FFF4: Data transfer overlength error"},
+ {0x043E0100, 0, 1,
+ "3400: Logical unit failure"},
+ {0x04408500, 0, 1,
+ "FFF4: Device microcode is corrupt"},
+ {0x04418000, 1, 1,
+ "8150: PCI bus error"},
+ {0x04430000, 1, 0,
+ "Unsupported device bus message received"},
+ {0x04440000, 1, 1,
+ "FFF4: Disk device problem"},
+ {0x04448200, 1, 1,
+ "8150: Permanent IOA failure"},
+ {0x04448300, 0, 1,
+ "3010: Disk device returned wrong response to IOA"},
+ {0x04448400, 0, 1,
+ "8151: IOA microcode error"},
+ {0x04448500, 0, 0,
+ "Device bus status error"},
+ {0x04448600, 0, 1,
+ "8157: IOA error requiring IOA reset to recover"},
+ {0x04490000, 0, 0,
+ "Message reject received from the device"},
+ {0x04449200, 0, 1,
+ "8008: A permanent cache battery pack failure occurred"},
+ {0x0444A000, 0, 1,
+ "9090: Disk unit has been modified after the last known status"},
+ {0x0444A200, 0, 1,
+ "9081: IOA detected device error"},
+ {0x0444A300, 0, 1,
+ "9082: IOA detected device error"},
+ {0x044A0000, 1, 1,
+ "3110: Device bus error, message or command phase"},
+ {0x04670400, 0, 1,
+ "9091: Incorrect hardware configuration change has been detected"},
+ {0x046E0000, 0, 1,
+ "FFF4: Command to logical unit failed"},
+ {0x05240000, 1, 0,
+ "Illegal request, invalid request type or request packet"},
+ {0x05250000, 0, 0,
+ "Illegal request, invalid resource handle"},
+ {0x05260000, 0, 0,
+ "Illegal request, invalid field in parameter list"},
+ {0x05260100, 0, 0,
+ "Illegal request, parameter not supported"},
+ {0x05260200, 0, 0,
+ "Illegal request, parameter value invalid"},
+ {0x052C0000, 0, 0,
+ "Illegal request, command sequence error"},
+ {0x06040500, 0, 1,
+ "9031: Array protection temporarily suspended, protection resuming"},
+ {0x06040600, 0, 1,
+ "9040: Array protection temporarily suspended, protection resuming"},
+ {0x06290000, 0, 1,
+ "FFFB: SCSI bus was reset"},
+ {0x06290500, 0, 0,
+ "FFFE: SCSI bus transition to single ended"},
+ {0x06290600, 0, 0,
+ "FFFE: SCSI bus transition to LVD"},
+ {0x06298000, 0, 1,
+ "FFFB: SCSI bus was reset by another initiator"},
+ {0x063F0300, 0, 1,
+ "3029: A device replacement has occurred"},
+ {0x064C8000, 0, 1,
+ "9051: IOA cache data exists for a missing or failed device"},
+ {0x06670100, 0, 1,
+ "9025: Disk unit is not supported at its physical location"},
+ {0x06670600, 0, 1,
+ "3020: IOA detected a SCSI bus configuration error"},
+ {0x06678000, 0, 1,
+ "3150: SCSI bus configuration error"},
+ {0x06690200, 0, 1,
+ "9041: Array protection temporarily suspended"},
+ {0x066B0200, 0, 1,
+ "9030: Array no longer protected due to missing or failed disk unit"},
+ {0x07270000, 0, 0,
+ "Failure due to other device"},
+ {0x07278000, 0, 1,
+ "9008: IOA does not support functions expected by devices"},
+ {0x07278100, 0, 1,
+ "9010: Cache data associated with attached devices cannot be found"},
+ {0x07278200, 0, 1,
+ "9011: Cache data belongs to devices other than those attached"},
+ {0x07278400, 0, 1,
+ "9020: Array missing 2 or more devices with only 1 device present"},
+ {0x07278500, 0, 1,
+ "9021: Array missing 2 or more devices with 2 or more devices present"},
+ {0x07278600, 0, 1,
+ "9022: Exposed array is missing a required device"},
+ {0x07278700, 0, 1,
+ "9023: Array member(s) not at required physical locations"},
+ {0x07278800, 0, 1,
+ "9024: Array not functional due to present hardware configuration"},
+ {0x07278900, 0, 1,
+ "9026: Array not functional due to present hardware configuration"},
+ {0x07278A00, 0, 1,
+ "9027: Array is missing a device and parity is out of sync"},
+ {0x07278B00, 0, 1,
+ "9028: Maximum number of arrays already exist"},
+ {0x07278C00, 0, 1,
+ "9050: Required cache data cannot be located for a disk unit"},
+ {0x07278D00, 0, 1,
+ "9052: Cache data exists for a device that has been modified"},
+ {0x07278F00, 0, 1,
+ "9054: IOA resources not available due to previous problems"},
+ {0x07279100, 0, 1,
+ "9092: Disk unit requires initialization before use"},
+ {0x07279200, 0, 1,
+ "9029: Incorrect hardware configuration change has been detected"},
+ {0x07279600, 0, 1,
+ "9060: One or more disk pairs are missing from an array"},
+ {0x07279700, 0, 1,
+ "9061: One or more disks are missing from an array"},
+ {0x07279800, 0, 1,
+ "9062: One or more disks are missing from an array"},
+ {0x07279900, 0, 1,
+ "9063: Maximum number of functional arrays has been exceeded"},
+ {0x0B260000, 0, 0,
+ "Aborted command, invalid descriptor"},
+ {0x0B5A0000, 0, 0,
+ "Command terminated by host"}
+};
+
+static const struct ipr_ses_table_entry ipr_ses_table[] = {
+ { "2104-DL1 ", "XXXXXXXXXXXXXXXX", 80 },
+ { "2104-TL1 ", "XXXXXXXXXXXXXXXX", 80 },
+ { "HSBP07M P U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Hidive 7 slot */
+ { "HSBP05M P U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Hidive 5 slot */
+ { "HSBP05M S U2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* Bowtie */
+ { "HSBP06E ASU2SCSI", "XXXXXXXXXXXXXXXX", 80 }, /* MartinFenning */
+ { "2104-DU3 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "2104-TU3 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "HSBP04C RSU2SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "HSBP06E RSU2SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "St V1S2 ", "XXXXXXXXXXXXXXXX", 160 },
+ { "HSBPD4M PU3SCSI", "XXXXXXX*XXXXXXXX", 160 },
+ { "VSBPD1H U3SCSI", "XXXXXXX*XXXXXXXX", 160 }
+};
+
+/*
+ * Function Prototypes
+ */
+static int ipr_reset_alert(struct ipr_cmnd *);
+static void ipr_process_ccn(struct ipr_cmnd *);
+static void ipr_process_error(struct ipr_cmnd *);
+static void ipr_reset_ioa_job(struct ipr_cmnd *);
+static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *,
+ enum ipr_shutdown_type);
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+/**
+ * ipr_trc_hook - Add a trace entry to the driver trace
+ * @ipr_cmd: ipr command struct
+ * @type: trace type
+ * @add_data: additional data
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_trc_hook(struct ipr_cmnd *ipr_cmd,
+ u8 type, u32 add_data)
+{
+ struct ipr_trace_entry *trace_entry;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
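+ /* trace_index is assumed to wrap naturally within the trace
+ * buffer (a small index into a fixed-size ring; see ipr.h)
+ */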
+ trace_entry = &ioa_cfg->trace[ioa_cfg->trace_index++];
+ trace_entry->time = jiffies;
+ trace_entry->op_code = ipr_cmd->ioarcb.cmd_pkt.cdb[0];
+ trace_entry->type = type;
+ trace_entry->cmd_index = ipr_cmd->cmd_index;
+ trace_entry->res_handle = ipr_cmd->ioarcb.res_handle;
+ trace_entry->u.add_data = add_data;
+}
+#else
+#define ipr_trc_hook(ipr_cmd, type, add_data) do { } while(0)
+#endif
+
+/**
+ * ipr_reinit_ipr_cmnd - Re-initialize an IPR Cmnd block for reuse
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reinit_ipr_cmnd(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+
+ memset(&ioarcb->cmd_pkt, 0, sizeof(struct ipr_cmd_pkt));
+ ioarcb->write_data_transfer_length = 0;
+ ioarcb->read_data_transfer_length = 0;
+ ioarcb->write_ioadl_len = 0;
+ ioarcb->read_ioadl_len = 0;
+ ioasa->ioasc = 0;
+ ioasa->residual_data_len = 0;
+
+ ipr_cmd->scsi_cmd = NULL;
+ ipr_cmd->sense_buffer[0] = 0;
+ ipr_cmd->dma_use_sg = 0;
+}
+
+/**
+ * ipr_init_ipr_cmnd - Initialize an IPR Cmnd block
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_init_ipr_cmnd(struct ipr_cmnd *ipr_cmd)
+{
+ ipr_reinit_ipr_cmnd(ipr_cmd);
+ ipr_cmd->u.scratch = 0;
+ ipr_cmd->sibling = NULL;
+ init_timer(&ipr_cmd->timer);
+}
+
+/**
+ * ipr_get_free_ipr_cmnd - Get a free IPR Cmnd block
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * pointer to ipr command struct
+ **/
+static
+struct ipr_cmnd *ipr_get_free_ipr_cmnd(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd;
+
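+ /* the free queue is assumed non-empty here; callers must only
+ * request a command block when one is available
+ */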
+ ipr_cmd = list_entry(ioa_cfg->free_q.next, struct ipr_cmnd, queue);
+ list_del(&ipr_cmd->queue);
+ ipr_init_ipr_cmnd(ipr_cmd);
+
+ return ipr_cmd;
+}
+
+/**
+ * ipr_unmap_sglist - Unmap scatterlist if mapped
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_unmap_sglist(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+
+ if (ipr_cmd->dma_use_sg) {
+ if (scsi_cmd->use_sg > 0) {
+ pci_unmap_sg(ioa_cfg->pdev, scsi_cmd->request_buffer,
+ scsi_cmd->use_sg,
+ scsi_cmd->sc_data_direction);
+ } else {
+ pci_unmap_single(ioa_cfg->pdev, ipr_cmd->dma_handle,
+ scsi_cmd->request_bufflen,
+ scsi_cmd->sc_data_direction);
+ }
+ }
+}
+
+/**
+ * ipr_mask_and_clear_interrupts - Mask all and clear specified interrupts
+ * @ioa_cfg: ioa config struct
+ * @clr_ints: interrupts to clear
+ *
+ * This function masks all interrupts on the adapter, then clears the
+ * interrupts specified in the mask
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_mask_and_clear_interrupts(struct ipr_ioa_cfg *ioa_cfg,
+ u32 clr_ints)
+{
+ volatile u32 int_reg;
+
+ /* Stop new interrupts */
+ ioa_cfg->allow_interrupts = 0;
+
+ /* Set interrupt mask to stop all new interrupts */
+ writel(~0, ioa_cfg->regs.set_interrupt_mask_reg);
+
+ /* Clear any pending interrupts */
+ writel(clr_ints, ioa_cfg->regs.clr_interrupt_reg);
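+ /* read back to flush the posted writes to the adapter */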
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+}
+
+/**
+ * ipr_save_pcix_cmd_reg - Save PCI-X command register
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_save_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+
+ if (pcix_cmd_reg == 0) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
+ return -EIO;
+ }
+
+ if (pci_read_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ &ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to save PCI-X command register\n");
+ return -EIO;
+ }
+
+ ioa_cfg->saved_pcix_cmd_reg |= PCI_X_CMD_DPERR_E | PCI_X_CMD_ERO;
+ return 0;
+}
+
+/**
+ * ipr_set_pcix_cmd_reg - Setup PCI-X command register
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_set_pcix_cmd_reg(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int pcix_cmd_reg = pci_find_capability(ioa_cfg->pdev, PCI_CAP_ID_PCIX);
+
+ if (pcix_cmd_reg) {
+ if (pci_write_config_word(ioa_cfg->pdev, pcix_cmd_reg,
+ ioa_cfg->saved_pcix_cmd_reg) != PCIBIOS_SUCCESSFUL) {
+ dev_err(&ioa_cfg->pdev->dev, "Failed to setup PCI-X command register\n");
+ return -EIO;
+ }
+ } else {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Failed to setup PCI-X command register\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_scsi_eh_done - mid-layer done function for aborted ops
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler for
+ * ops generated by the SCSI mid-layer which are being aborted.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_eh_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+
+ scsi_cmd->result |= (DID_ERROR << 16);
+
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ scsi_cmd->scsi_done(scsi_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+}
+
+/**
+ * ipr_fail_all_ops - Fails all outstanding ops.
+ * @ioa_cfg: ioa config struct
+ *
+ * This function fails all outstanding ops.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_fail_all_ops(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd, *temp;
+
+ ENTER;
+ list_for_each_entry_safe(ipr_cmd, temp, &ioa_cfg->pending_q, queue) {
+ list_del(&ipr_cmd->queue);
+
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_IOA_WAS_RESET);
+ ipr_cmd->ioasa.ilid = cpu_to_be32(IPR_DRIVER_ILID);
+
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, IPR_IOASC_IOA_WAS_RESET);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->done(ipr_cmd);
+ }
+
+ LEAVE;
+}
+
+/**
+ * ipr_do_req - Send driver initiated requests.
+ * @ipr_cmd: ipr command struct
+ * @done: done function
+ * @timeout_func: timeout function
+ * @timeout: timeout value
+ *
+ * This function sends the specified command to the adapter with the
+ * timeout given. The done function is invoked on command completion.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_do_req(struct ipr_cmnd *ipr_cmd,
+ void (*done) (struct ipr_cmnd *),
+ void (*timeout_func) (struct ipr_cmnd *), u32 timeout)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ ipr_cmd->done = done;
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + timeout;
+ ipr_cmd->timer.function = (void (*)(unsigned long))timeout_func;
+
+ add_timer(&ipr_cmd->timer);
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, 0);
+
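+ /* order the IOARCB updates before ringing the adapter doorbell */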
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+}
+
+/**
+ * ipr_internal_cmd_done - Op done function for an internally generated op.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for an internally generated,
+ * blocking op. It simply wakes the sleeping thread.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_internal_cmd_done(struct ipr_cmnd *ipr_cmd)
+{
+ if (ipr_cmd->sibling)
+ ipr_cmd->sibling = NULL;
+ else
+ complete(&ipr_cmd->completion);
+}
+
+/**
+ * ipr_send_blocking_cmd - Send command and sleep on its completion.
+ * @ipr_cmd: ipr command struct
+ * @timeout_func: function to invoke if command times out
+ * @timeout: timeout
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_send_blocking_cmd(struct ipr_cmnd *ipr_cmd,
+ void (*timeout_func) (struct ipr_cmnd *ipr_cmd),
+ u32 timeout)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ init_completion(&ipr_cmd->completion);
+ ipr_do_req(ipr_cmd, ipr_internal_cmd_done, timeout_func, timeout);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ wait_for_completion(&ipr_cmd->completion);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+}
+
+/**
+ * ipr_send_hcam - Send an HCAM to the adapter.
+ * @ioa_cfg: ioa config struct
+ * @type: HCAM type
+ * @hostrcb: hostrcb struct
+ *
+ * This function will send a Host Controlled Async command to the adapter.
+ * If HCAMs are currently not allowed to be issued to the adapter, it will
+ * place the hostrcb on the free queue.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_send_hcam(struct ipr_ioa_cfg *ioa_cfg, u8 type,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioarcb *ioarcb;
+
+ if (ioa_cfg->allow_cmds) {
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_pending_q);
+
+ ipr_cmd->u.hostrcb = hostrcb;
+ ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_HCAM;
+ ioarcb->cmd_pkt.cdb[0] = IPR_HOST_CONTROLLED_ASYNC;
+ ioarcb->cmd_pkt.cdb[1] = type;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(hostrcb->hcam) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(hostrcb->hcam) & 0xff;
+
+ ioarcb->read_data_transfer_length = cpu_to_be32(sizeof(hostrcb->hcam));
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ipr_cmd->ioadl[0].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | sizeof(hostrcb->hcam));
+ ipr_cmd->ioadl[0].address = cpu_to_be32(hostrcb->hostrcb_dma);
+
+ if (type == IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE)
+ ipr_cmd->done = ipr_process_ccn;
+ else
+ ipr_cmd->done = ipr_process_error;
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_IOA_RES_ADDR);
+
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+ } else {
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_free_q);
+ }
+}
+
+/**
+ * ipr_init_res_entry - Initialize a resource entry struct.
+ * @res: resource entry struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_init_res_entry(struct ipr_resource_entry *res)
+{
+ res->needs_sync_complete = 1;
+ res->in_erp = 0;
+ res->add_to_ml = 0;
+ res->del_from_ml = 0;
+ res->resetting_device = 0;
+ res->tcq_active = 0;
+ res->qdepth = IPR_MAX_CMD_PER_LUN;
+ res->sdev = NULL;
+}
+
+/**
+ * ipr_handle_config_change - Handle a config change from the adapter
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_handle_config_change(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_resource_entry *res = NULL;
+ struct ipr_config_table_entry *cfgte;
+ u32 is_ndn = 1;
+
+ cfgte = &hostrcb->hcam.u.ccn.cfgte;
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!memcmp(&res->cfgte.res_addr, &cfgte->res_addr,
+ sizeof(cfgte->res_addr))) {
+ is_ndn = 0;
+ break;
+ }
+ }
+
+ if (is_ndn) {
+ if (list_empty(&ioa_cfg->free_res_q)) {
+ ipr_send_hcam(ioa_cfg,
+ IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE,
+ hostrcb);
+ return;
+ }
+
+ res = list_entry(ioa_cfg->free_res_q.next,
+ struct ipr_resource_entry, queue);
+
+ list_del(&res->queue);
+ ipr_init_res_entry(res);
+ list_add_tail(&res->queue, &ioa_cfg->used_res_q);
+ }
+
+ memcpy(&res->cfgte, cfgte, sizeof(struct ipr_config_table_entry));
+
+ if (hostrcb->hcam.notify_type == IPR_HOST_RCB_NOTIF_TYPE_REM_ENTRY) {
+ if (res->sdev) {
+ res->sdev->hostdata = NULL;
+ res->del_from_ml = 1;
+ if (ioa_cfg->allow_ml_add_del)
+ schedule_work(&ioa_cfg->work_q);
+ } else
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ } else if (!res->sdev) {
+ res->add_to_ml = 1;
+ if (ioa_cfg->allow_ml_add_del)
+ schedule_work(&ioa_cfg->work_q);
+ }
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+}
+
+/**
+ * ipr_process_ccn - Op done function for a CCN.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for a configuration
+ * change notification HCAM (host controlled async) from the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_process_ccn(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ list_del(&hostrcb->queue);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ if (ioasc) {
+ if (ioasc != IPR_IOASC_IOA_WAS_RESET)
+ dev_err(&ioa_cfg->pdev->dev,
+ "Host RCB failed with IOASC: 0x%08X\n", ioasc);
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+ } else {
+ ipr_handle_config_change(ioa_cfg, hostrcb);
+ }
+}
+
+/**
+ * ipr_log_vpd - Log the passed VPD to the error log.
+ * @vpids: vendor/product id struct
+ * @serial_num: serial number string
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_vpd(struct ipr_std_inq_vpids *vpids, u8 *serial_num)
+{
+ char buffer[IPR_VENDOR_ID_LEN + IPR_PROD_ID_LEN
+ + IPR_SERIAL_NUM_LEN];
+
+ memcpy(buffer, vpids->vendor_id, IPR_VENDOR_ID_LEN);
+ memcpy(buffer + IPR_VENDOR_ID_LEN, vpids->product_id,
+ IPR_PROD_ID_LEN);
+ buffer[IPR_VENDOR_ID_LEN + IPR_PROD_ID_LEN] = '\0';
+ ipr_err("Vendor/Product ID: %s\n", buffer);
+
+ memcpy(buffer, serial_num, IPR_SERIAL_NUM_LEN);
+ buffer[IPR_SERIAL_NUM_LEN] = '\0';
+ ipr_err(" Serial Number: %s\n", buffer);
+}
+
+/**
+ * ipr_log_cache_error - Log a cache error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_cache_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ struct ipr_hostrcb_type_02_error *error =
+ &hostrcb->hcam.u.error.u.type_02_error;
+
+ ipr_err("-----Current Configuration-----\n");
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&error->ioa_vpids, error->ioa_sn);
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&error->cfc_vpids, error->cfc_sn);
+
+ ipr_err("-----Expected Configuration-----\n");
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&error->ioa_last_attached_to_cfc_vpids,
+ error->ioa_last_attached_to_cfc_sn);
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&error->cfc_last_attached_to_ioa_vpids,
+ error->cfc_last_attached_to_ioa_sn);
+
+ ipr_err("Additional IOA Data: %08X %08X %08X\n",
+ be32_to_cpu(error->ioa_data[0]),
+ be32_to_cpu(error->ioa_data[1]),
+ be32_to_cpu(error->ioa_data[2]));
+}
+
+/**
+ * ipr_log_config_error - Log a configuration error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_config_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int errors_logged, i;
+ struct ipr_hostrcb_device_data_entry *dev_entry;
+ struct ipr_hostrcb_type_03_error *error;
+
+ error = &hostrcb->hcam.u.error.u.type_03_error;
+ errors_logged = be32_to_cpu(error->errors_logged);
+
+ ipr_err("Device Errors Detected/Logged: %d/%d\n",
+ be32_to_cpu(error->errors_detected), errors_logged);
+
+ dev_entry = error->dev_entry;
+
+ for (i = 0; i < errors_logged; i++, dev_entry++) {
+ ipr_err_separator;
+
+ if (dev_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Device %d: missing\n", i + 1);
+ } else {
+ ipr_err("Device %d: %d:%d:%d:%d\n", i + 1,
+ ioa_cfg->host->host_no, dev_entry->dev_res_addr.bus,
+ dev_entry->dev_res_addr.target, dev_entry->dev_res_addr.lun);
+ }
+ ipr_log_vpd(&dev_entry->dev_vpids, dev_entry->dev_sn);
+
+ ipr_err("-----New Device Information-----\n");
+ ipr_log_vpd(&dev_entry->new_dev_vpids, dev_entry->new_dev_sn);
+
+ ipr_err("Cache Directory Card Information:\n");
+ ipr_log_vpd(&dev_entry->ioa_last_with_dev_vpids,
+ dev_entry->ioa_last_with_dev_sn);
+
+ ipr_err("Adapter Card Information:\n");
+ ipr_log_vpd(&dev_entry->cfc_last_with_dev_vpids,
+ dev_entry->cfc_last_with_dev_sn);
+
+ ipr_err("Additional IOA Data: %08X %08X %08X %08X %08X\n",
+ be32_to_cpu(dev_entry->ioa_data[0]),
+ be32_to_cpu(dev_entry->ioa_data[1]),
+ be32_to_cpu(dev_entry->ioa_data[2]),
+ be32_to_cpu(dev_entry->ioa_data[3]),
+ be32_to_cpu(dev_entry->ioa_data[4]));
+ }
+}
+
+/**
+ * ipr_log_array_error - Log an array configuration error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_array_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int i;
+ struct ipr_hostrcb_type_04_error *error;
+ struct ipr_hostrcb_array_data_entry *array_entry;
+ u8 zero_sn[IPR_SERIAL_NUM_LEN];
+
+ memset(zero_sn, '0', IPR_SERIAL_NUM_LEN);
+
+ error = &hostrcb->hcam.u.error.u.type_04_error;
+
+ ipr_err_separator;
+
+ ipr_err("RAID %s Array Configuration: %d:%d:%d:%d\n",
+ error->protection_level,
+ ioa_cfg->host->host_no,
+ error->last_func_vset_res_addr.bus,
+ error->last_func_vset_res_addr.target,
+ error->last_func_vset_res_addr.lun);
+
+ ipr_err_separator;
+
+ array_entry = error->array_member;
+
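+ /* up to 18 members; the hostrcb keeps the second half in a
+ * separate array, hence the switch to array_member2 below
+ */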
+ for (i = 0; i < 18; i++) {
+ if (!memcmp(array_entry->serial_num, zero_sn, IPR_SERIAL_NUM_LEN))
+ continue;
+
+ if (error->exposed_mode_adn == i) {
+ ipr_err("Exposed Array Member %d:\n", i);
+ } else {
+ ipr_err("Array Member %d:\n", i);
+ }
+
+ ipr_log_vpd(&array_entry->vpids, array_entry->serial_num);
+
+ if (array_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Current Location: unknown\n");
+ } else {
+ ipr_err("Current Location: %d:%d:%d:%d\n",
+ ioa_cfg->host->host_no,
+ array_entry->dev_res_addr.bus,
+ array_entry->dev_res_addr.target,
+ array_entry->dev_res_addr.lun);
+ }
+
+ if (array_entry->dev_res_addr.bus >= IPR_MAX_NUM_BUSES) {
+ ipr_err("Expected Location: unknown\n");
+ } else {
+ ipr_err("Expected Location: %d:%d:%d:%d\n",
+ ioa_cfg->host->host_no,
+ array_entry->expected_dev_res_addr.bus,
+ array_entry->expected_dev_res_addr.target,
+ array_entry->expected_dev_res_addr.lun);
+ }
+
+ ipr_err_separator;
+
+ if (i == 9)
+ array_entry = error->array_member2;
+ else
+ array_entry++;
+ }
+}
+
+/**
+ * ipr_log_generic_error - Log an adapter error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_log_generic_error(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ int i;
+ int ioa_data_len = be32_to_cpu(hostrcb->hcam.length);
+
+ if (ioa_data_len == 0)
+ return;
+
+ ipr_err("IOA Error Data:\n");
+ ipr_err("Offset 0 1 2 3 4 5 6 7 8 9 A B C D E F\n");
+
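+ /* print 16 bytes (four big-endian words) per line */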
+ for (i = 0; i < ioa_data_len / 4; i += 4) {
+ ipr_err("%08X: %08X %08X %08X %08X\n", i*4,
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+1]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+2]),
+ be32_to_cpu(hostrcb->hcam.u.raw.data[i+3]));
+ }
+}
+
+/**
+ * ipr_get_error - Find the specified IOASC in the ipr_error_table.
+ * @ioasc: IOASC
+ *
+ * This function will return the index into the ipr_error_table
+ * for the specified IOASC. If the IOASC is not in the table,
+ * 0 will be returned, which points to the entry used for unknown errors.
+ *
+ * Return value:
+ * index into the ipr_error_table
+ **/
+static u32 ipr_get_error(u32 ioasc)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(ipr_error_table); i++)
+ if (ipr_error_table[i].ioasc == ioasc)
+ return i;
+
+ return 0;
+}
+
+/**
+ * ipr_handle_log_data - Log an adapter error.
+ * @ioa_cfg: ioa config struct
+ * @hostrcb: hostrcb struct
+ *
+ * This function logs an adapter error to the system.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_handle_log_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_hostrcb *hostrcb)
+{
+ u32 ioasc;
+ int error_index;
+
+ if (hostrcb->hcam.notify_type != IPR_HOST_RCB_NOTIF_TYPE_ERROR_LOG_ENTRY)
+ return;
+
+ if (hostrcb->hcam.notifications_lost == IPR_HOST_RCB_NOTIFICATIONS_LOST)
+ dev_err(&ioa_cfg->pdev->dev, "Error notifications lost\n");
+
+ ioasc = be32_to_cpu(hostrcb->hcam.u.error.failing_dev_ioasc);
+
+ if (ioasc == IPR_IOASC_BUS_WAS_RESET ||
+ ioasc == IPR_IOASC_BUS_WAS_RESET_BY_OTHER) {
+ /* Tell the midlayer we had a bus reset so it will handle the UA properly */
+ scsi_report_bus_reset(ioa_cfg->host,
+ hostrcb->hcam.u.error.failing_dev_res_addr.bus);
+ }
+
+ error_index = ipr_get_error(ioasc);
+
+ if (!ipr_error_table[error_index].log_hcam)
+ return;
+
+ if (ipr_is_device(&hostrcb->hcam.u.error.failing_dev_res_addr)) {
+ ipr_res_err(ioa_cfg, hostrcb->hcam.u.error.failing_dev_res_addr,
+ "%s\n", ipr_error_table[error_index].error);
+ } else {
+ dev_err(&ioa_cfg->pdev->dev, "%s\n",
+ ipr_error_table[error_index].error);
+ }
+
+ /* Set indication we have logged an error */
+ ioa_cfg->errors_logged++;
+
+ if (ioa_cfg->log_level < IPR_DEFAULT_LOG_LEVEL)
+ return;
+
+ switch (hostrcb->hcam.overlay_id) {
+ case IPR_HOST_RCB_OVERLAY_ID_1:
+ ipr_log_generic_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_2:
+ ipr_log_cache_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_3:
+ ipr_log_config_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_4:
+ case IPR_HOST_RCB_OVERLAY_ID_6:
+ ipr_log_array_error(ioa_cfg, hostrcb);
+ break;
+ case IPR_HOST_RCB_OVERLAY_ID_DEFAULT:
+ ipr_log_generic_error(ioa_cfg, hostrcb);
+ break;
+ default:
+ dev_err(&ioa_cfg->pdev->dev,
+ "Unknown error received. Overlay ID: %d\n",
+ hostrcb->hcam.overlay_id);
+ break;
+ }
+}
+
+/**
+ * ipr_process_error - Op done function for an adapter error log.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for an error log HCAM (host
+ * controlled async) from the adapter. It will log the error and
+ * send the HCAM back to the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_process_error(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_hostrcb *hostrcb = ipr_cmd->u.hostrcb;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ list_del(&hostrcb->queue);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ if (!ioasc) {
+ ipr_handle_log_data(ioa_cfg, hostrcb);
+ } else if (ioasc != IPR_IOASC_IOA_WAS_RESET) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Host RCB failed with IOASC: 0x%08X\n", ioasc);
+ }
+
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb);
+}
+
+/**
+ * ipr_timeout - An internally generated op has timed out.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function blocks host requests and initiates an
+ * adapter reset.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_timeout(struct ipr_cmnd *ipr_cmd)
+{
+ unsigned long lock_flags = 0;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter being reset due to command timeout.\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ if (!ioa_cfg->in_reset_reload || ioa_cfg->reset_cmd == ipr_cmd)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+/**
+ * ipr_reset_reload - Reset/Reload the IOA
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * This function resets the adapter and re-initializes it.
+ * This function assumes that all new host commands have been stopped.
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_reset_reload(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ if (!ioa_cfg->in_reset_reload)
+ ipr_initiate_ioa_reset(ioa_cfg, shutdown_type);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+
+ /* If we got hit with a host reset while we were already resetting
+ the adapter for some reason, and that reset failed, the IOA is
+ dead and we must fail the reset/reload. */
+ if (ioa_cfg->ioa_is_dead) {
+ ipr_trace;
+ return FAILED;
+ }
+
+ return SUCCESS;
+}
+
+/**
+ * ipr_find_ses_entry - Find matching SES in SES table
+ * @res: resource entry struct of SES
+ *
+ * Return value:
+ * pointer to SES table entry / NULL on failure
+ **/
+static const struct ipr_ses_table_entry *
+ipr_find_ses_entry(struct ipr_resource_entry *res)
+{
+ int i, j, matches;
+ const struct ipr_ses_table_entry *ste = ipr_ses_table;
+
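+ /* 'X' in the compare mask requires the product ID byte to match;
+ * any other mask character (e.g. '*') is a wildcard
+ */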
+ for (i = 0; i < ARRAY_SIZE(ipr_ses_table); i++, ste++) {
+ for (j = 0, matches = 0; j < IPR_PROD_ID_LEN; j++) {
+ if (ste->compare_product_id_byte[j] == 'X') {
+ if (res->cfgte.std_inq_data.vpids.product_id[j] == ste->product_id[j])
+ matches++;
+ else
+ break;
+ } else
+ matches++;
+ }
+
+ if (matches == IPR_PROD_ID_LEN)
+ return ste;
+ }
+
+ return NULL;
+}
+
+/**
+ * ipr_get_max_scsi_speed - Determine max SCSI speed for a given bus
+ * @ioa_cfg: ioa config struct
+ * @bus: SCSI bus
+ * @bus_width: bus width
+ *
+ * Return value:
+ * SCSI bus speed in units of 100KHz, 1600 is 160 MHz
+ * For a 2-byte wide SCSI bus, the maximum transfer speed is
+ * twice the maximum transfer rate (e.g. for a wide enabled bus,
+ * max 160MHz = max 320MB/sec).
+ **/
+static u32 ipr_get_max_scsi_speed(struct ipr_ioa_cfg *ioa_cfg, u8 bus, u8 bus_width)
+{
+ struct ipr_resource_entry *res;
+ const struct ipr_ses_table_entry *ste;
+ u32 max_xfer_rate = IPR_MAX_SCSI_RATE(bus_width);
+
+ /* Loop through each config table entry in the config table buffer */
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!(IPR_IS_SES_DEVICE(res->cfgte.std_inq_data)))
+ continue;
+
+ if (bus != res->cfgte.res_addr.bus)
+ continue;
+
+ if (!(ste = ipr_find_ses_entry(res)))
+ continue;
+
+ max_xfer_rate = (ste->max_bus_speed_limit * 10) / (bus_width / 8);
+ }
+
+ return max_xfer_rate;
+}
+
+/**
+ * ipr_wait_iodbg_ack - Wait for an IODEBUG ACK from the IOA
+ * @ioa_cfg: ioa config struct
+ * @max_delay: max delay in micro-seconds to wait
+ *
+ * Waits for an IODEBUG ACK from the IOA, doing busy looping.
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_wait_iodbg_ack(struct ipr_ioa_cfg *ioa_cfg, int max_delay)
+{
+ volatile u32 pcii_reg;
+ int delay = 1;
+
+ /* Read interrupt reg until IOA signals IO Debug Acknowledge */
+ while (delay < max_delay) {
+ pcii_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
+ if (pcii_reg & IPR_PCII_IO_DEBUG_ACKNOWLEDGE)
+ return 0;
+
+ /* udelay cannot be used if delay is more than a few milliseconds */
+ if ((delay / 1000) > MAX_UDELAY_MS)
+ mdelay(delay / 1000);
+ else
+ udelay(delay);
+
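+ /* exponential backoff: double the delay on each pass */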
+ delay += delay;
+ }
+ return -EIO;
+}
+
+/**
+ * ipr_get_ldump_data_section - Dump IOA memory
+ * @ioa_cfg: ioa config struct
+ * @start_addr: adapter address to dump
+ * @dest: destination kernel buffer
+ * @length_in_words: length to dump in 4 byte words
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_get_ldump_data_section(struct ipr_ioa_cfg *ioa_cfg,
+ u32 start_addr,
+ u32 *dest, u32 length_in_words)
+{
+ volatile u32 temp_pcii_reg;
+ int i, delay = 0;
+
+ /* Write IOA interrupt reg starting LDUMP state */
+ writel((IPR_UPROCI_RESET_ALERT | IPR_UPROCI_IO_DEBUG_ALERT),
+ ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ /* Wait for IO debug acknowledge */
+ if (ipr_wait_iodbg_ack(ioa_cfg,
+ IPR_LDUMP_MAX_LONG_ACK_DELAY_IN_USEC)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA dump long data transfer timeout\n");
+ return -EIO;
+ }
+
+ /* Signal LDUMP interlocked - clear IO debug ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+
+ /* Write Mailbox with starting address */
+ writel(start_addr, ioa_cfg->ioa_mailbox);
+
+ /* Signal address valid - clear IOA Reset alert */
+ writel(IPR_UPROCI_RESET_ALERT,
+ ioa_cfg->regs.clr_uproc_interrupt_reg);
+
+ for (i = 0; i < length_in_words; i++) {
+ /* Wait for IO debug acknowledge */
+ if (ipr_wait_iodbg_ack(ioa_cfg,
+ IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA dump short data transfer timeout\n");
+ return -EIO;
+ }
+
+ /* Read data from mailbox and increment destination pointer */
+ *dest = cpu_to_be32(readl(ioa_cfg->ioa_mailbox));
+ dest++;
+
+ /* For all but the last word of data, signal data received */
+ if (i < (length_in_words - 1)) {
+ /* Signal dump data received - Clear IO debug Ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+ }
+ }
+
+ /* Signal end of block transfer. Set reset alert then clear IO debug ack */
+ writel(IPR_UPROCI_RESET_ALERT,
+ ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ writel(IPR_UPROCI_IO_DEBUG_ALERT,
+ ioa_cfg->regs.clr_uproc_interrupt_reg);
+
+ /* Signal dump data received - Clear IO debug Ack */
+ writel(IPR_PCII_IO_DEBUG_ACKNOWLEDGE,
+ ioa_cfg->regs.clr_interrupt_reg);
+
+ /* Wait for IOA to signal LDUMP exit - IOA reset alert will be cleared */
+ while (delay < IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC) {
+ temp_pcii_reg =
+ readl(ioa_cfg->regs.sense_uproc_interrupt_reg);
+
+ if (!(temp_pcii_reg & IPR_UPROCI_RESET_ALERT))
+ return 0;
+
+ udelay(10);
+ delay += 10;
+ }
+
+ return 0;
+}
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+/**
+ * ipr_sdt_copy - Copy Smart Dump Table to kernel buffer
+ * @ioa_cfg: ioa config struct
+ * @pci_address: adapter address
+ * @length: length of data to copy
+ *
+ * Copy data from PCI adapter to kernel buffer.
+ * Note: length MUST be a 4 byte multiple
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_sdt_copy(struct ipr_ioa_cfg *ioa_cfg,
+ unsigned long pci_address, u32 length)
+{
+ int bytes_copied = 0;
+ int cur_len, rc, rem_len, rem_page_len;
+ u32 *page;
+ unsigned long lock_flags = 0;
+ struct ipr_ioa_dump *ioa_dump = &ioa_cfg->dump->ioa_dump;
+
+ while (bytes_copied < length &&
+ (ioa_dump->hdr.len + bytes_copied) < IPR_MAX_IOA_DUMP_SIZE) {
+ if (ioa_dump->page_offset >= PAGE_SIZE ||
+ ioa_dump->page_offset == 0) {
+ page = (u32 *)__get_free_page(GFP_ATOMIC);
+
+ if (!page) {
+ ipr_trace;
+ return bytes_copied;
+ }
+
+ ioa_dump->page_offset = 0;
+ ioa_dump->ioa_data[ioa_dump->next_page_index] = page;
+ ioa_dump->next_page_index++;
+ } else
+ page = ioa_dump->ioa_data[ioa_dump->next_page_index - 1];
+
+ rem_len = length - bytes_copied;
+ rem_page_len = PAGE_SIZE - ioa_dump->page_offset;
+ cur_len = min(rem_len, rem_page_len);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->sdt_state == ABORT_DUMP) {
+ rc = -EIO;
+ } else {
+ rc = ipr_get_ldump_data_section(ioa_cfg,
+ pci_address + bytes_copied,
+ &page[ioa_dump->page_offset / 4],
+ (cur_len / sizeof(u32)));
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ if (!rc) {
+ ioa_dump->page_offset += cur_len;
+ bytes_copied += cur_len;
+ } else {
+ ipr_trace;
+ break;
+ }
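+ /* yield between sections; a full IOA dump can take a while */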
+ schedule();
+ }
+
+ return bytes_copied;
+}
+
+/**
+ * ipr_init_dump_entry_hdr - Initialize a dump entry header.
+ * @hdr: dump entry header struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_init_dump_entry_hdr(struct ipr_dump_entry_header *hdr)
+{
+ hdr->eye_catcher = IPR_DUMP_EYE_CATCHER;
+ hdr->num_elems = 1;
+ hdr->offset = sizeof(*hdr);
+ hdr->status = IPR_DUMP_STATUS_SUCCESS;
+}
+
+/**
+ * ipr_dump_ioa_type_data - Fill in the adapter type in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_ioa_type_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+
+ ipr_init_dump_entry_hdr(&driver_dump->ioa_type_entry.hdr);
+ driver_dump->ioa_type_entry.hdr.len =
+ sizeof(struct ipr_dump_ioa_type_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->ioa_type_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ driver_dump->ioa_type_entry.hdr.id = IPR_DUMP_DRIVER_TYPE_ID;
+ driver_dump->ioa_type_entry.type = ioa_cfg->type;
+ driver_dump->ioa_type_entry.fw_version = (ucode_vpd->major_release << 24) |
+ (ucode_vpd->card_type << 16) | (ucode_vpd->minor_release[0] << 8) |
+ ucode_vpd->minor_release[1];
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_version_data - Fill in the driver version in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_version_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->version_entry.hdr);
+ driver_dump->version_entry.hdr.len =
+ sizeof(struct ipr_dump_version_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->version_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_ASCII;
+ driver_dump->version_entry.hdr.id = IPR_DUMP_DRIVER_VERSION_ID;
+ strcpy(driver_dump->version_entry.version, IPR_DRIVER_VERSION);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_trace_data - Fill in the IOA trace in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_trace_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->trace_entry.hdr);
+ driver_dump->trace_entry.hdr.len =
+ sizeof(struct ipr_dump_trace_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->trace_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ driver_dump->trace_entry.hdr.id = IPR_DUMP_TRACE_ID;
+ memcpy(driver_dump->trace_entry.trace, ioa_cfg->trace, IPR_TRACE_SIZE);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_dump_location_data - Fill in the IOA location in the dump.
+ * @ioa_cfg: ioa config struct
+ * @driver_dump: driver dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_dump_location_data(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_driver_dump *driver_dump)
+{
+ ipr_init_dump_entry_hdr(&driver_dump->location_entry.hdr);
+ driver_dump->location_entry.hdr.len =
+ sizeof(struct ipr_dump_location_entry) -
+ sizeof(struct ipr_dump_entry_header);
+ driver_dump->location_entry.hdr.data_type = IPR_DUMP_DATA_TYPE_ASCII;
+ driver_dump->location_entry.hdr.id = IPR_DUMP_LOCATION_ID;
+ strcpy(driver_dump->location_entry.location, ioa_cfg->pdev->dev.bus_id);
+ driver_dump->hdr.num_entries++;
+}
+
+/**
+ * ipr_get_ioa_dump - Perform a dump of the driver and adapter.
+ * @ioa_cfg: ioa config struct
+ * @dump: dump struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_get_ioa_dump(struct ipr_ioa_cfg *ioa_cfg, struct ipr_dump *dump)
+{
+ unsigned long start_addr, sdt_word;
+ unsigned long lock_flags = 0;
+ struct ipr_driver_dump *driver_dump = &dump->driver_dump;
+ struct ipr_ioa_dump *ioa_dump = &dump->ioa_dump;
+ u32 num_entries, start_off, end_off;
+ u32 bytes_to_copy, bytes_copied, rc;
+ struct ipr_sdt *sdt;
+ int i;
+
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->sdt_state != GET_DUMP) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ start_addr = readl(ioa_cfg->ioa_mailbox);
+
+ if (!ipr_sdt_is_fmt2(start_addr)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Invalid dump table format: %lx\n", start_addr);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ dev_err(&ioa_cfg->pdev->dev, "Dump of IOA initiated\n");
+
+ driver_dump->hdr.eye_catcher = IPR_DUMP_EYE_CATCHER;
+
+ /* Initialize the overall dump header */
+ driver_dump->hdr.len = sizeof(struct ipr_driver_dump);
+ driver_dump->hdr.num_entries = 1;
+ driver_dump->hdr.first_entry_offset = sizeof(struct ipr_dump_header);
+ driver_dump->hdr.status = IPR_DUMP_STATUS_SUCCESS;
+ driver_dump->hdr.os = IPR_DUMP_OS_LINUX;
+ driver_dump->hdr.driver_name = IPR_DUMP_DRIVER_NAME;
+
+ ipr_dump_version_data(ioa_cfg, driver_dump);
+ ipr_dump_location_data(ioa_cfg, driver_dump);
+ ipr_dump_ioa_type_data(ioa_cfg, driver_dump);
+ ipr_dump_trace_data(ioa_cfg, driver_dump);
+
+ /* Update dump_header */
+ driver_dump->hdr.len += sizeof(struct ipr_dump_entry_header);
+
+ /* IOA Dump entry */
+ ipr_init_dump_entry_hdr(&ioa_dump->hdr);
+ ioa_dump->format = IPR_SDT_FMT2;
+ ioa_dump->hdr.len = 0;
+ ioa_dump->hdr.data_type = IPR_DUMP_DATA_TYPE_BINARY;
+ ioa_dump->hdr.id = IPR_DUMP_IOA_DUMP_ID;
+
+ /* First entries in sdt are actually a list of dump addresses and
+ lengths to gather the real dump data. sdt represents the pointer
+ to the ioa generated dump table. Dump data will be extracted based
+ on entries in this table */
+ sdt = &ioa_dump->sdt;
+
+ rc = ipr_get_ldump_data_section(ioa_cfg, start_addr, (u32 *)sdt,
+ sizeof(struct ipr_sdt) / sizeof(u32));
+
+ /* Bail out unless the Smart Dump Table was read and is ready to use */
+ if (rc || (be32_to_cpu(sdt->hdr.state) != IPR_FMT2_SDT_READY_TO_USE)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Dump of IOA failed. Dump table not valid: %d, %X.\n",
+ rc, be32_to_cpu(sdt->hdr.state));
+ driver_dump->hdr.status = IPR_DUMP_STATUS_FAILED;
+ ioa_cfg->sdt_state = DUMP_OBTAINED;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ num_entries = be32_to_cpu(sdt->hdr.num_entries_used);
+
+ if (num_entries > IPR_NUM_SDT_ENTRIES)
+ num_entries = IPR_NUM_SDT_ENTRIES;
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ for (i = 0; i < num_entries; i++) {
+ if (ioa_dump->hdr.len > IPR_MAX_IOA_DUMP_SIZE) {
+ driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS;
+ break;
+ }
+
+ if (sdt->entry[i].flags & IPR_SDT_VALID_ENTRY) {
+ sdt_word = be32_to_cpu(sdt->entry[i].bar_str_offset);
+ start_off = sdt_word & IPR_FMT2_MBX_ADDR_MASK;
+ end_off = be32_to_cpu(sdt->entry[i].end_offset);
+
+ if (ipr_sdt_is_fmt2(sdt_word) && sdt_word) {
+ bytes_to_copy = end_off - start_off;
+ if (bytes_to_copy > IPR_MAX_IOA_DUMP_SIZE) {
+ sdt->entry[i].flags &= ~IPR_SDT_VALID_ENTRY;
+ continue;
+ }
+
+ /* Copy data from adapter to driver buffers */
+ bytes_copied = ipr_sdt_copy(ioa_cfg, sdt_word,
+ bytes_to_copy);
+
+ ioa_dump->hdr.len += bytes_copied;
+
+ if (bytes_copied != bytes_to_copy) {
+ driver_dump->hdr.status = IPR_DUMP_STATUS_QUAL_SUCCESS;
+ break;
+ }
+ }
+ }
+ }
+
+ dev_err(&ioa_cfg->pdev->dev, "Dump of IOA completed.\n");
+
+ /* Update dump_header */
+ driver_dump->hdr.len += ioa_dump->hdr.len;
+ wmb();
+ ioa_cfg->sdt_state = DUMP_OBTAINED;
+ LEAVE;
+}
+
+#else
+#define ipr_get_ioa_dump(ioa_cfg, dump) do { } while(0)
+#endif
+
+/**
+ * ipr_release_dump - Free adapter dump memory
+ * @kref: kref struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_release_dump(struct kref *kref)
+{
+ struct ipr_dump *dump = container_of(kref,struct ipr_dump,kref);
+ struct ipr_ioa_cfg *ioa_cfg = dump->ioa_cfg;
+ unsigned long lock_flags = 0;
+ int i;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->dump = NULL;
+ ioa_cfg->sdt_state = INACTIVE;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ for (i = 0; i < dump->ioa_dump.next_page_index; i++)
+ free_page((unsigned long) dump->ioa_dump.ioa_data[i]);
+
+ kfree(dump);
+ LEAVE;
+}
+
+/**
+ * ipr_worker_thread - Worker thread
+ * @data: ioa config struct
+ *
+ * Called at task level from a work thread. This function takes care
+ * of adding and removing devices from the mid-layer as configuration
+ * changes are detected by the adapter.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_worker_thread(void *data)
+{
+ unsigned long lock_flags;
+ struct ipr_resource_entry *res;
+ struct scsi_device *sdev;
+ struct ipr_dump *dump;
+ struct ipr_ioa_cfg *ioa_cfg = data;
+ u8 bus, target, lun;
+ int did_work;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->sdt_state == GET_DUMP) {
+ dump = ioa_cfg->dump;
+ if (!dump) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+ kref_get(&dump->kref);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ ipr_get_ioa_dump(ioa_cfg, dump);
+ kref_put(&dump->kref, ipr_release_dump);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->sdt_state == DUMP_OBTAINED)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
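+ /* the host lock is dropped around scsi_add_device() and
+ * scsi_remove_device(), so rescan from the top whenever the
+ * resource list may have changed
+ */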
+restart:
+ do {
+ did_work = 0;
+ if (!ioa_cfg->allow_cmds || !ioa_cfg->allow_ml_add_del) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (res->del_from_ml && res->sdev) {
+ did_work = 1;
+ sdev = res->sdev;
+ if (!scsi_device_get(sdev)) {
+ res->sdev = NULL;
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_remove_device(sdev);
+ scsi_device_put(sdev);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ }
+ break;
+ }
+ }
+ } while(did_work);
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (res->add_to_ml) {
+ bus = res->cfgte.res_addr.bus;
+ target = res->cfgte.res_addr.target;
+ lun = res->cfgte.res_addr.lun;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_add_device(ioa_cfg->host, bus, target, lun);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ goto restart;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+/**
+ * ipr_read_trace - Dump the adapter trace
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_read_trace(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int size = IPR_TRACE_SIZE;
+ char *src = (char *)ioa_cfg->trace;
+
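+	/* Clamp the request to the bounds of the fixed-size trace buffer */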
+ if (off > size)
+ return 0;
+ if (off + count > size) {
+ size -= off;
+ count = size;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ memcpy(buf, &src[off], count);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return count;
+}
+
+static struct bin_attribute ipr_trace_attr = {
+ .attr = {
+ .name = "trace",
+ .mode = S_IRUGO,
+ },
+ .size = 0,
+ .read = ipr_read_trace,
+};
+#endif
+
+/**
+ * ipr_show_fw_version - Show the firmware version
+ * @class_dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_fw_version(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+ unsigned long lock_flags = 0;
+ int len;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ len = snprintf(buf, PAGE_SIZE, "%02X%02X%02X%02X\n",
+ ucode_vpd->major_release, ucode_vpd->card_type,
+ ucode_vpd->minor_release[0],
+ ucode_vpd->minor_release[1]);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+static struct class_device_attribute ipr_fw_version_attr = {
+ .attr = {
+ .name = "fw_version",
+ .mode = S_IRUGO,
+ },
+ .show = ipr_show_fw_version,
+};
+
+/**
+ * ipr_show_log_level - Show the adapter's error logging level
+ * @class_dev: class device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_log_level(struct class_device *class_dev, char *buf)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int len;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ len = snprintf(buf, PAGE_SIZE, "%d\n", ioa_cfg->log_level);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+/**
+ * ipr_store_log_level - Change the adapter's error logging level
+ * @class_dev: class device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * 	number of bytes consumed from buffer
+ **/
+static ssize_t ipr_store_log_level(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->log_level = simple_strtoul(buf, NULL, 10);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return strlen(buf);
+}
+
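+/*
+ * Usage sketch (hypothetical sysfs path; the host number varies):
+ *	echo 4 > /sys/class/scsi_host/host0/log_level
+ */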
+static struct class_device_attribute ipr_log_level_attr = {
+ .attr = {
+ .name = "log_level",
+ .mode = S_IRUGO | S_IWUSR,
+ },
+ .show = ipr_show_log_level,
+ .store = ipr_store_log_level
+};
+
+/**
+ * ipr_store_diagnostics - IOA Diagnostics interface
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will reset the adapter and wait a reasonable
+ * amount of time for any errors that the adapter might log.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_diagnostics(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags = 0;
+ int rc = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->errors_logged = 0;
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+
+ if (ioa_cfg->in_reset_reload) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ /* Wait for a second for any errors to be logged */
+ msleep(1000);
+ } else {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return -EIO;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ioa_cfg->in_reset_reload || ioa_cfg->errors_logged)
+ rc = -EIO;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ return rc;
+}
+
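+/*
+ * Usage sketch (hypothetical sysfs path):
+ *	echo 1 > /sys/class/scsi_host/host0/run_diagnostics
+ * The write fails with EIO if the reset did not complete cleanly or the
+ * adapter logged any errors while resetting.
+ */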
+static struct class_device_attribute ipr_diagnostics_attr = {
+ .attr = {
+ .name = "run_diagnostics",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_diagnostics
+};
+
+/**
+ * ipr_store_reset_adapter - Reset the adapter
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will reset the adapter.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_reset_adapter(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ unsigned long lock_flags;
+ int result = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (!ioa_cfg->in_reset_reload)
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ return result;
+}
+
+static struct class_device_attribute ipr_ioa_reset_attr = {
+ .attr = {
+ .name = "reset_host",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_reset_adapter
+};
+
+/**
+ * ipr_alloc_ucode_buffer - Allocates a microcode download buffer
+ * @buf_len: buffer length
+ *
+ * Allocates a DMA'able buffer in chunks and assembles a scatter/gather
+ * list to use for microcode download
+ *
+ * Return value:
+ * pointer to sglist / NULL on failure
+ **/
+static struct ipr_sglist *ipr_alloc_ucode_buffer(int buf_len)
+{
+ int sg_size, order, bsize_elem, num_elem, i, j;
+ struct ipr_sglist *sglist;
+ struct scatterlist *scatterlist;
+ struct page *page;
+
+ /* Get the minimum size per scatter/gather element */
+ sg_size = buf_len / (IPR_MAX_SGLIST - 1);
+
+ /* Get the actual size per element */
+ order = get_order(sg_size);
+
+ /* Determine the actual number of bytes per element */
+ bsize_elem = PAGE_SIZE * (1 << order);
+
+ /* Determine the actual number of sg entries needed */
+ if (buf_len % bsize_elem)
+ num_elem = (buf_len / bsize_elem) + 1;
+ else
+ num_elem = buf_len / bsize_elem;
+
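+	/*
+	 * Worked example, assuming IPR_MAX_SGLIST is 64 and PAGE_SIZE is
+	 * 4K: a 1MB image yields an sg_size of ~16.6K, so order = 3 (32K
+	 * per element) and num_elem = 32 scatter/gather entries.
+	 */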
+ /* Allocate a scatter/gather list for the DMA */
+ sglist = kmalloc(sizeof(struct ipr_sglist) +
+ (sizeof(struct scatterlist) * (num_elem - 1)),
+ GFP_KERNEL);
+
+ if (sglist == NULL) {
+ ipr_trace;
+ return NULL;
+ }
+
+ memset(sglist, 0, sizeof(struct ipr_sglist) +
+ (sizeof(struct scatterlist) * (num_elem - 1)));
+
+ scatterlist = sglist->scatterlist;
+
+ sglist->order = order;
+ sglist->num_sg = num_elem;
+
+ /* Allocate a bunch of sg elements */
+ for (i = 0; i < num_elem; i++) {
+ page = alloc_pages(GFP_KERNEL, order);
+ if (!page) {
+ ipr_trace;
+
+ /* Free up what we already allocated */
+ for (j = i - 1; j >= 0; j--)
+ __free_pages(scatterlist[j].page, order);
+ kfree(sglist);
+ return NULL;
+ }
+
+ scatterlist[i].page = page;
+ }
+
+ return sglist;
+}
+
+/**
+ * ipr_free_ucode_buffer - Frees a microcode download buffer
+ * @sglist: scatter/gather list pointer
+ *
+ * Free a DMA'able ucode download buffer previously allocated with
+ * ipr_alloc_ucode_buffer
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_free_ucode_buffer(struct ipr_sglist *sglist)
+{
+ int i;
+
+ for (i = 0; i < sglist->num_sg; i++)
+ __free_pages(sglist->scatterlist[i].page, sglist->order);
+
+ kfree(sglist);
+}
+
+/**
+ * ipr_copy_ucode_buffer - Copy user buffer to kernel buffer
+ * @sglist: scatter/gather list pointer
+ * @buffer: buffer pointer
+ * @len: buffer length
+ *
+ * Copy a microcode image from a user buffer into a buffer allocated by
+ * ipr_alloc_ucode_buffer
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_copy_ucode_buffer(struct ipr_sglist *sglist,
+ u8 *buffer, u32 len)
+{
+ int bsize_elem, i, result = 0;
+ struct scatterlist *scatterlist;
+ void *kaddr;
+
+ /* Determine the actual number of bytes per element */
+ bsize_elem = PAGE_SIZE * (1 << sglist->order);
+
+ scatterlist = sglist->scatterlist;
+
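+	/* Copy the image into each chunk through a temporary kernel mapping */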
+ for (i = 0; i < (len / bsize_elem); i++, buffer += bsize_elem) {
+ kaddr = kmap(scatterlist[i].page);
+ memcpy(kaddr, buffer, bsize_elem);
+ kunmap(scatterlist[i].page);
+
+ scatterlist[i].length = bsize_elem;
+ }
+
+ if (len % bsize_elem) {
+ kaddr = kmap(scatterlist[i].page);
+ memcpy(kaddr, buffer, len % bsize_elem);
+ kunmap(scatterlist[i].page);
+
+ scatterlist[i].length = len % bsize_elem;
+ }
+
+ sglist->buffer_len = len;
+ return result;
+}
+
+/**
+ * ipr_map_ucode_buffer - Map a microcode download buffer
+ * @ipr_cmd: ipr command struct
+ * @sglist: scatter/gather list
+ * @len: total length of download buffer
+ *
+ * Maps a microcode download scatter/gather list for DMA and
+ * builds the IOADL.
+ *
+ * Return value:
+ * 0 on success / -EIO on failure
+ **/
+static int ipr_map_ucode_buffer(struct ipr_cmnd *ipr_cmd,
+ struct ipr_sglist *sglist, int len)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct scatterlist *scatterlist = sglist->scatterlist;
+ int i;
+
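+	/*
+	 * Map the chunks for DMA and describe each one to the adapter with
+	 * an IOADL entry; the final entry is flagged LAST so the adapter
+	 * knows where the list ends.
+	 */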
+ ipr_cmd->dma_use_sg = pci_map_sg(ioa_cfg->pdev, scatterlist,
+ sglist->num_sg, DMA_TO_DEVICE);
+
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(len);
+ ioarcb->write_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+
+ for (i = 0; i < ipr_cmd->dma_use_sg; i++) {
+ ioadl[i].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_WRITE | sg_dma_len(&scatterlist[i]));
+ ioadl[i].address =
+ cpu_to_be32(sg_dma_address(&scatterlist[i]));
+ }
+
+ if (likely(ipr_cmd->dma_use_sg)) {
+ ioadl[i-1].flags_and_data_len |=
+ cpu_to_be32(IPR_IOADL_FLAGS_LAST);
+ }
+ else {
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
+ return -EIO;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_store_update_fw - Update the firmware on the adapter
+ * @class_dev: class_device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * This function will update the firmware on the adapter.
+ *
+ * Return value:
+ * count on success / other on failure
+ **/
+static ssize_t ipr_store_update_fw(struct class_device *class_dev,
+ const char *buf, size_t count)
+{
+ struct Scsi_Host *shost = class_to_shost(class_dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_ucode_image_header *image_hdr;
+ const struct firmware *fw_entry;
+ struct ipr_sglist *sglist;
+ unsigned long lock_flags;
+ char fname[100];
+ char *src;
+ int len, result, dnld_size;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
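+	/*
+	 * The buffer holds the microcode file name as written to sysfs and
+	 * is assumed to carry a trailing newline, which is stripped below.
+	 */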
+ len = snprintf(fname, 99, "%s", buf);
+ fname[len-1] = '\0';
+
+	if (request_firmware(&fw_entry, fname, &ioa_cfg->pdev->dev)) {
+ dev_err(&ioa_cfg->pdev->dev, "Firmware file %s not found\n", fname);
+ return -EIO;
+ }
+
+ image_hdr = (struct ipr_ucode_image_header *)fw_entry->data;
+
+ if (be32_to_cpu(image_hdr->header_length) > fw_entry->size ||
+ (ioa_cfg->vpd_cbs->page3_data.card_type &&
+ ioa_cfg->vpd_cbs->page3_data.card_type != image_hdr->card_type)) {
+ dev_err(&ioa_cfg->pdev->dev, "Invalid microcode buffer\n");
+ release_firmware(fw_entry);
+ return -EINVAL;
+ }
+
+ src = (u8 *)image_hdr + be32_to_cpu(image_hdr->header_length);
+ dnld_size = fw_entry->size - be32_to_cpu(image_hdr->header_length);
+ sglist = ipr_alloc_ucode_buffer(dnld_size);
+
+ if (!sglist) {
+ dev_err(&ioa_cfg->pdev->dev, "Microcode buffer allocation failed\n");
+ release_firmware(fw_entry);
+ return -ENOMEM;
+ }
+
+ result = ipr_copy_ucode_buffer(sglist, src, dnld_size);
+
+ if (result) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Microcode buffer copy to DMA buffer failed\n");
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+ return result;
+ }
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->ucode_sglist) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ dev_err(&ioa_cfg->pdev->dev,
+ "Microcode download already in progress\n");
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+ return -EIO;
+ }
+
+ ioa_cfg->ucode_sglist = sglist;
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ioa_cfg->ucode_sglist = NULL;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ ipr_free_ucode_buffer(sglist);
+ release_firmware(fw_entry);
+
+ return count;
+}
+
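+/*
+ * Usage sketch (hypothetical sysfs path and image name):
+ *	echo ucode.bin > /sys/class/scsi_host/host0/update_fw
+ * The image is fetched with request_firmware(), so it must be visible to
+ * the userspace firmware loading support (e.g. under /lib/firmware).
+ */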
+static struct class_device_attribute ipr_update_fw_attr = {
+ .attr = {
+ .name = "update_fw",
+ .mode = S_IWUSR,
+ },
+ .store = ipr_store_update_fw
+};
+
+static struct class_device_attribute *ipr_ioa_attrs[] = {
+ &ipr_fw_version_attr,
+ &ipr_log_level_attr,
+ &ipr_diagnostics_attr,
+ &ipr_ioa_reset_attr,
+ &ipr_update_fw_attr,
+ NULL,
+};
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+/**
+ * ipr_read_dump - Dump the adapter
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_read_dump(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+ char *src;
+ int len;
+ size_t rc = count;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ dump = ioa_cfg->dump;
+
+ if (ioa_cfg->sdt_state != DUMP_OBTAINED || !dump) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+ }
+ kref_get(&dump->kref);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ if (off > dump->driver_dump.hdr.len) {
+ kref_put(&dump->kref, ipr_release_dump);
+ return 0;
+ }
+
+ if (off + count > dump->driver_dump.hdr.len) {
+ count = dump->driver_dump.hdr.len - off;
+ rc = count;
+ }
+
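+	/*
+	 * The dump is presented as three consecutive regions: the driver
+	 * dump header, the IOA dump header, and the page list holding the
+	 * IOA dump data itself. Copy from whichever regions the requested
+	 * offset and count fall into.
+	 */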
+ if (count && off < sizeof(dump->driver_dump)) {
+ if (off + count > sizeof(dump->driver_dump))
+ len = sizeof(dump->driver_dump) - off;
+ else
+ len = count;
+ src = (u8 *)&dump->driver_dump + off;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ off -= sizeof(dump->driver_dump);
+
+ if (count && off < offsetof(struct ipr_ioa_dump, ioa_data)) {
+ if (off + count > offsetof(struct ipr_ioa_dump, ioa_data))
+ len = offsetof(struct ipr_ioa_dump, ioa_data) - off;
+ else
+ len = count;
+ src = (u8 *)&dump->ioa_dump + off;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ off -= offsetof(struct ipr_ioa_dump, ioa_data);
+
+ while (count) {
+ if ((off & PAGE_MASK) != ((off + count) & PAGE_MASK))
+ len = PAGE_ALIGN(off) - off;
+ else
+ len = count;
+ src = (u8 *)dump->ioa_dump.ioa_data[(off & PAGE_MASK) >> PAGE_SHIFT];
+ src += off & ~PAGE_MASK;
+ memcpy(buf, src, len);
+ buf += len;
+ off += len;
+ count -= len;
+ }
+
+ kref_put(&dump->kref, ipr_release_dump);
+ return rc;
+}
+
+/**
+ * ipr_alloc_dump - Prepare for adapter dump
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_alloc_dump(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+ dump = kmalloc(sizeof(struct ipr_dump), GFP_KERNEL);
+
+ if (!dump) {
+ ipr_err("Dump memory allocation failed\n");
+ return -ENOMEM;
+ }
+
+ memset(dump, 0, sizeof(struct ipr_dump));
+ kref_init(&dump->kref);
+ dump->ioa_cfg = ioa_cfg;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (INACTIVE != ioa_cfg->sdt_state) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ kfree(dump);
+ return 0;
+ }
+
+ ioa_cfg->dump = dump;
+ ioa_cfg->sdt_state = WAIT_FOR_DUMP;
+ if (ioa_cfg->ioa_is_dead && !ioa_cfg->dump_taken) {
+ ioa_cfg->dump_taken = 1;
+ schedule_work(&ioa_cfg->work_q);
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ LEAVE;
+ return 0;
+}
+
+/**
+ * ipr_free_dump - Free adapter dump memory
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / other on failure
+ **/
+static int ipr_free_dump(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_dump *dump;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ dump = ioa_cfg->dump;
+ if (!dump) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+ }
+
+ ioa_cfg->dump = NULL;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ kref_put(&dump->kref, ipr_release_dump);
+
+ LEAVE;
+ return 0;
+}
+
+/**
+ * ipr_write_dump - Setup dump state of adapter
+ * @kobj: kobject struct
+ * @buf: buffer
+ * @off: offset
+ * @count: buffer size
+ *
+ * Return value:
+ * 	count on success / other on failure
+ **/
+static ssize_t ipr_write_dump(struct kobject *kobj, char *buf,
+ loff_t off, size_t count)
+{
+ struct class_device *cdev = container_of(kobj,struct class_device,kobj);
+ struct Scsi_Host *shost = class_to_shost(cdev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)shost->hostdata;
+ int rc;
+
+ if (!capable(CAP_SYS_ADMIN))
+ return -EACCES;
+
+ if (buf[0] == '1')
+ rc = ipr_alloc_dump(ioa_cfg);
+ else if (buf[0] == '0')
+ rc = ipr_free_dump(ioa_cfg);
+ else
+ return -EINVAL;
+
+ if (rc)
+ return rc;
+ else
+ return count;
+}
+
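+/*
+ * Usage sketch (hypothetical sysfs path):
+ *	echo 1 > /sys/class/scsi_host/host0/dump	# arm dump collection
+ *	cat /sys/class/scsi_host/host0/dump > ioa.dump
+ *	echo 0 > /sys/class/scsi_host/host0/dump	# free the dump buffer
+ */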
+static struct bin_attribute ipr_dump_attr = {
+ .attr = {
+ .name = "dump",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .size = 0,
+ .read = ipr_read_dump,
+ .write = ipr_write_dump
+};
+#else
+static int ipr_free_dump(struct ipr_ioa_cfg *ioa_cfg) { return 0; }
+#endif
+
+/**
+ * ipr_store_queue_depth - Change the device's queue depth
+ * @dev: device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * 	number of bytes consumed from buffer
+ **/
+static ssize_t ipr_store_queue_depth(struct device *dev,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ int qdepth = simple_strtoul(buf, NULL, 10);
+ int tagged = 0;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res) {
+ res->qdepth = qdepth;
+
+ if (ipr_is_gscsi(res) && res->tcq_active)
+ tagged = MSG_ORDERED_TAG;
+
+ len = strlen(buf);
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ scsi_adjust_queue_depth(sdev, tagged, qdepth);
+ return len;
+}
+
+static struct device_attribute ipr_queue_depth_attr = {
+ .attr = {
+ .name = "queue_depth",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = ipr_store_queue_depth
+};
+
+/**
+ * ipr_show_tcq_enable - Show if the device is enabled for tcqing
+ * @dev: device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_tcq_enable(struct device *dev, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res)
+ len = snprintf(buf, PAGE_SIZE, "%d\n", res->tcq_active);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+/**
+ * ipr_store_tcq_enable - Change the device's TCQing state
+ * @dev: device struct
+ * @buf: buffer
+ * @count: buffer size
+ *
+ * Return value:
+ * 	number of bytes consumed from buffer
+ **/
+static ssize_t ipr_store_tcq_enable(struct device *dev,
+ const char *buf, size_t count)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ int tcq_active = simple_strtoul(buf, NULL, 10);
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+
+ if (res) {
+ if (ipr_is_gscsi(res) && sdev->tagged_supported) {
+ if (tcq_active) {
+ res->tcq_active = 1;
+ scsi_activate_tcq(sdev, res->qdepth);
+ } else {
+ res->tcq_active = 0;
+ scsi_deactivate_tcq(sdev, res->qdepth);
+ }
+
+ len = strlen(buf);
+ } else if (tcq_active) {
+ len = -EINVAL;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+static struct device_attribute ipr_tcqing_attr = {
+ .attr = {
+ .name = "tcq_enable",
+ .mode = S_IRUSR | S_IWUSR,
+ },
+ .store = ipr_store_tcq_enable,
+ .show = ipr_show_tcq_enable
+};
+
+/**
+ * ipr_show_adapter_handle - Show the adapter's resource handle for this device
+ * @dev: device struct
+ * @buf: buffer
+ *
+ * Return value:
+ * number of bytes printed to buffer
+ **/
+static ssize_t ipr_show_adapter_handle(struct device *dev, char *buf)
+{
+ struct scsi_device *sdev = to_scsi_device(dev);
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+ ssize_t len = -ENXIO;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *)sdev->hostdata;
+ if (res)
+ len = snprintf(buf, PAGE_SIZE, "%08X\n", res->cfgte.res_handle);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return len;
+}
+
+static struct device_attribute ipr_adapter_handle_attr = {
+ .attr = {
+ .name = "adapter_handle",
+ .mode = S_IRUSR,
+ },
+ .show = ipr_show_adapter_handle
+};
+
+static struct device_attribute *ipr_dev_attrs[] = {
+ &ipr_queue_depth_attr,
+ &ipr_tcqing_attr,
+ &ipr_adapter_handle_attr,
+ NULL,
+};
+
+/**
+ * ipr_biosparam - Return the HSC mapping
+ * @sdev: scsi device struct
+ * @block_device: block device pointer
+ * @capacity: capacity of the device
+ * @parm: Array containing returned HSC values.
+ *
+ * This function generates the HSC parms that fdisk uses.
+ * We want to make sure we return something that places partitions
+ * on 4k boundaries for best performance with the IOA.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_biosparam(struct scsi_device *sdev,
+ struct block_device *block_device,
+ sector_t capacity, int *parm)
+{
+ int heads, sectors;
+ sector_t cylinders;
+
+ heads = 128;
+ sectors = 32;
+
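+	/*
+	 * 128 heads * 32 sectors = 4096 sectors (2MB) per cylinder, so
+	 * cylinder-aligned partitions always land on 4k boundaries.
+	 */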
+ cylinders = capacity;
+ sector_div(cylinders, (128 * 32));
+
+ /* return result */
+ parm[0] = heads;
+ parm[1] = sectors;
+ parm[2] = cylinders;
+
+ return 0;
+}
+
+/**
+ * ipr_slave_destroy - Unconfigure a SCSI device
+ * @sdev: scsi device struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_slave_destroy(struct scsi_device *sdev)
+{
+ struct ipr_resource_entry *res;
+ struct ipr_ioa_cfg *ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = (struct ipr_resource_entry *) sdev->hostdata;
+ if (res) {
+ sdev->hostdata = NULL;
+ res->sdev = NULL;
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+}
+
+/**
+ * ipr_slave_configure - Configure a SCSI device
+ * @sdev: scsi device struct
+ *
+ * This function configures the specified scsi device.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_slave_configure(struct scsi_device *sdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ res = sdev->hostdata;
+ if (res) {
+ if (ipr_is_af_dasd_device(res))
+ sdev->type = TYPE_RAID;
+ if (ipr_is_af_dasd_device(res) || ipr_is_ioa_resource(res))
+ sdev->scsi_level = 4;
+ if (ipr_is_vset_device(res))
+ sdev->timeout = IPR_VSET_RW_TIMEOUT;
+ if (IPR_IS_DASD_DEVICE(res->cfgte.std_inq_data))
+ sdev->allow_restart = 1;
+ scsi_adjust_queue_depth(sdev, 0, res->qdepth);
+ }
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return 0;
+}
+
+/**
+ * ipr_slave_alloc - Prepare for commands to a device.
+ * @sdev: scsi device struct
+ *
+ * This function saves a pointer to the resource entry
+ * in the scsi device struct if the device exists. We
+ * can then use this pointer in ipr_queuecommand when
+ * handling new commands.
+ *
+ * Return value:
+ * 0 on success
+ **/
+static int ipr_slave_alloc(struct scsi_device *sdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *) sdev->host->hostdata;
+ struct ipr_resource_entry *res;
+ unsigned long lock_flags;
+
+ sdev->hostdata = NULL;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if ((res->cfgte.res_addr.bus == sdev->channel) &&
+ (res->cfgte.res_addr.target == sdev->id) &&
+ (res->cfgte.res_addr.lun == sdev->lun)) {
+ res->sdev = sdev;
+ res->add_to_ml = 0;
+ res->in_erp = 0;
+ sdev->hostdata = res;
+ res->needs_sync_complete = 1;
+ break;
+ }
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+
+ return 0;
+}
+
+/**
+ * ipr_eh_host_reset - Reset the host adapter
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_host_reset(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ int rc;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter being reset as a result of error recovery.\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ rc = ipr_reset_reload(ioa_cfg, IPR_SHUTDOWN_ABBREV);
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_eh_dev_reset - Reset the device
+ * @scsi_cmd: scsi command struct
+ *
+ * This function issues a device reset to the affected device.
+ * A LUN reset will be sent to the device first. If that does
+ * not work, a target reset will be sent.
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_dev_reset(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_cmd_pkt *cmd_pkt;
+ u32 ioasc;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+
+ if (!res || (!ipr_is_gscsi(res) && !ipr_is_vset_device(res)))
+ return FAILED;
+
+ /*
+ * If we are currently going through reset/reload, return failed. This will force the
+ * mid-layer to call ipr_eh_host_reset, which will then go to sleep and wait for the
+ * reset to complete
+ */
+ if (ioa_cfg->in_reset_reload)
+ return FAILED;
+ if (ioa_cfg->ioa_is_dead)
+ return FAILED;
+
+ list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
+ if (ipr_cmd->ioarcb.res_handle == res->cfgte.res_handle) {
+ if (ipr_cmd->scsi_cmd)
+ ipr_cmd->done = ipr_scsi_eh_done;
+ }
+ }
+
+ res->resetting_device = 1;
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+
+ ipr_cmd->ioarcb.res_handle = res->cfgte.res_handle;
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_RESET_DEVICE;
+
+ ipr_sdev_err(scsi_cmd->device, "Resetting device\n");
+ ipr_send_blocking_cmd(ipr_cmd, ipr_timeout, IPR_DEVICE_RESET_TIMEOUT);
+
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ res->resetting_device = 0;
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+
+ LEAVE;
+ return (IPR_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS);
+}
+
+/**
+ * ipr_bus_reset_done - Op done function for bus reset.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is the op done function for a bus reset
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_bus_reset_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res;
+
+ ENTER;
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (!memcmp(&res->cfgte.res_handle, &ipr_cmd->ioarcb.res_handle,
+ sizeof(res->cfgte.res_handle))) {
+ scsi_report_bus_reset(ioa_cfg->host, res->cfgte.res_addr.bus);
+ break;
+ }
+ }
+
+ /*
+ * If abort has not completed, indicate the reset has, else call the
+ * abort's done function to wake the sleeping eh thread
+ */
+ if (ipr_cmd->sibling->sibling)
+ ipr_cmd->sibling->sibling = NULL;
+ else
+ ipr_cmd->sibling->done(ipr_cmd->sibling);
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ LEAVE;
+}
+
+/**
+ * ipr_abort_timeout - An abort task has timed out
+ * @ipr_cmd: ipr command struct
+ *
+ * This function handles when an abort task times out. If this
+ * happens we issue a bus reset since we have resources tied
+ * up that must be freed before returning to the midlayer.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_abort_timeout(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_cmnd *reset_cmd;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_cmd_pkt *cmd_pkt;
+ unsigned long lock_flags = 0;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ if (ipr_cmd->completion.done || ioa_cfg->in_reset_reload) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return;
+ }
+
+ ipr_sdev_err(ipr_cmd->u.sdev, "Abort timed out. Resetting bus\n");
+ reset_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
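+	/*
+	 * Link the abort and the bus reset as siblings so that
+	 * ipr_bus_reset_done() can tell whether the abort has already
+	 * completed and wake the sleeping eh thread accordingly.
+	 */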
+ ipr_cmd->sibling = reset_cmd;
+ reset_cmd->sibling = ipr_cmd;
+ reset_cmd->ioarcb.res_handle = ipr_cmd->ioarcb.res_handle;
+ cmd_pkt = &reset_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_RESET_DEVICE;
+ cmd_pkt->cdb[2] = IPR_RESET_TYPE_SELECT | IPR_BUS_RESET;
+
+ ipr_do_req(reset_cmd, ipr_bus_reset_done, ipr_timeout, IPR_DEVICE_RESET_TIMEOUT);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ LEAVE;
+}
+
+/**
+ * ipr_cancel_op - Cancel specified op
+ * @scsi_cmd: scsi command struct
+ *
+ * This function cancels specified op.
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_cancel_op(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_cmd_pkt *cmd_pkt;
+ u32 ioasc;
+ int op_found = 0;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *)scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+
+ if (!res || (!ipr_is_gscsi(res) && !ipr_is_vset_device(res)))
+ return FAILED;
+
+ list_for_each_entry(ipr_cmd, &ioa_cfg->pending_q, queue) {
+ if (ipr_cmd->scsi_cmd == scsi_cmd) {
+ ipr_cmd->done = ipr_scsi_eh_done;
+ op_found = 1;
+ break;
+ }
+ }
+
+ if (!op_found)
+ return SUCCESS;
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ipr_cmd->ioarcb.res_handle = res->cfgte.res_handle;
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_CANCEL_ALL_REQUESTS;
+ ipr_cmd->u.sdev = scsi_cmd->device;
+
+ ipr_sdev_err(scsi_cmd->device, "Aborting command: %02X\n", scsi_cmd->cmnd[0]);
+ ipr_send_blocking_cmd(ipr_cmd, ipr_abort_timeout, IPR_CANCEL_ALL_TIMEOUT);
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ /*
+ * If the abort task timed out and we sent a bus reset, we will get
+	 * one of the following responses to the abort
+ */
+ if (ioasc == IPR_IOASC_BUS_WAS_RESET || ioasc == IPR_IOASC_SYNC_REQUIRED) {
+ ioasc = 0;
+ ipr_trace;
+ }
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ res->needs_sync_complete = 1;
+
+ LEAVE;
+ return (IPR_IOASC_SENSE_KEY(ioasc) ? FAILED : SUCCESS);
+}
+
+/**
+ * ipr_eh_abort - Abort a single op
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * SUCCESS / FAILED
+ **/
+static int ipr_eh_abort(struct scsi_cmnd * scsi_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+
+ ENTER;
+ ioa_cfg = (struct ipr_ioa_cfg *) scsi_cmd->device->host->hostdata;
+
+	/*
+	 * If we are currently going through reset/reload, return failed.
+	 * This will force the mid-layer to call ipr_eh_host_reset, which
+	 * will then go to sleep and wait for the reset to complete
+	 */
+ if (ioa_cfg->in_reset_reload)
+ return FAILED;
+ if (ioa_cfg->ioa_is_dead)
+ return FAILED;
+ if (!scsi_cmd->device->hostdata)
+ return FAILED;
+
+ LEAVE;
+ return ipr_cancel_op(scsi_cmd);
+}
+
+/**
+ * ipr_handle_other_interrupt - Handle "other" interrupts
+ * @ioa_cfg: ioa config struct
+ * @int_reg: interrupt register
+ *
+ * Return value:
+ * IRQ_NONE / IRQ_HANDLED
+ **/
+static irqreturn_t ipr_handle_other_interrupt(struct ipr_ioa_cfg *ioa_cfg,
+ volatile u32 int_reg)
+{
+ irqreturn_t rc = IRQ_HANDLED;
+
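+	/*
+	 * Either the adapter has just transitioned to operational, in which
+	 * case the in-progress reset job is resumed, or it reported a unit
+	 * check or permanent failure and a fresh adapter reset is initiated.
+	 */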
+ if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) {
+ /* Mask the interrupt */
+ writel(IPR_PCII_IOA_TRANS_TO_OPER, ioa_cfg->regs.set_interrupt_mask_reg);
+
+ /* Clear the interrupt */
+ writel(IPR_PCII_IOA_TRANS_TO_OPER, ioa_cfg->regs.clr_interrupt_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
+ list_del(&ioa_cfg->reset_cmd->queue);
+ del_timer(&ioa_cfg->reset_cmd->timer);
+ ipr_reset_ioa_job(ioa_cfg->reset_cmd);
+ } else {
+ if (int_reg & IPR_PCII_IOA_UNIT_CHECKED)
+ ioa_cfg->ioa_unit_checked = 1;
+ else
+ dev_err(&ioa_cfg->pdev->dev,
+ "Permanent IOA failure. 0x%08X\n", int_reg);
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_isr - Interrupt service routine
+ * @irq: irq number
+ * @devp: pointer to ioa config struct
+ * @regs: pt_regs struct
+ *
+ * Return value:
+ * IRQ_NONE / IRQ_HANDLED
+ **/
+static irqreturn_t ipr_isr(int irq, void *devp, struct pt_regs *regs)
+{
+ struct ipr_ioa_cfg *ioa_cfg = (struct ipr_ioa_cfg *)devp;
+ unsigned long lock_flags = 0;
+ volatile u32 int_reg, int_mask_reg;
+ u32 ioasc;
+ u16 cmd_index;
+ struct ipr_cmnd *ipr_cmd;
+ irqreturn_t rc = IRQ_NONE;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ /* If interrupts are disabled, ignore the interrupt */
+ if (!ioa_cfg->allow_interrupts) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_NONE;
+ }
+
+ int_mask_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
+
+ /* If an interrupt on the adapter did not occur, ignore it */
+ if (unlikely((int_reg & IPR_PCII_OPER_INTERRUPTS) == 0)) {
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_NONE;
+ }
+
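+	/*
+	 * Drain the host request response queue (HRRQ). The toggle bit
+	 * flips each time the queue wraps, which is how new responses are
+	 * distinguished from stale ones.
+	 */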
+ while (1) {
+ ipr_cmd = NULL;
+
+ while ((be32_to_cpu(*ioa_cfg->hrrq_curr) & IPR_HRRQ_TOGGLE_BIT) ==
+ ioa_cfg->toggle_bit) {
+
+ cmd_index = (be32_to_cpu(*ioa_cfg->hrrq_curr) &
+ IPR_HRRQ_REQ_RESP_HANDLE_MASK) >> IPR_HRRQ_REQ_RESP_HANDLE_SHIFT;
+
+ if (unlikely(cmd_index >= IPR_NUM_CMD_BLKS)) {
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev, "Invalid response handle from IOA\n");
+
+ if (WAIT_FOR_DUMP == ioa_cfg->sdt_state)
+ ioa_cfg->sdt_state = GET_DUMP;
+
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return IRQ_HANDLED;
+ }
+
+ ipr_cmd = ioa_cfg->ipr_cmnd_list[cmd_index];
+
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_FINISH, ioasc);
+
+ list_del(&ipr_cmd->queue);
+ del_timer(&ipr_cmd->timer);
+ ipr_cmd->done(ipr_cmd);
+
+ rc = IRQ_HANDLED;
+
+ if (ioa_cfg->hrrq_curr < ioa_cfg->hrrq_end) {
+ ioa_cfg->hrrq_curr++;
+ } else {
+ ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
+ ioa_cfg->toggle_bit ^= 1u;
+ }
+ }
+
+ if (ipr_cmd != NULL) {
+ /* Clear the PCI interrupt */
+ writel(IPR_PCII_HRRQ_UPDATED, ioa_cfg->regs.clr_interrupt_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg) & ~int_mask_reg;
+ } else
+ break;
+ }
+
+ if (unlikely(rc == IRQ_NONE))
+ rc = ipr_handle_other_interrupt(ioa_cfg, int_reg);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ return rc;
+}
+
+/**
+ * ipr_build_ioadl - Build a scatter/gather list and map the buffer
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * 0 on success / -1 on failure
+ **/
+static int ipr_build_ioadl(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ int i;
+ struct scatterlist *sglist;
+ u32 length;
+ u32 ioadl_flags = 0;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+
+ length = scsi_cmd->request_bufflen;
+
+ if (length == 0)
+ return 0;
+
+ if (scsi_cmd->use_sg) {
+ ipr_cmd->dma_use_sg = pci_map_sg(ioa_cfg->pdev,
+ scsi_cmd->request_buffer,
+ scsi_cmd->use_sg,
+ scsi_cmd->sc_data_direction);
+
+ if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_WRITE;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(length);
+ ioarcb->write_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+ } else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_READ;
+ ioarcb->read_data_transfer_length = cpu_to_be32(length);
+ ioarcb->read_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc) * ipr_cmd->dma_use_sg);
+ }
+
+ sglist = scsi_cmd->request_buffer;
+
+ for (i = 0; i < ipr_cmd->dma_use_sg; i++) {
+ ioadl[i].flags_and_data_len =
+ cpu_to_be32(ioadl_flags | sg_dma_len(&sglist[i]));
+ ioadl[i].address =
+ cpu_to_be32(sg_dma_address(&sglist[i]));
+ }
+
+ if (likely(ipr_cmd->dma_use_sg)) {
+ ioadl[i-1].flags_and_data_len |=
+ cpu_to_be32(IPR_IOADL_FLAGS_LAST);
+ return 0;
+ } else
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_sg failed!\n");
+ } else {
+ if (scsi_cmd->sc_data_direction == DMA_TO_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_WRITE;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->write_data_transfer_length = cpu_to_be32(length);
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ } else if (scsi_cmd->sc_data_direction == DMA_FROM_DEVICE) {
+ ioadl_flags = IPR_IOADL_FLAGS_READ;
+ ioarcb->read_data_transfer_length = cpu_to_be32(length);
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ }
+
+ ipr_cmd->dma_handle = pci_map_single(ioa_cfg->pdev,
+ scsi_cmd->request_buffer, length,
+ scsi_cmd->sc_data_direction);
+
+ if (likely(!pci_dma_mapping_error(ipr_cmd->dma_handle))) {
+ ipr_cmd->dma_use_sg = 1;
+ ioadl[0].flags_and_data_len =
+ cpu_to_be32(ioadl_flags | length | IPR_IOADL_FLAGS_LAST);
+ ioadl[0].address = cpu_to_be32(ipr_cmd->dma_handle);
+ return 0;
+ } else
+ dev_err(&ioa_cfg->pdev->dev, "pci_map_single failed!\n");
+ }
+
+ return -1;
+}
+
+/**
+ * ipr_get_task_attributes - Translate SPI Q-Tag to task attributes
+ * @scsi_cmd: scsi command struct
+ *
+ * Return value:
+ * task attributes
+ **/
+static u8 ipr_get_task_attributes(struct scsi_cmnd *scsi_cmd)
+{
+ u8 tag[2];
+ u8 rc = IPR_FLAGS_LO_UNTAGGED_TASK;
+
+ if (scsi_populate_tag_msg(scsi_cmd, tag)) {
+ switch (tag[0]) {
+ case MSG_SIMPLE_TAG:
+ rc = IPR_FLAGS_LO_SIMPLE_TASK;
+ break;
+ case MSG_HEAD_TAG:
+ rc = IPR_FLAGS_LO_HEAD_OF_Q_TASK;
+ break;
+ case MSG_ORDERED_TAG:
+ rc = IPR_FLAGS_LO_ORDERED_TASK;
+ break;
+		}
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_erp_done - Process completion of ERP for a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function copies the sense buffer into the scsi_cmd
+ * struct and pushes the scsi_done function.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (IPR_IOASC_SENSE_KEY(ioasc) > 0) {
+ scsi_cmd->result |= (DID_ERROR << 16);
+ ipr_sdev_err(scsi_cmd->device,
+ "Request Sense failed with IOASC: 0x%08X\n", ioasc);
+ } else {
+ memcpy(scsi_cmd->sense_buffer, ipr_cmd->sense_buffer,
+ SCSI_SENSE_BUFFERSIZE);
+ }
+
+ if (res) {
+ res->needs_sync_complete = 1;
+ res->in_erp = 0;
+ }
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+}
+
+/**
+ * ipr_reinit_ipr_cmnd_for_erp - Re-initialize a cmnd block to be used for ERP
+ * @ipr_cmd: ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reinit_ipr_cmnd_for_erp(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioarcb *ioarcb;
+ struct ipr_ioasa *ioasa;
+
+ ioarcb = &ipr_cmd->ioarcb;
+ ioasa = &ipr_cmd->ioasa;
+
+ memset(&ioarcb->cmd_pkt, 0, sizeof(struct ipr_cmd_pkt));
+ ioarcb->write_data_transfer_length = 0;
+ ioarcb->read_data_transfer_length = 0;
+ ioarcb->write_ioadl_len = 0;
+ ioarcb->read_ioadl_len = 0;
+ ioasa->ioasc = 0;
+ ioasa->residual_data_len = 0;
+}
+
+/**
+ * ipr_erp_request_sense - Send request sense to a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a request sense to a device as a result
+ * of a check condition.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_request_sense(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_cmd_pkt *cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (IPR_IOASC_SENSE_KEY(ioasc) > 0) {
+ ipr_erp_done(ipr_cmd);
+ return;
+ }
+
+ ipr_reinit_ipr_cmnd_for_erp(ipr_cmd);
+
+ cmd_pkt->request_type = IPR_RQTYPE_SCSICDB;
+ cmd_pkt->cdb[0] = REQUEST_SENSE;
+ cmd_pkt->cdb[4] = SCSI_SENSE_BUFFERSIZE;
+ cmd_pkt->flags_hi |= IPR_FLAGS_HI_SYNC_OVERRIDE;
+ cmd_pkt->flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
+ cmd_pkt->timeout = cpu_to_be16(IPR_REQUEST_SENSE_TIMEOUT / HZ);
+
+ ipr_cmd->ioadl[0].flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | SCSI_SENSE_BUFFERSIZE);
+ ipr_cmd->ioadl[0].address =
+ cpu_to_be32(ipr_cmd->sense_buffer_dma);
+
+ ipr_cmd->ioarcb.read_ioadl_len =
+ cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ipr_cmd->ioarcb.read_data_transfer_length =
+ cpu_to_be32(SCSI_SENSE_BUFFERSIZE);
+
+ ipr_do_req(ipr_cmd, ipr_erp_done, ipr_timeout,
+ IPR_REQUEST_SENSE_TIMEOUT * 2);
+}
+
+/**
+ * ipr_erp_cancel_all - Send cancel all to a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a cancel all to a device to clear the
+ * queue. If we are running TCQ on the device, QERR is set to 1,
+ * which means all outstanding ops have been dropped on the floor.
+ * Cancel all will return them to us.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_cancel_all(struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ struct ipr_cmd_pkt *cmd_pkt;
+
+ res->in_erp = 1;
+
+ ipr_reinit_ipr_cmnd_for_erp(ipr_cmd);
+
+ if (!res->tcq_active) {
+ ipr_erp_request_sense(ipr_cmd);
+ return;
+ }
+
+ cmd_pkt = &ipr_cmd->ioarcb.cmd_pkt;
+ cmd_pkt->request_type = IPR_RQTYPE_IOACMD;
+ cmd_pkt->cdb[0] = IPR_CANCEL_ALL_REQUESTS;
+
+ ipr_do_req(ipr_cmd, ipr_erp_request_sense, ipr_timeout,
+ IPR_CANCEL_ALL_TIMEOUT);
+}
+
+/**
+ * ipr_dump_ioasa - Dump contents of IOASA
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler when ops
+ * fail. It will log the IOASA if appropriate. Only called
+ * for GPDD ops.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_dump_ioasa(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ int i;
+ u16 data_len;
+ u32 ioasc;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+ u32 *ioasa_data = (u32 *)ioasa;
+ int error_index;
+
+ ioasc = be32_to_cpu(ioasa->ioasc) & IPR_IOASC_IOASC_MASK;
+
+ if (0 == ioasc)
+ return;
+
+ if (ioa_cfg->log_level < IPR_DEFAULT_LOG_LEVEL)
+ return;
+
+ error_index = ipr_get_error(ioasc);
+
+ if (ioa_cfg->log_level < IPR_MAX_LOG_LEVEL) {
+ /* Don't log an error if the IOA already logged one */
+ if (ioasa->ilid != 0)
+ return;
+
+ if (ipr_error_table[error_index].log_ioasa == 0)
+ return;
+ }
+
+ ipr_sdev_err(ipr_cmd->scsi_cmd->device, "%s\n",
+ ipr_error_table[error_index].error);
+
+ if ((ioasa->u.gpdd.end_state <= ARRAY_SIZE(ipr_gpdd_dev_end_states)) &&
+ (ioasa->u.gpdd.bus_phase <= ARRAY_SIZE(ipr_gpdd_dev_bus_phases))) {
+ ipr_sdev_err(ipr_cmd->scsi_cmd->device,
+ "Device End state: %s Phase: %s\n",
+ ipr_gpdd_dev_end_states[ioasa->u.gpdd.end_state],
+ ipr_gpdd_dev_bus_phases[ioasa->u.gpdd.bus_phase]);
+ }
+
+ if (sizeof(struct ipr_ioasa) < be16_to_cpu(ioasa->ret_stat_len))
+ data_len = sizeof(struct ipr_ioasa);
+ else
+ data_len = be16_to_cpu(ioasa->ret_stat_len);
+
+ ipr_err("IOASA Dump:\n");
+
+ for (i = 0; i < data_len / 4; i += 4) {
+ ipr_err("%08X: %08X %08X %08X %08X\n", i*4,
+ be32_to_cpu(ioasa_data[i]),
+ be32_to_cpu(ioasa_data[i+1]),
+ be32_to_cpu(ioasa_data[i+2]),
+ be32_to_cpu(ioasa_data[i+3]));
+ }
+}
+
+/**
+ * ipr_gen_sense - Generate SCSI sense data from an IOASA
+ * @ipr_cmd:	ipr command struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_gen_sense(struct ipr_cmnd *ipr_cmd)
+{
+ u32 failing_lba;
+ u8 *sense_buf = ipr_cmd->scsi_cmd->sense_buffer;
+ struct ipr_resource_entry *res = ipr_cmd->scsi_cmd->device->hostdata;
+ struct ipr_ioasa *ioasa = &ipr_cmd->ioasa;
+ u32 ioasc = be32_to_cpu(ioasa->ioasc);
+
+ memset(sense_buf, 0, SCSI_SENSE_BUFFERSIZE);
+
+ if (ioasc >= IPR_FIRST_DRIVER_IOASC)
+ return;
+
+ ipr_cmd->scsi_cmd->result = SAM_STAT_CHECK_CONDITION;
+
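+	/*
+	 * A vset failing LBA wider than 32 bits cannot be represented in
+	 * fixed-format (0x70) sense data, so descriptor-format (0x72)
+	 * sense with an information descriptor is built for that case.
+	 */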
+ if (ipr_is_vset_device(res) &&
+ ioasc == IPR_IOASC_MED_DO_NOT_REALLOC &&
+ ioasa->u.vset.failing_lba_hi != 0) {
+ sense_buf[0] = 0x72;
+ sense_buf[1] = IPR_IOASC_SENSE_KEY(ioasc);
+ sense_buf[2] = IPR_IOASC_SENSE_CODE(ioasc);
+ sense_buf[3] = IPR_IOASC_SENSE_QUAL(ioasc);
+
+ sense_buf[7] = 12;
+ sense_buf[8] = 0;
+ sense_buf[9] = 0x0A;
+ sense_buf[10] = 0x80;
+
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_hi);
+
+ sense_buf[12] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[13] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[14] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[15] = failing_lba & 0x000000ff;
+
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_lo);
+
+ sense_buf[16] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[17] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[18] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[19] = failing_lba & 0x000000ff;
+ } else {
+ sense_buf[0] = 0x70;
+ sense_buf[2] = IPR_IOASC_SENSE_KEY(ioasc);
+ sense_buf[12] = IPR_IOASC_SENSE_CODE(ioasc);
+ sense_buf[13] = IPR_IOASC_SENSE_QUAL(ioasc);
+
+ /* Illegal request */
+ if ((IPR_IOASC_SENSE_KEY(ioasc) == 0x05) &&
+ (be32_to_cpu(ioasa->ioasc_specific) & IPR_FIELD_POINTER_VALID)) {
+ sense_buf[7] = 10; /* additional length */
+
+ /* IOARCB was in error */
+ if (IPR_IOASC_SENSE_CODE(ioasc) == 0x24)
+ sense_buf[15] = 0xC0;
+ else /* Parameter data was invalid */
+ sense_buf[15] = 0x80;
+
+ sense_buf[16] =
+ ((IPR_FIELD_POINTER_MASK &
+ be32_to_cpu(ioasa->ioasc_specific)) >> 8) & 0xff;
+ sense_buf[17] =
+ (IPR_FIELD_POINTER_MASK &
+ be32_to_cpu(ioasa->ioasc_specific)) & 0xff;
+ } else {
+ if (ioasc == IPR_IOASC_MED_DO_NOT_REALLOC) {
+ if (ipr_is_vset_device(res))
+ failing_lba = be32_to_cpu(ioasa->u.vset.failing_lba_lo);
+ else
+ failing_lba = be32_to_cpu(ioasa->u.dasd.failing_lba);
+
+ sense_buf[0] |= 0x80; /* Or in the Valid bit */
+ sense_buf[3] = (failing_lba & 0xff000000) >> 24;
+ sense_buf[4] = (failing_lba & 0x00ff0000) >> 16;
+ sense_buf[5] = (failing_lba & 0x0000ff00) >> 8;
+ sense_buf[6] = failing_lba & 0x000000ff;
+ }
+
+ sense_buf[7] = 6; /* additional length */
+ }
+ }
+}
+
+/**
+ * ipr_erp_start - Process an error response for a SCSI op
+ * @ioa_cfg: ioa config struct
+ * @ipr_cmd: ipr command struct
+ *
+ * This function determines whether or not to initiate ERP
+ * on the affected device.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_erp_start(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_cmnd *ipr_cmd)
+{
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ struct ipr_resource_entry *res = scsi_cmd->device->hostdata;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (!res) {
+ ipr_scsi_eh_done(ipr_cmd);
+ return;
+ }
+
+ if (ipr_is_gscsi(res))
+ ipr_dump_ioasa(ioa_cfg, ipr_cmd);
+ else
+ ipr_gen_sense(ipr_cmd);
+
+ switch (ioasc & IPR_IOASC_IOASC_MASK) {
+ case IPR_IOASC_ABORTED_CMD_TERM_BY_HOST:
+ scsi_cmd->result |= (DID_IMM_RETRY << 16);
+ break;
+ case IPR_IOASC_IR_RESOURCE_HANDLE:
+ scsi_cmd->result |= (DID_NO_CONNECT << 16);
+ break;
+ case IPR_IOASC_HW_SEL_TIMEOUT:
+ scsi_cmd->result |= (DID_NO_CONNECT << 16);
+ res->needs_sync_complete = 1;
+ break;
+ case IPR_IOASC_SYNC_REQUIRED:
+ if (!res->in_erp)
+ res->needs_sync_complete = 1;
+ scsi_cmd->result |= (DID_IMM_RETRY << 16);
+ break;
+ case IPR_IOASC_MED_DO_NOT_REALLOC: /* prevent retries */
+ scsi_cmd->result |= (DID_PASSTHROUGH << 16);
+ break;
+ case IPR_IOASC_BUS_WAS_RESET:
+ case IPR_IOASC_BUS_WAS_RESET_BY_OTHER:
+ /*
+ * Report the bus reset and ask for a retry. The device
+ * will give CC/UA the next command.
+ */
+ if (!res->resetting_device)
+ scsi_report_bus_reset(ioa_cfg->host, scsi_cmd->device->channel);
+ scsi_cmd->result |= (DID_ERROR << 16);
+ res->needs_sync_complete = 1;
+ break;
+ case IPR_IOASC_HW_DEV_BUS_STATUS:
+ scsi_cmd->result |= IPR_IOASC_SENSE_STATUS(ioasc);
+ if (IPR_IOASC_SENSE_STATUS(ioasc) == SAM_STAT_CHECK_CONDITION) {
+ ipr_erp_cancel_all(ipr_cmd);
+ return;
+ }
+ res->needs_sync_complete = 1;
+ break;
+ case IPR_IOASC_NR_INIT_CMD_REQUIRED:
+ break;
+ default:
+ scsi_cmd->result |= (DID_ERROR << 16);
+ if (!ipr_is_vset_device(res))
+ res->needs_sync_complete = 1;
+ break;
+ }
+
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+}
+
+/**
+ * ipr_scsi_done - mid-layer done function
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked by the interrupt handler for
+ * ops generated by the SCSI mid-layer
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct scsi_cmnd *scsi_cmd = ipr_cmd->scsi_cmd;
+ u32 ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ scsi_cmd->resid = be32_to_cpu(ipr_cmd->ioasa.residual_data_len);
+
+ if (likely(IPR_IOASC_SENSE_KEY(ioasc) == 0)) {
+ ipr_unmap_sglist(ioa_cfg, ipr_cmd);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ scsi_cmd->scsi_done(scsi_cmd);
+ } else
+ ipr_erp_start(ioa_cfg, ipr_cmd);
+}
+
+/**
+ * ipr_save_ioafp_mode_select - Save adapter's mode select data
+ * @ioa_cfg: ioa config struct
+ * @scsi_cmd: scsi command struct
+ *
+ * This function saves mode select data for the adapter to
+ * use following an adapter reset.
+ *
+ * Return value:
+ * 0 on success / SCSI_MLQUEUE_HOST_BUSY on failure
+ **/
+static int ipr_save_ioafp_mode_select(struct ipr_ioa_cfg *ioa_cfg,
+ struct scsi_cmnd *scsi_cmd)
+{
+ if (!ioa_cfg->saved_mode_pages) {
+ ioa_cfg->saved_mode_pages = kmalloc(sizeof(struct ipr_mode_pages),
+ GFP_ATOMIC);
+ if (!ioa_cfg->saved_mode_pages) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA mode select buffer allocation failed\n");
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+ }
+
+ memcpy(ioa_cfg->saved_mode_pages, scsi_cmd->buffer, scsi_cmd->cmnd[4]);
+ ioa_cfg->saved_mode_page_len = scsi_cmd->cmnd[4];
+ return 0;
+}
+
+/**
+ * ipr_queuecommand - Queue a mid-layer request
+ * @scsi_cmd: scsi command struct
+ * @done: done function
+ *
+ * This function queues a request generated by the mid-layer.
+ *
+ * Return value:
+ * 0 on success
+ * SCSI_MLQUEUE_DEVICE_BUSY if device is busy
+ * SCSI_MLQUEUE_HOST_BUSY if host is busy
+ **/
+static int ipr_queuecommand(struct scsi_cmnd *scsi_cmd,
+ void (*done) (struct scsi_cmnd *))
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_ioarcb *ioarcb;
+ struct ipr_cmnd *ipr_cmd;
+ int rc = 0;
+
+ scsi_cmd->scsi_done = done;
+ ioa_cfg = (struct ipr_ioa_cfg *)scsi_cmd->device->host->hostdata;
+ res = scsi_cmd->device->hostdata;
+ scsi_cmd->result = (DID_OK << 16);
+
+ /*
+	 * We are currently blocking all devices due to a host reset.
+ * We have told the host to stop giving us new requests, but
+ * ERP ops don't count. FIXME
+ */
+ if (unlikely(!ioa_cfg->allow_cmds && !ioa_cfg->ioa_is_dead))
+ return SCSI_MLQUEUE_HOST_BUSY;
+
+ /*
+ * FIXME - Create scsi_set_host_offline interface
+ * and the ioa_is_dead check can be removed
+ */
+ if (unlikely(ioa_cfg->ioa_is_dead || !res)) {
+ memset(scsi_cmd->sense_buffer, 0, SCSI_SENSE_BUFFERSIZE);
+ scsi_cmd->result = (DID_NO_CONNECT << 16);
+ scsi_cmd->scsi_done(scsi_cmd);
+ return 0;
+ }
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ioarcb = &ipr_cmd->ioarcb;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ memcpy(ioarcb->cmd_pkt.cdb, scsi_cmd->cmnd, scsi_cmd->cmd_len);
+ ipr_cmd->scsi_cmd = scsi_cmd;
+ ioarcb->res_handle = res->cfgte.res_handle;
+ ipr_cmd->done = ipr_scsi_done;
+ ipr_trc_hook(ipr_cmd, IPR_TRACE_START, IPR_GET_PHYS_LOC(res->cfgte.res_addr));
+
+ if (ipr_is_gscsi(res) || ipr_is_vset_device(res)) {
+ if (scsi_cmd->underflow == 0)
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_ULEN_CHK;
+
+ if (res->needs_sync_complete) {
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_SYNC_COMPLETE;
+ res->needs_sync_complete = 0;
+ }
+
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_NO_LINK_DESC;
+ ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_DELAY_AFTER_RST;
+ ioarcb->cmd_pkt.flags_lo |= IPR_FLAGS_LO_ALIGNED_BFR;
+ ioarcb->cmd_pkt.flags_lo |= ipr_get_task_attributes(scsi_cmd);
+ }
+
+ if (!ipr_is_gscsi(res) && scsi_cmd->cmnd[0] >= 0xC0)
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+
+ if (ipr_is_ioa_resource(res) && scsi_cmd->cmnd[0] == MODE_SELECT)
+ rc = ipr_save_ioafp_mode_select(ioa_cfg, scsi_cmd);
+
+ if (likely(rc == 0))
+ rc = ipr_build_ioadl(ioa_cfg, ipr_cmd);
+
+ if (likely(rc == 0)) {
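+		/*
+		 * Order the IOARCB memory writes ahead of handing its bus
+		 * address to the adapter via the IOARRIN register.
+		 */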
+ mb();
+ writel(be32_to_cpu(ipr_cmd->ioarcb.ioarcb_host_pci_addr),
+ ioa_cfg->regs.ioarrin_reg);
+ } else {
+ list_move_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return SCSI_MLQUEUE_HOST_BUSY;
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_ioa_info - Get information about the card/driver
+ * @host:	scsi host struct
+ *
+ * Return value:
+ * pointer to buffer with description string
+ **/
+static const char * ipr_ioa_info(struct Scsi_Host *host)
+{
+ static char buffer[512];
+ struct ipr_ioa_cfg *ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ ioa_cfg = (struct ipr_ioa_cfg *) host->hostdata;
+
+ spin_lock_irqsave(host->host_lock, lock_flags);
+ sprintf(buffer, "IBM %X Storage Adapter", ioa_cfg->type);
+ spin_unlock_irqrestore(host->host_lock, lock_flags);
+
+ return buffer;
+}
+
+static struct scsi_host_template driver_template = {
+ .module = THIS_MODULE,
+ .name = "IPR",
+ .info = ipr_ioa_info,
+ .queuecommand = ipr_queuecommand,
+ .eh_abort_handler = ipr_eh_abort,
+ .eh_device_reset_handler = ipr_eh_dev_reset,
+ .eh_host_reset_handler = ipr_eh_host_reset,
+ .slave_alloc = ipr_slave_alloc,
+ .slave_configure = ipr_slave_configure,
+ .slave_destroy = ipr_slave_destroy,
+ .bios_param = ipr_biosparam,
+ .can_queue = IPR_MAX_COMMANDS,
+ .this_id = -1,
+ .sg_tablesize = IPR_MAX_SGLIST,
+ .max_sectors = IPR_MAX_SECTORS,
+ .cmd_per_lun = IPR_MAX_CMD_PER_LUN,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = ipr_ioa_attrs,
+ .sdev_attrs = ipr_dev_attrs,
+ .proc_name = IPR_NAME
+};
+
+#ifdef CONFIG_PPC_PSERIES
+static const u16 ipr_blocked_processors[] = {
+ PV_NORTHSTAR,
+ PV_PULSAR,
+ PV_POWER4,
+ PV_ICESTAR,
+ PV_SSTAR,
+ PV_POWER4p,
+ PV_630,
+ PV_630p
+};
+
+/**
+ * ipr_invalid_adapter - Determine if this adapter is supported on this hardware
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Adapters that use Gemstone revision < 3.1 do not work reliably on
+ * certain pSeries hardware. This function determines if the given
+ * adapter is in one of these configurations or not.
+ *
+ * Return value:
+ * 1 if adapter is not supported / 0 if adapter is supported
+ **/
+static int ipr_invalid_adapter(struct ipr_ioa_cfg *ioa_cfg)
+{
+ u8 rev_id;
+ int i;
+
+ if (ioa_cfg->type == 0x5702) {
+ if (pci_read_config_byte(ioa_cfg->pdev, PCI_REVISION_ID,
+ &rev_id) == PCIBIOS_SUCCESSFUL) {
+ if (rev_id < 4) {
+				for (i = 0; i < ARRAY_SIZE(ipr_blocked_processors); i++) {
+ if (__is_processor(ipr_blocked_processors[i]))
+ return 1;
+ }
+ }
+ }
+ }
+ return 0;
+}
+#else
+#define ipr_invalid_adapter(ioa_cfg) 0
+#endif
+
+/**
+ * ipr_ioa_bringdown_done - IOA bring down completion.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function processes the completion of an adapter bring down.
+ * It wakes any reset sleepers.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioa_bringdown_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ioa_cfg->in_reset_reload = 0;
+ ioa_cfg->reset_retries = 0;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+ LEAVE;
+
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioa_reset_done - IOA reset completion.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function processes the completion of an adapter reset.
+ * It schedules any necessary mid-layer add/removes and
+ * wakes any reset sleepers.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioa_reset_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res;
+ struct ipr_hostrcb *hostrcb, *temp;
+ int i = 0;
+
+ ENTER;
+ ioa_cfg->in_reset_reload = 0;
+ ioa_cfg->allow_cmds = 1;
+ ioa_cfg->reset_cmd = NULL;
+
+ list_for_each_entry(res, &ioa_cfg->used_res_q, queue) {
+ if (ioa_cfg->allow_ml_add_del && (res->add_to_ml || res->del_from_ml)) {
+ ipr_trace;
+ schedule_work(&ioa_cfg->work_q);
+ break;
+ }
+ }
+
+ list_for_each_entry_safe(hostrcb, temp, &ioa_cfg->hostrcb_free_q, queue) {
+ list_del(&hostrcb->queue);
+ if (i++ < IPR_NUM_LOG_HCAMS)
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_LOG_DATA, hostrcb);
+ else
+ ipr_send_hcam(ioa_cfg, IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE, hostrcb);
+ }
+
+ dev_info(&ioa_cfg->pdev->dev, "IOA initialized.\n");
+
+ ioa_cfg->reset_retries = 0;
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+
+ if (!ioa_cfg->allow_cmds)
+ scsi_block_requests(ioa_cfg->host);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_set_sup_dev_dflt - Initialize a Set Supported Device buffer
+ * @supported_dev: supported device struct
+ * @vpids: vendor product id struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_set_sup_dev_dflt(struct ipr_supported_device *supported_dev,
+ struct ipr_std_inq_vpids *vpids)
+{
+ memset(supported_dev, 0, sizeof(struct ipr_supported_device));
+ memcpy(&supported_dev->vpids, vpids, sizeof(struct ipr_std_inq_vpids));
+ supported_dev->num_records = 1;
+ supported_dev->data_length =
+ cpu_to_be16(sizeof(struct ipr_supported_device));
+ supported_dev->reserved = 0;
+}
+
+/**
+ * ipr_set_supported_devs - Send Set Supported Devices for a device
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Set Supported Devices command to the adapter
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_set_supported_devs(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_supported_device *supp_dev = &ioa_cfg->vpd_cbs->supp_dev;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_resource_entry *res = ipr_cmd->u.res;
+
+ ipr_cmd->job_step = ipr_ioa_reset_done;
+
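+	/*
+	 * One Set Supported Devices command is issued per AF DASD
+	 * device; this function requeues itself as the job step so
+	 * the scan resumes from the last resource processed.
+	 */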
+ list_for_each_entry_continue(res, &ioa_cfg->used_res_q, queue) {
+ if (!ipr_is_af_dasd_device(res))
+ continue;
+
+ ipr_cmd->u.res = res;
+ ipr_set_sup_dev_dflt(supp_dev, &res->cfgte.std_inq_data.vpids);
+
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+
+ ioarcb->cmd_pkt.cdb[0] = IPR_SET_SUPPORTED_DEVICES;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(struct ipr_supported_device) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(struct ipr_supported_device) & 0xff;
+
+ ioadl->flags_and_data_len = cpu_to_be32(IPR_IOADL_FLAGS_WRITE_LAST |
+ sizeof(struct ipr_supported_device));
+ ioadl->address = cpu_to_be32(ioa_cfg->vpd_cbs_dma +
+ offsetof(struct ipr_misc_cbs, supp_dev));
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->write_data_transfer_length =
+ cpu_to_be32(sizeof(struct ipr_supported_device));
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
+ IPR_SET_SUP_DEVICE_TIMEOUT);
+
+ ipr_cmd->job_step = ipr_set_supported_devs;
+ return IPR_RC_JOB_RETURN;
+ }
+
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_get_mode_page - Locate specified mode page
+ * @mode_pages: mode page buffer
+ * @page_code: page code to find
+ * @len: minimum required length for mode page
+ *
+ * Return value:
+ * pointer to mode page / NULL on failure
+ **/
+static void *ipr_get_mode_page(struct ipr_mode_pages *mode_pages,
+ u32 page_code, u32 len)
+{
+ struct ipr_mode_page_hdr *mode_hdr;
+ u32 page_length;
+ u32 length;
+
+ if (!mode_pages || (mode_pages->hdr.length == 0))
+ return NULL;
+
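+	/*
+	 * The mode data length field excludes itself, so the total is
+	 * hdr.length + 1; subtract the rest of the 4-byte parameter
+	 * header and any block descriptors to get the length of the
+	 * mode page data.
+	 */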
+ length = (mode_pages->hdr.length + 1) - 4 - mode_pages->hdr.block_desc_len;
+ mode_hdr = (struct ipr_mode_page_hdr *)
+ (mode_pages->data + mode_pages->hdr.block_desc_len);
+
+ while (length) {
+ if (IPR_GET_MODE_PAGE_CODE(mode_hdr) == page_code) {
+ if (mode_hdr->page_length >= (len - sizeof(struct ipr_mode_page_hdr)))
+ return mode_hdr;
+ break;
+ } else {
+ page_length = (sizeof(struct ipr_mode_page_hdr) +
+ mode_hdr->page_length);
+ length -= page_length;
+ mode_hdr = (struct ipr_mode_page_hdr *)
+ ((unsigned long)mode_hdr + page_length);
+ }
+ }
+ return NULL;
+}
+
+/**
+ * ipr_check_term_power - Check for term power errors
+ * @ioa_cfg: ioa config struct
+ * @mode_pages: IOAFP mode pages buffer
+ *
+ * Check the IOAFP's mode page 28 for term power errors
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_check_term_power(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_mode_pages *mode_pages)
+{
+ int i;
+ int entry_length;
+ struct ipr_dev_bus_entry *bus;
+ struct ipr_mode_page28 *mode_page;
+
+ mode_page = ipr_get_mode_page(mode_pages, 0x28,
+ sizeof(struct ipr_mode_page28));
+
+	if (!mode_page)
+		return;
+
+	entry_length = mode_page->entry_length;
+
+ bus = mode_page->bus;
+
+ for (i = 0; i < mode_page->num_entries; i++) {
+ if (bus->flags & IPR_SCSI_ATTR_NO_TERM_PWR) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Term power is absent on scsi bus %d\n",
+ bus->res_addr.bus);
+ }
+
+ bus = (struct ipr_dev_bus_entry *)((char *)bus + entry_length);
+ }
+}
+
+/**
+ * ipr_scsi_bus_speed_limit - Limit the SCSI speed based on SES table
+ * @ioa_cfg: ioa config struct
+ *
+ * Looks through the config table checking for SES devices. If
+ * the SES device is in the SES table indicating a maximum SCSI
+ * bus speed, the speed is limited for the bus.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scsi_bus_speed_limit(struct ipr_ioa_cfg *ioa_cfg)
+{
+ u32 max_xfer_rate;
+ int i;
+
+ for (i = 0; i < IPR_MAX_NUM_BUSES; i++) {
+ max_xfer_rate = ipr_get_max_scsi_speed(ioa_cfg, i,
+ ioa_cfg->bus_attr[i].bus_width);
+
+ if (max_xfer_rate < ioa_cfg->bus_attr[i].max_xfer_rate)
+ ioa_cfg->bus_attr[i].max_xfer_rate = max_xfer_rate;
+ }
+}
+
+/**
+ * ipr_modify_ioafp_mode_page_28 - Modify IOAFP Mode Page 28
+ * @ioa_cfg: ioa config struct
+ * @mode_pages: mode page 28 buffer
+ *
+ * Updates mode page 28 based on driver configuration
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_modify_ioafp_mode_page_28(struct ipr_ioa_cfg *ioa_cfg,
+ struct ipr_mode_pages *mode_pages)
+{
+ int i, entry_length;
+ struct ipr_dev_bus_entry *bus;
+ struct ipr_bus_attributes *bus_attr;
+ struct ipr_mode_page28 *mode_page;
+
+ mode_page = ipr_get_mode_page(mode_pages, 0x28,
+ sizeof(struct ipr_mode_page28));
+
+	if (!mode_page)
+		return;
+
+	entry_length = mode_page->entry_length;
+
+ /* Loop for each device bus entry */
+ for (i = 0, bus = mode_page->bus;
+ i < mode_page->num_entries;
+ i++, bus = (struct ipr_dev_bus_entry *)((u8 *)bus + entry_length)) {
+ if (bus->res_addr.bus > IPR_MAX_NUM_BUSES) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Invalid resource address reported: 0x%08X\n",
+ IPR_GET_PHYS_LOC(bus->res_addr));
+ continue;
+ }
+
+ bus_attr = &ioa_cfg->bus_attr[i];
+ bus->extended_reset_delay = IPR_EXTENDED_RESET_DELAY;
+ bus->bus_width = bus_attr->bus_width;
+ bus->max_xfer_rate = cpu_to_be32(bus_attr->max_xfer_rate);
+ bus->flags &= ~IPR_SCSI_ATTR_QAS_MASK;
+ if (bus_attr->qas_enabled)
+ bus->flags |= IPR_SCSI_ATTR_ENABLE_QAS;
+ else
+ bus->flags |= IPR_SCSI_ATTR_DISABLE_QAS;
+ }
+}
+
+/**
+ * ipr_build_mode_select - Build a mode select command
+ * @ipr_cmd: ipr command struct
+ * @res_handle: resource handle to send command to
+ * @parm:		Byte 1 of the Mode Select CDB
+ * @dma_addr: DMA buffer address
+ * @xfer_len: data transfer length
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_build_mode_select(struct ipr_cmnd *ipr_cmd,
+ u32 res_handle, u8 parm, u32 dma_addr,
+ u8 xfer_len)
+{
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = res_handle;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ioarcb->cmd_pkt.flags_hi |= IPR_FLAGS_HI_WRITE_NOT_READ;
+ ioarcb->cmd_pkt.cdb[0] = MODE_SELECT;
+ ioarcb->cmd_pkt.cdb[1] = parm;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_WRITE_LAST | xfer_len);
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioarcb->write_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->write_data_transfer_length = cpu_to_be32(xfer_len);
+}
+
+/**
+ * ipr_ioafp_mode_select_page28 - Issue Mode Select Page 28 to IOA
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sets up the SCSI bus attributes and sends
+ * a Mode Select for Page 28 to activate them.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_mode_select_page28(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_mode_pages *mode_pages = &ioa_cfg->vpd_cbs->mode_pages;
+ int length;
+
+ ENTER;
+ if (ioa_cfg->saved_mode_pages) {
+ memcpy(mode_pages, ioa_cfg->saved_mode_pages,
+ ioa_cfg->saved_mode_page_len);
+ length = ioa_cfg->saved_mode_page_len;
+ } else {
+ ipr_scsi_bus_speed_limit(ioa_cfg);
+ ipr_check_term_power(ioa_cfg, mode_pages);
+ ipr_modify_ioafp_mode_page_28(ioa_cfg, mode_pages);
+ length = mode_pages->hdr.length + 1;
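+		/* The mode data length field is reserved for Mode Select, so zero it */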
+ mode_pages->hdr.length = 0;
+ }
+
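+	/* Parameter byte 0x11 = Page Format (PF) | Save Pages (SP) */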
+ ipr_build_mode_select(ipr_cmd, cpu_to_be32(IPR_IOA_RES_HANDLE), 0x11,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, mode_pages),
+ length);
+
+ ipr_cmd->job_step = ipr_set_supported_devs;
+ ipr_cmd->u.res = list_entry(ioa_cfg->used_res_q.next,
+ struct ipr_resource_entry, queue);
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_build_mode_sense - Builds a mode sense command
+ * @ipr_cmd: ipr command struct
+ * @res_handle:		resource handle to send command to
+ * @parm: Byte 2 of mode sense command
+ * @dma_addr: DMA address of mode sense buffer
+ * @xfer_len: Size of DMA buffer
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_build_mode_sense(struct ipr_cmnd *ipr_cmd,
+ u32 res_handle,
+ u8 parm, u32 dma_addr, u8 xfer_len)
+{
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ioarcb->res_handle = res_handle;
+ ioarcb->cmd_pkt.cdb[0] = MODE_SENSE;
+ ioarcb->cmd_pkt.cdb[2] = parm;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | xfer_len);
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length = cpu_to_be32(xfer_len);
+}
+
+/**
+ * ipr_ioafp_mode_sense_page28 - Issue Mode Sense Page 28 to IOA
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Page 28 mode sense to the IOA to
+ * retrieve SCSI bus attributes.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_mode_sense_page28(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ipr_build_mode_sense(ipr_cmd, cpu_to_be32(IPR_IOA_RES_HANDLE),
+ 0x28, ioa_cfg->vpd_cbs_dma +
+ offsetof(struct ipr_misc_cbs, mode_pages),
+ sizeof(struct ipr_mode_pages));
+
+ ipr_cmd->job_step = ipr_ioafp_mode_select_page28;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_init_res_table - Initialize the resource table
+ * @ipr_cmd: ipr command struct
+ *
+ * This function looks through the existing resource table, comparing
+ * it with the config table. This function will take care of old/new
+ * devices and schedule adding/removing them from the mid-layer
+ * as appropriate.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_init_res_table(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_resource_entry *res, *temp;
+ struct ipr_config_table_entry *cfgte;
+ int found, i;
+ LIST_HEAD(old_res);
+
+ ENTER;
+ if (ioa_cfg->cfg_table->hdr.flags & IPR_UCODE_DOWNLOAD_REQ)
+ dev_err(&ioa_cfg->pdev->dev, "Microcode download required\n");
+
+ list_for_each_entry_safe(res, temp, &ioa_cfg->used_res_q, queue)
+ list_move_tail(&res->queue, &old_res);
+
+ for (i = 0; i < ioa_cfg->cfg_table->hdr.num_entries; i++) {
+ cfgte = &ioa_cfg->cfg_table->dev[i];
+ found = 0;
+
+ list_for_each_entry_safe(res, temp, &old_res, queue) {
+ if (!memcmp(&res->cfgte.res_addr,
+ &cfgte->res_addr, sizeof(cfgte->res_addr))) {
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ found = 1;
+ break;
+ }
+ }
+
+ if (!found) {
+ if (list_empty(&ioa_cfg->free_res_q)) {
+ dev_err(&ioa_cfg->pdev->dev, "Too many devices attached\n");
+ break;
+ }
+
+ found = 1;
+ res = list_entry(ioa_cfg->free_res_q.next,
+ struct ipr_resource_entry, queue);
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ ipr_init_res_entry(res);
+ res->add_to_ml = 1;
+ }
+
+ if (found)
+ memcpy(&res->cfgte, cfgte, sizeof(struct ipr_config_table_entry));
+ }
+
+ list_for_each_entry_safe(res, temp, &old_res, queue) {
+ if (res->sdev) {
+ res->del_from_ml = 1;
+ list_move_tail(&res->queue, &ioa_cfg->used_res_q);
+ } else {
+ list_move_tail(&res->queue, &ioa_cfg->free_res_q);
+ }
+ }
+
+ ipr_cmd->job_step = ipr_ioafp_mode_sense_page28;
+
+ LEAVE;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_ioafp_query_ioa_cfg - Send a Query IOA Config to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Query IOA Configuration command
+ * to the adapter to retrieve the IOA configuration table.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_query_ioa_cfg(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+ struct ipr_inquiry_page3 *ucode_vpd = &ioa_cfg->vpd_cbs->page3_data;
+
+ ENTER;
+ dev_info(&ioa_cfg->pdev->dev, "Adapter firmware version: %02X%02X%02X%02X\n",
+ ucode_vpd->major_release, ucode_vpd->card_type,
+ ucode_vpd->minor_release[0], ucode_vpd->minor_release[1]);
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
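+	/* The config table transfer length is passed in CDB bytes 7-8 (big endian) */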
+ ioarcb->cmd_pkt.cdb[0] = IPR_QUERY_IOA_CONFIG;
+ ioarcb->cmd_pkt.cdb[7] = (sizeof(struct ipr_config_table) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] = sizeof(struct ipr_config_table) & 0xff;
+
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length =
+ cpu_to_be32(sizeof(struct ipr_config_table));
+
+ ioadl->address = cpu_to_be32(ioa_cfg->cfg_table_dma);
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | sizeof(struct ipr_config_table));
+
+ ipr_cmd->job_step = ipr_init_res_table;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_inquiry - Send an Inquiry to the adapter.
+ * @ipr_cmd:	ipr command struct
+ * @flags:	inquiry CDB flags (e.g. the EVPD bit)
+ * @page:	inquiry page code
+ * @dma_addr:	DMA address of the inquiry buffer
+ * @xfer_len:	transfer length
+ *
+ * This utility function sends an inquiry to the adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_ioafp_inquiry(struct ipr_cmnd *ipr_cmd, u8 flags, u8 page,
+ u32 dma_addr, u8 xfer_len)
+{
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+ struct ipr_ioadl_desc *ioadl = ipr_cmd->ioadl;
+
+ ENTER;
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
+ ioarcb->cmd_pkt.cdb[0] = INQUIRY;
+ ioarcb->cmd_pkt.cdb[1] = flags;
+ ioarcb->cmd_pkt.cdb[2] = page;
+ ioarcb->cmd_pkt.cdb[4] = xfer_len;
+
+ ioarcb->read_ioadl_len = cpu_to_be32(sizeof(struct ipr_ioadl_desc));
+ ioarcb->read_data_transfer_length = cpu_to_be32(xfer_len);
+
+ ioadl->address = cpu_to_be32(dma_addr);
+ ioadl->flags_and_data_len =
+ cpu_to_be32(IPR_IOADL_FLAGS_READ_LAST | xfer_len);
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+ LEAVE;
+}
+
+/**
+ * ipr_ioafp_page3_inquiry - Send a Page 3 Inquiry to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a Page 3 inquiry to the adapter
+ * to retrieve software VPD information.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_page3_inquiry(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ char type[5];
+
+ ENTER;
+
+ /* Grab the type out of the VPD and store it away */
+ memcpy(type, ioa_cfg->vpd_cbs->ioa_vpd.std_inq_data.vpids.product_id, 4);
+ type[4] = '\0';
+ ioa_cfg->type = simple_strtoul((char *)type, NULL, 16);
+
+ ipr_cmd->job_step = ipr_ioafp_query_ioa_cfg;
+
+ ipr_ioafp_inquiry(ipr_cmd, 1, 3,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, page3_data),
+ sizeof(struct ipr_inquiry_page3));
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_std_inquiry - Send a Standard Inquiry to the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends a standard inquiry to the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_std_inquiry(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_ioafp_page3_inquiry;
+
+ ipr_ioafp_inquiry(ipr_cmd, 0, 0,
+ ioa_cfg->vpd_cbs_dma + offsetof(struct ipr_misc_cbs, ioa_vpd),
+ sizeof(struct ipr_ioa_vpd));
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_ioafp_indentify_hrrq - Send Identify Host RRQ.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function sends an Identify Host Request Response Queue
+ * command to establish the HRRQ with the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_ioafp_indentify_hrrq(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_ioarcb *ioarcb = &ipr_cmd->ioarcb;
+
+ ENTER;
+ dev_info(&ioa_cfg->pdev->dev, "Starting IOA initialization sequence.\n");
+
+ ioarcb->cmd_pkt.cdb[0] = IPR_ID_HOST_RR_Q;
+ ioarcb->res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+
+ ioarcb->cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
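+	/* Pack the 32-bit host RRQ DMA address into CDB bytes 2-5, big endian */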
+ ioarcb->cmd_pkt.cdb[2] =
+ ((u32) ioa_cfg->host_rrq_dma >> 24) & 0xff;
+ ioarcb->cmd_pkt.cdb[3] =
+ ((u32) ioa_cfg->host_rrq_dma >> 16) & 0xff;
+ ioarcb->cmd_pkt.cdb[4] =
+ ((u32) ioa_cfg->host_rrq_dma >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[5] =
+ ((u32) ioa_cfg->host_rrq_dma) & 0xff;
+ ioarcb->cmd_pkt.cdb[7] =
+ ((sizeof(u32) * IPR_NUM_CMD_BLKS) >> 8) & 0xff;
+ ioarcb->cmd_pkt.cdb[8] =
+ (sizeof(u32) * IPR_NUM_CMD_BLKS) & 0xff;
+
+ ipr_cmd->job_step = ipr_ioafp_std_inquiry;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, IPR_INTERNAL_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_timer_done - Adapter reset timer function
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function is used in adapter reset processing
+ * for timing events. If the reset_cmd pointer in the IOA
+ * config struct no longer points to this command, nested
+ * resets are in progress and fail_all_ops will take care of
+ * freeing the command block.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_timer_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+
+ if (ioa_cfg->reset_cmd == ipr_cmd) {
+ list_del(&ipr_cmd->queue);
+ ipr_cmd->done(ipr_cmd);
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+}
+
+/**
+ * ipr_reset_start_timer - Start a timer for adapter reset job
+ * @ipr_cmd: ipr command struct
+ * @timeout: timeout value
+ *
+ * Description: This function is used in adapter reset processing
+ * for timing events. If the reset_cmd pointer in the IOA
+ * config struct no longer points to this command, nested
+ * resets are in progress and fail_all_ops will take care of
+ * freeing the command block.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_start_timer(struct ipr_cmnd *ipr_cmd,
+ unsigned long timeout)
+{
+ list_add_tail(&ipr_cmd->queue, &ipr_cmd->ioa_cfg->pending_q);
+ ipr_cmd->done = ipr_reset_ioa_job;
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + timeout;
+ ipr_cmd->timer.function = (void (*)(unsigned long))ipr_reset_timer_done;
+ add_timer(&ipr_cmd->timer);
+}
+
+/**
+ * ipr_init_ioa_mem - Initialize ioa_cfg control block
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_init_ioa_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ memset(ioa_cfg->host_rrq, 0, sizeof(u32) * IPR_NUM_CMD_BLKS);
+
+ /* Initialize Host RRQ pointers */
+ ioa_cfg->hrrq_start = ioa_cfg->host_rrq;
+ ioa_cfg->hrrq_end = &ioa_cfg->host_rrq[IPR_NUM_CMD_BLKS - 1];
+ ioa_cfg->hrrq_curr = ioa_cfg->hrrq_start;
+ ioa_cfg->toggle_bit = 1;
+
+ /* Zero out config table */
+ memset(ioa_cfg->cfg_table, 0, sizeof(struct ipr_config_table));
+}
+
+/**
+ * ipr_reset_enable_ioa - Enable the IOA following a reset.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function reinitializes some control blocks and
+ * enables destructive diagnostics on the adapter.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_enable_ioa(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ volatile u32 int_reg;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_ioafp_indentify_hrrq;
+ ipr_init_ioa_mem(ioa_cfg);
+
+ ioa_cfg->allow_interrupts = 1;
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+
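+	/* If the adapter is already operational, unmask interrupts and continue without the doorbell */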
+ if (int_reg & IPR_PCII_IOA_TRANS_TO_OPER) {
+ writel((IPR_PCII_ERROR_INTERRUPTS | IPR_PCII_HRRQ_UPDATED),
+ ioa_cfg->regs.clr_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ /* Enable destructive diagnostics on IOA */
+ writel(IPR_DOORBELL, ioa_cfg->regs.set_uproc_interrupt_reg);
+
+ writel(IPR_PCII_OPER_INTERRUPTS, ioa_cfg->regs.clr_interrupt_mask_reg);
+ int_reg = readl(ioa_cfg->regs.sense_interrupt_mask_reg);
+
+ dev_info(&ioa_cfg->pdev->dev, "Initializing IOA.\n");
+
+ ipr_cmd->timer.data = (unsigned long) ipr_cmd;
+ ipr_cmd->timer.expires = jiffies + IPR_OPERATIONAL_TIMEOUT;
+ ipr_cmd->timer.function = (void (*)(unsigned long))ipr_timeout;
+ ipr_cmd->done = ipr_reset_ioa_job;
+ add_timer(&ipr_cmd->timer);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->pending_q);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_wait_for_dump - Wait for a dump to time out.
+ * @ipr_cmd: ipr command struct
+ *
+ * This function is invoked when an adapter dump has run out
+ * of processing time.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_reset_wait_for_dump(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
+ if (ioa_cfg->sdt_state == GET_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_unit_check_no_data - Log a unit check/no data error
+ * @ioa_cfg: ioa config struct
+ *
+ * Logs an error indicating the adapter unit checked, but for some
+ * reason, we were unable to fetch the unit check buffer.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_unit_check_no_data(struct ipr_ioa_cfg *ioa_cfg)
+{
+ ioa_cfg->errors_logged++;
+ dev_err(&ioa_cfg->pdev->dev, "IOA unit check with no data\n");
+}
+
+/**
+ * ipr_get_unit_check_buffer - Get the unit check buffer from the IOA
+ * @ioa_cfg: ioa config struct
+ *
+ * Fetches the unit check buffer from the adapter by clocking the data
+ * through the mailbox register.
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_get_unit_check_buffer(struct ipr_ioa_cfg *ioa_cfg)
+{
+ unsigned long mailbox;
+ struct ipr_hostrcb *hostrcb;
+ struct ipr_uc_sdt sdt;
+ int rc, length;
+
+ mailbox = readl(ioa_cfg->ioa_mailbox);
+
+ if (!ipr_sdt_is_fmt2(mailbox)) {
+ ipr_unit_check_no_data(ioa_cfg);
+ return;
+ }
+
+ memset(&sdt, 0, sizeof(struct ipr_uc_sdt));
+ rc = ipr_get_ldump_data_section(ioa_cfg, mailbox, (u32 *) &sdt,
+ (sizeof(struct ipr_uc_sdt)) / sizeof(u32));
+
+ if (rc || (be32_to_cpu(sdt.hdr.state) != IPR_FMT2_SDT_READY_TO_USE) ||
+ !(sdt.entry[0].flags & IPR_SDT_VALID_ENTRY)) {
+ ipr_unit_check_no_data(ioa_cfg);
+ return;
+ }
+
+ /* Find length of the first sdt entry (UC buffer) */
+ length = (be32_to_cpu(sdt.entry[0].end_offset) -
+ be32_to_cpu(sdt.entry[0].bar_str_offset)) & IPR_FMT2_MBX_ADDR_MASK;
+
+ hostrcb = list_entry(ioa_cfg->hostrcb_free_q.next,
+ struct ipr_hostrcb, queue);
+ list_del(&hostrcb->queue);
+ memset(&hostrcb->hcam, 0, sizeof(hostrcb->hcam));
+
+ rc = ipr_get_ldump_data_section(ioa_cfg,
+ be32_to_cpu(sdt.entry[0].bar_str_offset),
+ (u32 *)&hostrcb->hcam,
+ min(length, (int)sizeof(hostrcb->hcam)) / sizeof(u32));
+
+ if (!rc)
+ ipr_handle_log_data(ioa_cfg, hostrcb);
+ else
+ ipr_unit_check_no_data(ioa_cfg);
+
+ list_add_tail(&hostrcb->queue, &ioa_cfg->hostrcb_free_q);
+}
+
+/**
+ * ipr_reset_restore_cfg_space - Restore PCI config space.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function restores the saved PCI config space of
+ * the adapter, fails all outstanding ops back to the callers, and
+ * fetches the dump/unit check if applicable to this reset.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_restore_cfg_space(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc;
+
+ ENTER;
+ rc = pci_restore_state(ioa_cfg->pdev);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ if (ipr_set_pcix_cmd_reg(ioa_cfg)) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ ipr_fail_all_ops(ioa_cfg);
+
+ if (ioa_cfg->ioa_unit_checked) {
+ ioa_cfg->ioa_unit_checked = 0;
+ ipr_get_unit_check_buffer(ioa_cfg);
+ ipr_cmd->job_step = ipr_reset_alert;
+ ipr_reset_start_timer(ipr_cmd, 0);
+ return IPR_RC_JOB_RETURN;
+ }
+
+ if (ioa_cfg->in_ioa_bringdown) {
+ ipr_cmd->job_step = ipr_ioa_bringdown_done;
+ } else {
+ ipr_cmd->job_step = ipr_reset_enable_ioa;
+
+ if (GET_DUMP == ioa_cfg->sdt_state) {
+ ipr_reset_start_timer(ipr_cmd, IPR_DUMP_TIMEOUT);
+ ipr_cmd->job_step = ipr_reset_wait_for_dump;
+ schedule_work(&ioa_cfg->work_q);
+ return IPR_RC_JOB_RETURN;
+ }
+ }
+
+	LEAVE;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_reset_start_bist - Run BIST on the adapter.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function runs BIST on the adapter, then delays 2 seconds.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_start_bist(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc;
+
+ ENTER;
+ rc = pci_write_config_byte(ioa_cfg->pdev, PCI_BIST, PCI_BIST_START);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ ipr_cmd->ioasa.ioasc = cpu_to_be32(IPR_IOASC_PCI_ACCESS_ERROR);
+ rc = IPR_RC_JOB_CONTINUE;
+ } else {
+ ipr_cmd->job_step = ipr_reset_restore_cfg_space;
+ ipr_reset_start_timer(ipr_cmd, IPR_WAIT_FOR_BIST_TIMEOUT);
+ rc = IPR_RC_JOB_RETURN;
+ }
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_reset_allowed - Query whether or not IOA can be reset
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 if reset not allowed / non-zero if reset is allowed
+ **/
+static int ipr_reset_allowed(struct ipr_ioa_cfg *ioa_cfg)
+{
+ volatile u32 temp_reg;
+
+ temp_reg = readl(ioa_cfg->regs.sense_interrupt_reg);
+ return ((temp_reg & IPR_PCII_CRITICAL_OPERATION) == 0);
+}
+
+/**
+ * ipr_reset_wait_to_start_bist - Wait for permission to reset IOA.
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function waits for adapter permission to run BIST,
+ * then runs BIST. If the adapter does not give permission after a
+ * reasonable time, we will reset the adapter anyway. The impact of
+ * resetting the adapter without warning the adapter is the risk of
+ * losing the persistent error log on the adapter. If the adapter is
+ * reset while it is writing to the flash on the adapter, the flash
+ * segment will have bad ECC and be zeroed.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_wait_to_start_bist(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ int rc = IPR_RC_JOB_RETURN;
+
+ if (!ipr_reset_allowed(ioa_cfg) && ipr_cmd->u.time_left) {
+ ipr_cmd->u.time_left -= IPR_CHECK_FOR_RESET_TIMEOUT;
+ ipr_reset_start_timer(ipr_cmd, IPR_CHECK_FOR_RESET_TIMEOUT);
+ } else {
+ ipr_cmd->job_step = ipr_reset_start_bist;
+ rc = IPR_RC_JOB_CONTINUE;
+ }
+
+ return rc;
+}
+
+/**
+ * ipr_reset_alert - Alert the adapter of a pending reset
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function alerts the adapter that it will be reset.
+ * If memory space is not currently enabled, proceed directly
+ * to running BIST on the adapter. The timer must always be started
+ * so we guarantee we do not run BIST from ipr_isr.
+ *
+ * Return value:
+ * IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_alert(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ u16 cmd_reg;
+ int rc;
+
+ ENTER;
+ rc = pci_read_config_word(ioa_cfg->pdev, PCI_COMMAND, &cmd_reg);
+
+ if ((rc == PCIBIOS_SUCCESSFUL) && (cmd_reg & PCI_COMMAND_MEMORY)) {
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~0);
+ writel(IPR_UPROCI_RESET_ALERT, ioa_cfg->regs.set_uproc_interrupt_reg);
+ ipr_cmd->job_step = ipr_reset_wait_to_start_bist;
+ } else {
+ ipr_cmd->job_step = ipr_reset_start_bist;
+ }
+
+ ipr_cmd->u.time_left = IPR_WAIT_FOR_RESET_TIMEOUT;
+ ipr_reset_start_timer(ipr_cmd, IPR_CHECK_FOR_RESET_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_ucode_download_done - Microcode download completion
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function unmaps the microcode download buffer.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE
+ **/
+static int ipr_reset_ucode_download_done(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_sglist *sglist = ioa_cfg->ucode_sglist;
+
+ pci_unmap_sg(ioa_cfg->pdev, sglist->scatterlist,
+ sglist->num_sg, DMA_TO_DEVICE);
+
+ ipr_cmd->job_step = ipr_reset_alert;
+ return IPR_RC_JOB_CONTINUE;
+}
+
+/**
+ * ipr_reset_ucode_download - Download microcode to the adapter
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function checks whether there is microcode
+ * to download to the adapter. If there is, a download is performed.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_ucode_download(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ struct ipr_sglist *sglist = ioa_cfg->ucode_sglist;
+
+ ENTER;
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ if (!sglist)
+ return IPR_RC_JOB_CONTINUE;
+
+ ipr_cmd->ioarcb.res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ipr_cmd->ioarcb.cmd_pkt.request_type = IPR_RQTYPE_SCSICDB;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0] = WRITE_BUFFER;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[1] = IPR_WR_BUF_DOWNLOAD_AND_SAVE;
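+	/* WRITE BUFFER carries the 24-bit parameter list length in CDB bytes 6-8 */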
+ ipr_cmd->ioarcb.cmd_pkt.cdb[6] = (sglist->buffer_len & 0xff0000) >> 16;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[7] = (sglist->buffer_len & 0x00ff00) >> 8;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[8] = sglist->buffer_len & 0x0000ff;
+
+ if (ipr_map_ucode_buffer(ipr_cmd, sglist, sglist->buffer_len)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "Failed to map microcode download buffer\n");
+ return IPR_RC_JOB_CONTINUE;
+ }
+
+ ipr_cmd->job_step = ipr_reset_ucode_download_done;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout,
+ IPR_WRITE_BUFFER_TIMEOUT);
+
+ LEAVE;
+ return IPR_RC_JOB_RETURN;
+}
+
+/**
+ * ipr_reset_shutdown_ioa - Shutdown the adapter
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function issues an adapter shutdown of the
+ * specified type to the specified adapter as part of the
+ * adapter reset job.
+ *
+ * Return value:
+ * IPR_RC_JOB_CONTINUE / IPR_RC_JOB_RETURN
+ **/
+static int ipr_reset_shutdown_ioa(struct ipr_cmnd *ipr_cmd)
+{
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+ enum ipr_shutdown_type shutdown_type = ipr_cmd->u.shutdown_type;
+ unsigned long timeout;
+ int rc = IPR_RC_JOB_CONTINUE;
+
+ ENTER;
+ if (shutdown_type != IPR_SHUTDOWN_NONE && !ioa_cfg->ioa_is_dead) {
+ ipr_cmd->ioarcb.res_handle = cpu_to_be32(IPR_IOA_RES_HANDLE);
+ ipr_cmd->ioarcb.cmd_pkt.request_type = IPR_RQTYPE_IOACMD;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0] = IPR_IOA_SHUTDOWN;
+ ipr_cmd->ioarcb.cmd_pkt.cdb[1] = shutdown_type;
+
+ if (shutdown_type == IPR_SHUTDOWN_ABBREV)
+ timeout = IPR_ABBREV_SHUTDOWN_TIMEOUT;
+ else if (shutdown_type == IPR_SHUTDOWN_PREPARE_FOR_NORMAL)
+ timeout = IPR_INTERNAL_TIMEOUT;
+ else
+ timeout = IPR_SHUTDOWN_TIMEOUT;
+
+ ipr_do_req(ipr_cmd, ipr_reset_ioa_job, ipr_timeout, timeout);
+
+ rc = IPR_RC_JOB_RETURN;
+ ipr_cmd->job_step = ipr_reset_ucode_download;
+ } else
+ ipr_cmd->job_step = ipr_reset_alert;
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_reset_ioa_job - Adapter reset job
+ * @ipr_cmd: ipr command struct
+ *
+ * Description: This function is the job router for the adapter reset job.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_reset_ioa_job(struct ipr_cmnd *ipr_cmd)
+{
+ u32 rc, ioasc;
+ unsigned long scratch = ipr_cmd->u.scratch;
+ struct ipr_ioa_cfg *ioa_cfg = ipr_cmd->ioa_cfg;
+
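+	/*
+	 * Run job steps inline while they return IPR_RC_JOB_CONTINUE;
+	 * a step returns IPR_RC_JOB_RETURN once it has queued async
+	 * work and will re-enter this router on completion.
+	 */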
+ do {
+ ioasc = be32_to_cpu(ipr_cmd->ioasa.ioasc);
+
+ if (ioa_cfg->reset_cmd != ipr_cmd) {
+ /*
+ * We are doing nested adapter resets and this is
+ * not the current reset job.
+ */
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return;
+ }
+
+ if (IPR_IOASC_SENSE_KEY(ioasc)) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "0x%02X failed with IOASC: 0x%08X\n",
+ ipr_cmd->ioarcb.cmd_pkt.cdb[0], ioasc);
+
+ ipr_initiate_ioa_reset(ioa_cfg, IPR_SHUTDOWN_NONE);
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ return;
+ }
+
+ ipr_reinit_ipr_cmnd(ipr_cmd);
+ ipr_cmd->u.scratch = scratch;
+ rc = ipr_cmd->job_step(ipr_cmd);
+	} while (rc == IPR_RC_JOB_CONTINUE);
+}
+
+/**
+ * _ipr_initiate_ioa_reset - Initiate an adapter reset
+ * @ioa_cfg: ioa config struct
+ * @job_step: first job step of reset job
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate the reset of the given adapter
+ * starting at the selected job step.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void _ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
+ int (*job_step) (struct ipr_cmnd *),
+ enum ipr_shutdown_type shutdown_type)
+{
+ struct ipr_cmnd *ipr_cmd;
+
+ ioa_cfg->in_reset_reload = 1;
+ ioa_cfg->allow_cmds = 0;
+ scsi_block_requests(ioa_cfg->host);
+
+ ipr_cmd = ipr_get_free_ipr_cmnd(ioa_cfg);
+ ioa_cfg->reset_cmd = ipr_cmd;
+ ipr_cmd->job_step = job_step;
+ ipr_cmd->u.shutdown_type = shutdown_type;
+
+ ipr_reset_ioa_job(ipr_cmd);
+}
+
+/**
+ * ipr_initiate_ioa_reset - Initiate an adapter reset
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate the reset of the given adapter.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_initiate_ioa_reset(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ if (ioa_cfg->ioa_is_dead)
+ return;
+
+ if (ioa_cfg->in_reset_reload && ioa_cfg->sdt_state == GET_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+
+ if (ioa_cfg->reset_retries++ > IPR_NUM_RESET_RELOAD_RETRIES) {
+ dev_err(&ioa_cfg->pdev->dev,
+ "IOA taken offline - error recovery failed\n");
+
+ ioa_cfg->reset_retries = 0;
+ ioa_cfg->ioa_is_dead = 1;
+
+ if (ioa_cfg->in_ioa_bringdown) {
+ ioa_cfg->reset_cmd = NULL;
+ ioa_cfg->in_reset_reload = 0;
+ ipr_fail_all_ops(ioa_cfg);
+ wake_up_all(&ioa_cfg->reset_wait_q);
+
+ spin_unlock_irq(ioa_cfg->host->host_lock);
+ scsi_unblock_requests(ioa_cfg->host);
+ spin_lock_irq(ioa_cfg->host->host_lock);
+ return;
+ } else {
+ ioa_cfg->in_ioa_bringdown = 1;
+ shutdown_type = IPR_SHUTDOWN_NONE;
+ }
+ }
+
+ _ipr_initiate_ioa_reset(ioa_cfg, ipr_reset_shutdown_ioa,
+ shutdown_type);
+}
+
+/**
+ * ipr_probe_ioa_part2 - Initializes IOAs found in ipr_probe_ioa(..)
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Description: This is the second phase of adapter initialization.
+ * This function takes care of initializing the adapter to the point
+ * where it can accept new commands.
+ *
+ * Return value:
+ * 	0 on success / -EIO on failure
+ **/
+static int __devinit ipr_probe_ioa_part2(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int rc = 0;
+ unsigned long host_lock_flags = 0;
+
+ ENTER;
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+ dev_dbg(&ioa_cfg->pdev->dev, "ioa_cfg adx: 0x%p\n", ioa_cfg);
+ _ipr_initiate_ioa_reset(ioa_cfg, ipr_reset_enable_ioa, IPR_SHUTDOWN_NONE);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+
+ if (ioa_cfg->ioa_is_dead) {
+ rc = -EIO;
+ } else if (ipr_invalid_adapter(ioa_cfg)) {
+ if (!ipr_testmode)
+ rc = -EIO;
+
+ dev_err(&ioa_cfg->pdev->dev,
+ "Adapter not supported in this hardware configuration.\n");
+ }
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+
+ LEAVE;
+ return rc;
+}
+
+/**
+ * ipr_free_cmd_blks - Frees command blocks allocated for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_free_cmd_blks(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ for (i = 0; i < IPR_NUM_CMD_BLKS; i++) {
+ if (ioa_cfg->ipr_cmnd_list[i])
+ pci_pool_free(ioa_cfg->ipr_cmd_pool,
+ ioa_cfg->ipr_cmnd_list[i],
+ ioa_cfg->ipr_cmnd_list_dma[i]);
+
+ ioa_cfg->ipr_cmnd_list[i] = NULL;
+ }
+
+ if (ioa_cfg->ipr_cmd_pool)
+ pci_pool_destroy (ioa_cfg->ipr_cmd_pool);
+
+ ioa_cfg->ipr_cmd_pool = NULL;
+}
+
+/**
+ * ipr_free_mem - Frees memory allocated for an adapter
+ * @ioa_cfg: ioa cfg struct
+ *
+ * Return value:
+ * nothing
+ **/
+static void ipr_free_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ kfree(ioa_cfg->res_entries);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(struct ipr_misc_cbs),
+ ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma);
+ ipr_free_cmd_blks(ioa_cfg);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(u32) * IPR_NUM_CMD_BLKS,
+ ioa_cfg->host_rrq, ioa_cfg->host_rrq_dma);
+ pci_free_consistent(ioa_cfg->pdev, sizeof(struct ipr_config_table),
+ ioa_cfg->cfg_table,
+ ioa_cfg->cfg_table_dma);
+
+ for (i = 0; i < IPR_NUM_HCAMS; i++) {
+ pci_free_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_hostrcb),
+ ioa_cfg->hostrcb[i],
+ ioa_cfg->hostrcb_dma[i]);
+ }
+
+ ipr_free_dump(ioa_cfg);
+ kfree(ioa_cfg->saved_mode_pages);
+ kfree(ioa_cfg->trace);
+}
+
+/**
+ * ipr_free_all_resources - Free all allocated resources for an adapter.
+ * @ioa_cfg:	ioa config struct
+ *
+ * This function frees all allocated resources for the
+ * specified adapter.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_free_all_resources(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct pci_dev *pdev = ioa_cfg->pdev;
+
+ ENTER;
+ free_irq(pdev->irq, ioa_cfg);
+ iounmap(ioa_cfg->hdw_dma_regs);
+ pci_release_regions(pdev);
+ ipr_free_mem(ioa_cfg);
+ scsi_host_put(ioa_cfg->host);
+ pci_disable_device(pdev);
+ LEAVE;
+}
+
+/**
+ * ipr_alloc_cmd_blks - Allocate command blocks for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / -ENOMEM on allocation failure
+ **/
+static int __devinit ipr_alloc_cmd_blks(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct ipr_cmnd *ipr_cmd;
+ struct ipr_ioarcb *ioarcb;
+ dma_addr_t dma_addr;
+ int i;
+
+ ioa_cfg->ipr_cmd_pool = pci_pool_create (IPR_NAME, ioa_cfg->pdev,
+ sizeof(struct ipr_cmnd), 8, 0);
+
+ if (!ioa_cfg->ipr_cmd_pool)
+ return -ENOMEM;
+
+ for (i = 0; i < IPR_NUM_CMD_BLKS; i++) {
+ ipr_cmd = pci_pool_alloc (ioa_cfg->ipr_cmd_pool, SLAB_KERNEL, &dma_addr);
+
+ if (!ipr_cmd) {
+ ipr_free_cmd_blks(ioa_cfg);
+ return -ENOMEM;
+ }
+
+ memset(ipr_cmd, 0, sizeof(*ipr_cmd));
+ ioa_cfg->ipr_cmnd_list[i] = ipr_cmd;
+ ioa_cfg->ipr_cmnd_list_dma[i] = dma_addr;
+
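+		/*
+		 * The IOADL, IOASA, and sense buffer live inside the
+		 * command block itself, so their DMA addresses are
+		 * offsets from the block's base address. The response
+		 * handle is shifted left two bits since the low-order
+		 * HRRQ entry bits carry flag state.
+		 */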
+ ioarcb = &ipr_cmd->ioarcb;
+ ioarcb->ioarcb_host_pci_addr = cpu_to_be32(dma_addr);
+ ioarcb->host_response_handle = cpu_to_be32(i << 2);
+ ioarcb->write_ioadl_addr =
+ cpu_to_be32(dma_addr + offsetof(struct ipr_cmnd, ioadl));
+ ioarcb->read_ioadl_addr = ioarcb->write_ioadl_addr;
+ ioarcb->ioasa_host_pci_addr =
+ cpu_to_be32(dma_addr + offsetof(struct ipr_cmnd, ioasa));
+ ioarcb->ioasa_len = cpu_to_be16(sizeof(struct ipr_ioasa));
+ ipr_cmd->cmd_index = i;
+ ipr_cmd->ioa_cfg = ioa_cfg;
+ ipr_cmd->sense_buffer_dma = dma_addr +
+ offsetof(struct ipr_cmnd, sense_buffer);
+
+ list_add_tail(&ipr_cmd->queue, &ioa_cfg->free_q);
+ }
+
+ return 0;
+}
+
+/**
+ * ipr_alloc_mem - Allocate memory for an adapter
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * 0 on success / non-zero for error
+ **/
+static int __devinit ipr_alloc_mem(struct ipr_ioa_cfg *ioa_cfg)
+{
+ struct pci_dev *pdev = ioa_cfg->pdev;
+ int i, rc = -ENOMEM;
+
+ ENTER;
+ ioa_cfg->res_entries = kmalloc(sizeof(struct ipr_resource_entry) *
+ IPR_MAX_PHYSICAL_DEVS, GFP_KERNEL);
+
+ if (!ioa_cfg->res_entries)
+ goto out;
+
+ memset(ioa_cfg->res_entries, 0,
+ sizeof(struct ipr_resource_entry) * IPR_MAX_PHYSICAL_DEVS);
+
+ for (i = 0; i < IPR_MAX_PHYSICAL_DEVS; i++)
+ list_add_tail(&ioa_cfg->res_entries[i].queue, &ioa_cfg->free_res_q);
+
+ ioa_cfg->vpd_cbs = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_misc_cbs),
+ &ioa_cfg->vpd_cbs_dma);
+
+ if (!ioa_cfg->vpd_cbs)
+ goto out_free_res_entries;
+
+ if (ipr_alloc_cmd_blks(ioa_cfg))
+ goto out_free_vpd_cbs;
+
+ ioa_cfg->host_rrq = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(u32) * IPR_NUM_CMD_BLKS,
+ &ioa_cfg->host_rrq_dma);
+
+ if (!ioa_cfg->host_rrq)
+ goto out_ipr_free_cmd_blocks;
+
+ ioa_cfg->cfg_table = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_config_table),
+ &ioa_cfg->cfg_table_dma);
+
+ if (!ioa_cfg->cfg_table)
+ goto out_free_host_rrq;
+
+ for (i = 0; i < IPR_NUM_HCAMS; i++) {
+ ioa_cfg->hostrcb[i] = pci_alloc_consistent(ioa_cfg->pdev,
+ sizeof(struct ipr_hostrcb),
+ &ioa_cfg->hostrcb_dma[i]);
+
+ if (!ioa_cfg->hostrcb[i])
+ goto out_free_hostrcb_dma;
+
+ ioa_cfg->hostrcb[i]->hostrcb_dma =
+ ioa_cfg->hostrcb_dma[i] + offsetof(struct ipr_hostrcb, hcam);
+ list_add_tail(&ioa_cfg->hostrcb[i]->queue, &ioa_cfg->hostrcb_free_q);
+ }
+
+ ioa_cfg->trace = kmalloc(sizeof(struct ipr_trace_entry) *
+ IPR_NUM_TRACE_ENTRIES, GFP_KERNEL);
+
+ if (!ioa_cfg->trace)
+ goto out_free_hostrcb_dma;
+
+ memset(ioa_cfg->trace, 0,
+ sizeof(struct ipr_trace_entry) * IPR_NUM_TRACE_ENTRIES);
+
+ rc = 0;
+out:
+ LEAVE;
+ return rc;
+
+out_free_hostrcb_dma:
+ while (i-- > 0) {
+ pci_free_consistent(pdev, sizeof(struct ipr_hostrcb),
+ ioa_cfg->hostrcb[i],
+ ioa_cfg->hostrcb_dma[i]);
+ }
+ pci_free_consistent(pdev, sizeof(struct ipr_config_table),
+ ioa_cfg->cfg_table, ioa_cfg->cfg_table_dma);
+out_free_host_rrq:
+ pci_free_consistent(pdev, sizeof(u32) * IPR_NUM_CMD_BLKS,
+ ioa_cfg->host_rrq, ioa_cfg->host_rrq_dma);
+out_ipr_free_cmd_blocks:
+ ipr_free_cmd_blks(ioa_cfg);
+out_free_vpd_cbs:
+ pci_free_consistent(pdev, sizeof(struct ipr_misc_cbs),
+ ioa_cfg->vpd_cbs, ioa_cfg->vpd_cbs_dma);
+out_free_res_entries:
+ kfree(ioa_cfg->res_entries);
+ goto out;
+}
+
+/**
+ * ipr_initialize_bus_attr - Initialize SCSI bus attributes to default values
+ * @ioa_cfg: ioa config struct
+ *
+ * Return value:
+ * none
+ **/
+static void __devinit ipr_initialize_bus_attr(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int i;
+
+ for (i = 0; i < IPR_MAX_NUM_BUSES; i++) {
+ ioa_cfg->bus_attr[i].bus = i;
+ ioa_cfg->bus_attr[i].qas_enabled = 0;
+ ioa_cfg->bus_attr[i].bus_width = IPR_DEFAULT_BUS_WIDTH;
+ if (ipr_max_speed < ARRAY_SIZE(ipr_max_bus_speeds))
+ ioa_cfg->bus_attr[i].max_xfer_rate = ipr_max_bus_speeds[ipr_max_speed];
+ else
+ ioa_cfg->bus_attr[i].max_xfer_rate = IPR_U160_SCSI_RATE;
+ }
+}
+
+/**
+ * ipr_init_ioa_cfg - Initialize IOA config struct
+ * @ioa_cfg: ioa config struct
+ * @host: scsi host struct
+ * @pdev: PCI dev struct
+ *
+ * Return value:
+ * none
+ **/
+static void __devinit ipr_init_ioa_cfg(struct ipr_ioa_cfg *ioa_cfg,
+ struct Scsi_Host *host, struct pci_dev *pdev)
+{
+ const struct ipr_interrupt_offsets *p;
+ struct ipr_interrupts *t;
+ void __iomem *base;
+
+ ioa_cfg->host = host;
+ ioa_cfg->pdev = pdev;
+ ioa_cfg->log_level = ipr_log_level;
+ sprintf(ioa_cfg->eye_catcher, IPR_EYECATCHER);
+ sprintf(ioa_cfg->trace_start, IPR_TRACE_START_LABEL);
+ sprintf(ioa_cfg->ipr_free_label, IPR_FREEQ_LABEL);
+ sprintf(ioa_cfg->ipr_pending_label, IPR_PENDQ_LABEL);
+ sprintf(ioa_cfg->cfg_table_start, IPR_CFG_TBL_START);
+ sprintf(ioa_cfg->resource_table_label, IPR_RES_TABLE_LABEL);
+ sprintf(ioa_cfg->ipr_hcam_label, IPR_HCAM_LABEL);
+ sprintf(ioa_cfg->ipr_cmd_label, IPR_CMD_LABEL);
+
+ INIT_LIST_HEAD(&ioa_cfg->free_q);
+ INIT_LIST_HEAD(&ioa_cfg->pending_q);
+ INIT_LIST_HEAD(&ioa_cfg->hostrcb_free_q);
+ INIT_LIST_HEAD(&ioa_cfg->hostrcb_pending_q);
+ INIT_LIST_HEAD(&ioa_cfg->free_res_q);
+ INIT_LIST_HEAD(&ioa_cfg->used_res_q);
+ INIT_WORK(&ioa_cfg->work_q, ipr_worker_thread, ioa_cfg);
+ init_waitqueue_head(&ioa_cfg->reset_wait_q);
+ ioa_cfg->sdt_state = INACTIVE;
+
+ ipr_initialize_bus_attr(ioa_cfg);
+
+ host->max_id = IPR_MAX_NUM_TARGETS_PER_BUS;
+ host->max_lun = IPR_MAX_NUM_LUNS_PER_TARGET;
+ host->max_channel = IPR_MAX_BUS_TO_SCAN;
+ host->unique_id = host->host_no;
+ host->max_cmd_len = IPR_MAX_CDB_LEN;
+ pci_set_drvdata(pdev, ioa_cfg);
+
+ p = &ioa_cfg->chip_cfg->regs;
+ t = &ioa_cfg->regs;
+ base = ioa_cfg->hdw_dma_regs;
+
+ t->set_interrupt_mask_reg = base + p->set_interrupt_mask_reg;
+ t->clr_interrupt_mask_reg = base + p->clr_interrupt_mask_reg;
+ t->sense_interrupt_mask_reg = base + p->sense_interrupt_mask_reg;
+ t->clr_interrupt_reg = base + p->clr_interrupt_reg;
+ t->sense_interrupt_reg = base + p->sense_interrupt_reg;
+ t->ioarrin_reg = base + p->ioarrin_reg;
+ t->sense_uproc_interrupt_reg = base + p->sense_uproc_interrupt_reg;
+ t->set_uproc_interrupt_reg = base + p->set_uproc_interrupt_reg;
+ t->clr_uproc_interrupt_reg = base + p->clr_uproc_interrupt_reg;
+}
+
+/**
+ * ipr_probe_ioa - Allocates memory and does first stage of initialization
+ * @pdev: PCI device struct
+ * @dev_id: PCI device id struct
+ *
+ * Return value:
+ * 0 on success / non-zero on failure
+ **/
+static int __devinit ipr_probe_ioa(struct pci_dev *pdev,
+ const struct pci_device_id *dev_id)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct Scsi_Host *host;
+ unsigned long ipr_regs_pci;
+ void __iomem *ipr_regs;
+ u32 rc = PCIBIOS_SUCCESSFUL;
+
+ ENTER;
+
+ if ((rc = pci_enable_device(pdev))) {
+ dev_err(&pdev->dev, "Cannot enable adapter\n");
+ goto out;
+ }
+
+ dev_info(&pdev->dev, "Found IOA with IRQ: %d\n", pdev->irq);
+
+ host = scsi_host_alloc(&driver_template, sizeof(*ioa_cfg));
+
+ if (!host) {
+ dev_err(&pdev->dev, "call to scsi_host_alloc failed!\n");
+ rc = -ENOMEM;
+ goto out_disable;
+ }
+
+ ioa_cfg = (struct ipr_ioa_cfg *)host->hostdata;
+ memset(ioa_cfg, 0, sizeof(struct ipr_ioa_cfg));
+
+ ioa_cfg->chip_cfg = (const struct ipr_chip_cfg_t *)dev_id->driver_data;
+
+ ipr_regs_pci = pci_resource_start(pdev, 0);
+
+ rc = pci_request_regions(pdev, IPR_NAME);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Couldn't register memory range of registers\n");
+ goto out_scsi_host_put;
+ }
+
+ ipr_regs = ioremap(ipr_regs_pci, pci_resource_len(pdev, 0));
+
+ if (!ipr_regs) {
+ dev_err(&pdev->dev,
+ "Couldn't map memory range of registers\n");
+ rc = -ENOMEM;
+ goto out_release_regions;
+ }
+
+ ioa_cfg->hdw_dma_regs = ipr_regs;
+ ioa_cfg->hdw_dma_regs_pci = ipr_regs_pci;
+ ioa_cfg->ioa_mailbox = ioa_cfg->chip_cfg->mailbox + ipr_regs;
+
+ ipr_init_ioa_cfg(ioa_cfg, host, pdev);
+
+ pci_set_master(pdev);
+
+ rc = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (rc < 0) {
+ dev_err(&pdev->dev, "Failed to set PCI DMA mask\n");
+ goto cleanup_nomem;
+ }
+
+ rc = pci_write_config_byte(pdev, PCI_CACHE_LINE_SIZE,
+ ioa_cfg->chip_cfg->cache_line_size);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ dev_err(&pdev->dev, "Write of cache line size failed\n");
+ rc = -EIO;
+ goto cleanup_nomem;
+ }
+
+ /* Save away PCI config space for use following IOA reset */
+ rc = pci_save_state(pdev);
+
+ if (rc != PCIBIOS_SUCCESSFUL) {
+ dev_err(&pdev->dev, "Failed to save PCI config space\n");
+ rc = -EIO;
+ goto cleanup_nomem;
+ }
+
+ if ((rc = ipr_save_pcix_cmd_reg(ioa_cfg)))
+ goto cleanup_nomem;
+
+ if ((rc = ipr_set_pcix_cmd_reg(ioa_cfg)))
+ goto cleanup_nomem;
+
+ rc = ipr_alloc_mem(ioa_cfg);
+ if (rc < 0) {
+ dev_err(&pdev->dev,
+ "Couldn't allocate enough memory for device driver!\n");
+ goto cleanup_nomem;
+ }
+
+ ipr_mask_and_clear_interrupts(ioa_cfg, ~IPR_PCII_IOA_TRANS_TO_OPER);
+ rc = request_irq(pdev->irq, ipr_isr, SA_SHIRQ, IPR_NAME, ioa_cfg);
+
+ if (rc) {
+ dev_err(&pdev->dev, "Couldn't register IRQ %d! rc=%d\n",
+ pdev->irq, rc);
+ goto cleanup_nolog;
+ }
+
+ spin_lock(&ipr_driver_lock);
+ list_add_tail(&ioa_cfg->queue, &ipr_ioa_head);
+ spin_unlock(&ipr_driver_lock);
+
+ LEAVE;
+out:
+ return rc;
+
+cleanup_nolog:
+ ipr_free_mem(ioa_cfg);
+cleanup_nomem:
+ iounmap(ipr_regs);
+out_release_regions:
+ pci_release_regions(pdev);
+out_scsi_host_put:
+ scsi_host_put(host);
+out_disable:
+ pci_disable_device(pdev);
+ goto out;
+}
+
+/**
+ * ipr_scan_vsets - Scans for VSET devices
+ * @ioa_cfg: ioa config struct
+ *
+ * Description: Since the VSET resources do not follow SAM in that we can have
+ * sparse LUNs with no LUN 0, we have to scan for these ourselves.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_scan_vsets(struct ipr_ioa_cfg *ioa_cfg)
+{
+ int target, lun;
+
+ for (target = 0; target < IPR_MAX_NUM_TARGETS_PER_BUS; target++)
+		for (lun = 0; lun < IPR_MAX_NUM_VSET_LUNS_PER_TARGET; lun++)
+ scsi_add_device(ioa_cfg->host, IPR_VSET_BUS, target, lun);
+}
+
+/**
+ * ipr_initiate_ioa_bringdown - Bring down an adapter
+ * @ioa_cfg: ioa config struct
+ * @shutdown_type: shutdown type
+ *
+ * Description: This function will initiate bringing down the adapter.
+ * This consists of issuing an IOA shutdown to the adapter
+ * to flush the cache, and running BIST.
+ * If the caller needs to wait on the completion of the reset,
+ * the caller must sleep on the reset_wait_q.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_initiate_ioa_bringdown(struct ipr_ioa_cfg *ioa_cfg,
+ enum ipr_shutdown_type shutdown_type)
+{
+ ENTER;
+ if (ioa_cfg->sdt_state == WAIT_FOR_DUMP)
+ ioa_cfg->sdt_state = ABORT_DUMP;
+ ioa_cfg->reset_retries = 0;
+ ioa_cfg->in_ioa_bringdown = 1;
+ ipr_initiate_ioa_reset(ioa_cfg, shutdown_type);
+ LEAVE;
+}
+
+/**
+ * __ipr_remove - Remove a single adapter
+ * @pdev: pci device struct
+ *
+ * Adapter hot plug remove entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void __ipr_remove(struct pci_dev *pdev)
+{
+ unsigned long host_lock_flags = 0;
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
+ ENTER;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+ ipr_initiate_ioa_bringdown(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+ spin_lock_irqsave(ioa_cfg->host->host_lock, host_lock_flags);
+
+ spin_lock(&ipr_driver_lock);
+ list_del(&ioa_cfg->queue);
+ spin_unlock(&ipr_driver_lock);
+
+ if (ioa_cfg->sdt_state == ABORT_DUMP)
+ ioa_cfg->sdt_state = WAIT_FOR_DUMP;
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, host_lock_flags);
+
+ ipr_free_all_resources(ioa_cfg);
+
+ LEAVE;
+}
+
+/**
+ * ipr_remove - IOA hot plug remove entry point
+ * @pdev: pci device struct
+ *
+ * Adapter hot plug remove entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_remove(struct pci_dev *pdev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(pdev);
+
+ ENTER;
+
+ ioa_cfg->allow_cmds = 0;
+ flush_scheduled_work();
+ ipr_remove_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+ ipr_remove_dump_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_dump_attr);
+ scsi_remove_host(ioa_cfg->host);
+
+ __ipr_remove(pdev);
+
+ LEAVE;
+}
+
+/**
+ * ipr_probe - Adapter hot plug add entry point
+ * @pdev:	PCI device struct
+ * @dev_id:	PCI device id struct
+ *
+ * Return value:
+ * 0 on success / non-zero on failure
+ **/
+static int __devinit ipr_probe(struct pci_dev *pdev,
+ const struct pci_device_id *dev_id)
+{
+ struct ipr_ioa_cfg *ioa_cfg;
+ int rc;
+
+ rc = ipr_probe_ioa(pdev, dev_id);
+
+ if (rc)
+ return rc;
+
+ ioa_cfg = pci_get_drvdata(pdev);
+ rc = ipr_probe_ioa_part2(ioa_cfg);
+
+ if (rc) {
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = scsi_add_host(ioa_cfg->host, &pdev->dev);
+
+ if (rc) {
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = ipr_create_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+
+ if (rc) {
+ scsi_remove_host(ioa_cfg->host);
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ rc = ipr_create_dump_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_dump_attr);
+
+ if (rc) {
+ ipr_remove_trace_file(&ioa_cfg->host->shost_classdev.kobj,
+ &ipr_trace_attr);
+ scsi_remove_host(ioa_cfg->host);
+ __ipr_remove(pdev);
+ return rc;
+ }
+
+ scsi_scan_host(ioa_cfg->host);
+ ipr_scan_vsets(ioa_cfg);
+ scsi_add_device(ioa_cfg->host, IPR_IOA_BUS, IPR_IOA_TARGET, IPR_IOA_LUN);
+ ioa_cfg->allow_ml_add_del = 1;
+ schedule_work(&ioa_cfg->work_q);
+ return 0;
+}
+
+/**
+ * ipr_shutdown - Shutdown handler.
+ * @dev: device struct
+ *
+ * This function is invoked upon system shutdown/reboot. It will issue
+ * an adapter shutdown to the adapter to flush the write cache.
+ *
+ * Return value:
+ * none
+ **/
+static void ipr_shutdown(struct device *dev)
+{
+ struct ipr_ioa_cfg *ioa_cfg = pci_get_drvdata(to_pci_dev(dev));
+ unsigned long lock_flags = 0;
+
+ spin_lock_irqsave(ioa_cfg->host->host_lock, lock_flags);
+ ipr_initiate_ioa_bringdown(ioa_cfg, IPR_SHUTDOWN_NORMAL);
+ spin_unlock_irqrestore(ioa_cfg->host->host_lock, lock_flags);
+ wait_event(ioa_cfg->reset_wait_q, !ioa_cfg->in_reset_reload);
+}
+
+static struct pci_device_id ipr_pci_table[] __devinitdata = {
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_5702,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_572E,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_5703,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_MYLEX, PCI_DEVICE_ID_IBM_GEMSTONE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_573D,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_CITRINE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_571B,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[0] },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_2780,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[1] },
+ { PCI_VENDOR_ID_IBM, PCI_DEVICE_ID_IBM_SNIPE,
+ PCI_VENDOR_ID_IBM, IPR_SUBS_DEV_ID_570F,
+ 0, 0, (kernel_ulong_t)&ipr_chip_cfg[1] },
+ { }
+};
+MODULE_DEVICE_TABLE(pci, ipr_pci_table);
+
+static struct pci_driver ipr_driver = {
+ .name = IPR_NAME,
+ .id_table = ipr_pci_table,
+ .probe = ipr_probe,
+ .remove = ipr_remove,
+ .driver = {
+ .shutdown = ipr_shutdown,
+ },
+};
+
+/**
+ * ipr_init - Module entry point
+ *
+ * Return value:
+ * 0 on success / negative value on failure
+ **/
+static int __init ipr_init(void)
+{
+ ipr_info("IBM Power RAID SCSI Device Driver version: %s %s\n",
+ IPR_DRIVER_VERSION, IPR_DRIVER_DATE);
+
+ return pci_module_init(&ipr_driver);
+}
+
+/**
+ * ipr_exit - Module unload
+ *
+ * Module unload entry point.
+ *
+ * Return value:
+ * none
+ **/
+static void __exit ipr_exit(void)
+{
+ pci_unregister_driver(&ipr_driver);
+}
+
+module_init(ipr_init);
+module_exit(ipr_exit);
--- /dev/null
+/*
+ * ipr.h -- driver for IBM Power Linux RAID adapters
+ *
+ * Written By: Brian King <brking@us.ibm.com>, IBM Corporation
+ *
+ * Copyright (C) 2003, 2004 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Alan Cox <alan@redhat.com> - Removed several careless u32/dma_addr_t errors
+ * that broke 64bit platforms.
+ */
+
+#ifndef _IPR_H
+#define _IPR_H
+
+#include <linux/types.h>
+#include <linux/completion.h>
+#include <linux/list.h>
+#include <linux/kref.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_cmnd.h>
+#ifdef CONFIG_KDB
+#include <linux/kdb.h>
+#endif
+
+/*
+ * Literals
+ */
+#define IPR_DRIVER_VERSION "2.0.11"
+#define IPR_DRIVER_DATE "(August 3, 2004)"
+
+/*
+ * IPR_DBG_TRACE: Setting this to 1 will turn on some general function tracing
+ * resulting in a bunch of extra debugging printks to the console
+ *
+ * IPR_DEBUG: Setting this to 1 will turn on some error path tracing.
+ * Enables the ipr_trace macro.
+ */
+#ifdef IPR_DEBUG_ALL
+#define IPR_DEBUG 1
+#define IPR_DBG_TRACE 1
+#else
+#define IPR_DEBUG 0
+#define IPR_DBG_TRACE 0
+#endif
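+
+/*
+ * Illustrative only: both switches come on together when IPR_DEBUG_ALL
+ * is defined at build time, e.g. (an assumption about the build setup,
+ * not part of this driver) via the Makefile:
+ *
+ *	EXTRA_CFLAGS += -DIPR_DEBUG_ALL
+ */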
+
+/*
+ * IPR_MAX_CMD_PER_LUN: This defines the maximum number of outstanding
+ * ops per device for devices not running tagged command queuing.
+ * This can be adjusted at runtime through sysfs device attributes.
+ */
+#define IPR_MAX_CMD_PER_LUN 6
+
+/*
+ * IPR_NUM_BASE_CMD_BLKS: This defines the maximum number of
+ * ops the mid-layer can send to the adapter.
+ */
+#define IPR_NUM_BASE_CMD_BLKS 100
+
+#define IPR_SUBS_DEV_ID_2780 0x0264
+#define IPR_SUBS_DEV_ID_5702 0x0266
+#define IPR_SUBS_DEV_ID_5703 0x0278
+#define IPR_SUBS_DEV_ID_572E 0x02D3
+#define IPR_SUBS_DEV_ID_573D 0x02D4
+#define IPR_SUBS_DEV_ID_570F 0x02BD
+#define IPR_SUBS_DEV_ID_571B 0x02BE
+
+#define IPR_NAME "ipr"
+
+/*
+ * Return codes
+ */
+#define IPR_RC_JOB_CONTINUE 1
+#define IPR_RC_JOB_RETURN 2
+
+/*
+ * IOASCs
+ */
+#define IPR_IOASC_NR_INIT_CMD_REQUIRED 0x02040200
+#define IPR_IOASC_SYNC_REQUIRED 0x023f0000
+#define IPR_IOASC_MED_DO_NOT_REALLOC 0x03110C00
+#define IPR_IOASC_HW_SEL_TIMEOUT 0x04050000
+#define IPR_IOASC_HW_DEV_BUS_STATUS 0x04448500
+#define IPR_IOASC_IOASC_MASK 0xFFFFFF00
+#define IPR_IOASC_SCSI_STATUS_MASK 0x000000FF
+#define IPR_IOASC_IR_RESOURCE_HANDLE 0x05250000
+#define IPR_IOASC_BUS_WAS_RESET 0x06290000
+#define IPR_IOASC_BUS_WAS_RESET_BY_OTHER 0x06298000
+#define IPR_IOASC_ABORTED_CMD_TERM_BY_HOST 0x0B5A0000
+
+#define IPR_FIRST_DRIVER_IOASC 0x10000000
+#define IPR_IOASC_IOA_WAS_RESET 0x10000001
+#define IPR_IOASC_PCI_ACCESS_ERROR 0x10000002
+
+#define IPR_NUM_LOG_HCAMS 2
+#define IPR_NUM_CFG_CHG_HCAMS 2
+#define IPR_NUM_HCAMS (IPR_NUM_LOG_HCAMS + IPR_NUM_CFG_CHG_HCAMS)
+#define IPR_MAX_NUM_TARGETS_PER_BUS 0x10
+#define IPR_MAX_NUM_LUNS_PER_TARGET 256
+#define IPR_MAX_NUM_VSET_LUNS_PER_TARGET 8
+#define IPR_VSET_BUS 0xff
+#define IPR_IOA_BUS 0xff
+#define IPR_IOA_TARGET 0xff
+#define IPR_IOA_LUN 0xff
+#define IPR_MAX_NUM_BUSES 4
+#define IPR_MAX_BUS_TO_SCAN IPR_MAX_NUM_BUSES
+
+#define IPR_NUM_RESET_RELOAD_RETRIES 3
+
+/* We need resources for HCAMS, IOA reset, IOA bringdown, and ERP */
+#define IPR_NUM_INTERNAL_CMD_BLKS (IPR_NUM_HCAMS + \
+ ((IPR_NUM_RESET_RELOAD_RETRIES + 1) * 2) + 3)
+
+#define IPR_MAX_COMMANDS IPR_NUM_BASE_CMD_BLKS
+#define IPR_NUM_CMD_BLKS (IPR_NUM_BASE_CMD_BLKS + \
+ IPR_NUM_INTERNAL_CMD_BLKS)
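+
+/*
+ * Worked example (a sketch, assuming the literals above are left at
+ * their defaults): IPR_NUM_HCAMS = 2 + 2 = 4, the reset/bringdown
+ * retries need (3 + 1) * 2 = 8 blocks, and ERP needs 3 more, so
+ * IPR_NUM_INTERNAL_CMD_BLKS = 15 and IPR_NUM_CMD_BLKS = 115.
+ */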
+
+#define IPR_MAX_PHYSICAL_DEVS 192
+
+#define IPR_MAX_SGLIST 64
+#define IPR_MAX_SECTORS 512
+#define IPR_MAX_CDB_LEN 16
+
+#define IPR_DEFAULT_BUS_WIDTH 16
+#define IPR_80MBs_SCSI_RATE ((80 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_U160_SCSI_RATE ((160 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_U320_SCSI_RATE ((320 * 10) / (IPR_DEFAULT_BUS_WIDTH / 8))
+#define IPR_MAX_SCSI_RATE(width) ((320 * 10) / ((width) / 8))
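+
+/*
+ * Example: each rate literal is (MB/s * 10) divided by the bus width
+ * in bytes, so with the default 16-bit (2 byte) bus, U320 encodes as
+ * (320 * 10) / 2 = 1600. Illustrative arithmetic only.
+ */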
+
+#define IPR_IOA_RES_HANDLE 0xffffffff
+#define IPR_IOA_RES_ADDR 0x00ffffff
+
+/*
+ * Adapter Commands
+ */
+#define IPR_RESET_DEVICE 0xC3
+#define IPR_RESET_TYPE_SELECT 0x80
+#define IPR_LUN_RESET 0x40
+#define IPR_TARGET_RESET 0x20
+#define IPR_BUS_RESET 0x10
+#define IPR_ID_HOST_RR_Q 0xC4
+#define IPR_QUERY_IOA_CONFIG 0xC5
+#define IPR_CANCEL_ALL_REQUESTS 0xCE
+#define IPR_HOST_CONTROLLED_ASYNC 0xCF
+#define IPR_HCAM_CDB_OP_CODE_CONFIG_CHANGE 0x01
+#define IPR_HCAM_CDB_OP_CODE_LOG_DATA 0x02
+#define IPR_SET_SUPPORTED_DEVICES 0xFB
+#define IPR_IOA_SHUTDOWN 0xF7
+#define IPR_WR_BUF_DOWNLOAD_AND_SAVE 0x05
+
+/*
+ * Timeouts
+ */
+#define IPR_SHUTDOWN_TIMEOUT (10 * 60 * HZ)
+#define IPR_VSET_RW_TIMEOUT (2 * 60 * HZ)
+#define IPR_ABBREV_SHUTDOWN_TIMEOUT (10 * HZ)
+#define IPR_DEVICE_RESET_TIMEOUT (30 * HZ)
+#define IPR_CANCEL_ALL_TIMEOUT (30 * HZ)
+#define IPR_ABORT_TASK_TIMEOUT (30 * HZ)
+#define IPR_INTERNAL_TIMEOUT (30 * HZ)
+#define IPR_WRITE_BUFFER_TIMEOUT (10 * 60 * HZ)
+#define IPR_SET_SUP_DEVICE_TIMEOUT (2 * 60 * HZ)
+#define IPR_REQUEST_SENSE_TIMEOUT (10 * HZ)
+#define IPR_OPERATIONAL_TIMEOUT (5 * 60 * HZ)
+#define IPR_WAIT_FOR_RESET_TIMEOUT (2 * HZ)
+#define IPR_CHECK_FOR_RESET_TIMEOUT (HZ / 10)
+#define IPR_WAIT_FOR_BIST_TIMEOUT (2 * HZ)
+#define IPR_DUMP_TIMEOUT (15 * HZ)
+
+/*
+ * SCSI Literals
+ */
+#define IPR_VENDOR_ID_LEN 8
+#define IPR_PROD_ID_LEN 16
+#define IPR_SERIAL_NUM_LEN 8
+
+/*
+ * Hardware literals
+ */
+#define IPR_FMT2_MBX_ADDR_MASK 0x0fffffff
+#define IPR_FMT2_MBX_BAR_SEL_MASK 0xf0000000
+#define IPR_FMT2_MKR_BAR_SEL_SHIFT 28
+#define IPR_GET_FMT2_BAR_SEL(mbx) \
+(((mbx) & IPR_FMT2_MBX_BAR_SEL_MASK) >> IPR_FMT2_MKR_BAR_SEL_SHIFT)
+#define IPR_SDT_FMT2_BAR0_SEL 0x0
+#define IPR_SDT_FMT2_BAR1_SEL 0x1
+#define IPR_SDT_FMT2_BAR2_SEL 0x2
+#define IPR_SDT_FMT2_BAR3_SEL 0x3
+#define IPR_SDT_FMT2_BAR4_SEL 0x4
+#define IPR_SDT_FMT2_BAR5_SEL 0x5
+#define IPR_SDT_FMT2_EXP_ROM_SEL 0x8
+#define IPR_FMT2_SDT_READY_TO_USE 0xC4D4E3F2
+#define IPR_DOORBELL 0x82800000
+
+#define IPR_PCII_IOA_TRANS_TO_OPER (0x80000000 >> 0)
+#define IPR_PCII_IOARCB_XFER_FAILED (0x80000000 >> 3)
+#define IPR_PCII_IOA_UNIT_CHECKED (0x80000000 >> 4)
+#define IPR_PCII_NO_HOST_RRQ (0x80000000 >> 5)
+#define IPR_PCII_CRITICAL_OPERATION (0x80000000 >> 6)
+#define IPR_PCII_IO_DEBUG_ACKNOWLEDGE (0x80000000 >> 7)
+#define IPR_PCII_IOARRIN_LOST (0x80000000 >> 27)
+#define IPR_PCII_MMIO_ERROR (0x80000000 >> 28)
+#define IPR_PCII_PROC_ERR_STATE (0x80000000 >> 29)
+#define IPR_PCII_HRRQ_UPDATED (0x80000000 >> 30)
+#define IPR_PCII_CORE_ISSUED_RST_REQ (0x80000000 >> 31)
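+
+/*
+ * The (0x80000000 >> n) idiom above numbers interrupt bits from the
+ * most significant end, i.e. bit 0 is the MSB. For example
+ * (illustrative): IPR_PCII_HRRQ_UPDATED == 0x80000000 >> 30 ==
+ * 0x00000002.
+ */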
+
+#define IPR_PCII_ERROR_INTERRUPTS \
+(IPR_PCII_IOARCB_XFER_FAILED | IPR_PCII_IOA_UNIT_CHECKED | \
+IPR_PCII_NO_HOST_RRQ | IPR_PCII_IOARRIN_LOST | IPR_PCII_MMIO_ERROR)
+
+#define IPR_PCII_OPER_INTERRUPTS \
+(IPR_PCII_ERROR_INTERRUPTS | IPR_PCII_HRRQ_UPDATED | IPR_PCII_IOA_TRANS_TO_OPER)
+
+#define IPR_UPROCI_RESET_ALERT (0x80000000 >> 7)
+#define IPR_UPROCI_IO_DEBUG_ALERT (0x80000000 >> 9)
+
+#define IPR_LDUMP_MAX_LONG_ACK_DELAY_IN_USEC 200000 /* 200 ms */
+#define IPR_LDUMP_MAX_SHORT_ACK_DELAY_IN_USEC 200000 /* 200 ms */
+
+/*
+ * Dump literals
+ */
+#define IPR_MAX_IOA_DUMP_SIZE (4 * 1024 * 1024)
+#define IPR_NUM_SDT_ENTRIES 511
+#define IPR_MAX_NUM_DUMP_PAGES ((IPR_MAX_IOA_DUMP_SIZE / PAGE_SIZE) + 1)
+
+/*
+ * Misc literals
+ */
+#define IPR_NUM_IOADL_ENTRIES IPR_MAX_SGLIST
+
+/*
+ * Adapter interface types
+ */
+
+struct ipr_res_addr {
+ u8 reserved;
+ u8 bus;
+ u8 target;
+ u8 lun;
+#define IPR_GET_PHYS_LOC(res_addr) \
+ (((res_addr).bus << 16) | ((res_addr).target << 8) | (res_addr).lun)
+}__attribute__((packed, aligned (4)));
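+
+/*
+ * Example use of IPR_GET_PHYS_LOC (a sketch; 'ra' is a hypothetical
+ * local): for a resource at bus 2, target 5, lun 0,
+ *
+ *	struct ipr_res_addr ra = { .bus = 2, .target = 5, .lun = 0 };
+ *	IPR_GET_PHYS_LOC(ra) == 0x020500
+ */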
+
+struct ipr_std_inq_vpids {
+ u8 vendor_id[IPR_VENDOR_ID_LEN];
+ u8 product_id[IPR_PROD_ID_LEN];
+}__attribute__((packed));
+
+struct ipr_std_inq_data {
+ u8 peri_qual_dev_type;
+#define IPR_STD_INQ_PERI_QUAL(peri) ((peri) >> 5)
+#define IPR_STD_INQ_PERI_DEV_TYPE(peri) ((peri) & 0x1F)
+
+ u8 removeable_medium_rsvd;
+#define IPR_STD_INQ_REMOVEABLE_MEDIUM 0x80
+
+#define IPR_IS_DASD_DEVICE(std_inq) \
+((IPR_STD_INQ_PERI_DEV_TYPE((std_inq).peri_qual_dev_type) == TYPE_DISK) && \
+!(((std_inq).removeable_medium_rsvd) & IPR_STD_INQ_REMOVEABLE_MEDIUM))
+
+#define IPR_IS_SES_DEVICE(std_inq) \
+(IPR_STD_INQ_PERI_DEV_TYPE((std_inq).peri_qual_dev_type) == TYPE_ENCLOSURE)
+
+ u8 version;
+ u8 aen_naca_fmt;
+ u8 additional_len;
+ u8 sccs_rsvd;
+ u8 bq_enc_multi;
+ u8 sync_cmdq_flags;
+
+ struct ipr_std_inq_vpids vpids;
+
+ u8 ros_rsvd_ram_rsvd[4];
+
+ u8 serial_num[IPR_SERIAL_NUM_LEN];
+}__attribute__ ((packed));
+
+struct ipr_config_table_entry {
+ u8 service_level;
+ u8 array_id;
+ u8 flags;
+#define IPR_IS_IOA_RESOURCE 0x80
+#define IPR_IS_ARRAY_MEMBER 0x20
+#define IPR_IS_HOT_SPARE 0x10
+
+ u8 rsvd_subtype;
+#define IPR_RES_SUBTYPE(res) (((res)->cfgte.rsvd_subtype) & 0x0f)
+#define IPR_SUBTYPE_AF_DASD 0
+#define IPR_SUBTYPE_GENERIC_SCSI 1
+#define IPR_SUBTYPE_VOLUME_SET 2
+
+ struct ipr_res_addr res_addr;
+ u32 res_handle;
+ u32 reserved4[2];
+ struct ipr_std_inq_data std_inq_data;
+}__attribute__ ((packed, aligned (4)));
+
+struct ipr_config_table_hdr {
+ u8 num_entries;
+ u8 flags;
+#define IPR_UCODE_DOWNLOAD_REQ 0x10
+ u16 reserved;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_config_table {
+ struct ipr_config_table_hdr hdr;
+ struct ipr_config_table_entry dev[IPR_MAX_PHYSICAL_DEVS];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_cfg_ch_not {
+ struct ipr_config_table_entry cfgte;
+ u8 reserved[936];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_supported_device {
+ u16 data_length;
+ u8 reserved;
+ u8 num_records;
+ struct ipr_std_inq_vpids vpids;
+ u8 reserved2[16];
+}__attribute__((packed, aligned (4)));
+
+/* Command packet structure */
+struct ipr_cmd_pkt {
+ u16 reserved; /* Reserved by IOA */
+ u8 request_type;
+#define IPR_RQTYPE_SCSICDB 0x00
+#define IPR_RQTYPE_IOACMD 0x01
+#define IPR_RQTYPE_HCAM 0x02
+
+ u8 luntar_luntrn;
+
+ u8 flags_hi;
+#define IPR_FLAGS_HI_WRITE_NOT_READ 0x80
+#define IPR_FLAGS_HI_NO_ULEN_CHK 0x20
+#define IPR_FLAGS_HI_SYNC_OVERRIDE 0x10
+#define IPR_FLAGS_HI_SYNC_COMPLETE 0x08
+#define IPR_FLAGS_HI_NO_LINK_DESC 0x04
+
+ u8 flags_lo;
+#define IPR_FLAGS_LO_ALIGNED_BFR 0x20
+#define IPR_FLAGS_LO_DELAY_AFTER_RST 0x10
+#define IPR_FLAGS_LO_UNTAGGED_TASK 0x00
+#define IPR_FLAGS_LO_SIMPLE_TASK 0x02
+#define IPR_FLAGS_LO_ORDERED_TASK 0x04
+#define IPR_FLAGS_LO_HEAD_OF_Q_TASK 0x06
+#define IPR_FLAGS_LO_ACA_TASK 0x08
+
+ u8 cdb[16];
+ u16 timeout;
+}__attribute__ ((packed, aligned(4)));
+
+/* IOA Request Control Block 128 bytes */
+struct ipr_ioarcb {
+ u32 ioarcb_host_pci_addr;
+ u32 reserved;
+ u32 res_handle;
+ u32 host_response_handle;
+ u32 reserved1;
+ u32 reserved2;
+ u32 reserved3;
+
+ u32 write_data_transfer_length;
+ u32 read_data_transfer_length;
+ u32 write_ioadl_addr;
+ u32 write_ioadl_len;
+ u32 read_ioadl_addr;
+ u32 read_ioadl_len;
+
+ u32 ioasa_host_pci_addr;
+ u16 ioasa_len;
+ u16 reserved4;
+
+ struct ipr_cmd_pkt cmd_pkt;
+
+ u32 add_cmd_parms_len;
+ u32 add_cmd_parms[10];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioadl_desc {
+ u32 flags_and_data_len;
+#define IPR_IOADL_FLAGS_MASK 0xff000000
+#define IPR_IOADL_GET_FLAGS(x) (be32_to_cpu(x) & IPR_IOADL_FLAGS_MASK)
+#define IPR_IOADL_DATA_LEN_MASK 0x00ffffff
+#define IPR_IOADL_GET_DATA_LEN(x) (be32_to_cpu(x) & IPR_IOADL_DATA_LEN_MASK)
+#define IPR_IOADL_FLAGS_READ 0x48000000
+#define IPR_IOADL_FLAGS_READ_LAST 0x49000000
+#define IPR_IOADL_FLAGS_WRITE 0x68000000
+#define IPR_IOADL_FLAGS_WRITE_LAST 0x69000000
+#define IPR_IOADL_FLAGS_LAST 0x01000000
+
+ u32 address;
+}__attribute__((packed, aligned (8)));
+
+struct ipr_ioasa_vset {
+ u32 failing_lba_hi;
+ u32 failing_lba_lo;
+ u32 ioa_data[22];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_af_dasd {
+ u32 failing_lba;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_gpdd {
+ u8 end_state;
+ u8 bus_phase;
+ u16 reserved;
+ u32 ioa_data[23];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa_raw {
+ u32 ioa_data[24];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ioasa {
+ u32 ioasc;
+#define IPR_IOASC_SENSE_KEY(ioasc) ((ioasc) >> 24)
+#define IPR_IOASC_SENSE_CODE(ioasc) (((ioasc) & 0x00ff0000) >> 16)
+#define IPR_IOASC_SENSE_QUAL(ioasc) (((ioasc) & 0x0000ff00) >> 8)
+#define IPR_IOASC_SENSE_STATUS(ioasc) ((ioasc) & 0x000000ff)
+
+ u16 ret_stat_len; /* Length of the returned IOASA */
+
+ u16 avail_stat_len; /* Total Length of status available. */
+
+ u32 residual_data_len; /* number of bytes in the host data */
+ /* buffers that were not used by the IOARCB command. */
+
+ u32 ilid;
+#define IPR_NO_ILID 0
+#define IPR_DRIVER_ILID 0xffffffff
+
+ u32 fd_ioasc;
+
+ u32 fd_phys_locator;
+
+ u32 fd_res_handle;
+
+ u32 ioasc_specific; /* status code specific field */
+#define IPR_IOASC_SPECIFIC_MASK 0x00ffffff
+#define IPR_FIELD_POINTER_VALID (0x80000000 >> 8)
+#define IPR_FIELD_POINTER_MASK 0x0000ffff
+
+ union {
+ struct ipr_ioasa_vset vset;
+ struct ipr_ioasa_af_dasd dasd;
+ struct ipr_ioasa_gpdd gpdd;
+ struct ipr_ioasa_raw raw;
+ } u;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_mode_parm_hdr {
+ u8 length;
+ u8 medium_type;
+ u8 device_spec_parms;
+ u8 block_desc_len;
+}__attribute__((packed));
+
+struct ipr_mode_pages {
+ struct ipr_mode_parm_hdr hdr;
+ u8 data[255 - sizeof(struct ipr_mode_parm_hdr)];
+}__attribute__((packed));
+
+struct ipr_mode_page_hdr {
+ u8 ps_page_code;
+#define IPR_MODE_PAGE_PS 0x80
+#define IPR_GET_MODE_PAGE_CODE(hdr) ((hdr)->ps_page_code & 0x3F)
+ u8 page_length;
+}__attribute__ ((packed));
+
+struct ipr_dev_bus_entry {
+ struct ipr_res_addr res_addr;
+ u8 flags;
+#define IPR_SCSI_ATTR_ENABLE_QAS 0x80
+#define IPR_SCSI_ATTR_DISABLE_QAS 0x40
+#define IPR_SCSI_ATTR_QAS_MASK 0xC0
+#define IPR_SCSI_ATTR_ENABLE_TM 0x20
+#define IPR_SCSI_ATTR_NO_TERM_PWR 0x10
+#define IPR_SCSI_ATTR_TM_SUPPORTED 0x08
+#define IPR_SCSI_ATTR_LVD_TO_SE_NOT_ALLOWED 0x04
+
+ u8 scsi_id;
+ u8 bus_width;
+ u8 extended_reset_delay;
+#define IPR_EXTENDED_RESET_DELAY 7
+
+ u32 max_xfer_rate;
+
+ u8 spinup_delay;
+ u8 reserved3;
+ u16 reserved4;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_mode_page28 {
+ struct ipr_mode_page_hdr hdr;
+ u8 num_entries;
+ u8 entry_length;
+ struct ipr_dev_bus_entry bus[0];
+}__attribute__((packed));
+
+struct ipr_ioa_vpd {
+ struct ipr_std_inq_data std_inq_data;
+ u8 ascii_part_num[12];
+ u8 reserved[40];
+ u8 ascii_plant_code[4];
+}__attribute__((packed));
+
+struct ipr_inquiry_page3 {
+ u8 peri_qual_dev_type;
+ u8 page_code;
+ u8 reserved1;
+ u8 page_length;
+ u8 ascii_len;
+ u8 reserved2[3];
+ u8 load_id[4];
+ u8 major_release;
+ u8 card_type;
+ u8 minor_release[2];
+ u8 ptf_number[4];
+ u8 patch_number[4];
+}__attribute__((packed));
+
+struct ipr_hostrcb_device_data_entry {
+ struct ipr_std_inq_vpids dev_vpids;
+ u8 dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_res_addr dev_res_addr;
+ struct ipr_std_inq_vpids new_dev_vpids;
+ u8 new_dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids ioa_last_with_dev_vpids;
+ u8 ioa_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_last_with_dev_vpids;
+ u8 cfc_last_with_dev_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data[5];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_array_data_entry {
+ struct ipr_std_inq_vpids vpids;
+ u8 serial_num[IPR_SERIAL_NUM_LEN];
+ struct ipr_res_addr expected_dev_res_addr;
+ struct ipr_res_addr dev_res_addr;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_ff_error {
+ u32 ioa_data[246];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_01_error {
+ u32 seek_counter;
+ u32 read_counter;
+ u8 sense_data[32];
+ u32 ioa_data[236];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_02_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids ioa_last_attached_to_cfc_vpids;
+ u8 ioa_last_attached_to_cfc_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_last_attached_to_ioa_vpids;
+ u8 cfc_last_attached_to_ioa_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data[3];
+ u8 reserved[844];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_03_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ u32 errors_detected;
+ u32 errors_logged;
+ u8 ioa_data[12];
+ struct ipr_hostrcb_device_data_entry dev_entry[3];
+ u8 reserved[444];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_type_04_error {
+ struct ipr_std_inq_vpids ioa_vpids;
+ u8 ioa_sn[IPR_SERIAL_NUM_LEN];
+ struct ipr_std_inq_vpids cfc_vpids;
+ u8 cfc_sn[IPR_SERIAL_NUM_LEN];
+ u8 ioa_data[12];
+ struct ipr_hostrcb_array_data_entry array_member[10];
+ u32 exposed_mode_adn;
+ u32 array_id;
+ struct ipr_std_inq_vpids incomp_dev_vpids;
+ u8 incomp_dev_sn[IPR_SERIAL_NUM_LEN];
+ u32 ioa_data2;
+ struct ipr_hostrcb_array_data_entry array_member2[8];
+ struct ipr_res_addr last_func_vset_res_addr;
+ u8 vset_serial_num[IPR_SERIAL_NUM_LEN];
+ u8 protection_level[8];
+ u8 reserved[124];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_error {
+ u32 failing_dev_ioasc;
+ struct ipr_res_addr failing_dev_res_addr;
+ u32 failing_dev_res_handle;
+ u32 prc;
+ union {
+ struct ipr_hostrcb_type_ff_error type_ff_error;
+ struct ipr_hostrcb_type_01_error type_01_error;
+ struct ipr_hostrcb_type_02_error type_02_error;
+ struct ipr_hostrcb_type_03_error type_03_error;
+ struct ipr_hostrcb_type_04_error type_04_error;
+ } u;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb_raw {
+ u32 data[sizeof(struct ipr_hostrcb_error)/sizeof(u32)];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hcam {
+ u8 op_code;
+#define IPR_HOST_RCB_OP_CODE_CONFIG_CHANGE 0xE1
+#define IPR_HOST_RCB_OP_CODE_LOG_DATA 0xE2
+
+ u8 notify_type;
+#define IPR_HOST_RCB_NOTIF_TYPE_EXISTING_CHANGED 0x00
+#define IPR_HOST_RCB_NOTIF_TYPE_NEW_ENTRY 0x01
+#define IPR_HOST_RCB_NOTIF_TYPE_REM_ENTRY 0x02
+#define IPR_HOST_RCB_NOTIF_TYPE_ERROR_LOG_ENTRY 0x10
+#define IPR_HOST_RCB_NOTIF_TYPE_INFORMATION_ENTRY 0x11
+
+ u8 notifications_lost;
+#define IPR_HOST_RCB_NO_NOTIFICATIONS_LOST 0
+#define IPR_HOST_RCB_NOTIFICATIONS_LOST 0x80
+
+ u8 flags;
+#define IPR_HOSTRCB_INTERNAL_OPER 0x80
+#define IPR_HOSTRCB_ERR_RESP_SENT 0x40
+
+ u8 overlay_id;
+#define IPR_HOST_RCB_OVERLAY_ID_1 0x01
+#define IPR_HOST_RCB_OVERLAY_ID_2 0x02
+#define IPR_HOST_RCB_OVERLAY_ID_3 0x03
+#define IPR_HOST_RCB_OVERLAY_ID_4 0x04
+#define IPR_HOST_RCB_OVERLAY_ID_6 0x06
+#define IPR_HOST_RCB_OVERLAY_ID_DEFAULT 0xFF
+
+ u8 reserved1[3];
+ u32 ilid;
+ u32 time_since_last_ioa_reset;
+ u32 reserved2;
+ u32 length;
+
+ union {
+ struct ipr_hostrcb_error error;
+ struct ipr_hostrcb_cfg_ch_not ccn;
+ struct ipr_hostrcb_raw raw;
+ } u;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_hostrcb {
+ struct ipr_hcam hcam;
+ dma_addr_t hostrcb_dma;
+ struct list_head queue;
+};
+
+/* IPR smart dump table structures */
+struct ipr_sdt_entry {
+ u32 bar_str_offset;
+ u32 end_offset;
+ u8 entry_byte;
+ u8 reserved[3];
+
+ u8 flags;
+#define IPR_SDT_ENDIAN 0x80
+#define IPR_SDT_VALID_ENTRY 0x20
+
+ u8 resv;
+ u16 priority;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_sdt_header {
+ u32 state;
+ u32 num_entries;
+ u32 num_entries_used;
+ u32 dump_size;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_sdt {
+ struct ipr_sdt_header hdr;
+ struct ipr_sdt_entry entry[IPR_NUM_SDT_ENTRIES];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_uc_sdt {
+ struct ipr_sdt_header hdr;
+ struct ipr_sdt_entry entry[1];
+}__attribute__((packed, aligned (4)));
+
+/*
+ * Driver types
+ */
+struct ipr_bus_attributes {
+ u8 bus;
+ u8 qas_enabled;
+ u8 bus_width;
+ u8 reserved;
+ u32 max_xfer_rate;
+};
+
+struct ipr_resource_entry {
+ struct ipr_config_table_entry cfgte;
+ u8 needs_sync_complete:1;
+ u8 in_erp:1;
+ u8 add_to_ml:1;
+ u8 del_from_ml:1;
+ u8 resetting_device:1;
+ u8 tcq_active:1;
+
+ int qdepth;
+ struct scsi_device *sdev;
+ struct list_head queue;
+};
+
+struct ipr_resource_hdr {
+ u16 num_entries;
+ u16 reserved;
+};
+
+struct ipr_resource_table {
+ struct ipr_resource_hdr hdr;
+ struct ipr_resource_entry dev[IPR_MAX_PHYSICAL_DEVS];
+};
+
+struct ipr_misc_cbs {
+ struct ipr_ioa_vpd ioa_vpd;
+ struct ipr_inquiry_page3 page3_data;
+ struct ipr_mode_pages mode_pages;
+ struct ipr_supported_device supp_dev;
+};
+
+struct ipr_interrupt_offsets {
+ unsigned long set_interrupt_mask_reg;
+ unsigned long clr_interrupt_mask_reg;
+ unsigned long sense_interrupt_mask_reg;
+ unsigned long clr_interrupt_reg;
+
+ unsigned long sense_interrupt_reg;
+ unsigned long ioarrin_reg;
+ unsigned long sense_uproc_interrupt_reg;
+ unsigned long set_uproc_interrupt_reg;
+ unsigned long clr_uproc_interrupt_reg;
+};
+
+struct ipr_interrupts {
+ void __iomem *set_interrupt_mask_reg;
+ void __iomem *clr_interrupt_mask_reg;
+ void __iomem *sense_interrupt_mask_reg;
+ void __iomem *clr_interrupt_reg;
+
+ void __iomem *sense_interrupt_reg;
+ void __iomem *ioarrin_reg;
+ void __iomem *sense_uproc_interrupt_reg;
+ void __iomem *set_uproc_interrupt_reg;
+ void __iomem *clr_uproc_interrupt_reg;
+};
+
+struct ipr_chip_cfg_t {
+ u32 mailbox;
+ u8 cache_line_size;
+ struct ipr_interrupt_offsets regs;
+};
+
+enum ipr_shutdown_type {
+ IPR_SHUTDOWN_NORMAL = 0x00,
+ IPR_SHUTDOWN_PREPARE_FOR_NORMAL = 0x40,
+ IPR_SHUTDOWN_ABBREV = 0x80,
+ IPR_SHUTDOWN_NONE = 0x100
+};
+
+struct ipr_trace_entry {
+ u32 time;
+
+ u8 op_code;
+ u8 type;
+#define IPR_TRACE_START 0x00
+#define IPR_TRACE_FINISH 0xff
+ u16 cmd_index;
+
+ u32 res_handle;
+ union {
+ u32 ioasc;
+ u32 add_data;
+ u32 res_addr;
+ } u;
+};
+
+struct ipr_sglist {
+ u32 order;
+ u32 num_sg;
+ u32 buffer_len;
+ struct scatterlist scatterlist[1];
+};
+
+enum ipr_sdt_state {
+ INACTIVE,
+ WAIT_FOR_DUMP,
+ GET_DUMP,
+ ABORT_DUMP,
+ DUMP_OBTAINED
+};
+
+/* Per-controller data */
+struct ipr_ioa_cfg {
+ char eye_catcher[8];
+#define IPR_EYECATCHER "iprcfg"
+
+ struct list_head queue;
+
+ u8 allow_interrupts:1;
+ u8 in_reset_reload:1;
+ u8 in_ioa_bringdown:1;
+ u8 ioa_unit_checked:1;
+ u8 ioa_is_dead:1;
+ u8 dump_taken:1;
+ u8 allow_cmds:1;
+ u8 allow_ml_add_del:1;
+
+ u16 type; /* CCIN of the card */
+
+ u8 log_level;
+#define IPR_MAX_LOG_LEVEL 4
+#define IPR_DEFAULT_LOG_LEVEL 2
+
+#define IPR_NUM_TRACE_INDEX_BITS 8
+#define IPR_NUM_TRACE_ENTRIES (1 << IPR_NUM_TRACE_INDEX_BITS)
+#define IPR_TRACE_SIZE (sizeof(struct ipr_trace_entry) * IPR_NUM_TRACE_ENTRIES)
+ char trace_start[8];
+#define IPR_TRACE_START_LABEL "trace"
+ struct ipr_trace_entry *trace;
+ u32 trace_index:IPR_NUM_TRACE_INDEX_BITS;
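+
+ /*
+ * Since trace_index is an IPR_NUM_TRACE_INDEX_BITS-wide bitfield, it
+ * wraps at IPR_NUM_TRACE_ENTRIES (256) without an explicit modulo,
+ * so the trace ring can be advanced with a plain post-increment,
+ * e.g. (illustrative):
+ *
+ * trace[ioa_cfg->trace_index++] = entry;
+ */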
+
+ /*
+ * Queue for free command blocks
+ */
+ char ipr_free_label[8];
+#define IPR_FREEQ_LABEL "free-q"
+ struct list_head free_q;
+
+ /*
+ * Queue for command blocks outstanding to the adapter
+ */
+ char ipr_pending_label[8];
+#define IPR_PENDQ_LABEL "pend-q"
+ struct list_head pending_q;
+
+ char cfg_table_start[8];
+#define IPR_CFG_TBL_START "cfg"
+ struct ipr_config_table *cfg_table;
+ dma_addr_t cfg_table_dma;
+
+ char resource_table_label[8];
+#define IPR_RES_TABLE_LABEL "res_tbl"
+ struct ipr_resource_entry *res_entries;
+ struct list_head free_res_q;
+ struct list_head used_res_q;
+
+ char ipr_hcam_label[8];
+#define IPR_HCAM_LABEL "hcams"
+ struct ipr_hostrcb *hostrcb[IPR_NUM_HCAMS];
+ dma_addr_t hostrcb_dma[IPR_NUM_HCAMS];
+ struct list_head hostrcb_free_q;
+ struct list_head hostrcb_pending_q;
+
+ u32 *host_rrq;
+ dma_addr_t host_rrq_dma;
+#define IPR_HRRQ_REQ_RESP_HANDLE_MASK 0xfffffffc
+#define IPR_HRRQ_RESP_BIT_SET 0x00000002
+#define IPR_HRRQ_TOGGLE_BIT 0x00000001
+#define IPR_HRRQ_REQ_RESP_HANDLE_SHIFT 2
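+
+ /*
+ * Illustrative decode of a host RRQ word (a sketch, not driver code):
+ * the response handle is recovered as
+ * (rrq & IPR_HRRQ_REQ_RESP_HANDLE_MASK) >> IPR_HRRQ_REQ_RESP_HANDLE_SHIFT
+ * while the low two bits carry the response and toggle flags.
+ */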
+ volatile u32 *hrrq_start;
+ volatile u32 *hrrq_end;
+ volatile u32 *hrrq_curr;
+ volatile u32 toggle_bit;
+
+ struct ipr_bus_attributes bus_attr[IPR_MAX_NUM_BUSES];
+
+ const struct ipr_chip_cfg_t *chip_cfg;
+
+ void __iomem *hdw_dma_regs; /* iomapped PCI memory space */
+ unsigned long hdw_dma_regs_pci; /* raw PCI memory space */
+ void __iomem *ioa_mailbox;
+ struct ipr_interrupts regs;
+
+ u16 saved_pcix_cmd_reg;
+ u16 reset_retries;
+
+ u32 errors_logged;
+
+ struct Scsi_Host *host;
+ struct pci_dev *pdev;
+ struct ipr_sglist *ucode_sglist;
+ struct ipr_mode_pages *saved_mode_pages;
+ u8 saved_mode_page_len;
+
+ struct work_struct work_q;
+
+ wait_queue_head_t reset_wait_q;
+
+ struct ipr_dump *dump;
+ enum ipr_sdt_state sdt_state;
+
+ struct ipr_misc_cbs *vpd_cbs;
+ dma_addr_t vpd_cbs_dma;
+
+ struct pci_pool *ipr_cmd_pool;
+
+ struct ipr_cmnd *reset_cmd;
+
+ char ipr_cmd_label[8];
+#define IPR_CMD_LABEL "ipr_cmnd"
+ struct ipr_cmnd *ipr_cmnd_list[IPR_NUM_CMD_BLKS];
+ u32 ipr_cmnd_list_dma[IPR_NUM_CMD_BLKS];
+};
+
+struct ipr_cmnd {
+ struct ipr_ioarcb ioarcb;
+ struct ipr_ioasa ioasa;
+ struct ipr_ioadl_desc ioadl[IPR_NUM_IOADL_ENTRIES];
+ struct list_head queue;
+ struct scsi_cmnd *scsi_cmd;
+ struct completion completion;
+ struct timer_list timer;
+ void (*done) (struct ipr_cmnd *);
+ int (*job_step) (struct ipr_cmnd *);
+ u16 cmd_index;
+ u8 sense_buffer[SCSI_SENSE_BUFFERSIZE];
+ dma_addr_t sense_buffer_dma;
+ unsigned short dma_use_sg;
+ dma_addr_t dma_handle;
+ struct ipr_cmnd *sibling;
+ union {
+ enum ipr_shutdown_type shutdown_type;
+ struct ipr_hostrcb *hostrcb;
+ unsigned long time_left;
+ unsigned long scratch;
+ struct ipr_resource_entry *res;
+ struct scsi_device *sdev;
+ } u;
+
+ struct ipr_ioa_cfg *ioa_cfg;
+};
+
+struct ipr_ses_table_entry {
+ char product_id[17];
+ char compare_product_id_byte[17];
+ u32 max_bus_speed_limit; /* MB/sec limit for this backplane */
+};
+
+struct ipr_dump_header {
+ u32 eye_catcher;
+#define IPR_DUMP_EYE_CATCHER 0xC5D4E3F2
+ u32 len;
+ u32 num_entries;
+ u32 first_entry_offset;
+ u32 status;
+#define IPR_DUMP_STATUS_SUCCESS 0
+#define IPR_DUMP_STATUS_QUAL_SUCCESS 2
+#define IPR_DUMP_STATUS_FAILED 0xffffffff
+ u32 os;
+#define IPR_DUMP_OS_LINUX 0x4C4E5558
+ u32 driver_name;
+#define IPR_DUMP_DRIVER_NAME 0x49505232
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_entry_header {
+ u32 eye_catcher;
+#define IPR_DUMP_EYE_CATCHER 0xC5D4E3F2
+ u32 len;
+ u32 num_elems;
+ u32 offset;
+ u32 data_type;
+#define IPR_DUMP_DATA_TYPE_ASCII 0x41534349
+#define IPR_DUMP_DATA_TYPE_BINARY 0x42494E41
+ u32 id;
+#define IPR_DUMP_IOA_DUMP_ID 0x494F4131
+#define IPR_DUMP_LOCATION_ID 0x4C4F4341
+#define IPR_DUMP_TRACE_ID 0x54524143
+#define IPR_DUMP_DRIVER_VERSION_ID 0x44525652
+#define IPR_DUMP_DRIVER_TYPE_ID 0x54595045
+#define IPR_DUMP_IOA_CTRL_BLK 0x494F4342
+#define IPR_DUMP_PEND_OPS 0x414F5053
+ u32 status;
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_location_entry {
+ struct ipr_dump_entry_header hdr;
+ u8 location[BUS_ID_SIZE];
+}__attribute__((packed));
+
+struct ipr_dump_trace_entry {
+ struct ipr_dump_entry_header hdr;
+ u32 trace[IPR_TRACE_SIZE / sizeof(u32)];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump_version_entry {
+ struct ipr_dump_entry_header hdr;
+ u8 version[sizeof(IPR_DRIVER_VERSION)];
+};
+
+struct ipr_dump_ioa_type_entry {
+ struct ipr_dump_entry_header hdr;
+ u32 type;
+ u32 fw_version;
+};
+
+struct ipr_driver_dump {
+ struct ipr_dump_header hdr;
+ struct ipr_dump_version_entry version_entry;
+ struct ipr_dump_location_entry location_entry;
+ struct ipr_dump_ioa_type_entry ioa_type_entry;
+ struct ipr_dump_trace_entry trace_entry;
+}__attribute__((packed));
+
+struct ipr_ioa_dump {
+ struct ipr_dump_entry_header hdr;
+ struct ipr_sdt sdt;
+ u32 *ioa_data[IPR_MAX_NUM_DUMP_PAGES];
+ u32 reserved;
+ u32 next_page_index;
+ u32 page_offset;
+ u32 format;
+#define IPR_SDT_FMT2 2
+#define IPR_SDT_UNKNOWN 3
+}__attribute__((packed, aligned (4)));
+
+struct ipr_dump {
+ struct kref kref;
+ struct ipr_ioa_cfg *ioa_cfg;
+ struct ipr_driver_dump driver_dump;
+ struct ipr_ioa_dump ioa_dump;
+};
+
+struct ipr_error_table_t {
+ u32 ioasc;
+ int log_ioasa;
+ int log_hcam;
+ char *error;
+};
+
+struct ipr_software_inq_lid_info {
+ u32 load_id;
+ u32 timestamp[3];
+}__attribute__((packed, aligned (4)));
+
+struct ipr_ucode_image_header {
+ u32 header_length;
+ u32 lid_table_offset;
+ u8 major_release;
+ u8 card_type;
+ u8 minor_release[2];
+ u8 reserved[20];
+ char eyecatcher[16];
+ u32 num_lids;
+ struct ipr_software_inq_lid_info lid[1];
+}__attribute__((packed, aligned (4)));
+
+/*
+ * Macros
+ */
+#if IPR_DEBUG
+#define IPR_DBG_CMD(CMD) do { CMD; } while (0)
+#else
+#define IPR_DBG_CMD(CMD)
+#endif
+
+#define ipr_breakpoint_data KERN_ERR IPR_NAME\
+": %s: %s: Line: %d ioa_cfg: %p\n", __FILE__, \
+__FUNCTION__, __LINE__, ioa_cfg
+
+#if defined(CONFIG_KDB) && !defined(CONFIG_PPC_ISERIES)
+#define ipr_breakpoint {printk(ipr_breakpoint_data); KDB_ENTER();}
+#define ipr_breakpoint_or_die {printk(ipr_breakpoint_data); KDB_ENTER();}
+#else
+#define ipr_breakpoint
+#define ipr_breakpoint_or_die panic(ipr_breakpoint_data)
+#endif
+
+#ifdef CONFIG_SCSI_IPR_TRACE
+#define ipr_create_trace_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
+#define ipr_remove_trace_file(kobj, attr) sysfs_remove_bin_file(kobj, attr)
+#else
+#define ipr_create_trace_file(kobj, attr) 0
+#define ipr_remove_trace_file(kobj, attr) do { } while(0)
+#endif
+
+#ifdef CONFIG_SCSI_IPR_DUMP
+#define ipr_create_dump_file(kobj, attr) sysfs_create_bin_file(kobj, attr)
+#define ipr_remove_dump_file(kobj, attr) sysfs_remove_bin_file(kobj, attr)
+#else
+#define ipr_create_dump_file(kobj, attr) 0
+#define ipr_remove_dump_file(kobj, attr) do { } while(0)
+#endif
+
+/*
+ * Error logging macros
+ */
+#define ipr_err(...) printk(KERN_ERR IPR_NAME ": "__VA_ARGS__)
+#define ipr_info(...) printk(KERN_INFO IPR_NAME ": "__VA_ARGS__)
+#define ipr_crit(...) printk(KERN_CRIT IPR_NAME ": "__VA_ARGS__)
+#define ipr_warn(...) printk(KERN_WARNING IPR_NAME": "__VA_ARGS__)
+#define ipr_dbg(...) IPR_DBG_CMD(printk(KERN_INFO IPR_NAME ": "__VA_ARGS__))
+
+#define ipr_sdev_printk(level, sdev, fmt, ...) \
+ printk(level IPR_NAME ": %d:%d:%d:%d: " fmt, sdev->host->host_no, \
+ sdev->channel, sdev->id, sdev->lun, ##__VA_ARGS__)
+
+#define ipr_sdev_err(sdev, fmt, ...) \
+ ipr_sdev_printk(KERN_ERR, sdev, fmt, ##__VA_ARGS__)
+
+#define ipr_sdev_info(sdev, fmt, ...) \
+ ipr_sdev_printk(KERN_INFO, sdev, fmt, ##__VA_ARGS__)
+
+#define ipr_sdev_dbg(sdev, fmt, ...) \
+ IPR_DBG_CMD(ipr_sdev_printk(KERN_INFO, sdev, fmt, ##__VA_ARGS__))
+
+#define ipr_res_printk(level, ioa_cfg, res, fmt, ...) \
+ printk(level IPR_NAME ": %d:%d:%d:%d: " fmt, ioa_cfg->host->host_no, \
+ res.bus, res.target, res.lun, ##__VA_ARGS__)
+
+#define ipr_res_err(ioa_cfg, res, fmt, ...) \
+ ipr_res_printk(KERN_ERR, ioa_cfg, res, fmt, ##__VA_ARGS__)
+#define ipr_res_dbg(ioa_cfg, res, fmt, ...) \
+ IPR_DBG_CMD(ipr_res_printk(KERN_INFO, ioa_cfg, res, fmt, ##__VA_ARGS__))
+
+#define ipr_trace ipr_dbg("%s: %s: Line: %d\n",\
+ __FILE__, __FUNCTION__, __LINE__)
+
+#if IPR_DBG_TRACE
+#define ENTER printk(KERN_INFO IPR_NAME": Entering %s\n", __FUNCTION__)
+#define LEAVE printk(KERN_INFO IPR_NAME": Leaving %s\n", __FUNCTION__)
+#else
+#define ENTER
+#define LEAVE
+#endif
+
+#define ipr_err_separator \
+ipr_err("----------------------------------------------------------\n")
+
+
+/*
+ * Inlines
+ */
+
+/**
+ * ipr_is_ioa_resource - Determine if a resource is the IOA
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if IOA / 0 if not IOA
+ **/
+static inline int ipr_is_ioa_resource(struct ipr_resource_entry *res)
+{
+ return (res->cfgte.flags & IPR_IS_IOA_RESOURCE) ? 1 : 0;
+}
+
+/**
+ * ipr_is_af_dasd_device - Determine if a resource is an AF DASD
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if AF DASD / 0 if not AF DASD
+ **/
+static inline int ipr_is_af_dasd_device(struct ipr_resource_entry *res)
+{
+ if (IPR_IS_DASD_DEVICE(res->cfgte.std_inq_data) &&
+ !ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_AF_DASD)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_vset_device - Determine if a resource is a VSET
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if VSET / 0 if not VSET
+ **/
+static inline int ipr_is_vset_device(struct ipr_resource_entry *res)
+{
+ if (IPR_IS_DASD_DEVICE(res->cfgte.std_inq_data) &&
+ !ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_VOLUME_SET)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_gscsi - Determine if a resource is a generic scsi resource
+ * @res: resource entry struct
+ *
+ * Return value:
+ * 1 if GSCSI / 0 if not GSCSI
+ **/
+static inline int ipr_is_gscsi(struct ipr_resource_entry *res)
+{
+ if (!ipr_is_ioa_resource(res) &&
+ IPR_RES_SUBTYPE(res) == IPR_SUBTYPE_GENERIC_SCSI)
+ return 1;
+ else
+ return 0;
+}
+
+/**
+ * ipr_is_device - Determine if resource address is that of a device
+ * @res_addr: resource address struct
+ *
+ * Return value:
+ * 1 if AF / 0 if not AF
+ **/
+static inline int ipr_is_device(struct ipr_res_addr *res_addr)
+{
+ if ((res_addr->bus < IPR_MAX_NUM_BUSES) &&
+ (res_addr->target < IPR_MAX_NUM_TARGETS_PER_BUS))
+ return 1;
+
+ return 0;
+}
+
+/**
+ * ipr_sdt_is_fmt2 - Determine if a SDT address is in format 2
+ * @sdt_word: SDT address
+ *
+ * Return value:
+ * 1 if format 2 / 0 if not
+ **/
+static inline int ipr_sdt_is_fmt2(u32 sdt_word)
+{
+ u32 bar_sel = IPR_GET_FMT2_BAR_SEL(sdt_word);
+
+ switch (bar_sel) {
+ case IPR_SDT_FMT2_BAR0_SEL:
+ case IPR_SDT_FMT2_BAR1_SEL:
+ case IPR_SDT_FMT2_BAR2_SEL:
+ case IPR_SDT_FMT2_BAR3_SEL:
+ case IPR_SDT_FMT2_BAR4_SEL:
+ case IPR_SDT_FMT2_BAR5_SEL:
+ case IPR_SDT_FMT2_EXP_ROM_SEL:
+ return 1;
+ }
+
+ return 0;
+}
+
+#endif
--- /dev/null
+/*
+* sym53c500_cs.c Bob Tracy (rct@frus.com)
+*
+* A rewrite of the pcmcia-cs add-on driver for newer (circa 1997)
+* New Media Bus Toaster PCMCIA SCSI cards using the Symbios Logic
+* 53c500 controller: intended for use with 2.6 and later kernels.
+* The pcmcia-cs add-on version of this driver is not supported
+* beyond 2.4. It consisted of three files with history/copyright
+* information as follows:
+*
+* SYM53C500.h
+* Bob Tracy (rct@frus.com)
+* Original by Tom Corner (tcorner@via.at).
+* Adapted from NCR53c406a.h which is Copyrighted (C) 1994
+* Normunds Saumanis (normunds@rx.tech.swh.lv)
+*
+* SYM53C500.c
+* Bob Tracy (rct@frus.com)
+* Original driver by Tom Corner (tcorner@via.at) was adapted
+* from NCR53c406a.c which is Copyrighted (C) 1994, 1995, 1996
+* Normunds Saumanis (normunds@fi.ibm.com)
+*
+* sym53c500.c
+* Bob Tracy (rct@frus.com)
+* Original by Tom Corner (tcorner@via.at) was adapted from a
+* driver for the Qlogic SCSI card written by
+* David Hinds (dhinds@allegro.stanford.edu).
+*
+* This program is free software; you can redistribute it and/or modify it
+* under the terms of the GNU General Public License as published by the
+* Free Software Foundation; either version 2, or (at your option) any
+* later version.
+*
+* This program is distributed in the hope that it will be useful, but
+* WITHOUT ANY WARRANTY; without even the implied warranty of
+* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+* General Public License for more details.
+*/
+
+#define SYM53C500_DEBUG 0
+#define VERBOSE_SYM53C500_DEBUG 0
+
+/*
+* Set this to 0 if you encounter kernel lockups while transferring
+* data in PIO mode. Note this can be changed via "sysfs".
+*/
+#define USE_FAST_PIO 1
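+
+/*
+* Example (the sysfs path is an assumption; the host number is
+* assigned at runtime): fast PIO can be toggled on a live system with
+*
+*	echo 0 > /sys/class/scsi_host/host0/fast_pio
+*/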
+
+/* =============== End of user configurable parameters ============== */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/ioport.h>
+#include <linux/blkdev.h>
+#include <linux/spinlock.h>
+#include <linux/bitops.h>
+
+#include <asm/io.h>
+#include <asm/dma.h>
+#include <asm/irq.h>
+
+#include <scsi/scsi_ioctl.h>
+#include <scsi/scsi_cmnd.h>
+#include <scsi/scsi_device.h>
+#include <scsi/scsi.h>
+#include <scsi/scsi_host.h>
+
+#include <pcmcia/cs_types.h>
+#include <pcmcia/cs.h>
+#include <pcmcia/cistpl.h>
+#include <pcmcia/ds.h>
+#include <pcmcia/ciscode.h>
+
+/* ================================================================== */
+
+#ifdef PCMCIA_DEBUG
+static int pc_debug = PCMCIA_DEBUG;
+module_param(pc_debug, int, 0);
+#define DEBUG(n, args...) do { if (pc_debug > (n)) printk(KERN_DEBUG args); } while (0)
+static char *version =
+"sym53c500_cs.c 0.9c 2004/10/27 (Bob Tracy)";
+#else
+#define DEBUG(n, args...) do { } while (0)
+#endif
+
+/* ================================================================== */
+
+/* Parameters that can be set with 'insmod' */
+
+/* Bit map of interrupts to choose from */
+static unsigned int irq_mask = 0xdeb8; /* 3-5, 7, 9-12, 14, 15 */
+static int irq_list[4] = { -1 };
+
+module_param(irq_mask, uint, 0);
+MODULE_PARM_DESC(irq_mask, "IRQ mask bits (default: 0xdeb8)");
+module_param_array(irq_list, int, NULL, 0);
+MODULE_PARM_DESC(irq_list, "Comma-separated list of up to 4 IRQs to try (default: auto select).");
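+
+/*
+* Example module load (illustrative IRQ values only):
+*
+*	modprobe sym53c500_cs irq_list=5,9,10,11
+*/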
+
+/* ================================================================== */
+
+#define SYNC_MODE 0 /* Synchronous transfer mode */
+
+/* Default configuration */
+#define C1_IMG 0x07 /* ID=7 */
+#define C2_IMG 0x48 /* FE SCSI2 */
+#define C3_IMG 0x20 /* CDB */
+#define C4_IMG 0x04 /* ANE */
+#define C5_IMG 0xa4 /* ? changed from b6= AA PI SIE POL */
+#define C7_IMG 0x80 /* added for SYM53C500 t. corner */
+
+/* Hardware Registers: offsets from io_port (base) */
+
+/* Control Register Set 0 */
+#define TC_LSB 0x00 /* transfer counter lsb */
+#define TC_MSB 0x01 /* transfer counter msb */
+#define SCSI_FIFO 0x02 /* scsi fifo register */
+#define CMD_REG 0x03 /* command register */
+#define STAT_REG 0x04 /* status register */
+#define DEST_ID 0x04 /* selection/reselection bus id */
+#define INT_REG 0x05 /* interrupt status register */
+#define SRTIMOUT 0x05 /* select/reselect timeout reg */
+#define SEQ_REG 0x06 /* sequence step register */
+#define SYNCPRD 0x06 /* synchronous transfer period */
+#define FIFO_FLAGS 0x07 /* indicates # of bytes in fifo */
+#define SYNCOFF 0x07 /* synchronous offset register */
+#define CONFIG1 0x08 /* configuration register */
+#define CLKCONV 0x09 /* clock conversion register */
+/* #define TESTREG 0x0A */ /* test mode register */
+#define CONFIG2 0x0B /* configuration 2 register */
+#define CONFIG3 0x0C /* configuration 3 register */
+#define CONFIG4 0x0D /* configuration 4 register */
+#define TC_HIGH 0x0E /* transfer counter high */
+/* #define FIFO_BOTTOM 0x0F */ /* reserve FIFO byte register */
+
+/* Control Register Set 1 */
+/* #define JUMPER_SENSE 0x00 */ /* jumper sense port reg (r/w) */
+/* #define SRAM_PTR 0x01 */ /* SRAM address pointer reg (r/w) */
+/* #define SRAM_DATA 0x02 */ /* SRAM data register (r/w) */
+#define PIO_FIFO 0x04 /* PIO FIFO registers (r/w) */
+/* #define PIO_FIFO1 0x05 */ /* */
+/* #define PIO_FIFO2 0x06 */ /* */
+/* #define PIO_FIFO3 0x07 */ /* */
+#define PIO_STATUS 0x08 /* PIO status (r/w) */
+/* #define ATA_CMD 0x09 */ /* ATA command/status reg (r/w) */
+/* #define ATA_ERR 0x0A */ /* ATA features/error reg (r/w) */
+#define PIO_FLAG 0x0B /* PIO flag interrupt enable (r/w) */
+#define CONFIG5 0x09 /* configuration 5 register */
+/* #define SIGNATURE 0x0E */ /* signature register (r) */
+/* #define CONFIG6 0x0F */ /* configuration 6 register (r) */
+#define CONFIG7 0x0d
+
+/* select register set 0 */
+#define REG0(x) (outb(C4_IMG, (x) + CONFIG4))
+/* select register set 1 */
+#define REG1(x) do { outb(C7_IMG, (x) + CONFIG7); outb(C5_IMG, (x) + CONFIG5); } while (0)
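+
+/*
+* The chip exposes two overlapping register banks; REG0/REG1 switch
+* between them by rewriting the configuration registers. For example
+* (illustrative), PIO_STATUS is only reachable after REG1(base),
+* while STAT_REG requires REG0(base).
+*/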
+
+#if SYM53C500_DEBUG
+#define DEB(x) x
+#else
+#define DEB(x)
+#endif
+
+#if VERBOSE_SYM53C500_DEBUG
+#define VDEB(x) x
+#else
+#define VDEB(x)
+#endif
+
+#define LOAD_DMA_COUNT(x, count) \
+do { \
+ outb((count) & 0xff, (x) + TC_LSB); \
+ outb(((count) >> 8) & 0xff, (x) + TC_MSB); \
+ outb(((count) >> 16) & 0xff, (x) + TC_HIGH); \
+} while (0)
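+
+/*
+* LOAD_DMA_COUNT spreads a 24-bit transfer length across the three
+* counter registers; e.g. (illustrative) a count of 0x012345 writes
+* 0x45 to TC_LSB, 0x23 to TC_MSB and 0x01 to TC_HIGH.
+*/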
+
+/* Chip commands */
+#define DMA_OP 0x80
+
+#define SCSI_NOP 0x00
+#define FLUSH_FIFO 0x01
+#define CHIP_RESET 0x02
+#define SCSI_RESET 0x03
+#define RESELECT 0x40
+#define SELECT_NO_ATN 0x41
+#define SELECT_ATN 0x42
+#define SELECT_ATN_STOP 0x43
+#define ENABLE_SEL 0x44
+#define DISABLE_SEL 0x45
+#define SELECT_ATN3 0x46
+#define RESELECT3 0x47
+#define TRANSFER_INFO 0x10
+#define INIT_CMD_COMPLETE 0x11
+#define MSG_ACCEPT 0x12
+#define TRANSFER_PAD 0x18
+#define SET_ATN 0x1a
+#define RESET_ATN 0x1b
+#define SEND_MSG 0x20
+#define SEND_STATUS 0x21
+#define SEND_DATA 0x22
+#define DISCONN_SEQ 0x23
+#define TERMINATE_SEQ 0x24
+#define TARG_CMD_COMPLETE 0x25
+#define DISCONN 0x27
+#define RECV_MSG 0x28
+#define RECV_CMD 0x29
+#define RECV_DATA 0x2a
+#define RECV_CMD_SEQ 0x2b
+#define TARGET_ABORT_DMA 0x04
+
+/* ================================================================== */
+
+struct scsi_info_t {
+ dev_link_t link;
+ dev_node_t node;
+ struct Scsi_Host *host;
+ unsigned short manf_id;
+};
+
+/*
+* Repository for per-instance host data.
+*/
+struct sym53c500_data {
+ struct scsi_cmnd *current_SC;
+ int fast_pio;
+};
+
+enum Phase {
+ idle,
+ data_out,
+ data_in,
+ command_ph,
+ status_ph,
+ message_out,
+ message_in
+};
+
+/* ================================================================== */
+
+/*
+* Global (within this module) variables other than
+* sym53c500_driver_template (the scsi_host_template).
+*/
+static dev_link_t *dev_list;
+static dev_info_t dev_info = "sym53c500_cs";
+
+/* ================================================================== */
+
+static void
+chip_init(int io_port)
+{
+ REG1(io_port);
+ outb(0x01, io_port + PIO_STATUS);
+ outb(0x00, io_port + PIO_FLAG);
+
+ outb(C4_IMG, io_port + CONFIG4); /* REG0(io_port); */
+ outb(C3_IMG, io_port + CONFIG3);
+ outb(C2_IMG, io_port + CONFIG2);
+ outb(C1_IMG, io_port + CONFIG1);
+
+ outb(0x05, io_port + CLKCONV); /* clock conversion factor */
+ outb(0x9C, io_port + SRTIMOUT); /* Selection timeout */
+ outb(0x05, io_port + SYNCPRD); /* Synchronous transfer period */
+ outb(SYNC_MODE, io_port + SYNCOFF); /* synchronous mode */
+}
+
+static void
+SYM53C500_int_host_reset(int io_port)
+{
+ outb(C4_IMG, io_port + CONFIG4); /* REG0(io_port); */
+ outb(CHIP_RESET, io_port + CMD_REG);
+ outb(SCSI_NOP, io_port + CMD_REG); /* required after reset */
+ outb(SCSI_RESET, io_port + CMD_REG);
+ chip_init(io_port);
+}
+
+static __inline__ int
+SYM53C500_pio_read(int fast_pio, int base, unsigned char *request, unsigned int reqlen)
+{
+ int i;
+ int len; /* current scsi fifo size */
+
+ REG1(base);
+ while (reqlen) {
+ i = inb(base + PIO_STATUS);
+ /* VDEB(printk("pio_status=%x\n", i)); */
+ if (i & 0x80)
+ return 0;
+
+ switch (i & 0x1e) {
+ default:
+ case 0x10: /* fifo empty */
+ len = 0;
+ break;
+ case 0x0:
+ len = 1;
+ break;
+ case 0x8: /* fifo 1/3 full */
+ len = 42;
+ break;
+ case 0xc: /* fifo 2/3 full */
+ len = 84;
+ break;
+ case 0xe: /* fifo full */
+ len = 128;
+ break;
+ }
+
+ if ((i & 0x40) && len == 0) { /* fifo empty and interrupt occurred */
+ return 0;
+ }
+
+ if (len) {
+ if (len > reqlen)
+ len = reqlen;
+
+ if (fast_pio && len > 3) {
+ insl(base + PIO_FIFO, request, len >> 2);
+ request += len & 0xfc;
+ reqlen -= len & 0xfc;
+ } else {
+ while (len--) {
+ *request++ = inb(base + PIO_FIFO);
+ reqlen--;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+static __inline__ int
+SYM53C500_pio_write(int fast_pio, int base, unsigned char *request, unsigned int reqlen)
+{
+ int i = 0;
+ int len; /* current scsi fifo size */
+
+ REG1(base);
+ while (reqlen && !(i & 0x40)) {
+ i = inb(base + PIO_STATUS);
+ /* VDEB(printk("pio_status=%x\n", i)); */
+ if (i & 0x80) /* error */
+ return 0;
+
+ switch (i & 0x1e) {
+ case 0x10:
+ len = 128;
+ break;
+ case 0x0:
+ len = 84;
+ break;
+ case 0x8:
+ len = 42;
+ break;
+ case 0xc:
+ len = 1;
+ break;
+ default:
+ case 0xe:
+ len = 0;
+ break;
+ }
+
+ if (len) {
+ if (len > reqlen)
+ len = reqlen;
+
+ if (fast_pio && len > 3) {
+ outsl(base + PIO_FIFO, request, len >> 2);
+ request += len & 0xfc;
+ reqlen -= len & 0xfc;
+ } else {
+ while (len--) {
+ outb(*request++, base + PIO_FIFO);
+ reqlen--;
+ }
+ }
+ }
+ }
+ return 0;
+}
+
+static irqreturn_t
+SYM53C500_intr(int irq, void *dev_id, struct pt_regs *regs)
+{
+ unsigned long flags;
+ struct Scsi_Host *dev = dev_id;
+ DEB(unsigned char fifo_size;)
+ DEB(unsigned char seq_reg;)
+ unsigned char status, int_reg;
+ unsigned char pio_status;
+ struct scatterlist *sglist;
+ unsigned int sgcount;
+ int port_base = dev->io_port;
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)dev->hostdata;
+ struct scsi_cmnd *curSC = data->current_SC;
+ int fast_pio = data->fast_pio;
+
+ spin_lock_irqsave(dev->host_lock, flags);
+
+ VDEB(printk("SYM53C500_intr called\n"));
+
+ REG1(port_base);
+ pio_status = inb(port_base + PIO_STATUS);
+ REG0(port_base);
+ status = inb(port_base + STAT_REG);
+ DEB(seq_reg = inb(port_base + SEQ_REG));
+ int_reg = inb(port_base + INT_REG);
+ DEB(fifo_size = inb(port_base + FIFO_FLAGS) & 0x1f);
+
+#if SYM53C500_DEBUG
+ printk("status=%02x, seq_reg=%02x, int_reg=%02x, fifo_size=%02x",
+ status, seq_reg, int_reg, fifo_size);
+ printk(", pio=%02x\n", pio_status);
+#endif /* SYM53C500_DEBUG */
+
+ if (int_reg & 0x80) { /* SCSI reset intr */
+ DEB(printk("SYM53C500: reset intr received\n"));
+ curSC->result = DID_RESET << 16;
+ goto idle_out;
+ }
+
+ if (pio_status & 0x80) {
+ printk("SYM53C500: Warning: PIO error!\n");
+ curSC->result = DID_ERROR << 16;
+ goto idle_out;
+ }
+
+ if (status & 0x20) { /* Parity error */
+ printk("SYM53C500: Warning: parity error!\n");
+ curSC->result = DID_PARITY << 16;
+ goto idle_out;
+ }
+
+ if (status & 0x40) { /* Gross error */
+ printk("SYM53C500: Warning: gross error!\n");
+ curSC->result = DID_ERROR << 16;
+ goto idle_out;
+ }
+
+ if (int_reg & 0x20) { /* Disconnect */
+ DEB(printk("SYM53C500: disconnect intr received\n"));
+ if (curSC->SCp.phase != message_in) { /* Unexpected disconnect */
+ curSC->result = DID_NO_CONNECT << 16;
+ } else { /* Command complete, return status and message */
+ curSC->result = (curSC->SCp.Status & 0xff)
+ | ((curSC->SCp.Message & 0xff) << 8) | (DID_OK << 16);
+ }
+ goto idle_out;
+ }
+
+ switch (status & 0x07) { /* scsi phase */
+ case 0x00: /* DATA-OUT */
+ if (int_reg & 0x10) { /* Target requesting info transfer */
+ curSC->SCp.phase = data_out;
+ VDEB(printk("SYM53C500: Data-Out phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ LOAD_DMA_COUNT(port_base, curSC->request_bufflen); /* Max transfer size */
+ outb(TRANSFER_INFO | DMA_OP, port_base + CMD_REG);
+ if (!curSC->use_sg) /* Don't use scatter-gather */
+ SYM53C500_pio_write(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
+ else { /* use scatter-gather */
+ sgcount = curSC->use_sg;
+ sglist = curSC->request_buffer;
+ while (sgcount--) {
+ SYM53C500_pio_write(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
+ sglist++;
+ }
+ }
+ REG0(port_base);
+ }
+ break;
+
+ case 0x01: /* DATA-IN */
+ if (int_reg & 0x10) { /* Target requesting info transfer */
+ curSC->SCp.phase = data_in;
+ VDEB(printk("SYM53C500: Data-In phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ LOAD_DMA_COUNT(port_base, curSC->request_bufflen); /* Max transfer size */
+ outb(TRANSFER_INFO | DMA_OP, port_base + CMD_REG);
+ if (!curSC->use_sg) /* Don't use scatter-gather */
+ SYM53C500_pio_read(fast_pio, port_base, curSC->request_buffer, curSC->request_bufflen);
+ else { /* Use scatter-gather */
+ sgcount = curSC->use_sg;
+ sglist = curSC->request_buffer;
+ while (sgcount--) {
+ SYM53C500_pio_read(fast_pio, port_base, page_address(sglist->page) + sglist->offset, sglist->length);
+ sglist++;
+ }
+ }
+ REG0(port_base);
+ }
+ break;
+
+ case 0x02: /* COMMAND */
+ curSC->SCp.phase = command_ph;
+ printk("SYM53C500: Warning: Unknown interrupt occurred in command phase!\n");
+ break;
+
+ case 0x03: /* STATUS */
+ curSC->SCp.phase = status_ph;
+ VDEB(printk("SYM53C500: Status phase\n"));
+ outb(FLUSH_FIFO, port_base + CMD_REG);
+ outb(INIT_CMD_COMPLETE, port_base + CMD_REG);
+ break;
+
+ case 0x04: /* Reserved */
+ case 0x05: /* Reserved */
+ printk("SYM53C500: WARNING: Reserved phase!!!\n");
+ break;
+
+ case 0x06: /* MESSAGE-OUT */
+ DEB(printk("SYM53C500: Message-Out phase\n"));
+ curSC->SCp.phase = message_out;
+ outb(SET_ATN, port_base + CMD_REG); /* Reject the message */
+ outb(MSG_ACCEPT, port_base + CMD_REG);
+ break;
+
+ case 0x07: /* MESSAGE-IN */
+ VDEB(printk("SYM53C500: Message-In phase\n"));
+ curSC->SCp.phase = message_in;
+
+ curSC->SCp.Status = inb(port_base + SCSI_FIFO);
+ curSC->SCp.Message = inb(port_base + SCSI_FIFO);
+
+ VDEB(printk("SCSI FIFO size=%d\n", inb(port_base + FIFO_FLAGS) & 0x1f));
+ DEB(printk("Status = %02x Message = %02x\n", curSC->SCp.Status, curSC->SCp.Message));
+
+ if (curSC->SCp.Message == SAVE_POINTERS || curSC->SCp.Message == DISCONNECT) {
+ outb(SET_ATN, port_base + CMD_REG); /* Reject message */
+ DEB(printk("Discarding SAVE_POINTERS message\n"));
+ }
+ outb(MSG_ACCEPT, port_base + CMD_REG);
+ break;
+ }
+out:
+ spin_unlock_irqrestore(dev->host_lock, flags);
+ return IRQ_HANDLED;
+
+idle_out:
+ curSC->SCp.phase = idle;
+ curSC->scsi_done(curSC);
+ goto out;
+}
+
+static void
+SYM53C500_release(dev_link_t *link)
+{
+ struct scsi_info_t *info = link->priv;
+ struct Scsi_Host *shost = info->host;
+
+ DEBUG(0, "SYM53C500_release(0x%p)\n", link);
+
+ /*
+ * Do this before releasing/freeing resources.
+ */
+ scsi_remove_host(shost);
+
+ /*
+ * Interrupts can be left hosed on card removal, so release
+ * resources with the following code, mostly from qlogicfas.c.
+ */
+ if (shost->irq)
+ free_irq(shost->irq, shost);
+ if (shost->dma_channel != 0xff)
+ free_dma(shost->dma_channel);
+ if (shost->io_port && shost->n_io_port)
+ release_region(shost->io_port, shost->n_io_port);
+
+ link->dev = NULL;
+
+ pcmcia_release_configuration(link->handle);
+ pcmcia_release_io(link->handle, &link->io);
+ pcmcia_release_irq(link->handle, &link->irq);
+
+ link->state &= ~DEV_CONFIG;
+
+ scsi_host_put(shost);
+} /* SYM53C500_release */
+
+static const char*
+SYM53C500_info(struct Scsi_Host *SChost)
+{
+ static char info_msg[256];
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SChost->hostdata;
+
+ DEB(printk("SYM53C500_info called\n"));
+ (void)snprintf(info_msg, sizeof(info_msg),
+ "SYM53C500 at 0x%lx, IRQ %d, %s PIO mode.",
+ SChost->io_port, SChost->irq, data->fast_pio ? "fast" : "slow");
+ return (info_msg);
+}
+
+static int
+SYM53C500_queue(struct scsi_cmnd *SCpnt, void (*done)(struct scsi_cmnd *))
+{
+ int i;
+ int port_base = SCpnt->device->host->io_port;
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SCpnt->device->host->hostdata;
+
+ VDEB(printk("SYM53C500_queue called\n"));
+
+ DEB(printk("cmd=%02x, cmd_len=%02x, target=%02x, lun=%02x, bufflen=%d\n",
+ SCpnt->cmnd[0], SCpnt->cmd_len, SCpnt->device->id,
+ SCpnt->device->lun, SCpnt->request_bufflen));
+
+ VDEB(for (i = 0; i < SCpnt->cmd_len; i++)
+ printk("cmd[%d]=%02x ", i, SCpnt->cmnd[i]));
+ VDEB(printk("\n"));
+
+ data->current_SC = SCpnt;
+ data->current_SC->scsi_done = done;
+ data->current_SC->SCp.phase = command_ph;
+ data->current_SC->SCp.Status = 0;
+ data->current_SC->SCp.Message = 0;
+
+ /* We are locked here already by the mid layer */
+ REG0(port_base);
+ outb(SCpnt->device->id, port_base + DEST_ID); /* set destination */
+ outb(FLUSH_FIFO, port_base + CMD_REG); /* reset the fifos */
+
+ for (i = 0; i < SCpnt->cmd_len; i++) {
+ outb(SCpnt->cmnd[i], port_base + SCSI_FIFO);
+ }
+ outb(SELECT_NO_ATN, port_base + CMD_REG);
+
+ return 0;
+}
+
+static int
+SYM53C500_host_reset(struct scsi_cmnd *SCpnt)
+{
+ int port_base = SCpnt->device->host->io_port;
+
+ DEB(printk("SYM53C500_host_reset called\n"));
+ SYM53C500_int_host_reset(port_base);
+
+ return SUCCESS;
+}
+
+static int
+SYM53C500_biosparm(struct scsi_device *disk,
+ struct block_device *dev,
+ sector_t capacity, int *info_array)
+{
+ int size;
+
+ DEB(printk("SYM53C500_biosparm called\n"));
+
+ size = capacity;
+ info_array[0] = 64; /* heads */
+ info_array[1] = 32; /* sectors */
+ info_array[2] = size >> 11; /* cylinders */
+ if (info_array[2] > 1024) { /* big disk */
+ info_array[0] = 255;
+ info_array[1] = 63;
+ info_array[2] = size / (255 * 63);
+ }
+ return 0;
+}
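+
+/*
+* Worked example (a sketch with an assumed capacity): a 4194304-sector
+* (2GB) disk yields size >> 11 = 2048 cylinders, which exceeds 1024,
+* so the 255-head/63-sector translation applies instead, giving
+* 4194304 / (255 * 63) = 261 cylinders.
+*/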
+
+static ssize_t
+SYM53C500_show_pio(struct class_device *cdev, char *buf)
+{
+ struct Scsi_Host *SHp = class_to_shost(cdev);
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SHp->hostdata;
+
+ return snprintf(buf, 4, "%d\n", data->fast_pio);
+}
+
+static ssize_t
+SYM53C500_store_pio(struct class_device *cdev, const char *buf, size_t count)
+{
+ int pio;
+ struct Scsi_Host *SHp = class_to_shost(cdev);
+ struct sym53c500_data *data =
+ (struct sym53c500_data *)SHp->hostdata;
+
+ pio = simple_strtoul(buf, NULL, 0);
+ if (pio == 0 || pio == 1) {
+ data->fast_pio = pio;
+ return count;
+ }
+ else
+ return -EINVAL;
+}
+
+/*
+* SCSI HBA device attributes we want to
+* make available via sysfs.
+*/
+static struct class_device_attribute SYM53C500_pio_attr = {
+ .attr = {
+ .name = "fast_pio",
+ .mode = (S_IRUGO | S_IWUSR),
+ },
+ .show = SYM53C500_show_pio,
+ .store = SYM53C500_store_pio,
+};
+
+static struct class_device_attribute *SYM53C500_shost_attrs[] = {
+ &SYM53C500_pio_attr,
+ NULL,
+};
+
+/*
+* scsi_host_template initializer
+*/
+static struct scsi_host_template sym53c500_driver_template = {
+ .module = THIS_MODULE,
+ .name = "SYM53C500",
+ .info = SYM53C500_info,
+ .queuecommand = SYM53C500_queue,
+ .eh_host_reset_handler = SYM53C500_host_reset,
+ .bios_param = SYM53C500_biosparm,
+ .proc_name = "SYM53C500",
+ .can_queue = 1,
+ .this_id = 7,
+ .sg_tablesize = 32,
+ .cmd_per_lun = 1,
+ .use_clustering = ENABLE_CLUSTERING,
+ .shost_attrs = SYM53C500_shost_attrs
+};
+
+#define CS_CHECK(fn, ret) \
+do { last_fn = (fn); if ((last_ret = (ret)) != 0) goto cs_failed; } while (0)
+
+static void
+SYM53C500_config(dev_link_t *link)
+{
+ client_handle_t handle = link->handle;
+ struct scsi_info_t *info = link->priv;
+ tuple_t tuple;
+ cisparse_t parse;
+ int i, last_ret, last_fn;
+ int irq_level, port_base;
+ unsigned short tuple_data[32];
+ struct Scsi_Host *host;
+ struct scsi_host_template *tpnt = &sym53c500_driver_template;
+ struct sym53c500_data *data;
+
+ DEBUG(0, "SYM53C500_config(0x%p)\n", link);
+
+ tuple.TupleData = (cisdata_t *)tuple_data;
+ tuple.TupleDataMax = 64;
+ tuple.TupleOffset = 0;
+ tuple.DesiredTuple = CISTPL_CONFIG;
+ CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
+ CS_CHECK(GetTupleData, pcmcia_get_tuple_data(handle, &tuple));
+ CS_CHECK(ParseTuple, pcmcia_parse_tuple(handle, &tuple, &parse));
+ link->conf.ConfigBase = parse.config.base;
+
+ tuple.DesiredTuple = CISTPL_MANFID;
+ if ((pcmcia_get_first_tuple(handle, &tuple) == CS_SUCCESS) &&
+ (pcmcia_get_tuple_data(handle, &tuple) == CS_SUCCESS))
+ info->manf_id = le16_to_cpu(tuple.TupleData[0]);
+
+ /* Configure card */
+ link->state |= DEV_CONFIG;
+
+ tuple.DesiredTuple = CISTPL_CFTABLE_ENTRY;
+ CS_CHECK(GetFirstTuple, pcmcia_get_first_tuple(handle, &tuple));
+ while (1) {
+ if (pcmcia_get_tuple_data(handle, &tuple) != 0 ||
+ pcmcia_parse_tuple(handle, &tuple, &parse) != 0)
+ goto next_entry;
+ link->conf.ConfigIndex = parse.cftable_entry.index;
+ link->io.BasePort1 = parse.cftable_entry.io.win[0].base;
+ link->io.NumPorts1 = parse.cftable_entry.io.win[0].len;
+
+ if (link->io.BasePort1 != 0) {
+ i = pcmcia_request_io(handle, &link->io);
+ if (i == CS_SUCCESS)
+ break;
+ }
+next_entry:
+ CS_CHECK(GetNextTuple, pcmcia_get_next_tuple(handle, &tuple));
+ }
+
+ CS_CHECK(RequestIRQ, pcmcia_request_irq(handle, &link->irq));
+ CS_CHECK(RequestConfiguration, pcmcia_request_configuration(handle, &link->conf));
+
+ /*
+ * That's the trouble with copying liberally from another driver.
+ * Some things probably aren't relevant, and I suspect this entire
+ * section dealing with manufacturer IDs can be scrapped. --rct
+ */
+ if ((info->manf_id == MANFID_MACNICA) ||
+ (info->manf_id == MANFID_PIONEER) ||
+ (info->manf_id == 0x0098)) {
+ /* set ATAcmd */
+ outb(0xb4, link->io.BasePort1 + 0xd);
+ outb(0x24, link->io.BasePort1 + 0x9);
+ outb(0x04, link->io.BasePort1 + 0xd);
+ }
+
+ /*
+ * irq_level == 0 implies tpnt->can_queue == 0, which
+ * is not supported in 2.6. Thus, only irq_level > 0
+ * will be allowed.
+ *
+ * Possible port_base values are as follows:
+ *
+ * 0x130, 0x230, 0x280, 0x290,
+ * 0x320, 0x330, 0x340, 0x350
+ */
+ port_base = link->io.BasePort1;
+ irq_level = link->irq.AssignedIRQ;
+
+ DEB(printk("SYM53C500: port_base=0x%x, irq=%d, fast_pio=%d\n",
+ port_base, irq_level, USE_FAST_PIO);)
+
+ chip_init(port_base);
+
+ host = scsi_host_alloc(tpnt, sizeof(struct sym53c500_data));
+ if (!host) {
+ printk("SYM53C500: Unable to register host, giving up.\n");
+ goto err_release;
+ }
+
+ data = (struct sym53c500_data *)host->hostdata;
+
+ if (irq_level > 0) {
+ if (request_irq(irq_level, SYM53C500_intr, SA_SHIRQ, "SYM53C500", host)) {
+ printk("SYM53C500: unable to allocate IRQ %d\n", irq_level);
+ goto err_free_scsi;
+ }
+ DEB(printk("SYM53C500: allocated IRQ %d\n", irq_level));
+ } else if (irq_level == 0) {
+ DEB(printk("SYM53C500: No interrupts detected\n"));
+ goto err_free_scsi;
+ } else {
+ DEB(printk("SYM53C500: Shouldn't get here!\n"));
+ goto err_free_scsi;
+ }
+
+ host->unique_id = port_base;
+ host->irq = irq_level;
+ host->io_port = port_base;
+ host->n_io_port = 0x10;
+ host->dma_channel = -1;
+
+ /*
+ * Note fast_pio is set to USE_FAST_PIO by
+ * default, but can be changed via "sysfs".
+ */
+ data->fast_pio = USE_FAST_PIO;
+
+ sprintf(info->node.dev_name, "scsi%d", host->host_no);
+ link->dev = &info->node;
+ info->host = host;
+
+ if (scsi_add_host(host, NULL))
+ goto err_free_irq;
+
+ scsi_scan_host(host);
+
+ goto out; /* SUCCESS */
+
+err_free_irq:
+ free_irq(irq_level, host);
+err_free_scsi:
+ scsi_host_put(host);
+err_release:
+ release_region(port_base, 0x10);
+ printk(KERN_INFO "sym53c500_cs: no SCSI devices found\n");
+
+out:
+ link->state &= ~DEV_CONFIG_PENDING;
+ return;
+
+cs_failed:
+ cs_error(link->handle, last_fn, last_ret);
+ SYM53C500_release(link);
+ return;
+} /* SYM53C500_config */
+
+static int
+SYM53C500_event(event_t event, int priority, event_callback_args_t *args)
+{
+ dev_link_t *link = args->client_data;
+ struct scsi_info_t *info = link->priv;
+
+ DEBUG(1, "SYM53C500_event(0x%06x)\n", event);
+
+ switch (event) {
+ case CS_EVENT_CARD_REMOVAL:
+ link->state &= ~DEV_PRESENT;
+ if (link->state & DEV_CONFIG)
+ SYM53C500_release(link);
+ break;
+ case CS_EVENT_CARD_INSERTION:
+ link->state |= DEV_PRESENT | DEV_CONFIG_PENDING;
+ SYM53C500_config(link);
+ break;
+ case CS_EVENT_PM_SUSPEND:
+ link->state |= DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_RESET_PHYSICAL:
+ if (link->state & DEV_CONFIG)
+ pcmcia_release_configuration(link->handle);
+ break;
+ case CS_EVENT_PM_RESUME:
+ link->state &= ~DEV_SUSPEND;
+ /* Fall through... */
+ case CS_EVENT_CARD_RESET:
+ if (link->state & DEV_CONFIG) {
+ pcmcia_request_configuration(link->handle, &link->conf);
+ /* See earlier comment about manufacturer IDs. */
+ if ((info->manf_id == MANFID_MACNICA) ||
+ (info->manf_id == MANFID_PIONEER) ||
+ (info->manf_id == 0x0098)) {
+ outb(0x80, link->io.BasePort1 + 0xd);
+ outb(0x24, link->io.BasePort1 + 0x9);
+ outb(0x04, link->io.BasePort1 + 0xd);
+ }
+ /*
+ * If things don't work after a "resume",
+ * this is a good place to start looking.
+ */
+ SYM53C500_int_host_reset(link->io.BasePort1);
+ }
+ break;
+ }
+ return 0;
+} /* SYM53C500_event */
+
+static void
+SYM53C500_detach(dev_link_t *link)
+{
+ dev_link_t **linkp;
+
+ DEBUG(0, "SYM53C500_detach(0x%p)\n", link);
+
+ /* Locate device structure */
+ for (linkp = &dev_list; *linkp; linkp = &(*linkp)->next)
+ if (*linkp == link)
+ break;
+ if (*linkp == NULL)
+ return;
+
+ if (link->state & DEV_CONFIG)
+ SYM53C500_release(link);
+
+ if (link->handle)
+ pcmcia_deregister_client(link->handle);
+
+ /* Unlink device structure, free bits. */
+ *linkp = link->next;
+ kfree(link->priv);
+ link->priv = NULL;
+} /* SYM53C500_detach */
+
+static dev_link_t *
+SYM53C500_attach(void)
+{
+ struct scsi_info_t *info;
+ client_reg_t client_reg;
+ dev_link_t *link;
+ int i, ret;
+
+ DEBUG(0, "SYM53C500_attach()\n");
+
+ /* Create new SCSI device */
+ info = kmalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return NULL;
+ memset(info, 0, sizeof(*info));
+ link = &info->link;
+ link->priv = info;
+ link->io.NumPorts1 = 16;
+ link->io.Attributes1 = IO_DATA_PATH_WIDTH_AUTO;
+ link->io.IOAddrLines = 10;
+ link->irq.Attributes = IRQ_TYPE_EXCLUSIVE;
+ link->irq.IRQInfo1 = IRQ_INFO2_VALID | IRQ_LEVEL_ID;
+ if (irq_list[0] == -1)
+ link->irq.IRQInfo2 = irq_mask;
+ else
+ for (i = 0; i < 4; i++)
+ link->irq.IRQInfo2 |= 1 << irq_list[i];
+ link->conf.Attributes = CONF_ENABLE_IRQ;
+ link->conf.Vcc = 50;
+ link->conf.IntType = INT_MEMORY_AND_IO;
+ link->conf.Present = PRESENT_OPTION;
+
+ /* Register with Card Services */
+ link->next = dev_list;
+ dev_list = link;
+ client_reg.dev_info = &dev_info;
+ client_reg.Attributes = INFO_IO_CLIENT | INFO_CARD_SHARE;
+ client_reg.event_handler = &SYM53C500_event;
+ client_reg.EventMask = CS_EVENT_RESET_REQUEST | CS_EVENT_CARD_RESET |
+ CS_EVENT_CARD_INSERTION | CS_EVENT_CARD_REMOVAL |
+ CS_EVENT_PM_SUSPEND | CS_EVENT_PM_RESUME;
+ client_reg.Version = 0x0210;
+ client_reg.event_callback_args.client_data = link;
+ ret = pcmcia_register_client(&link->handle, &client_reg);
+ if (ret != 0) {
+ cs_error(link->handle, RegisterClient, ret);
+ SYM53C500_detach(link);
+ return NULL;
+ }
+
+ return link;
+} /* SYM53C500_attach */
+
+MODULE_AUTHOR("Bob Tracy <rct@frus.com>");
+MODULE_DESCRIPTION("SYM53C500 PCMCIA SCSI driver");
+MODULE_LICENSE("GPL");
+
+static struct pcmcia_driver sym53c500_cs_driver = {
+ .owner = THIS_MODULE,
+ .drv = {
+ .name = "sym53c500_cs",
+ },
+ .attach = SYM53C500_attach,
+ .detach = SYM53C500_detach,
+};
+
+static int __init
+init_sym53c500_cs(void)
+{
+ return pcmcia_register_driver(&sym53c500_cs_driver);
+}
+
+static void __exit
+exit_sym53c500_cs(void)
+{
+ pcmcia_unregister_driver(&sym53c500_cs_driver);
+}
+
+module_init(init_sym53c500_cs);
+module_exit(exit_sym53c500_cs);
--- /dev/null
+/*
+ * sata_nv.c - NVIDIA nForce SATA
+ *
+ * Copyright 2004 NVIDIA Corp. All rights reserved.
+ * Copyright 2004 Andrew Chew
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ * 0.03
+ * - Fixed a bug where the hotplug handlers for non-CK804/MCP04 were using
+ * mmio_base, which is only set for the CK804/MCP04 case.
+ *
+ * 0.02
+ * - Added support for CK804 SATA controller.
+ *
+ * 0.01
+ * - Initial revision.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_nv"
+#define DRV_VERSION "0.5"
+
+#define NV_PORTS 2
+#define NV_PIO_MASK 0x1f
+#define NV_MWDMA_MASK 0x07
+#define NV_UDMA_MASK 0x7f
+#define NV_PORT0_SCR_REG_OFFSET 0x00
+#define NV_PORT1_SCR_REG_OFFSET 0x40
+
+#define NV_INT_STATUS 0x10
+#define NV_INT_STATUS_CK804 0x440
+#define NV_INT_STATUS_PDEV_INT 0x01
+#define NV_INT_STATUS_PDEV_PM 0x02
+#define NV_INT_STATUS_PDEV_ADDED 0x04
+#define NV_INT_STATUS_PDEV_REMOVED 0x08
+#define NV_INT_STATUS_SDEV_INT 0x10
+#define NV_INT_STATUS_SDEV_PM 0x20
+#define NV_INT_STATUS_SDEV_ADDED 0x40
+#define NV_INT_STATUS_SDEV_REMOVED 0x80
+#define NV_INT_STATUS_PDEV_HOTPLUG (NV_INT_STATUS_PDEV_ADDED | \
+ NV_INT_STATUS_PDEV_REMOVED)
+#define NV_INT_STATUS_SDEV_HOTPLUG (NV_INT_STATUS_SDEV_ADDED | \
+ NV_INT_STATUS_SDEV_REMOVED)
+#define NV_INT_STATUS_HOTPLUG (NV_INT_STATUS_PDEV_HOTPLUG | \
+ NV_INT_STATUS_SDEV_HOTPLUG)
+
+#define NV_INT_ENABLE 0x11
+#define NV_INT_ENABLE_CK804 0x441
+#define NV_INT_ENABLE_PDEV_MASK 0x01
+#define NV_INT_ENABLE_PDEV_PM 0x02
+#define NV_INT_ENABLE_PDEV_ADDED 0x04
+#define NV_INT_ENABLE_PDEV_REMOVED 0x08
+#define NV_INT_ENABLE_SDEV_MASK 0x10
+#define NV_INT_ENABLE_SDEV_PM 0x20
+#define NV_INT_ENABLE_SDEV_ADDED 0x40
+#define NV_INT_ENABLE_SDEV_REMOVED 0x80
+#define NV_INT_ENABLE_PDEV_HOTPLUG (NV_INT_ENABLE_PDEV_ADDED | \
+ NV_INT_ENABLE_PDEV_REMOVED)
+#define NV_INT_ENABLE_SDEV_HOTPLUG (NV_INT_ENABLE_SDEV_ADDED | \
+ NV_INT_ENABLE_SDEV_REMOVED)
+#define NV_INT_ENABLE_HOTPLUG (NV_INT_ENABLE_PDEV_HOTPLUG | \
+ NV_INT_ENABLE_SDEV_HOTPLUG)
+
+#define NV_INT_CONFIG 0x12
+#define NV_INT_CONFIG_METHD 0x01 // 0 = INT, 1 = SMI
+
+// For PCI config register 20
+#define NV_MCP_SATA_CFG_20 0x50
+#define NV_MCP_SATA_CFG_20_SATA_SPACE_EN 0x04
+
+static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static irqreturn_t nv_interrupt (int irq, void *dev_instance, struct pt_regs *regs);
+static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg);
+static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+static void nv_host_stop (struct ata_host_set *host_set);
+static void nv_enable_hotplug(struct ata_probe_ent *probe_ent);
+static void nv_disable_hotplug(struct ata_host_set *host_set);
+static void nv_check_hotplug(struct ata_host_set *host_set);
+static void nv_enable_hotplug_ck804(struct ata_probe_ent *probe_ent);
+static void nv_disable_hotplug_ck804(struct ata_host_set *host_set);
+static void nv_check_hotplug_ck804(struct ata_host_set *host_set);
+
+enum nv_host_type
+{
+ NFORCE2,
+ NFORCE3,
+ CK804
+};
+
+static struct pci_device_id nv_pci_tbl[] = {
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE2S_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, NFORCE2 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, NFORCE3 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, NFORCE3 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
+ { PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_SATA2,
+ PCI_ANY_ID, PCI_ANY_ID, 0, 0, CK804 },
+ { 0, } /* terminate list */
+};
+
+#define NV_HOST_FLAGS_SCR_MMIO 0x00000001
+
+struct nv_host_desc
+{
+ enum nv_host_type host_type;
+ unsigned long host_flags;
+ void (*enable_hotplug)(struct ata_probe_ent *probe_ent);
+ void (*disable_hotplug)(struct ata_host_set *host_set);
+ void (*check_hotplug)(struct ata_host_set *host_set);
+};
+
+static struct nv_host_desc nv_device_tbl[] = {
+ {
+ .host_type = NFORCE2,
+ .host_flags = 0x00000000,
+ .enable_hotplug = nv_enable_hotplug,
+ .disable_hotplug= nv_disable_hotplug,
+ .check_hotplug = nv_check_hotplug,
+ },
+ {
+ .host_type = NFORCE3,
+ .host_flags = 0x00000000,
+ .enable_hotplug = nv_enable_hotplug,
+ .disable_hotplug= nv_disable_hotplug,
+ .check_hotplug = nv_check_hotplug,
+ },
+ {
+ .host_type = CK804,
+ .host_flags = NV_HOST_FLAGS_SCR_MMIO,
+ .enable_hotplug = nv_enable_hotplug_ck804,
+ .disable_hotplug= nv_disable_hotplug_ck804,
+ .check_hotplug = nv_check_hotplug_ck804,
+ },
+};
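+
+/*
+ * Note: nv_device_tbl is indexed by the enum nv_host_type value that
+ * nv_pci_tbl stores in driver_data, so the two tables must be kept in
+ * sync (see host->host_desc = &nv_device_tbl[ent->driver_data] in
+ * nv_init_one() below).
+ */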
+
+struct nv_host
+{
+ struct nv_host_desc *host_desc;
+};
+
+static struct pci_driver nv_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = nv_pci_tbl,
+ .probe = nv_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+static Scsi_Host_Template nv_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .ioctl = ata_scsi_ioctl,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = LIBATA_MAX_PRD,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ .use_clustering = ATA_SHT_USE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = ATA_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations nv_ops = {
+ .port_disable = ata_port_disable,
+ .tf_load = ata_tf_load,
+ .tf_read = ata_tf_read,
+ .exec_command = ata_exec_command,
+ .check_status = ata_check_status,
+ .dev_select = ata_std_dev_select,
+ .phy_reset = sata_phy_reset,
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
+ .qc_prep = ata_qc_prep,
+ .qc_issue = ata_qc_issue_prot,
+ .eng_timeout = ata_eng_timeout,
+ .irq_handler = nv_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+ .scr_read = nv_scr_read,
+ .scr_write = nv_scr_write,
+ .port_start = ata_port_start,
+ .port_stop = ata_port_stop,
+ .host_stop = nv_host_stop,
+};
+
+/* FIXME: The hardware provides the necessary SATA PHY controls
+ * to support ATA_FLAG_SATA_RESET. However, it is currently
+ * necessary to disable that flag, to solve misdetection problems.
+ * See http://bugme.osdl.org/show_bug.cgi?id=3352 for more info.
+ *
+ * This problem really needs to be investigated further. But in the
+ * meantime, we avoid ATA_FLAG_SATA_RESET to get people working.
+ */
+static struct ata_port_info nv_port_info = {
+ .sht = &nv_sht,
+ .host_flags = ATA_FLAG_SATA |
+ /* ATA_FLAG_SATA_RESET | */
+ ATA_FLAG_SRST |
+ ATA_FLAG_NO_LEGACY,
+ .pio_mask = NV_PIO_MASK,
+ .mwdma_mask = NV_MWDMA_MASK,
+ .udma_mask = NV_UDMA_MASK,
+ .port_ops = &nv_ops,
+};
+
+MODULE_AUTHOR("NVIDIA");
+MODULE_DESCRIPTION("low-level driver for NVIDIA nForce SATA controller");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, nv_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+static irqreturn_t nv_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ata_host_set *host_set = dev_instance;
+ struct nv_host *host = host_set->private_data;
+ unsigned int i;
+ unsigned int handled = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&host_set->lock, flags);
+
+ for (i = 0; i < host_set->n_ports; i++) {
+ struct ata_port *ap;
+
+ ap = host_set->ports[i];
+ if (ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) {
+ struct ata_queued_cmd *qc;
+
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (qc && (!(qc->tf.ctl & ATA_NIEN)))
+ handled += ata_host_intr(ap, qc);
+ }
+
+ }
+
+ if (host->host_desc->check_hotplug)
+ host->host_desc->check_hotplug(host_set);
+
+ spin_unlock_irqrestore(&host_set->lock, flags);
+
+ return IRQ_RETVAL(handled);
+}
+
+static u32 nv_scr_read (struct ata_port *ap, unsigned int sc_reg)
+{
+ struct ata_host_set *host_set = ap->host_set;
+ struct nv_host *host = host_set->private_data;
+
+ if (sc_reg > SCR_CONTROL)
+ return 0xffffffffU;
+
+ if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ return readl(ap->ioaddr.scr_addr + (sc_reg * 4));
+ else
+ return inl(ap->ioaddr.scr_addr + (sc_reg * 4));
+}
+
+static void nv_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
+{
+ struct ata_host_set *host_set = ap->host_set;
+ struct nv_host *host = host_set->private_data;
+
+ if (sc_reg > SCR_CONTROL)
+ return;
+
+ if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ writel(val, ap->ioaddr.scr_addr + (sc_reg * 4));
+ else
+ outl(val, ap->ioaddr.scr_addr + (sc_reg * 4));
+}
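+
+/*
+ * The SCR accessors above scale sc_reg by 4 because the SATA status,
+ * error and control registers are consecutive 32-bit words starting
+ * at scr_addr; CK804/MCP04 parts expose them via MMIO (hence the
+ * NV_HOST_FLAGS_SCR_MMIO test), while nForce2/3 reach them through
+ * legacy I/O ports.
+ */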
+
+static void nv_host_stop (struct ata_host_set *host_set)
+{
+ struct nv_host *host = host_set->private_data;
+
+ // Disable hotplug event interrupts.
+ if (host->host_desc->disable_hotplug)
+ host->host_desc->disable_hotplug(host_set);
+
+ kfree(host);
+}
+
+static int nv_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int printed_version = 0;
+ struct nv_host *host;
+ struct ata_port_info *ppi;
+ struct ata_probe_ent *probe_ent;
+ int rc;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ goto err_out;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out_disable;
+
+ rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+ rc = pci_set_consistent_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+
+ rc = -ENOMEM;
+
+ ppi = &nv_port_info;
+ probe_ent = ata_pci_init_native_mode(pdev, &ppi);
+ if (!probe_ent)
+ goto err_out_regions;
+
+ host = kmalloc(sizeof(struct nv_host), GFP_KERNEL);
+ if (!host)
+ goto err_out_free_ent;
+
+ host->host_desc = &nv_device_tbl[ent->driver_data];
+
+ probe_ent->private_data = host;
+
+ if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO) {
+ unsigned long base;
+
+ probe_ent->mmio_base = ioremap(pci_resource_start(pdev, 5),
+ pci_resource_len(pdev, 5));
+ if (probe_ent->mmio_base == NULL) {
+ rc = -EIO;
+ goto err_out_free_host;
+ }
+
+ base = (unsigned long)probe_ent->mmio_base;
+
+ probe_ent->port[0].scr_addr =
+ base + NV_PORT0_SCR_REG_OFFSET;
+ probe_ent->port[1].scr_addr =
+ base + NV_PORT1_SCR_REG_OFFSET;
+ } else {
+ probe_ent->port[0].scr_addr =
+ pci_resource_start(pdev, 5) | NV_PORT0_SCR_REG_OFFSET;
+ probe_ent->port[1].scr_addr =
+ pci_resource_start(pdev, 5) | NV_PORT1_SCR_REG_OFFSET;
+ }
+
+ pci_set_master(pdev);
+
+ rc = ata_device_add(probe_ent);
+ if (rc != NV_PORTS)
+ goto err_out_iounmap;
+
+ // Enable hotplug event interrupts.
+ if (host->host_desc->enable_hotplug)
+ host->host_desc->enable_hotplug(probe_ent);
+
+ kfree(probe_ent);
+
+ return 0;
+
+err_out_iounmap:
+ if (host->host_desc->host_flags & NV_HOST_FLAGS_SCR_MMIO)
+ iounmap(probe_ent->mmio_base);
+err_out_free_host:
+ kfree(host);
+err_out_free_ent:
+ kfree(probe_ent);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out_disable:
+ pci_disable_device(pdev);
+err_out:
+ return rc;
+}
+
+static void nv_enable_hotplug(struct ata_probe_ent *probe_ent)
+{
+ u8 intr_mask;
+
+ outb(NV_INT_STATUS_HOTPLUG,
+ probe_ent->port[0].scr_addr + NV_INT_STATUS);
+
+ intr_mask = inb(probe_ent->port[0].scr_addr + NV_INT_ENABLE);
+ intr_mask |= NV_INT_ENABLE_HOTPLUG;
+
+ outb(intr_mask, probe_ent->port[0].scr_addr + NV_INT_ENABLE);
+}
+
+static void nv_disable_hotplug(struct ata_host_set *host_set)
+{
+ u8 intr_mask;
+
+ intr_mask = inb(host_set->ports[0]->ioaddr.scr_addr + NV_INT_ENABLE);
+
+ intr_mask &= ~(NV_INT_ENABLE_HOTPLUG);
+
+ outb(intr_mask, host_set->ports[0]->ioaddr.scr_addr + NV_INT_ENABLE);
+}
+
+static void nv_check_hotplug(struct ata_host_set *host_set)
+{
+ u8 intr_status;
+
+ intr_status = inb(host_set->ports[0]->ioaddr.scr_addr + NV_INT_STATUS);
+
+ // Clear interrupt status.
+ outb(0xff, host_set->ports[0]->ioaddr.scr_addr + NV_INT_STATUS);
+
+ if (intr_status & NV_INT_STATUS_HOTPLUG) {
+ if (intr_status & NV_INT_STATUS_PDEV_ADDED)
+ printk(KERN_WARNING "nv_sata: "
+ "Primary device added\n");
+
+ if (intr_status & NV_INT_STATUS_PDEV_REMOVED)
+ printk(KERN_WARNING "nv_sata: "
+ "Primary device removed\n");
+
+ if (intr_status & NV_INT_STATUS_SDEV_ADDED)
+ printk(KERN_WARNING "nv_sata: "
+ "Secondary device added\n");
+
+ if (intr_status & NV_INT_STATUS_SDEV_REMOVED)
+ printk(KERN_WARNING "nv_sata: "
+ "Secondary device removed\n");
+ }
+}
+
+static void nv_enable_hotplug_ck804(struct ata_probe_ent *probe_ent)
+{
+ struct pci_dev *pdev = to_pci_dev(probe_ent->dev);
+ u8 intr_mask;
+ u8 regval;
+
+ pci_read_config_byte(pdev, NV_MCP_SATA_CFG_20, &regval);
+ regval |= NV_MCP_SATA_CFG_20_SATA_SPACE_EN;
+ pci_write_config_byte(pdev, NV_MCP_SATA_CFG_20, regval);
+
+ writeb(NV_INT_STATUS_HOTPLUG, probe_ent->mmio_base + NV_INT_STATUS_CK804);
+
+ intr_mask = readb(probe_ent->mmio_base + NV_INT_ENABLE_CK804);
+ intr_mask |= NV_INT_ENABLE_HOTPLUG;
+
+ writeb(intr_mask, probe_ent->mmio_base + NV_INT_ENABLE_CK804);
+}
+
+static void nv_disable_hotplug_ck804(struct ata_host_set *host_set)
+{
+ struct pci_dev *pdev = to_pci_dev(host_set->dev);
+ u8 intr_mask;
+ u8 regval;
+
+ intr_mask = readb(host_set->mmio_base + NV_INT_ENABLE_CK804);
+
+ intr_mask &= ~(NV_INT_ENABLE_HOTPLUG);
+
+ writeb(intr_mask, host_set->mmio_base + NV_INT_ENABLE_CK804);
+
+ pci_read_config_byte(pdev, NV_MCP_SATA_CFG_20, &regval);
+ regval &= ~NV_MCP_SATA_CFG_20_SATA_SPACE_EN;
+ pci_write_config_byte(pdev, NV_MCP_SATA_CFG_20, regval);
+}
+
+static void nv_check_hotplug_ck804(struct ata_host_set *host_set)
+{
+ u8 intr_status;
+
+ intr_status = readb(host_set->mmio_base + NV_INT_STATUS_CK804);
+
+ // Clear interrupt status.
+ writeb(0xff, host_set->mmio_base + NV_INT_STATUS_CK804);
+
+ if (intr_status & NV_INT_STATUS_HOTPLUG) {
+ if (intr_status & NV_INT_STATUS_PDEV_ADDED)
+ printk(KERN_WARNING "nv_sata: "
+ "Primary device added\n");
+
+ if (intr_status & NV_INT_STATUS_PDEV_REMOVED)
+ printk(KERN_WARNING "nv_sata: "
+ "Primary device removed\n");
+
+ if (intr_status & NV_INT_STATUS_SDEV_ADDED)
+ printk(KERN_WARNING "nv_sata: "
+ "Secondary device added\n");
+
+ if (intr_status & NV_INT_STATUS_SDEV_REMOVED)
+ printk(KERN_WARNING "nv_sata: "
+ "Secondary device removed\n");
+ }
+}
+
+static int __init nv_init(void)
+{
+ return pci_module_init(&nv_pci_driver);
+}
+
+static void __exit nv_exit(void)
+{
+ pci_unregister_driver(&nv_pci_driver);
+}
+
+module_init(nv_init);
+module_exit(nv_exit);
--- /dev/null
+/*
+ * sata_sx4.c - Promise SATA
+ *
+ * Maintained by: Jeff Garzik <jgarzik@pobox.com>
+ * Please ALWAYS copy linux-ide@vger.kernel.org
+ * on emails.
+ *
+ * Copyright 2003-2004 Red Hat, Inc.
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+#include <asm/io.h>
+#include "sata_promise.h"
+
+#define DRV_NAME "sata_sx4"
+#define DRV_VERSION "0.7"
+
+
+enum {
+ PDC_PRD_TBL = 0x44, /* Direct command DMA table addr */
+
+ PDC_PKT_SUBMIT = 0x40, /* Command packet pointer addr */
+ PDC_HDMA_PKT_SUBMIT = 0x100, /* Host DMA packet pointer addr */
+ PDC_INT_SEQMASK = 0x40, /* Mask of asserted SEQ INTs */
+ PDC_HDMA_CTLSTAT = 0x12C, /* Host DMA control / status */
+
+ PDC_20621_SEQCTL = 0x400,
+ PDC_20621_SEQMASK = 0x480,
+ PDC_20621_GENERAL_CTL = 0x484,
+ PDC_20621_PAGE_SIZE = (32 * 1024),
+
+ /* chosen, not constant, values; we design our own DIMM mem map */
+ PDC_20621_DIMM_WINDOW = 0x0C, /* page# for 32K DIMM window */
+ PDC_20621_DIMM_BASE = 0x00200000,
+ PDC_20621_DIMM_DATA = (64 * 1024),
+ PDC_DIMM_DATA_STEP = (256 * 1024),
+ PDC_DIMM_WINDOW_STEP = (8 * 1024),
+ PDC_DIMM_HOST_PRD = (6 * 1024),
+ PDC_DIMM_HOST_PKT = (128 * 0),
+ PDC_DIMM_HPKT_PRD = (128 * 1),
+ PDC_DIMM_ATA_PKT = (128 * 2),
+ PDC_DIMM_APKT_PRD = (128 * 3),
+ PDC_DIMM_HEADER_SZ = PDC_DIMM_APKT_PRD + 128,
+ PDC_PAGE_WINDOW = 0x40,
+ PDC_PAGE_DATA = PDC_PAGE_WINDOW +
+ (PDC_20621_DIMM_DATA / PDC_20621_PAGE_SIZE),
+ PDC_PAGE_SET = PDC_DIMM_DATA_STEP / PDC_20621_PAGE_SIZE,
+
+ PDC_CHIP0_OFS = 0xC0000, /* offset of chip #0 */
+
+ PDC_20621_ERR_MASK = (1<<19) | (1<<20) | (1<<21) | (1<<22) |
+ (1<<23),
+
+ board_20621 = 0, /* FastTrak S150 SX4 */
+
+ PDC_RESET = (1 << 11), /* HDMA reset */
+
+ PDC_MAX_HDMA = 32,
+ PDC_HDMA_Q_MASK = (PDC_MAX_HDMA - 1),
+
+ PDC_DIMM0_SPD_DEV_ADDRESS = 0x50,
+ PDC_DIMM1_SPD_DEV_ADDRESS = 0x51,
+ PDC_MAX_DIMM_MODULE = 0x02,
+ PDC_I2C_CONTROL_OFFSET = 0x48,
+ PDC_I2C_ADDR_DATA_OFFSET = 0x4C,
+ PDC_DIMM0_CONTROL_OFFSET = 0x80,
+ PDC_DIMM1_CONTROL_OFFSET = 0x84,
+ PDC_SDRAM_CONTROL_OFFSET = 0x88,
+ PDC_I2C_WRITE = 0x00000000,
+ PDC_I2C_READ = 0x00000040,
+ PDC_I2C_START = 0x00000080,
+ PDC_I2C_MASK_INT = 0x00000020,
+ PDC_I2C_COMPLETE = 0x00010000,
+ PDC_I2C_NO_ACK = 0x00100000,
+ PDC_DIMM_SPD_SUBADDRESS_START = 0x00,
+ PDC_DIMM_SPD_SUBADDRESS_END = 0x7F,
+ PDC_DIMM_SPD_ROW_NUM = 3,
+ PDC_DIMM_SPD_COLUMN_NUM = 4,
+ PDC_DIMM_SPD_MODULE_ROW = 5,
+ PDC_DIMM_SPD_TYPE = 11,
+ PDC_DIMM_SPD_FRESH_RATE = 12,
+ PDC_DIMM_SPD_BANK_NUM = 17,
+ PDC_DIMM_SPD_CAS_LATENCY = 18,
+ PDC_DIMM_SPD_ATTRIBUTE = 21,
+ PDC_DIMM_SPD_ROW_PRE_CHARGE = 27,
+ PDC_DIMM_SPD_ROW_ACTIVE_DELAY = 28,
+ PDC_DIMM_SPD_RAS_CAS_DELAY = 29,
+ PDC_DIMM_SPD_ACTIVE_PRECHARGE = 30,
+ PDC_DIMM_SPD_SYSTEM_FREQ = 126,
+ PDC_CTL_STATUS = 0x08,
+ PDC_DIMM_WINDOW_CTLR = 0x0C,
+ PDC_TIME_CONTROL = 0x3C,
+ PDC_TIME_PERIOD = 0x40,
+ PDC_TIME_COUNTER = 0x44,
+ PDC_GENERAL_CTLR = 0x484,
+ PCI_PLL_INIT = 0x8A531824,
+ PCI_X_TCOUNT = 0xEE1E5CFF
+};
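+
+/*
+ * A sketch of the per-port DIMM layout implied by the constants above
+ * (inferred from this file, not from a datasheet): each port owns an
+ * 8K control window (PDC_DIMM_WINDOW_STEP) holding the host DMA packet
+ * at offset 0, the host packet PRD at 128, the ATA packet at 256 and
+ * the ATA packet PRD at 384; PDC_DIMM_HEADER_SZ = 512 ends that fixed
+ * header, and the host S/G table sits at 6K. Bulk data lives apart,
+ * starting at PDC_20621_DIMM_DATA (64K) in 256K per-port strides.
+ */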
+
+
+struct pdc_port_priv {
+ u8 dimm_buf[(ATA_PRD_SZ * ATA_MAX_PRD) + 512];
+ u8 *pkt;
+ dma_addr_t pkt_dma;
+};
+
+struct pdc_host_priv {
+ void *dimm_mmio;
+
+ unsigned int doing_hdma;
+ unsigned int hdma_prod;
+ unsigned int hdma_cons;
+ struct {
+ struct ata_queued_cmd *qc;
+ unsigned int seq;
+ unsigned long pkt_ofs;
+ } hdma[32];
+};
+
+
+static int pdc_sata_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static irqreturn_t pdc20621_interrupt (int irq, void *dev_instance, struct pt_regs *regs);
+static void pdc_eng_timeout(struct ata_port *ap);
+static void pdc_20621_phy_reset (struct ata_port *ap);
+static int pdc_port_start(struct ata_port *ap);
+static void pdc_port_stop(struct ata_port *ap);
+static void pdc20621_qc_prep(struct ata_queued_cmd *qc);
+static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf);
+static void pdc_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf);
+static void pdc20621_host_stop(struct ata_host_set *host_set);
+static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe);
+static int pdc20621_detect_dimm(struct ata_probe_ent *pe);
+static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe,
+ u32 device, u32 subaddr, u32 *pdata);
+static int pdc20621_prog_dimm0(struct ata_probe_ent *pe);
+static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe);
+#ifdef ATA_VERBOSE_DEBUG
+static void pdc20621_get_from_dimm(struct ata_probe_ent *pe,
+ void *psource, u32 offset, u32 size);
+#endif
+static void pdc20621_put_to_dimm(struct ata_probe_ent *pe,
+ void *psource, u32 offset, u32 size);
+static void pdc20621_irq_clear(struct ata_port *ap);
+static int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc);
+
+
+static Scsi_Host_Template pdc_sata_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .ioctl = ata_scsi_ioctl,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = LIBATA_MAX_PRD,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ .use_clustering = ATA_SHT_USE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = ATA_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations pdc_20621_ops = {
+ .port_disable = ata_port_disable,
+ .tf_load = pdc_tf_load_mmio,
+ .tf_read = ata_tf_read,
+ .check_status = ata_check_status,
+ .exec_command = pdc_exec_command_mmio,
+ .dev_select = ata_std_dev_select,
+ .phy_reset = pdc_20621_phy_reset,
+ .qc_prep = pdc20621_qc_prep,
+ .qc_issue = pdc20621_qc_issue_prot,
+ .eng_timeout = pdc_eng_timeout,
+ .irq_handler = pdc20621_interrupt,
+ .irq_clear = pdc20621_irq_clear,
+ .port_start = pdc_port_start,
+ .port_stop = pdc_port_stop,
+ .host_stop = pdc20621_host_stop,
+};
+
+static struct ata_port_info pdc_port_info[] = {
+ /* board_20621 */
+ {
+ .sht = &pdc_sata_sht,
+ .host_flags = ATA_FLAG_SATA | ATA_FLAG_NO_LEGACY |
+ ATA_FLAG_SRST | ATA_FLAG_MMIO,
+ .pio_mask = 0x1f, /* pio0-4 */
+ .mwdma_mask = 0x07, /* mwdma0-2 */
+ .udma_mask = 0x7f, /* udma0-6 ; FIXME */
+ .port_ops = &pdc_20621_ops,
+ },
+};
+
+static struct pci_device_id pdc_sata_pci_tbl[] = {
+ { PCI_VENDOR_ID_PROMISE, 0x6622, PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ board_20621 },
+ { } /* terminate list */
+};
+
+
+static struct pci_driver pdc_sata_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = pdc_sata_pci_tbl,
+ .probe = pdc_sata_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+
+static void pdc20621_host_stop(struct ata_host_set *host_set)
+{
+ struct pdc_host_priv *hpriv = host_set->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+
+ iounmap(dimm_mmio);
+ kfree(hpriv);
+}
+
+static int pdc_port_start(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct pdc_port_priv *pp;
+ int rc;
+
+ rc = ata_port_start(ap);
+ if (rc)
+ return rc;
+
+ pp = kmalloc(sizeof(*pp), GFP_KERNEL);
+ if (!pp) {
+ rc = -ENOMEM;
+ goto err_out;
+ }
+ memset(pp, 0, sizeof(*pp));
+
+ pp->pkt = dma_alloc_coherent(dev, 128, &pp->pkt_dma, GFP_KERNEL);
+ if (!pp->pkt) {
+ rc = -ENOMEM;
+ goto err_out_kfree;
+ }
+
+ ap->private_data = pp;
+
+ return 0;
+
+err_out_kfree:
+ kfree(pp);
+err_out:
+ ata_port_stop(ap);
+ return rc;
+}
+
+
+static void pdc_port_stop(struct ata_port *ap)
+{
+ struct device *dev = ap->host_set->dev;
+ struct pdc_port_priv *pp = ap->private_data;
+
+ ap->private_data = NULL;
+ dma_free_coherent(dev, 128, pp->pkt, pp->pkt_dma);
+ kfree(pp);
+ ata_port_stop(ap);
+}
+
+
+static void pdc_20621_phy_reset (struct ata_port *ap)
+{
+ VPRINTK("ENTER\n");
+ ap->cbl = ATA_CBL_SATA;
+ ata_port_probe(ap);
+ ata_bus_reset(ap);
+}
+
+static inline void pdc20621_ata_sg(struct ata_taskfile *tf, u8 *buf,
+ unsigned int portno,
+ unsigned int total_len)
+{
+ u32 addr;
+ unsigned int dw = PDC_DIMM_APKT_PRD >> 2;
+ u32 *buf32 = (u32 *) buf;
+
+ /* output ATA packet S/G table */
+ addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
+ (PDC_DIMM_DATA_STEP * portno);
+ VPRINTK("ATA sg addr 0x%x, %d\n", addr, addr);
+ buf32[dw] = cpu_to_le32(addr);
+ buf32[dw + 1] = cpu_to_le32(total_len | ATA_PRD_EOT);
+
+ VPRINTK("ATA PSG @ %x == (0x%x, 0x%x)\n",
+ PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_APKT_PRD,
+ buf32[dw], buf32[dw + 1]);
+}
+
+static inline void pdc20621_host_sg(struct ata_taskfile *tf, u8 *buf,
+ unsigned int portno,
+ unsigned int total_len)
+{
+ u32 addr;
+ unsigned int dw = PDC_DIMM_HPKT_PRD >> 2;
+ u32 *buf32 = (u32 *) buf;
+
+ /* output Host DMA packet S/G table */
+ addr = PDC_20621_DIMM_BASE + PDC_20621_DIMM_DATA +
+ (PDC_DIMM_DATA_STEP * portno);
+
+ buf32[dw] = cpu_to_le32(addr);
+ buf32[dw + 1] = cpu_to_le32(total_len | ATA_PRD_EOT);
+
+ VPRINTK("HOST PSG @ %x == (0x%x, 0x%x)\n",
+ PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_HPKT_PRD,
+ buf32[dw], buf32[dw + 1]);
+}
+
+static inline unsigned int pdc20621_ata_pkt(struct ata_taskfile *tf,
+ unsigned int devno, u8 *buf,
+ unsigned int portno)
+{
+ unsigned int i, dw;
+ u32 *buf32 = (u32 *) buf;
+ u8 dev_reg;
+
+ unsigned int dimm_sg = PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_APKT_PRD;
+ VPRINTK("ENTER, dimm_sg == 0x%x, %d\n", dimm_sg, dimm_sg);
+
+ i = PDC_DIMM_ATA_PKT;
+
+ /*
+ * Set up ATA packet
+ */
+ if ((tf->protocol == ATA_PROT_DMA) && (!(tf->flags & ATA_TFLAG_WRITE)))
+ buf[i++] = PDC_PKT_READ;
+ else if (tf->protocol == ATA_PROT_NODATA)
+ buf[i++] = PDC_PKT_NODATA;
+ else
+ buf[i++] = 0;
+ buf[i++] = 0; /* reserved */
+ buf[i++] = portno + 1; /* seq. id */
+ buf[i++] = 0xff; /* delay seq. id */
+
+ /* dimm dma S/G, and next-pkt */
+ dw = i >> 2;
+ if (tf->protocol == ATA_PROT_NODATA)
+ buf32[dw] = 0;
+ else
+ buf32[dw] = cpu_to_le32(dimm_sg);
+ buf32[dw + 1] = 0;
+ i += 8;
+
+ if (devno == 0)
+ dev_reg = ATA_DEVICE_OBS;
+ else
+ dev_reg = ATA_DEVICE_OBS | ATA_DEV1;
+
+ /* select device */
+ buf[i++] = (1 << 5) | PDC_PKT_CLEAR_BSY | ATA_REG_DEVICE;
+ buf[i++] = dev_reg;
+
+ /* device control register */
+ buf[i++] = (1 << 5) | PDC_REG_DEVCTL;
+ buf[i++] = tf->ctl;
+
+ return i;
+}
+
+static inline void pdc20621_host_pkt(struct ata_taskfile *tf, u8 *buf,
+ unsigned int portno)
+{
+ unsigned int dw;
+ u32 tmp, *buf32 = (u32 *) buf;
+
+ unsigned int host_sg = PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_HOST_PRD;
+ unsigned int dimm_sg = PDC_20621_DIMM_BASE +
+ (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_HPKT_PRD;
+ VPRINTK("ENTER, dimm_sg == 0x%x, %d\n", dimm_sg, dimm_sg);
+ VPRINTK("host_sg == 0x%x, %d\n", host_sg, host_sg);
+
+ dw = PDC_DIMM_HOST_PKT >> 2;
+
+ /*
+ * Set up Host DMA packet
+ */
+ if ((tf->protocol == ATA_PROT_DMA) && (!(tf->flags & ATA_TFLAG_WRITE)))
+ tmp = PDC_PKT_READ;
+ else
+ tmp = 0;
+ tmp |= ((portno + 1 + 4) << 16); /* seq. id */
+ tmp |= (0xff << 24); /* delay seq. id */
+ buf32[dw + 0] = cpu_to_le32(tmp);
+ buf32[dw + 1] = cpu_to_le32(host_sg);
+ buf32[dw + 2] = cpu_to_le32(dimm_sg);
+ buf32[dw + 3] = 0;
+
+ VPRINTK("HOST PKT @ %x == (0x%x 0x%x 0x%x 0x%x)\n",
+ PDC_20621_DIMM_BASE + (PDC_DIMM_WINDOW_STEP * portno) +
+ PDC_DIMM_HOST_PKT,
+ buf32[dw + 0],
+ buf32[dw + 1],
+ buf32[dw + 2],
+ buf32[dw + 3]);
+}
+
+static void pdc20621_dma_prep(struct ata_queued_cmd *qc)
+{
+ struct scatterlist *sg = qc->sg;
+ struct ata_port *ap = qc->ap;
+ struct pdc_port_priv *pp = ap->private_data;
+ void *mmio = ap->host_set->mmio_base;
+ struct pdc_host_priv *hpriv = ap->host_set->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+ unsigned int portno = ap->port_no;
+ unsigned int i, last, idx, total_len = 0, sgt_len;
+ u32 *buf = (u32 *) &pp->dimm_buf[PDC_DIMM_HEADER_SZ];
+
+ assert(qc->flags & ATA_QCFLAG_DMAMAP);
+
+ VPRINTK("ata%u: ENTER\n", ap->id);
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ /*
+ * Build S/G table
+ */
+ last = qc->n_elem;
+ idx = 0;
+ for (i = 0; i < last; i++) {
+ buf[idx++] = cpu_to_le32(sg_dma_address(&sg[i]));
+ buf[idx++] = cpu_to_le32(sg_dma_len(&sg[i]));
+ total_len += sg[i].length;
+ }
+ buf[idx - 1] |= cpu_to_le32(ATA_PRD_EOT);
+ sgt_len = idx * 4;
+
+ /*
+ * Build ATA, host DMA packets
+ */
+ pdc20621_host_sg(&qc->tf, &pp->dimm_buf[0], portno, total_len);
+ pdc20621_host_pkt(&qc->tf, &pp->dimm_buf[0], portno);
+
+ pdc20621_ata_sg(&qc->tf, &pp->dimm_buf[0], portno, total_len);
+ i = pdc20621_ata_pkt(&qc->tf, qc->dev->devno, &pp->dimm_buf[0], portno);
+
+ if (qc->tf.flags & ATA_TFLAG_LBA48)
+ i = pdc_prep_lba48(&qc->tf, &pp->dimm_buf[0], i);
+ else
+ i = pdc_prep_lba28(&qc->tf, &pp->dimm_buf[0], i);
+
+ pdc_pkt_footer(&qc->tf, &pp->dimm_buf[0], i);
+
+ /* copy three S/G tables and two packets to DIMM MMIO window */
+ memcpy_toio(dimm_mmio + (portno * PDC_DIMM_WINDOW_STEP),
+ &pp->dimm_buf, PDC_DIMM_HEADER_SZ);
+ memcpy_toio(dimm_mmio + (portno * PDC_DIMM_WINDOW_STEP) +
+ PDC_DIMM_HOST_PRD,
+ &pp->dimm_buf[PDC_DIMM_HEADER_SZ], sgt_len);
+
+ /* force host FIFO dump */
+ writel(0x00000001, mmio + PDC_20621_GENERAL_CTL);
+
+ readl(dimm_mmio); /* MMIO PCI posting flush */
+
+ VPRINTK("ata pkt buf ofs %u, prd size %u, mmio copied\n", i, sgt_len);
+}
+
+static void pdc20621_nodata_prep(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ struct pdc_port_priv *pp = ap->private_data;
+ void *mmio = ap->host_set->mmio_base;
+ struct pdc_host_priv *hpriv = ap->host_set->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+ unsigned int portno = ap->port_no;
+ unsigned int i;
+
+ VPRINTK("ata%u: ENTER\n", ap->id);
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ i = pdc20621_ata_pkt(&qc->tf, qc->dev->devno, &pp->dimm_buf[0], portno);
+
+ if (qc->tf.flags & ATA_TFLAG_LBA48)
+ i = pdc_prep_lba48(&qc->tf, &pp->dimm_buf[0], i);
+ else
+ i = pdc_prep_lba28(&qc->tf, &pp->dimm_buf[0], i);
+
+ pdc_pkt_footer(&qc->tf, &pp->dimm_buf[0], i);
+
+ /* copy three S/G tables and two packets to DIMM MMIO window */
+ memcpy_toio(dimm_mmio + (portno * PDC_DIMM_WINDOW_STEP),
+ &pp->dimm_buf, PDC_DIMM_HEADER_SZ);
+
+ /* force host FIFO dump */
+ writel(0x00000001, mmio + PDC_20621_GENERAL_CTL);
+
+ readl(dimm_mmio); /* MMIO PCI posting flush */
+
+ VPRINTK("ata pkt buf ofs %u, mmio copied\n", i);
+}
+
+static void pdc20621_qc_prep(struct ata_queued_cmd *qc)
+{
+ switch (qc->tf.protocol) {
+ case ATA_PROT_DMA:
+ pdc20621_dma_prep(qc);
+ break;
+ case ATA_PROT_NODATA:
+ pdc20621_nodata_prep(qc);
+ break;
+ default:
+ break;
+ }
+}
+
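+/*
+ * Host DMA submissions are serialized in software: only one HDMA
+ * packet is in flight at a time (doing_hdma). pdc20621_push_hdma()
+ * parks further requests in the 32-entry hdma[] ring (indexed by
+ * hdma_prod/hdma_cons, wrapped with PDC_HDMA_Q_MASK), and
+ * pdc20621_pop_hdma() re-issues the next queued packet once the
+ * current one completes.
+ */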
+static void __pdc20621_push_hdma(struct ata_queued_cmd *qc,
+ unsigned int seq,
+ u32 pkt_ofs)
+{
+ struct ata_port *ap = qc->ap;
+ struct ata_host_set *host_set = ap->host_set;
+ void *mmio = host_set->mmio_base;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ writel(0x00000001, mmio + PDC_20621_SEQCTL + (seq * 4));
+ readl(mmio + PDC_20621_SEQCTL + (seq * 4)); /* flush */
+
+ writel(pkt_ofs, mmio + PDC_HDMA_PKT_SUBMIT);
+ readl(mmio + PDC_HDMA_PKT_SUBMIT); /* flush */
+}
+
+static void pdc20621_push_hdma(struct ata_queued_cmd *qc,
+ unsigned int seq,
+ u32 pkt_ofs)
+{
+ struct ata_port *ap = qc->ap;
+ struct pdc_host_priv *pp = ap->host_set->private_data;
+ unsigned int idx = pp->hdma_prod & PDC_HDMA_Q_MASK;
+
+ if (!pp->doing_hdma) {
+ __pdc20621_push_hdma(qc, seq, pkt_ofs);
+ pp->doing_hdma = 1;
+ return;
+ }
+
+ pp->hdma[idx].qc = qc;
+ pp->hdma[idx].seq = seq;
+ pp->hdma[idx].pkt_ofs = pkt_ofs;
+ pp->hdma_prod++;
+}
+
+static void pdc20621_pop_hdma(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ struct pdc_host_priv *pp = ap->host_set->private_data;
+ unsigned int idx = pp->hdma_cons & PDC_HDMA_Q_MASK;
+
+ /* if nothing on queue, we're done */
+ if (pp->hdma_prod == pp->hdma_cons) {
+ pp->doing_hdma = 0;
+ return;
+ }
+
+ __pdc20621_push_hdma(pp->hdma[idx].qc, pp->hdma[idx].seq,
+ pp->hdma[idx].pkt_ofs);
+ pp->hdma_cons++;
+}
+
+#ifdef ATA_VERBOSE_DEBUG
+static void pdc20621_dump_hdma(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ unsigned int port_no = ap->port_no;
+ struct pdc_host_priv *hpriv = ap->host_set->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+
+ dimm_mmio += (port_no * PDC_DIMM_WINDOW_STEP);
+ dimm_mmio += PDC_DIMM_HOST_PKT;
+
+ printk(KERN_ERR "HDMA[0] == 0x%08X\n", readl(dimm_mmio));
+ printk(KERN_ERR "HDMA[1] == 0x%08X\n", readl(dimm_mmio + 4));
+ printk(KERN_ERR "HDMA[2] == 0x%08X\n", readl(dimm_mmio + 8));
+ printk(KERN_ERR "HDMA[3] == 0x%08X\n", readl(dimm_mmio + 12));
+}
+#else
+static inline void pdc20621_dump_hdma(struct ata_queued_cmd *qc) { }
+#endif /* ATA_VERBOSE_DEBUG */
+
+static void pdc20621_packet_start(struct ata_queued_cmd *qc)
+{
+ struct ata_port *ap = qc->ap;
+ struct ata_host_set *host_set = ap->host_set;
+ unsigned int port_no = ap->port_no;
+ void *mmio = host_set->mmio_base;
+ unsigned int rw = (qc->tf.flags & ATA_TFLAG_WRITE);
+ u8 seq = (u8) (port_no + 1);
+ unsigned int port_ofs;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ VPRINTK("ata%u: ENTER\n", ap->id);
+
+ wmb(); /* flush PRD, pkt writes */
+
+ port_ofs = PDC_20621_DIMM_BASE + (PDC_DIMM_WINDOW_STEP * port_no);
+
+ /* if writing, we (1) DMA to DIMM, then (2) do ATA command */
+ if (rw && qc->tf.protocol == ATA_PROT_DMA) {
+ seq += 4;
+
+ pdc20621_dump_hdma(qc);
+ pdc20621_push_hdma(qc, seq, port_ofs + PDC_DIMM_HOST_PKT);
+ VPRINTK("queued ofs 0x%x (%u), seq %u\n",
+ port_ofs + PDC_DIMM_HOST_PKT,
+ port_ofs + PDC_DIMM_HOST_PKT,
+ seq);
+ } else {
+ writel(0x00000001, mmio + PDC_20621_SEQCTL + (seq * 4));
+ readl(mmio + PDC_20621_SEQCTL + (seq * 4)); /* flush */
+
+ writel(port_ofs + PDC_DIMM_ATA_PKT,
+ (void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+ readl((void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+ VPRINTK("submitted ofs 0x%x (%u), seq %u\n",
+ port_ofs + PDC_DIMM_ATA_PKT,
+ port_ofs + PDC_DIMM_ATA_PKT,
+ seq);
+ }
+}
+
+static int pdc20621_qc_issue_prot(struct ata_queued_cmd *qc)
+{
+ switch (qc->tf.protocol) {
+ case ATA_PROT_DMA:
+ case ATA_PROT_NODATA:
+ pdc20621_packet_start(qc);
+ return 0;
+
+ case ATA_PROT_ATAPI_DMA:
+ BUG();
+ break;
+
+ default:
+ break;
+ }
+
+ return ata_qc_issue_prot(qc);
+}
+
+static inline unsigned int pdc20621_host_intr( struct ata_port *ap,
+ struct ata_queued_cmd *qc,
+ unsigned int doing_hdma,
+ void *mmio)
+{
+ unsigned int port_no = ap->port_no;
+ unsigned int port_ofs =
+ PDC_20621_DIMM_BASE + (PDC_DIMM_WINDOW_STEP * port_no);
+ u8 status;
+ unsigned int handled = 0;
+
+ VPRINTK("ENTER\n");
+
+ if ((qc->tf.protocol == ATA_PROT_DMA) && /* read */
+ (!(qc->tf.flags & ATA_TFLAG_WRITE))) {
+
+ /* step two - DMA from DIMM to host */
+ if (doing_hdma) {
+ VPRINTK("ata%u: read hdma, 0x%x 0x%x\n", ap->id,
+ readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
+ /* get drive status; clear intr; complete txn */
+ ata_qc_complete(qc, ata_wait_idle(ap));
+ pdc20621_pop_hdma(qc);
+ }
+
+ /* step one - exec ATA command */
+ else {
+ u8 seq = (u8) (port_no + 1 + 4);
+ VPRINTK("ata%u: read ata, 0x%x 0x%x\n", ap->id,
+ readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
+
+ /* submit hdma pkt */
+ pdc20621_dump_hdma(qc);
+ pdc20621_push_hdma(qc, seq,
+ port_ofs + PDC_DIMM_HOST_PKT);
+ }
+ handled = 1;
+
+ } else if (qc->tf.protocol == ATA_PROT_DMA) { /* write */
+
+ /* step one - DMA from host to DIMM */
+ if (doing_hdma) {
+ u8 seq = (u8) (port_no + 1);
+ VPRINTK("ata%u: write hdma, 0x%x 0x%x\n", ap->id,
+ readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
+
+ /* submit ata pkt */
+ writel(0x00000001, mmio + PDC_20621_SEQCTL + (seq * 4));
+ readl(mmio + PDC_20621_SEQCTL + (seq * 4));
+ writel(port_ofs + PDC_DIMM_ATA_PKT,
+ (void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+ readl((void *) ap->ioaddr.cmd_addr + PDC_PKT_SUBMIT);
+ }
+
+ /* step two - execute ATA command */
+ else {
+ VPRINTK("ata%u: write ata, 0x%x 0x%x\n", ap->id,
+ readl(mmio + 0x104), readl(mmio + PDC_HDMA_CTLSTAT));
+ /* get drive status; clear intr; complete txn */
+ ata_qc_complete(qc, ata_wait_idle(ap));
+ pdc20621_pop_hdma(qc);
+ }
+ handled = 1;
+
+ /* command completion, but no data xfer */
+ } else if (qc->tf.protocol == ATA_PROT_NODATA) {
+
+ status = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000);
+ DPRINTK("BUS_NODATA (drv_stat 0x%X)\n", status);
+ ata_qc_complete(qc, status);
+ handled = 1;
+
+ } else {
+ ap->stats.idle_irq++;
+ }
+
+ return handled;
+}
+
+static void pdc20621_irq_clear(struct ata_port *ap)
+{
+ struct ata_host_set *host_set = ap->host_set;
+ void *mmio = host_set->mmio_base;
+
+ mmio += PDC_CHIP0_OFS;
+
+ readl(mmio + PDC_20621_SEQMASK);
+}
+
+static irqreturn_t pdc20621_interrupt (int irq, void *dev_instance, struct pt_regs *regs)
+{
+ struct ata_host_set *host_set = dev_instance;
+ struct ata_port *ap;
+ u32 mask = 0;
+ unsigned int i, tmp, port_no;
+ unsigned int handled = 0;
+ void *mmio_base;
+
+ VPRINTK("ENTER\n");
+
+ if (!host_set || !host_set->mmio_base) {
+ VPRINTK("QUICK EXIT\n");
+ return IRQ_NONE;
+ }
+
+ mmio_base = host_set->mmio_base;
+
+ /* reading should also clear interrupts */
+ mmio_base += PDC_CHIP0_OFS;
+ mask = readl(mmio_base + PDC_20621_SEQMASK);
+ VPRINTK("mask == 0x%x\n", mask);
+
+ if (mask == 0xffffffff) {
+ VPRINTK("QUICK EXIT 2\n");
+ return IRQ_NONE;
+ }
+ mask &= 0xffff; /* only 16 tags possible */
+ if (!mask) {
+ VPRINTK("QUICK EXIT 3\n");
+ return IRQ_NONE;
+ }
+
+ spin_lock(&host_set->lock);
+
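+ /*
+ * The SEQ interrupt bits map two per port: bits 1-4 are the ATA
+ * packet engines for ports 0-3, and bits 5-8 the matching host
+ * DMA engines, hence port_no = i - 1 (mod 4) and
+ * doing_hdma = (i > 4).
+ */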
+ for (i = 1; i < 9; i++) {
+ port_no = i - 1;
+ if (port_no > 3)
+ port_no -= 4;
+ if (port_no >= host_set->n_ports)
+ ap = NULL;
+ else
+ ap = host_set->ports[port_no];
+ tmp = mask & (1 << i);
+ VPRINTK("seq %u, port_no %u, ap %p, tmp %x\n", i, port_no, ap, tmp);
+ if (tmp && ap && (!(ap->flags & ATA_FLAG_PORT_DISABLED))) {
+ struct ata_queued_cmd *qc;
+
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (qc && (!(qc->tf.ctl & ATA_NIEN)))
+ handled += pdc20621_host_intr(ap, qc, (i > 4),
+ mmio_base);
+ }
+ }
+
+ spin_unlock(&host_set->lock);
+
+ VPRINTK("mask == 0x%x\n", mask);
+
+ VPRINTK("EXIT\n");
+
+ return IRQ_RETVAL(handled);
+}
+
+static void pdc_eng_timeout(struct ata_port *ap)
+{
+ u8 drv_stat;
+ struct ata_queued_cmd *qc;
+
+ DPRINTK("ENTER\n");
+
+ qc = ata_qc_from_tag(ap, ap->active_tag);
+ if (!qc) {
+ printk(KERN_ERR "ata%u: BUG: timeout without command\n",
+ ap->id);
+ goto out;
+ }
+
+ /* hack alert! We cannot use the supplied completion
+ * function from inside the ->eh_strategy_handler() thread.
+ * libata is the only user of ->eh_strategy_handler() in
+ * any kernel, so the default scsi_done() assumes it is
+ * not being called from the SCSI EH.
+ */
+ qc->scsidone = scsi_finish_command;
+
+ switch (qc->tf.protocol) {
+ case ATA_PROT_DMA:
+ case ATA_PROT_NODATA:
+ printk(KERN_ERR "ata%u: command timeout\n", ap->id);
+ ata_qc_complete(qc, ata_wait_idle(ap) | ATA_ERR);
+ break;
+
+ default:
+ drv_stat = ata_busy_wait(ap, ATA_BUSY | ATA_DRQ, 1000);
+
+ printk(KERN_ERR "ata%u: unknown timeout, cmd 0x%x stat 0x%x\n",
+ ap->id, qc->tf.command, drv_stat);
+
+ ata_qc_complete(qc, drv_stat);
+ break;
+ }
+
+out:
+ DPRINTK("EXIT\n");
+}
+
+static void pdc_tf_load_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+{
+ WARN_ON(tf->protocol == ATA_PROT_DMA ||
+ tf->protocol == ATA_PROT_NODATA);
+ ata_tf_load(ap, tf);
+}
+
+
+static void pdc_exec_command_mmio(struct ata_port *ap, struct ata_taskfile *tf)
+{
+ WARN_ON(tf->protocol == ATA_PROT_DMA ||
+ tf->protocol == ATA_PROT_NODATA);
+ ata_exec_command(ap, tf);
+}
+
+
+static void pdc_sata_setup_port(struct ata_ioports *port, unsigned long base)
+{
+ port->cmd_addr = base;
+ port->data_addr = base;
+ port->feature_addr =
+ port->error_addr = base + 0x4;
+ port->nsect_addr = base + 0x8;
+ port->lbal_addr = base + 0xc;
+ port->lbam_addr = base + 0x10;
+ port->lbah_addr = base + 0x14;
+ port->device_addr = base + 0x18;
+ port->command_addr =
+ port->status_addr = base + 0x1c;
+ port->altstatus_addr =
+ port->ctl_addr = base + 0x38;
+}
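+
+/*
+ * The taskfile block set up above is spaced on 32-bit boundaries
+ * (data at base + 0x0 through command/status at base + 0x1c, with
+ * alt-status/ctl at base + 0x38), rather than the byte-adjacent
+ * register layout of legacy IDE ports.
+ */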
+
+
+#ifdef ATA_VERBOSE_DEBUG
+static void pdc20621_get_from_dimm(struct ata_probe_ent *pe, void *psource,
+ u32 offset, u32 size)
+{
+ u32 window_size;
+ u16 idx;
+ u8 page_mask;
+ long dist;
+ void *mmio = pe->mmio_base;
+ struct pdc_host_priv *hpriv = pe->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ page_mask = 0x00;
+ window_size = 0x2000 * 4; /* 32K byte uchar size */
+ idx = (u16) (offset / window_size);
+
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+
+ offset -= (idx * window_size);
+ idx++;
+ dist = ((long) (window_size - (offset + size))) >= 0 ? size :
+ (long) (window_size - offset);
+ memcpy_fromio((char *) psource, (char *) (dimm_mmio + offset / 4),
+ dist);
+
+ psource += dist;
+ size -= dist;
+ while ((long) size >= (long) window_size) {
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ memcpy_fromio((char *) psource, (char *) (dimm_mmio),
+ window_size / 4);
+ psource += window_size;
+ size -= window_size;
+ idx++;
+ }
+
+ if (size) {
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ memcpy_fromio((char *) psource, (char *) (dimm_mmio),
+ size / 4);
+ }
+}
+#endif
+
+
+static void pdc20621_put_to_dimm(struct ata_probe_ent *pe, void *psource,
+ u32 offset, u32 size)
+{
+ u32 window_size;
+ u16 idx;
+ u8 page_mask;
+ long dist;
+ void *mmio = pe->mmio_base;
+ struct pdc_host_priv *hpriv = pe->private_data;
+ void *dimm_mmio = hpriv->dimm_mmio;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ page_mask = 0x00;
+ window_size = 0x2000 * 4; /* 32K byte uchar size */
+ idx = (u16) (offset / window_size);
+
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ offset -= (idx * window_size);
+ idx++;
+ dist = ((long)(s32)(window_size - (offset + size))) >= 0 ? size :
+ (long) (window_size - offset);
+ memcpy_toio((char *) (dimm_mmio + offset / 4), (char *) psource, dist);
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+
+ psource += dist;
+ size -= dist;
+ while ((long) size >= (long) window_size) {
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ memcpy_toio((char *) (dimm_mmio), (char *) psource,
+ window_size / 4);
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+ psource += window_size;
+ size -= window_size;
+ idx++;
+ }
+
+ if (size) {
+ writel(((idx) << page_mask), mmio + PDC_DIMM_WINDOW_CTLR);
+ readl(mmio + PDC_DIMM_WINDOW_CTLR);
+ memcpy_toio((char *) (dimm_mmio), (char *) psource, size / 4);
+ writel(0x01, mmio + PDC_GENERAL_CTLR);
+ readl(mmio + PDC_GENERAL_CTLR);
+ }
+}
+
+
+static unsigned int pdc20621_i2c_read(struct ata_probe_ent *pe, u32 device,
+ u32 subaddr, u32 *pdata)
+{
+ void *mmio = pe->mmio_base;
+ u32 i2creg = 0;
+ u32 status;
+ u32 count = 0;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ i2creg |= device << 24;
+ i2creg |= subaddr << 16;
+
+ /* Set the device and subaddress */
+ writel(i2creg, mmio + PDC_I2C_ADDR_DATA_OFFSET);
+ readl(mmio + PDC_I2C_ADDR_DATA_OFFSET);
+
+ /* Write Control to perform read operation, mask int */
+ writel(PDC_I2C_READ | PDC_I2C_START | PDC_I2C_MASK_INT,
+ mmio + PDC_I2C_CONTROL_OFFSET);
+
+ for (count = 0; count <= 1000; count++) {
+ status = readl(mmio + PDC_I2C_CONTROL_OFFSET);
+ if (status & PDC_I2C_COMPLETE) {
+ status = readl(mmio + PDC_I2C_ADDR_DATA_OFFSET);
+ break;
+ } else if (count == 1000)
+ return 0;
+ }
+
+ *pdata = (status >> 8) & 0x000000ff;
+ return 1;
+}
+
+
+static int pdc20621_detect_dimm(struct ata_probe_ent *pe)
+{
+ u32 data = 0;
+ if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS,
+ PDC_DIMM_SPD_SYSTEM_FREQ, &data)) {
+ if (data == 100)
+ return 100;
+ } else
+ return 0;
+
+ if (pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS, 9, &data)) {
+ if (data <= 0x75)
+ return 133;
+ } else
+ return 0;
+
+ return 0;
+}
+
+
+static int pdc20621_prog_dimm0(struct ata_probe_ent *pe)
+{
+ u32 spd0[50];
+ u32 data = 0;
+ int size, i;
+ u8 bdimmsize;
+ void *mmio = pe->mmio_base;
+ static const struct {
+ unsigned int reg;
+ unsigned int ofs;
+ } pdc_i2c_read_data [] = {
+ { PDC_DIMM_SPD_TYPE, 11 },
+ { PDC_DIMM_SPD_FRESH_RATE, 12 },
+ { PDC_DIMM_SPD_COLUMN_NUM, 4 },
+ { PDC_DIMM_SPD_ATTRIBUTE, 21 },
+ { PDC_DIMM_SPD_ROW_NUM, 3 },
+ { PDC_DIMM_SPD_BANK_NUM, 17 },
+ { PDC_DIMM_SPD_MODULE_ROW, 5 },
+ { PDC_DIMM_SPD_ROW_PRE_CHARGE, 27 },
+ { PDC_DIMM_SPD_ROW_ACTIVE_DELAY, 28 },
+ { PDC_DIMM_SPD_RAS_CAS_DELAY, 29 },
+ { PDC_DIMM_SPD_ACTIVE_PRECHARGE, 30 },
+ { PDC_DIMM_SPD_CAS_LATENCY, 18 },
+ };
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ for (i = 0; i < ARRAY_SIZE(pdc_i2c_read_data); i++)
+ pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS,
+ pdc_i2c_read_data[i].reg,
+ &spd0[pdc_i2c_read_data[i].ofs]);
+
+ data |= (spd0[4] - 8) | ((spd0[21] != 0) << 3) | ((spd0[3]-11) << 4);
+ data |= ((spd0[17] / 4) << 6) | ((spd0[5] / 2) << 7) |
+ ((((spd0[27] + 9) / 10) - 1) << 8) ;
+ data |= (((((spd0[29] > spd0[28])
+ ? spd0[29] : spd0[28]) + 9) / 10) - 1) << 10;
+ data |= ((spd0[30] - spd0[29] + 9) / 10 - 2) << 12;
+
+ if (spd0[18] & 0x08)
+ data |= ((0x03) << 14);
+ else if (spd0[18] & 0x04)
+ data |= ((0x02) << 14);
+ else if (spd0[18] & 0x01)
+ data |= ((0x01) << 14);
+ else
+ data |= (0 << 14);
+
+ /*
+ * Compute the module size as a power of two (bdimmsize is its
+ * log2) and merge the resulting size field into the DIMM
+ * start/end address programming below.
+ */
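+
+ /*
+ * A worked example with hypothetical SPD values: 10 column bits,
+ * 12 row bits, 4 banks and a single-sided module give
+ * bdimmsize = 10 + 0 + 12 + 2 + 3 = 27, so the module size is
+ * (1 << 27) >> 20 = 128MB.
+ */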
+
+ bdimmsize = spd0[4] + (spd0[5] / 2) + spd0[3] + (spd0[17] / 2) + 3;
+ size = (1 << bdimmsize) >> 20; /* size = xxx(MB) */
+ data |= (((size / 16) - 1) << 16);
+ data |= (0 << 23);
+ data |= 8;
+ writel(data, mmio + PDC_DIMM0_CONTROL_OFFSET);
+ readl(mmio + PDC_DIMM0_CONTROL_OFFSET);
+ return size;
+}
+
+
+static unsigned int pdc20621_prog_dimm_global(struct ata_probe_ent *pe)
+{
+ u32 data, spd0;
+ int error, i;
+ void *mmio = pe->mmio_base;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ /*
+ Set To Default : DIMM Module Global Control Register (0x022259F1)
+ DIMM Arbitration Disable (bit 20)
+ DIMM Data/Control Output Driving Selection (bit12 - bit15)
+ Refresh Enable (bit 17)
+ */
+
+ data = 0x022259F1;
+ writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET);
+ readl(mmio + PDC_SDRAM_CONTROL_OFFSET);
+
+ /* Turn on for ECC */
+ pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS,
+ PDC_DIMM_SPD_TYPE, &spd0);
+ if (spd0 == 0x02) {
+ data |= (0x01 << 16);
+ writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET);
+ readl(mmio + PDC_SDRAM_CONTROL_OFFSET);
+ printk(KERN_ERR "Local DIMM ECC Enabled\n");
+ }
+
+ /* DIMM Initialization Select/Enable (bit 18/19) */
+ data &= (~(1<<18));
+ data |= (1<<19);
+ writel(data, mmio + PDC_SDRAM_CONTROL_OFFSET);
+
+ error = 1;
+ for (i = 1; i <= 10; i++) { /* polling ~5 secs */
+ data = readl(mmio + PDC_SDRAM_CONTROL_OFFSET);
+ if (!(data & (1<<19))) {
+ error = 0;
+ break;
+ }
+ msleep(i*100);
+ }
+ return error;
+}
+
+
+static unsigned int pdc20621_dimm_init(struct ata_probe_ent *pe)
+{
+ int speed, size, length;
+ u32 addr, spd0, pci_status;
+ u32 tmp = 0;
+ u32 time_period = 0;
+ u32 tcount = 0;
+ u32 ticks = 0;
+ u32 clock = 0;
+ u32 fparam = 0;
+ void *mmio = pe->mmio_base;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ /* Initialize PLL based upon PCI Bus Frequency */
+
+ /* Initialize Time Period Register */
+ writel(0xffffffff, mmio + PDC_TIME_PERIOD);
+ time_period = readl(mmio + PDC_TIME_PERIOD);
+ VPRINTK("Time Period Register (0x40): 0x%x\n", time_period);
+
+ /* Enable timer */
+ writel(0x00001a0, mmio + PDC_TIME_CONTROL);
+ readl(mmio + PDC_TIME_CONTROL);
+
+ /* Wait 3 seconds */
+ msleep(3000);
+
+ /*
+ When the timer is enabled, the counter is decremented on
+ every internal clock cycle.
+ */
+
+ tcount = readl(mmio + PDC_TIME_COUNTER);
+ VPRINTK("Time Counter Register (0x44): 0x%x\n", tcount);
+
+ /*
+ If the SX4 is on a PCI-X bus, then after 3 seconds the timer
+ counter register should still be >= (0xffffffff - 3x10^8).
+ */
+ if (tcount >= PCI_X_TCOUNT) {
+ ticks = (time_period - tcount);
+ VPRINTK("Num counters 0x%x (%d)\n", ticks, ticks);
+
+ clock = (ticks / 300000);
+ VPRINTK("10 * Internal clk = 0x%x (%d)\n", clock, clock);
+
+ clock = (clock * 33);
+ VPRINTK("10 * Internal clk * 33 = 0x%x (%d)\n", clock, clock);
+
+ /* PLL F Param (bit 22:16) */
+ fparam = (1400000 / clock) - 2;
+ VPRINTK("PLL F Param: 0x%x (%d)\n", fparam, fparam);
+
+ /* OD param = 0x2 (bit 31:30), R param = 0x5 (bit 29:25) */
+ pci_status = (0x8a001824 | (fparam << 16));
+ } else
+ pci_status = PCI_PLL_INIT;
+
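+ /*
+ * A worked example under an assumed 66MHz internal clock: three
+ * seconds yields ticks ~= 2x10^8, clock = ticks / 300000 ~= 660
+ * (ten times the clock rate in MHz), clock * 33 ~= 21780, and
+ * fparam = (1400000 / 21780) - 2 ~= 62, which lands in the PLL F
+ * field at bits 22:16.
+ */
+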
+ /* Initialize PLL. */
+ VPRINTK("pci_status: 0x%x\n", pci_status);
+ writel(pci_status, mmio + PDC_CTL_STATUS);
+ readl(mmio + PDC_CTL_STATUS);
+
+ /*
+ Read the DIMM's SPD data over the I2C interface and program
+ the DIMM module controller accordingly.
+ */
+ if (!(speed = pdc20621_detect_dimm(pe))) {
+ printk(KERN_ERR "Detect Local DIMM Fail\n");
+ return 1; /* DIMM error */
+ }
+ VPRINTK("Local DIMM Speed = %d\n", speed);
+
+ /* Programming DIMM0 Module Control Register (index_CID0:80h) */
+ size = pdc20621_prog_dimm0(pe);
+ VPRINTK("Local DIMM Size = %dMB\n",size);
+
+ /* Programming DIMM Module Global Control Register (index_CID0:88h) */
+ if (pdc20621_prog_dimm_global(pe)) {
+ printk(KERN_ERR "Programming DIMM Module Global Control Register Fail\n");
+ return 1;
+ }
+
+#ifdef ATA_VERBOSE_DEBUG
+ {
+ u8 test_pattern1[40] = {0x55,0xAA,'P','r','o','m','i','s','e',' ',
+ 'N','o','t',' ','Y','e','t',' ','D','e','f','i','n','e','d',' ',
+ '1','.','1','0',
+ '9','8','0','3','1','6','1','2',0,0};
+ u8 test_pattern2[40] = {0};
+
+ pdc20621_put_to_dimm(pe, (void *) test_pattern2, 0x10040, 40);
+ pdc20621_put_to_dimm(pe, (void *) test_pattern2, 0x40, 40);
+
+ pdc20621_put_to_dimm(pe, (void *) test_pattern1, 0x10040, 40);
+ pdc20621_get_from_dimm(pe, (void *) test_pattern2, 0x40, 40);
+ printk(KERN_ERR "%x, %x, %s\n", test_pattern2[0],
+ test_pattern2[1], &(test_pattern2[2]));
+ pdc20621_get_from_dimm(pe, (void *) test_pattern2, 0x10040,
+ 40);
+ printk(KERN_ERR "%x, %x, %s\n", test_pattern2[0],
+ test_pattern2[1], &(test_pattern2[2]));
+
+ pdc20621_put_to_dimm(pe, (void *) test_pattern1, 0x40, 40);
+ pdc20621_get_from_dimm(pe, (void *) test_pattern2, 0x40, 40);
+ printk(KERN_ERR "%x, %x, %s\n", test_pattern2[0],
+ test_pattern2[1], &(test_pattern2[2]));
+ }
+#endif
+
+ /* ECC initialization. */
+
+ pdc20621_i2c_read(pe, PDC_DIMM0_SPD_DEV_ADDRESS,
+ PDC_DIMM_SPD_TYPE, &spd0);
+ if (spd0 == 0x02) {
+ VPRINTK("Start ECC initialization\n");
+ addr = 0;
+ length = size * 1024 * 1024;
+ while (addr < length) {
+ pdc20621_put_to_dimm(pe, (void *) &tmp, addr,
+ sizeof(u32));
+ addr += sizeof(u32);
+ }
+ VPRINTK("Finish ECC initialization\n");
+ }
+ return 0;
+}
+
+
+static void pdc_20621_init(struct ata_probe_ent *pe)
+{
+ u32 tmp;
+ void *mmio = pe->mmio_base;
+
+ /* hard-code chip #0 */
+ mmio += PDC_CHIP0_OFS;
+
+ /*
+ * Select page 0x40 for our 32k DIMM window
+ */
+ tmp = readl(mmio + PDC_20621_DIMM_WINDOW) & 0xffff0000;
+ tmp |= PDC_PAGE_WINDOW; /* page 40h; arbitrarily selected */
+ writel(tmp, mmio + PDC_20621_DIMM_WINDOW);
+
+ /*
+ * Reset Host DMA
+ */
+ tmp = readl(mmio + PDC_HDMA_CTLSTAT);
+ tmp |= PDC_RESET;
+ writel(tmp, mmio + PDC_HDMA_CTLSTAT);
+ readl(mmio + PDC_HDMA_CTLSTAT); /* flush */
+
+ udelay(10);
+
+ tmp = readl(mmio + PDC_HDMA_CTLSTAT);
+ tmp &= ~PDC_RESET;
+ writel(tmp, mmio + PDC_HDMA_CTLSTAT);
+ readl(mmio + PDC_HDMA_CTLSTAT); /* flush */
+}
+
+static int pdc_sata_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ static int printed_version;
+ struct ata_probe_ent *probe_ent = NULL;
+ unsigned long base;
+ void *mmio_base, *dimm_mmio = NULL;
+ struct pdc_host_priv *hpriv = NULL;
+ unsigned int board_idx = (unsigned int) ent->driver_data;
+ int rc;
+
+ if (!printed_version++)
+ printk(KERN_DEBUG DRV_NAME " version " DRV_VERSION "\n");
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+ rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+ rc = pci_set_consistent_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+
+ probe_ent = kmalloc(sizeof(*probe_ent), GFP_KERNEL);
+ if (probe_ent == NULL) {
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ memset(probe_ent, 0, sizeof(*probe_ent));
+ probe_ent->dev = pci_dev_to_dev(pdev);
+ INIT_LIST_HEAD(&probe_ent->node);
+
+ mmio_base = ioremap(pci_resource_start(pdev, 3),
+ pci_resource_len(pdev, 3));
+ if (mmio_base == NULL) {
+ rc = -ENOMEM;
+ goto err_out_free_ent;
+ }
+ base = (unsigned long) mmio_base;
+
+ hpriv = kmalloc(sizeof(*hpriv), GFP_KERNEL);
+ if (!hpriv) {
+ rc = -ENOMEM;
+ goto err_out_iounmap;
+ }
+ memset(hpriv, 0, sizeof(*hpriv));
+
+ dimm_mmio = ioremap(pci_resource_start(pdev, 4),
+ pci_resource_len(pdev, 4));
+ if (!dimm_mmio) {
+ kfree(hpriv);
+ rc = -ENOMEM;
+ goto err_out_iounmap;
+ }
+
+ hpriv->dimm_mmio = dimm_mmio;
+
+ probe_ent->sht = pdc_port_info[board_idx].sht;
+ probe_ent->host_flags = pdc_port_info[board_idx].host_flags;
+ probe_ent->pio_mask = pdc_port_info[board_idx].pio_mask;
+ probe_ent->mwdma_mask = pdc_port_info[board_idx].mwdma_mask;
+ probe_ent->udma_mask = pdc_port_info[board_idx].udma_mask;
+ probe_ent->port_ops = pdc_port_info[board_idx].port_ops;
+
+ probe_ent->irq = pdev->irq;
+ probe_ent->irq_flags = SA_SHIRQ;
+ probe_ent->mmio_base = mmio_base;
+
+ probe_ent->private_data = hpriv;
+ base += PDC_CHIP0_OFS;
+
+ probe_ent->n_ports = 4;
+ pdc_sata_setup_port(&probe_ent->port[0], base + 0x200);
+ pdc_sata_setup_port(&probe_ent->port[1], base + 0x280);
+ pdc_sata_setup_port(&probe_ent->port[2], base + 0x300);
+ pdc_sata_setup_port(&probe_ent->port[3], base + 0x380);
+
+ pci_set_master(pdev);
+
+ /* initialize adapter */
+ /* initialize local dimm */
+ if (pdc20621_dimm_init(probe_ent)) {
+ rc = -ENOMEM;
+ goto err_out_iounmap_dimm;
+ }
+ pdc_20621_init(probe_ent);
+
+ /* FIXME: check ata_device_add return value */
+ ata_device_add(probe_ent);
+ kfree(probe_ent);
+
+ return 0;
+
+err_out_iounmap_dimm: /* only get to this label if 20621 */
+ kfree(hpriv);
+ iounmap(dimm_mmio);
+err_out_iounmap:
+ iounmap(mmio_base);
+err_out_free_ent:
+ kfree(probe_ent);
+err_out_regions:
+ pci_release_regions(pdev);
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+}
+
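+/*
+ * Note (editorial): the error labels above unwind in strict reverse
+ * order of acquisition (dimm_mmio and hpriv, then mmio_base, then
+ * probe_ent, then the PCI regions, and finally the device itself),
+ * so each failure path releases exactly what was set up before it.
+ */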
+
+static int __init pdc_sata_init(void)
+{
+ return pci_module_init(&pdc_sata_pci_driver);
+}
+
+
+static void __exit pdc_sata_exit(void)
+{
+ pci_unregister_driver(&pdc_sata_pci_driver);
+}
+
+
+MODULE_AUTHOR("Jeff Garzik");
+MODULE_DESCRIPTION("Promise SATA low-level driver");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, pdc_sata_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+module_init(pdc_sata_init);
+module_exit(pdc_sata_exit);
--- /dev/null
+/*
+ * sata_uli.c - ULi Electronics SATA
+ *
+ * The contents of this file are subject to the Open
+ * Software License version 1.1 that can be found at
+ * http://www.opensource.org/licenses/osl-1.1.txt and is included herein
+ * by reference.
+ *
+ * Alternatively, the contents of this file may be used under the terms
+ * of the GNU General Public License version 2 (the "GPL") as distributed
+ * in the kernel source COPYING file, in which case the provisions of
+ * the GPL are applicable instead of the above. If you wish to allow
+ * the use of your version of this file only under the terms of the
+ * GPL and not to allow others to use your version of this file under
+ * the OSL, indicate your decision by deleting the provisions above and
+ * replace them with the notice and other provisions required by the GPL.
+ * If you do not delete the provisions above, a recipient may use your
+ * version of this file under either the OSL or the GPL.
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/init.h>
+#include <linux/blkdev.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include "scsi.h"
+#include <scsi/scsi_host.h>
+#include <linux/libata.h>
+
+#define DRV_NAME "sata_uli"
+#define DRV_VERSION "0.5"
+
+enum {
+ uli_5289 = 0,
+ uli_5287 = 1,
+ uli_5281 = 2,
+
+ /* PCI configuration registers */
+ ULI5287_BASE = 0x90, /* sata0 phy SCR registers */
+ ULI5287_OFFS = 0x10, /* offset from sata0->sata1 phy regs */
+ ULI5281_BASE = 0x60, /* sata0 phy SCR registers */
+ ULI5281_OFFS = 0x60, /* offset from sata0->sata1 phy regs */
+};
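+
+/*
+ * Illustrative layout (derived from the constants above): on the 5287
+ * the per-port SCR blocks sit in PCI config space at 0x90 (port 0),
+ * 0xa0 (port 1, BASE + OFFS), 0xd0 (port 2, BASE + OFFS*4) and 0xe0
+ * (port 3, BASE + OFFS*5); uli_init_one() below fills in
+ * probe_ent->port[].scr_addr accordingly.
+ */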
+
+static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent);
+static u32 uli_scr_read (struct ata_port *ap, unsigned int sc_reg);
+static void uli_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val);
+
+static struct pci_device_id uli_pci_tbl[] = {
+ { PCI_VENDOR_ID_AL, 0x5289, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5289 },
+ { PCI_VENDOR_ID_AL, 0x5287, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5287 },
+ { PCI_VENDOR_ID_AL, 0x5281, PCI_ANY_ID, PCI_ANY_ID, 0, 0, uli_5281 },
+ { } /* terminate list */
+};
+
+
+static struct pci_driver uli_pci_driver = {
+ .name = DRV_NAME,
+ .id_table = uli_pci_tbl,
+ .probe = uli_init_one,
+ .remove = ata_pci_remove_one,
+};
+
+static Scsi_Host_Template uli_sht = {
+ .module = THIS_MODULE,
+ .name = DRV_NAME,
+ .ioctl = ata_scsi_ioctl,
+ .queuecommand = ata_scsi_queuecmd,
+ .eh_strategy_handler = ata_scsi_error,
+ .can_queue = ATA_DEF_QUEUE,
+ .this_id = ATA_SHT_THIS_ID,
+ .sg_tablesize = LIBATA_MAX_PRD,
+ .max_sectors = ATA_MAX_SECTORS,
+ .cmd_per_lun = ATA_SHT_CMD_PER_LUN,
+ .emulated = ATA_SHT_EMULATED,
+ .use_clustering = ATA_SHT_USE_CLUSTERING,
+ .proc_name = DRV_NAME,
+ .dma_boundary = ATA_DMA_BOUNDARY,
+ .slave_configure = ata_scsi_slave_config,
+ .bios_param = ata_std_bios_param,
+};
+
+static struct ata_port_operations uli_ops = {
+ .port_disable = ata_port_disable,
+
+ .tf_load = ata_tf_load,
+ .tf_read = ata_tf_read,
+ .check_status = ata_check_status,
+ .exec_command = ata_exec_command,
+ .dev_select = ata_std_dev_select,
+
+ .phy_reset = sata_phy_reset,
+
+ .bmdma_setup = ata_bmdma_setup,
+ .bmdma_start = ata_bmdma_start,
+ .qc_prep = ata_qc_prep,
+ .qc_issue = ata_qc_issue_prot,
+
+ .eng_timeout = ata_eng_timeout,
+
+ .irq_handler = ata_interrupt,
+ .irq_clear = ata_bmdma_irq_clear,
+
+ .scr_read = uli_scr_read,
+ .scr_write = uli_scr_write,
+
+ .port_start = ata_port_start,
+ .port_stop = ata_port_stop,
+};
+
+static struct ata_port_info uli_port_info = {
+ .sht = &uli_sht,
+ .host_flags = ATA_FLAG_SATA | ATA_FLAG_SATA_RESET |
+ ATA_FLAG_NO_LEGACY,
+ .pio_mask = 0x03, //support pio mode 4 (FIXME)
+ .udma_mask = 0x7f, //support udma mode 6
+ .port_ops = &uli_ops,
+};
+
+
+MODULE_AUTHOR("Peer Chen");
+MODULE_DESCRIPTION("low-level driver for ULi Electronics SATA controller");
+MODULE_LICENSE("GPL");
+MODULE_DEVICE_TABLE(pci, uli_pci_tbl);
+MODULE_VERSION(DRV_VERSION);
+
+static unsigned int get_scr_cfg_addr(struct ata_port *ap, unsigned int sc_reg)
+{
+ return ap->ioaddr.scr_addr + (4 * sc_reg);
+}
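+
+/*
+ * Illustrative mapping (from the formula above): with SCR_STATUS = 0,
+ * SCR_ERROR = 1 and SCR_CONTROL = 2, port 0 on a 5287
+ * (scr_addr = 0x90) has its status, error and control registers at
+ * PCI config offsets 0x90, 0x94 and 0x98 respectively.
+ */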
+
+static u32 uli_scr_cfg_read (struct ata_port *ap, unsigned int sc_reg)
+{
+ struct pci_dev *pdev = to_pci_dev(ap->host_set->dev);
+ unsigned int cfg_addr = get_scr_cfg_addr(ap, sc_reg);
+ u32 val;
+
+ pci_read_config_dword(pdev, cfg_addr, &val);
+ return val;
+}
+
+static void uli_scr_cfg_write (struct ata_port *ap, unsigned int scr, u32 val)
+{
+ struct pci_dev *pdev = to_pci_dev(ap->host_set->dev);
+ unsigned int cfg_addr = get_scr_cfg_addr(ap, scr);
+
+ pci_write_config_dword(pdev, cfg_addr, val);
+}
+
+static u32 uli_scr_read (struct ata_port *ap, unsigned int sc_reg)
+{
+ if (sc_reg > SCR_CONTROL)
+ return 0xffffffffU;
+
+ return uli_scr_cfg_read(ap, sc_reg);
+}
+
+static void uli_scr_write (struct ata_port *ap, unsigned int sc_reg, u32 val)
+{
+ if (sc_reg > SCR_CONTROL) //SCR_CONTROL=2, SCR_ERROR=1, SCR_STATUS=0
+ return;
+
+ uli_scr_cfg_write(ap, sc_reg, val);
+}
+
+/* move to PCI layer, integrate w/ MSI stuff */
+static void pci_enable_intx(struct pci_dev *pdev)
+{
+ u16 pci_command;
+
+ pci_read_config_word(pdev, PCI_COMMAND, &pci_command);
+ if (pci_command & PCI_COMMAND_INTX_DISABLE) {
+ pci_command &= ~PCI_COMMAND_INTX_DISABLE;
+ pci_write_config_word(pdev, PCI_COMMAND, pci_command);
+ }
+}
+
+static int uli_init_one (struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ struct ata_probe_ent *probe_ent;
+ struct ata_port_info *ppi;
+ int rc;
+ unsigned int board_idx = (unsigned int) ent->driver_data;
+
+ rc = pci_enable_device(pdev);
+ if (rc)
+ return rc;
+
+ rc = pci_request_regions(pdev, DRV_NAME);
+ if (rc)
+ goto err_out;
+
+ rc = pci_set_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+ rc = pci_set_consistent_dma_mask(pdev, ATA_DMA_MASK);
+ if (rc)
+ goto err_out_regions;
+
+ ppi = &uli_port_info;
+ probe_ent = ata_pci_init_native_mode(pdev, &ppi);
+ if (!probe_ent) {
+ rc = -ENOMEM;
+ goto err_out_regions;
+ }
+
+ switch (board_idx) {
+ case uli_5287:
+ probe_ent->port[0].scr_addr = ULI5287_BASE;
+ probe_ent->port[1].scr_addr = ULI5287_BASE + ULI5287_OFFS;
+ probe_ent->n_ports = 4;
+
+ probe_ent->port[2].cmd_addr = pci_resource_start(pdev, 0) + 8;
+ probe_ent->port[2].altstatus_addr =
+ probe_ent->port[2].ctl_addr =
+ (pci_resource_start(pdev, 1) | ATA_PCI_CTL_OFS) + 4;
+ probe_ent->port[2].bmdma_addr = pci_resource_start(pdev, 4) + 16;
+ probe_ent->port[2].scr_addr = ULI5287_BASE + ULI5287_OFFS*4;
+
+ probe_ent->port[3].cmd_addr = pci_resource_start(pdev, 2) + 8;
+ probe_ent->port[3].altstatus_addr =
+ probe_ent->port[3].ctl_addr =
+ (pci_resource_start(pdev, 3) | ATA_PCI_CTL_OFS) + 4;
+ probe_ent->port[3].bmdma_addr = pci_resource_start(pdev, 4) + 24;
+ probe_ent->port[3].scr_addr = ULI5287_BASE + ULI5287_OFFS*5;
+
+ ata_std_ports(&probe_ent->port[2]);
+ ata_std_ports(&probe_ent->port[3]);
+ break;
+
+ case uli_5289:
+ probe_ent->port[0].scr_addr = ULI5287_BASE;
+ probe_ent->port[1].scr_addr = ULI5287_BASE + ULI5287_OFFS;
+ break;
+
+ case uli_5281:
+ probe_ent->port[0].scr_addr = ULI5281_BASE;
+ probe_ent->port[1].scr_addr = ULI5281_BASE + ULI5281_OFFS;
+ break;
+
+ default:
+ BUG();
+ break;
+ }
+
+ pci_set_master(pdev);
+ pci_enable_intx(pdev);
+
+ /* FIXME: check ata_device_add return value */
+ ata_device_add(probe_ent);
+ kfree(probe_ent);
+
+ return 0;
+
+err_out_regions:
+ pci_release_regions(pdev);
+
+err_out:
+ pci_disable_device(pdev);
+ return rc;
+
+}
+
+static int __init uli_init(void)
+{
+ return pci_module_init(&uli_pci_driver);
+}
+
+static void __exit uli_exit(void)
+{
+ pci_unregister_driver(&uli_pci_driver);
+}
+
+
+module_init(uli_init);
+module_exit(uli_exit);
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart.h
+ *
+ * Driver for CPM (SCC/SMC) serial ports
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ *
+ */
+#ifndef CPM_UART_H
+#define CPM_UART_H
+
+#include <linux/config.h>
+
+#if defined(CONFIG_CPM2)
+#include "cpm_uart_cpm2.h"
+#elif defined(CONFIG_8xx)
+#include "cpm_uart_cpm1.h"
+#endif
+
+#define SERIAL_CPM_MAJOR 204
+#define SERIAL_CPM_MINOR 46
+
+#define IS_SMC(pinfo) (pinfo->flags & FLAG_SMC)
+#define IS_DISCARDING(pinfo) (pinfo->flags & FLAG_DISCARDING)
+#define FLAG_DISCARDING 0x00000004 /* when set, don't discard */
+#define FLAG_SMC 0x00000002
+#define FLAG_CONSOLE 0x00000001
+
+#define UART_SMC1 0
+#define UART_SMC2 1
+#define UART_SCC1 2
+#define UART_SCC2 3
+#define UART_SCC3 4
+#define UART_SCC4 5
+
+#define UART_NR 6
+
+#define RX_NUM_FIFO 4
+#define RX_BUF_SIZE 32
+#define TX_NUM_FIFO 4
+#define TX_BUF_SIZE 32
+
+struct uart_cpm_port {
+ struct uart_port port;
+ u16 rx_nrfifos;
+ u16 rx_fifosize;
+ u16 tx_nrfifos;
+ u16 tx_fifosize;
+ smc_t *smcp;
+ smc_uart_t *smcup;
+ scc_t *sccp;
+ scc_uart_t *sccup;
+ volatile cbd_t *rx_bd_base;
+ volatile cbd_t *rx_cur;
+ volatile cbd_t *tx_bd_base;
+ volatile cbd_t *tx_cur;
+ unsigned char *tx_buf;
+ unsigned char *rx_buf;
+ u32 flags;
+ void (*set_lineif)(struct uart_cpm_port *);
+ u8 brg;
+ uint dp_addr;
+ void *mem_addr;
+ dma_addr_t dma_addr;
+ /* helpers */
+ int baud;
+ int bits;
+ /* Keep track of 'odd' SMC2 wirings */
+ int is_portb;
+};
+
+extern int cpm_uart_port_map[UART_NR];
+extern int cpm_uart_nr;
+extern struct uart_cpm_port cpm_uart_ports[UART_NR];
+
+/* these are located in their respective files */
+void cpm_line_cr_cmd(int line, int cmd);
+int cpm_uart_init_portdesc(void);
+int cpm_uart_allocbuf(struct uart_cpm_port *pinfo, unsigned int is_con);
+void cpm_uart_freebuf(struct uart_cpm_port *pinfo);
+
+void smc1_lineif(struct uart_cpm_port *pinfo);
+void smc2_lineif(struct uart_cpm_port *pinfo);
+void scc1_lineif(struct uart_cpm_port *pinfo);
+void scc2_lineif(struct uart_cpm_port *pinfo);
+void scc3_lineif(struct uart_cpm_port *pinfo);
+void scc4_lineif(struct uart_cpm_port *pinfo);
+
+#endif /* CPM_UART_H */
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart.c
+ *
+ * Driver for CPM (SCC/SMC) serial ports; core driver
+ *
+ * Based on arch/ppc/cpm2_io/uart.c by Dan Malek
+ * Based on ppc8xx.c by Thomas Gleixner
+ * Based on drivers/serial/amba.c by Russell King
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com) (CPM2)
+ * Pantelis Antoniou (panto@intracom.gr) (CPM1)
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ * (C) 2004 Intracom, S.A.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/device.h>
+#include <linux/bootmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/delay.h>
+
+#if defined(CONFIG_SERIAL_CPM_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/serial_core.h>
+#include <linux/kernel.h>
+
+#include "cpm_uart.h"
+
+/***********************************************************************/
+
+/* Track which ports are configured as uarts */
+int cpm_uart_port_map[UART_NR];
+/* How many ports did we config as uarts */
+int cpm_uart_nr;
+
+/**************************************************************/
+
+static int cpm_uart_tx_pump(struct uart_port *port);
+static void cpm_uart_init_smc(struct uart_cpm_port *pinfo);
+static void cpm_uart_init_scc(struct uart_cpm_port *pinfo);
+static void cpm_uart_initbd(struct uart_cpm_port *pinfo);
+
+/**************************************************************/
+
+/*
+ * Check whether the transmit buffers have been processed
+ */
+static unsigned int cpm_uart_tx_empty(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile cbd_t *bdp = pinfo->tx_bd_base;
+ int ret = 0;
+
+ while (1) {
+ if (bdp->cbd_sc & BD_SC_READY)
+ break;
+
+ if (bdp->cbd_sc & BD_SC_WRAP) {
+ ret = TIOCSER_TEMT;
+ break;
+ }
+ bdp++;
+ }
+
+ pr_debug("CPM uart[%d]:tx_empty: %d\n", port->line, ret);
+
+ return ret;
+}
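+
+/*
+ * Note (editorial): the walk above starts at tx_bd_base and stops at
+ * the first descriptor the CPM still owns (BD_SC_READY set, so not
+ * empty) or, failing that, at the ring's WRAP descriptor, in which
+ * case every buffer has drained and TIOCSER_TEMT is reported.
+ */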
+
+static void cpm_uart_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+ /* Whee. Do nothing. */
+}
+
+static unsigned int cpm_uart_get_mctrl(struct uart_port *port)
+{
+ /* Whee. Do nothing. */
+ return TIOCM_CAR | TIOCM_DSR | TIOCM_CTS;
+}
+
+/*
+ * Stop transmitter
+ */
+static void cpm_uart_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:stop tx\n", port->line);
+
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm &= ~SMCM_TX;
+ else
+ sccp->scc_sccm &= ~UART_SCCM_TX;
+}
+
+/*
+ * Start transmitter
+ */
+static void cpm_uart_start_tx(struct uart_port *port, unsigned int tty_start)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:start tx\n", port->line);
+
+ if (IS_SMC(pinfo)) {
+ if (smcp->smc_smcm & SMCM_TX)
+ return;
+ } else {
+ if (sccp->scc_sccm & UART_SCCM_TX)
+ return;
+ }
+
+ if (cpm_uart_tx_pump(port) != 0) {
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm |= SMCM_TX;
+ else
+ sccp->scc_sccm |= UART_SCCM_TX;
+ }
+}
+
+/*
+ * Stop receiver
+ */
+static void cpm_uart_stop_rx(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:stop rx\n", port->line);
+
+ if (IS_SMC(pinfo))
+ smcp->smc_smcm &= ~SMCM_RX;
+ else
+ sccp->scc_sccm &= ~UART_SCCM_RX;
+}
+
+/*
+ * Enable Modem status interrupts
+ */
+static void cpm_uart_enable_ms(struct uart_port *port)
+{
+ pr_debug("CPM uart[%d]:enable ms\n", port->line);
+}
+
+/*
+ * Generate a break.
+ */
+static void cpm_uart_break_ctl(struct uart_port *port, int break_state)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int line = pinfo - cpm_uart_ports;
+
+ pr_debug("CPM uart[%d]:break ctrl, break_state: %d\n", port->line,
+ break_state);
+
+ if (break_state)
+ cpm_line_cr_cmd(line, CPM_CR_STOP_TX);
+ else
+ cpm_line_cr_cmd(line, CPM_CR_RESTART_TX);
+}
+
+/*
+ * Transmit characters, refill buffer descriptor, if possible
+ */
+static void cpm_uart_int_tx(struct uart_port *port, struct pt_regs *regs)
+{
+ pr_debug("CPM uart[%d]:TX INT\n", port->line);
+
+ cpm_uart_tx_pump(port);
+}
+
+/*
+ * Receive characters
+ */
+static void cpm_uart_int_rx(struct uart_port *port, struct pt_regs *regs)
+{
+ int i;
+ unsigned char ch, *cp;
+ struct tty_struct *tty = port->info->tty;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile cbd_t *bdp;
+ u16 status;
+ unsigned int flg;
+
+ pr_debug("CPM uart[%d]:RX INT\n", port->line);
+
+ /* Just loop through the closed BDs and copy the characters into
+ * the buffer.
+ */
+ bdp = pinfo->rx_cur;
+ for (;;) {
+ /* get status */
+ status = bdp->cbd_sc;
+ /* If this one is empty, return happy */
+ if (status & BD_SC_EMPTY)
+ break;
+
+ /* get number of characters, and check space in the flip buffer */
+ i = bdp->cbd_datlen;
+
+ /* If there is not enough room in the tty flip buffer, we try
+ * again later, on the next rx interrupt or on a timeout
+ */
+ if ((tty->flip.count + i) >= TTY_FLIPBUF_SIZE) {
+ tty->flip.work.func((void *)tty);
+ if ((tty->flip.count + i) >= TTY_FLIPBUF_SIZE) {
+ printk(KERN_WARNING "TTY_DONT_FLIP set\n");
+ return;
+ }
+ }
+
+ /* get pointer */
+ cp = (unsigned char *)bus_to_virt(bdp->cbd_bufaddr);
+
+ /* loop through the buffer */
+ while (i-- > 0) {
+ ch = *cp++;
+ port->icount.rx++;
+ flg = TTY_NORMAL;
+
+ if (status &
+ (BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV))
+ goto handle_error;
+ if (uart_handle_sysrq_char(port, ch, regs))
+ continue;
+
+ error_return:
+ *tty->flip.char_buf_ptr++ = ch;
+ *tty->flip.flag_buf_ptr++ = flg;
+ tty->flip.count++;
+
+ } /* End while (i--) */
+
+ /* This BD is ready to be used again. Clear status, get the next one. */
+ bdp->cbd_sc &= ~(BD_SC_BR | BD_SC_FR | BD_SC_PR | BD_SC_OV);
+ bdp->cbd_sc |= BD_SC_EMPTY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->rx_bd_base;
+ else
+ bdp++;
+ } /* End for (;;) */
+
+ /* Write back buffer pointer */
+ pinfo->rx_cur = (volatile cbd_t *) bdp;
+
+ /* activate BH processing */
+ tty_flip_buffer_push(tty);
+
+ return;
+
+ /* Error processing */
+
+ handle_error:
+ /* Statistics */
+ if (status & BD_SC_BR)
+ port->icount.brk++;
+ if (status & BD_SC_PR)
+ port->icount.parity++;
+ if (status & BD_SC_FR)
+ port->icount.frame++;
+ if (status & BD_SC_OV)
+ port->icount.overrun++;
+
+ /* Mask out ignored conditions */
+ status &= port->read_status_mask;
+
+ /* Handle the remaining ones */
+ if (status & BD_SC_BR)
+ flg = TTY_BREAK;
+ else if (status & BD_SC_PR)
+ flg = TTY_PARITY;
+ else if (status & BD_SC_FR)
+ flg = TTY_FRAME;
+
+ /* overrun does not affect the current character ! */
+ if (status & BD_SC_OV) {
+ ch = 0;
+ flg = TTY_OVERRUN;
+ /* We skip this buffer */
+ /* CHECK: is there really nothing sensible in it? */
+ /* ASSUMPTION: it contains nothing valid */
+ i = 0;
+ }
+#ifdef SUPPORT_SYSRQ
+ port->sysrq = 0;
+#endif
+ goto error_return;
+}
+
+/*
+ * Asynchronous mode interrupt handler
+ */
+static irqreturn_t cpm_uart_int(int irq, void *data, struct pt_regs *regs)
+{
+ u8 events;
+ struct uart_port *port = (struct uart_port *)data;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:IRQ\n", port->line);
+
+ if (IS_SMC(pinfo)) {
+ events = smcp->smc_smce;
+ if (events & SMCM_BRKE)
+ uart_handle_break(port);
+ if (events & SMCM_RX)
+ cpm_uart_int_rx(port, regs);
+ if (events & SMCM_TX)
+ cpm_uart_int_tx(port, regs);
+ smcp->smc_smce = events;
+ } else {
+ events = sccp->scc_scce;
+ if (events & UART_SCCM_BRKE)
+ uart_handle_break(port);
+ if (events & UART_SCCM_RX)
+ cpm_uart_int_rx(port, regs);
+ if (events & UART_SCCM_TX)
+ cpm_uart_int_tx(port, regs);
+ sccp->scc_scce = events;
+ }
+ return (events) ? IRQ_HANDLED : IRQ_NONE;
+}
+
+static int cpm_uart_startup(struct uart_port *port)
+{
+ int retval;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+
+ pr_debug("CPM uart[%d]:startup\n", port->line);
+
+ /* Install interrupt handler. */
+ retval = request_irq(port->irq, cpm_uart_int, 0, "cpm_uart", port);
+ if (retval)
+ return retval;
+
+ /* Startup rx-int */
+ if (IS_SMC(pinfo)) {
+ pinfo->smcp->smc_smcm |= SMCM_RX;
+ pinfo->smcp->smc_smcmr |= SMCMR_REN;
+ } else {
+ pinfo->sccp->scc_sccm |= UART_SCCM_RX;
+ }
+
+ return 0;
+}
+
+/*
+ * Shutdown the uart
+ */
+static void cpm_uart_shutdown(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int line = pinfo - cpm_uart_ports;
+
+ pr_debug("CPM uart[%d]:shutdown\n", port->line);
+
+ /* free interrupt handler */
+ free_irq(port->irq, port);
+
+ /* If the port is not the console, disable Rx and Tx. */
+ if (!(pinfo->flags & FLAG_CONSOLE)) {
+ /* Stop uarts */
+ if (IS_SMC(pinfo)) {
+ volatile smc_t *smcp = pinfo->smcp;
+ smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ smcp->smc_smcm &= ~(SMCM_RX | SMCM_TX);
+ } else {
+ volatile scc_t *sccp = pinfo->sccp;
+ sccp->scc_gsmrl &= ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ sccp->scc_sccm &= ~(UART_SCCM_TX | UART_SCCM_RX);
+ }
+
+ /* Shut them really down and reinit buffer descriptors */
+ cpm_line_cr_cmd(line, CPM_CR_STOP_TX);
+ cpm_uart_initbd(pinfo);
+ }
+}
+
+static void cpm_uart_set_termios(struct uart_port *port,
+ struct termios *termios, struct termios *old)
+{
+ int baud;
+ unsigned long flags;
+ u16 cval, scval, prev_mode;
+ int bits, sbits;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ volatile smc_t *smcp = pinfo->smcp;
+ volatile scc_t *sccp = pinfo->sccp;
+
+ pr_debug("CPM uart[%d]:set_termios\n", port->line);
+
+ baud = uart_get_baud_rate(port, termios, old, 0, port->uartclk / 16);
+
+ /* Character length programmed into the mode register is the
+ * sum of: 1 start bit, number of data bits, 0 or 1 parity bit,
+ * 1 or 2 stop bits, minus 1.
+ * The value 'bits' counts this for us.
+ */
+ cval = 0;
+ scval = 0;
+
+ /* byte size */
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ bits = 5;
+ break;
+ case CS6:
+ bits = 6;
+ break;
+ case CS7:
+ bits = 7;
+ break;
+ case CS8:
+ bits = 8;
+ break;
+ /* Never happens, but GCC is too dumb to figure it out */
+ default:
+ bits = 8;
+ break;
+ }
+ sbits = bits - 5;
+
+ if (termios->c_cflag & CSTOPB) {
+ cval |= SMCMR_SL; /* Two stops */
+ scval |= SCU_PSMR_SL;
+ bits++;
+ }
+
+ if (termios->c_cflag & PARENB) {
+ cval |= SMCMR_PEN;
+ scval |= SCU_PSMR_PEN;
+ bits++;
+ if (!(termios->c_cflag & PARODD)) {
+ cval |= SMCMR_PM_EVEN;
+ scval |= (SCU_PSMR_REVP | SCU_PSMR_TEVP);
+ }
+ }
+
+ /*
+ * Set up parity check flag
+ */
+#define RELEVANT_IFLAG(iflag) (iflag & (IGNBRK|BRKINT|IGNPAR|PARMRK|INPCK))
+
+ port->read_status_mask = (BD_SC_EMPTY | BD_SC_OV);
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= BD_SC_FR | BD_SC_PR;
+ if ((termios->c_iflag & BRKINT) || (termios->c_iflag & PARMRK))
+ port->read_status_mask |= BD_SC_BR;
+
+ /*
+ * Characters to ignore
+ */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= BD_SC_PR | BD_SC_FR;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= BD_SC_BR;
+ /*
+ * If we're ignoring parity and break indicators, ignore
+ * overruns too (for real raw support).
+ */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= BD_SC_OV;
+ }
+ /*
+ * !!! ignore all characters if CREAD is not set
+ */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->read_status_mask &= ~BD_SC_EMPTY;
+
+ spin_lock_irqsave(&port->lock, flags);
+
+ /* Start bit has not been added (so don't, because we would just
+ * subtract it later), and we need to add one for the number of
+ * stop bits (there is always at least one).
+ */
+ bits++;
+ if (IS_SMC(pinfo)) {
+ /* Set the mode register. We want to keep a copy of the
+ * enables, because we want to put them back if they were
+ * present.
+ */
+ prev_mode = smcp->smc_smcmr;
+ smcp->smc_smcmr = smcr_mk_clen(bits) | cval | SMCMR_SM_UART;
+ smcp->smc_smcmr |= (prev_mode & (SMCMR_REN | SMCMR_TEN));
+ } else {
+ sccp->scc_psmr = (sbits << 12) | scval;
+ }
+
+ cpm_set_brg(pinfo->brg - 1, baud);
+ spin_unlock_irqrestore(&port->lock, flags);
+
+}
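+
+/*
+ * Worked example (editorial): for an 8N1 line, bits starts at 8 data
+ * bits, no stop or parity increments apply, and the final bits++
+ * accounts for the single stop bit, so the SMC mode register is
+ * programmed with smcr_mk_clen(9), matching the hard-coded value in
+ * cpm_uart_init_smc() below.
+ */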
+
+static const char *cpm_uart_type(struct uart_port *port)
+{
+ pr_debug("CPM uart[%d]:uart_type\n", port->line);
+
+ return port->type == PORT_CPM ? "CPM UART" : NULL;
+}
+
+/*
+ * verify the new serial_struct (for TIOCSSERIAL).
+ */
+static int cpm_uart_verify_port(struct uart_port *port,
+ struct serial_struct *ser)
+{
+ int ret = 0;
+
+ pr_debug("CPM uart[%d]:verify_port\n", port->line);
+
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_CPM)
+ ret = -EINVAL;
+ if (ser->irq < 0 || ser->irq >= NR_IRQS)
+ ret = -EINVAL;
+ if (ser->baud_base < 9600)
+ ret = -EINVAL;
+ return ret;
+}
+
+/*
+ * Transmit characters, refill buffer descriptor, if possible
+ */
+static int cpm_uart_tx_pump(struct uart_port *port)
+{
+ volatile cbd_t *bdp;
+ unsigned char *p;
+ int count;
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ struct circ_buf *xmit = &port->info->xmit;
+
+ /* Handle xon/xoff */
+ if (port->x_char) {
+ /* Pick next descriptor and fill from buffer */
+ bdp = pinfo->tx_cur;
+
+ p = bus_to_virt(bdp->cbd_bufaddr);
+ *p++ = port->x_char;
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+ /* Get next BD. */
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->tx_bd_base;
+ else
+ bdp++;
+ pinfo->tx_cur = bdp;
+
+ port->icount.tx++;
+ port->x_char = 0;
+ return 1;
+ }
+
+ if (uart_circ_empty(xmit) || uart_tx_stopped(port)) {
+ cpm_uart_stop_tx(port, 0);
+ return 0;
+ }
+
+ /* Pick next descriptor and fill from buffer */
+ bdp = pinfo->tx_cur;
+
+ while (!(bdp->cbd_sc & BD_SC_READY) && (xmit->tail != xmit->head)) {
+ count = 0;
+ p = bus_to_virt(bdp->cbd_bufaddr);
+ while (count < pinfo->tx_fifosize) {
+ *p++ = xmit->buf[xmit->tail];
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ port->icount.tx++;
+ count++;
+ if (xmit->head == xmit->tail)
+ break;
+ }
+ bdp->cbd_datlen = count;
+ bdp->cbd_sc |= BD_SC_READY;
+ /* Get next BD. */
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = pinfo->tx_bd_base;
+ else
+ bdp++;
+ }
+ pinfo->tx_cur = bdp;
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(port);
+
+ if (uart_circ_empty(xmit)) {
+ cpm_uart_stop_tx(port, 0);
+ return 0;
+ }
+
+ return 1;
+}
+
+/*
+ * init buffer descriptors
+ */
+static void cpm_uart_initbd(struct uart_cpm_port *pinfo)
+{
+ int i;
+ u8 *mem_addr;
+ volatile cbd_t *bdp;
+
+ pr_debug("CPM uart[%d]:initbd\n", pinfo->port.line);
+
+ /* Set the physical address of the host memory
+ * buffers in the buffer descriptors, and the
+ * virtual address for us to work with.
+ */
+ mem_addr = pinfo->mem_addr;
+ bdp = pinfo->rx_cur = pinfo->rx_bd_base;
+ for (i = 0; i < (pinfo->rx_nrfifos - 1); i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_EMPTY | BD_SC_INTRPT;
+ mem_addr += pinfo->rx_fifosize;
+ }
+
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_WRAP | BD_SC_EMPTY | BD_SC_INTRPT;
+
+ /* Set the physical address of the host memory
+ * buffers in the buffer descriptors, and the
+ * virtual address for us to work with.
+ */
+ mem_addr = pinfo->mem_addr + L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize);
+ bdp = pinfo->tx_cur = pinfo->tx_bd_base;
+ for (i = 0; i < (pinfo->tx_nrfifos - 1); i++, bdp++) {
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_INTRPT;
+ mem_addr += pinfo->tx_fifosize;
+ }
+
+ bdp->cbd_bufaddr = virt_to_bus(mem_addr);
+ bdp->cbd_sc = BD_SC_WRAP | BD_SC_INTRPT;
+}
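+
+/*
+ * Worked layout (illustrative, using the default geometry of 4 RX and
+ * 4 TX fifos of 32 bytes each): the RX buffers occupy mem_addr[0..127]
+ * and the TX buffers start at mem_addr + L1_CACHE_ALIGN(128), i.e. at
+ * offset 128 on CPUs whose cache line size divides 128.
+ */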
+
+static void cpm_uart_init_scc(struct uart_cpm_port *pinfo)
+{
+ int line = pinfo - cpm_uart_ports;
+ volatile scc_t *scp;
+ volatile scc_uart_t *sup;
+
+ pr_debug("CPM uart[%d]:init_scc\n", pinfo->port.line);
+
+ scp = pinfo->sccp;
+ sup = pinfo->sccup;
+
+ /* Store address */
+ pinfo->sccup->scc_genscc.scc_rbase = (unsigned char *)pinfo->rx_bd_base - DPRAM_BASE;
+ pinfo->sccup->scc_genscc.scc_tbase = (unsigned char *)pinfo->tx_bd_base - DPRAM_BASE;
+
+ /* Set up the uart parameters in the
+ * parameter ram.
+ */
+
+ cpm_set_scc_fcr(sup);
+
+ sup->scc_genscc.scc_mrblr = pinfo->rx_fifosize;
+ sup->scc_maxidl = pinfo->rx_fifosize;
+ sup->scc_brkcr = 1;
+ sup->scc_parec = 0;
+ sup->scc_frmec = 0;
+ sup->scc_nosec = 0;
+ sup->scc_brkec = 0;
+ sup->scc_uaddr1 = 0;
+ sup->scc_uaddr2 = 0;
+ sup->scc_toseq = 0;
+ sup->scc_char1 = 0x8000;
+ sup->scc_char2 = 0x8000;
+ sup->scc_char3 = 0x8000;
+ sup->scc_char4 = 0x8000;
+ sup->scc_char5 = 0x8000;
+ sup->scc_char6 = 0x8000;
+ sup->scc_char7 = 0x8000;
+ sup->scc_char8 = 0x8000;
+ sup->scc_rccm = 0xc0ff;
+
+ /* Send the CPM an initialize command.
+ */
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+
+ /* Set UART mode, 8 bit, no parity, one stop.
+ * Enable receive and transmit.
+ */
+ scp->scc_gsmrh = 0;
+ scp->scc_gsmrl =
+ (SCC_GSMRL_MODE_UART | SCC_GSMRL_TDCR_16 | SCC_GSMRL_RDCR_16);
+
+ /* Enable rx interrupts and clear all pending events. */
+ scp->scc_sccm = 0;
+ scp->scc_scce = 0xffff;
+ scp->scc_dsr = 0x7e7e;
+ scp->scc_psmr = 0x3000;
+
+ scp->scc_gsmrl |= (SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+}
+
+static void cpm_uart_init_smc(struct uart_cpm_port *pinfo)
+{
+ int line = pinfo - cpm_uart_ports;
+ volatile smc_t *sp;
+ volatile smc_uart_t *up;
+
+ pr_debug("CPM uart[%d]:init_smc\n", pinfo->port.line);
+
+ sp = pinfo->smcp;
+ up = pinfo->smcup;
+
+ /* Store address */
+ pinfo->smcup->smc_rbase = (u_char *)pinfo->rx_bd_base - DPRAM_BASE;
+ pinfo->smcup->smc_tbase = (u_char *)pinfo->tx_bd_base - DPRAM_BASE;
+
+/*
+ * In case SMC1 is being relocated...
+ */
+#if defined (CONFIG_I2C_SPI_SMC1_UCODE_PATCH)
+ up->smc_rbptr = pinfo->smcup->smc_rbase;
+ up->smc_tbptr = pinfo->smcup->smc_tbase;
+ up->smc_rstate = 0;
+ up->smc_tstate = 0;
+ up->smc_brkcr = 1; /* number of break chars */
+ up->smc_brkec = 0;
+#endif
+
+ /* Set up the uart parameters in the
+ * parameter ram.
+ */
+ cpm_set_smc_fcr(up);
+
+ /* Using idle character time requires some additional tuning. */
+ up->smc_mrblr = pinfo->rx_fifosize;
+ up->smc_maxidl = pinfo->rx_fifosize;
+ up->smc_brkcr = 1;
+
+ cpm_line_cr_cmd(line, CPM_CR_INIT_TRX);
+
+ /* Set UART mode, 8 bit, no parity, one stop.
+ * Enable receive and transmit.
+ */
+ sp->smc_smcmr = smcr_mk_clen(9) | SMCMR_SM_UART;
+
+ /* Enable only rx interrupts clear all pending events. */
+ sp->smc_smcm = 0;
+ sp->smc_smce = 0xff;
+
+ sp->smc_smcmr |= (SMCMR_REN | SMCMR_TEN);
+}
+
+/*
+ * Initialize port. This is called from early_console stuff
+ * so we have to be careful here !
+ */
+static int cpm_uart_request_port(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+ int ret;
+
+ pr_debug("CPM uart[%d]:request port\n", port->line);
+
+ if (pinfo->flags & FLAG_CONSOLE)
+ return 0;
+
+ /*
+ * Setup any port IO, connect any baud rate generators,
+ * etc. This is expected to be handled by
+ * board-dependent code
+ */
+ if (pinfo->set_lineif)
+ pinfo->set_lineif(pinfo);
+
+ if (IS_SMC(pinfo)) {
+ pinfo->smcp->smc_smcm &= ~(SMCM_RX | SMCM_TX);
+ pinfo->smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ } else {
+ pinfo->sccp->scc_sccm &= ~(UART_SCCM_TX | UART_SCCM_RX);
+ pinfo->sccp->scc_gsmrl &= ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ }
+
+ ret = cpm_uart_allocbuf(pinfo, 0);
+
+ if (ret)
+ return ret;
+
+ cpm_uart_initbd(pinfo);
+
+ return 0;
+}
+
+static void cpm_uart_release_port(struct uart_port *port)
+{
+ struct uart_cpm_port *pinfo = (struct uart_cpm_port *)port;
+
+ if (!(pinfo->flags & FLAG_CONSOLE))
+ cpm_uart_freebuf(pinfo);
+}
+
+/*
+ * Configure/autoconfigure the port.
+ */
+static void cpm_uart_config_port(struct uart_port *port, int flags)
+{
+ pr_debug("CPM uart[%d]:config_port\n", port->line);
+
+ if (flags & UART_CONFIG_TYPE) {
+ port->type = PORT_CPM;
+ cpm_uart_request_port(port);
+ }
+}
+static struct uart_ops cpm_uart_pops = {
+ .tx_empty = cpm_uart_tx_empty,
+ .set_mctrl = cpm_uart_set_mctrl,
+ .get_mctrl = cpm_uart_get_mctrl,
+ .stop_tx = cpm_uart_stop_tx,
+ .start_tx = cpm_uart_start_tx,
+ .stop_rx = cpm_uart_stop_rx,
+ .enable_ms = cpm_uart_enable_ms,
+ .break_ctl = cpm_uart_break_ctl,
+ .startup = cpm_uart_startup,
+ .shutdown = cpm_uart_shutdown,
+ .set_termios = cpm_uart_set_termios,
+ .type = cpm_uart_type,
+ .release_port = cpm_uart_release_port,
+ .request_port = cpm_uart_request_port,
+ .config_port = cpm_uart_config_port,
+ .verify_port = cpm_uart_verify_port,
+};
+
+struct uart_cpm_port cpm_uart_ports[UART_NR] = {
+ [UART_SMC1] = {
+ .port = {
+ .irq = SMC1_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .flags = FLAG_SMC,
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = smc1_lineif,
+ },
+ [UART_SMC2] = {
+ .port = {
+ .irq = SMC2_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .flags = FLAG_SMC,
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = smc2_lineif,
+#ifdef CONFIG_SERIAL_CPM_ALT_SMC2
+ .is_portb = 1,
+#endif
+ },
+ [UART_SCC1] = {
+ .port = {
+ .irq = SCC1_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc1_lineif,
+ },
+ [UART_SCC2] = {
+ .port = {
+ .irq = SCC2_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc2_lineif,
+ },
+ [UART_SCC3] = {
+ .port = {
+ .irq = SCC3_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc3_lineif,
+ },
+ [UART_SCC4] = {
+ .port = {
+ .irq = SCC4_IRQ,
+ .ops = &cpm_uart_pops,
+ .iotype = SERIAL_IO_MEM,
+ },
+ .tx_nrfifos = TX_NUM_FIFO,
+ .tx_fifosize = TX_BUF_SIZE,
+ .rx_nrfifos = RX_NUM_FIFO,
+ .rx_fifosize = RX_BUF_SIZE,
+ .set_lineif = scc4_lineif,
+ },
+};
+
+#ifdef CONFIG_SERIAL_CPM_CONSOLE
+/*
+ * Print a string to the serial port trying not to disturb
+ * any possible real use of the port...
+ *
+ * Note that this is called with interrupts already disabled
+ */
+static void cpm_uart_console_write(struct console *co, const char *s,
+ u_int count)
+{
+ struct uart_cpm_port *pinfo =
+ &cpm_uart_ports[cpm_uart_port_map[co->index]];
+ unsigned int i;
+ volatile cbd_t *bdp, *bdbase;
+ volatile unsigned char *cp;
+
+ /* Get the address of the host memory buffer.
+ */
+ bdp = pinfo->tx_cur;
+ bdbase = pinfo->tx_bd_base;
+
+ /*
+ * Now, do each character. This is not as bad as it looks
+ * since this is a holding FIFO and not a transmitting FIFO.
+ * We could add the complexity of filling the entire transmit
+ * buffer, but we would just wait longer between accesses.
+ */
+ for (i = 0; i < count; i++, s++) {
+ /* Wait for the transmit descriptor to drain. BD_SC_READY
+ * set means the CPM is still sending this buffer, not
+ * that it is ready for us to load another one.
+ */
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ /* Send the character out.
+ * If the buffer address is in the CPM DPRAM, don't
+ * convert it.
+ */
+ if ((uint) (bdp->cbd_bufaddr) > (uint) CPM_ADDR)
+ cp = (unsigned char *) (bdp->cbd_bufaddr);
+ else
+ cp = bus_to_virt(bdp->cbd_bufaddr);
+
+ *cp = *s;
+
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+
+ /* if a LF, also do CR... */
+ if (*s == 10) {
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ if ((uint) (bdp->cbd_bufaddr) > (uint) CPM_ADDR)
+ cp = (unsigned char *) (bdp->cbd_bufaddr);
+ else
+ cp = bus_to_virt(bdp->cbd_bufaddr);
+
+ *cp = 13;
+ bdp->cbd_datlen = 1;
+ bdp->cbd_sc |= BD_SC_READY;
+
+ if (bdp->cbd_sc & BD_SC_WRAP)
+ bdp = bdbase;
+ else
+ bdp++;
+ }
+ }
+
+ /*
+ * Finally, wait for the transmitter to finish draining the
+ * last buffer descriptor
+ */
+ while ((bdp->cbd_sc & BD_SC_READY) != 0)
+ ;
+
+ pinfo->tx_cur = (volatile cbd_t *) bdp;
+}
+
+/*
+ * Set up the console. Be careful, this is called early!
+ */
+static int __init cpm_uart_console_setup(struct console *co, char *options)
+{
+ struct uart_port *port;
+ struct uart_cpm_port *pinfo;
+ int baud = 38400;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+ int ret;
+
+ port =
+ (struct uart_port *)&cpm_uart_ports[cpm_uart_port_map[co->index]];
+ pinfo = (struct uart_cpm_port *)port;
+
+ pinfo->flags |= FLAG_CONSOLE;
+
+ if (options) {
+ uart_parse_options(options, &baud, &parity, &bits, &flow);
+ } else {
+ bd_t *bd = (bd_t *) __res;
+
+ if (bd->bi_baudrate)
+ baud = bd->bi_baudrate;
+ else
+ baud = 9600;
+ }
+
+ /*
+ * Setup any port IO, connect any baud rate generators,
+ * etc. This is expected to be handled by
+ * board-dependent code
+ */
+ if (pinfo->set_lineif)
+ pinfo->set_lineif(pinfo);
+
+ if (IS_SMC(pinfo)) {
+ pinfo->smcp->smc_smcm &= ~(SMCM_RX | SMCM_TX);
+ pinfo->smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ } else {
+ pinfo->sccp->scc_sccm &= ~(UART_SCCM_TX | UART_SCCM_RX);
+ pinfo->sccp->scc_gsmrl &= ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ }
+
+ ret = cpm_uart_allocbuf(pinfo, 1);
+
+ if (ret)
+ return ret;
+
+ cpm_uart_initbd(pinfo);
+
+ if (IS_SMC(pinfo))
+ cpm_uart_init_smc(pinfo);
+ else
+ cpm_uart_init_scc(pinfo);
+
+ uart_set_options(port, co, baud, parity, bits, flow);
+
+ return 0;
+}
+
+extern struct uart_driver cpm_reg;
+static struct console cpm_scc_uart_console = {
+ .name "ttyCPM",
+ .write cpm_uart_console_write,
+ .device uart_console_device,
+ .setup cpm_uart_console_setup,
+ .flags CON_PRINTBUFFER,
+ .index -1,
+ .data = &cpm_reg,
+};
+
+int __init cpm_uart_console_init(void)
+{
+ int ret = cpm_uart_init_portdesc();
+
+ if (!ret)
+ register_console(&cpm_scc_uart_console);
+ return ret;
+}
+
+console_initcall(cpm_uart_console_init);
+
+#define CPM_UART_CONSOLE &cpm_scc_uart_console
+#else
+#define CPM_UART_CONSOLE NULL
+#endif
+
+static struct uart_driver cpm_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "ttyCPM",
+ .dev_name = "ttyCPM",
+ .major = SERIAL_CPM_MAJOR,
+ .minor = SERIAL_CPM_MINOR,
+ .cons = CPM_UART_CONSOLE,
+};
+
+static int __init cpm_uart_init(void)
+{
+ int ret, i;
+
+ printk(KERN_INFO "Serial: CPM driver $Revision: 0.01 $\n");
+
+#ifndef CONFIG_SERIAL_CPM_CONSOLE
+ ret = cpm_uart_init_portdesc();
+ if (ret)
+ return ret;
+#endif
+
+ cpm_reg.nr = cpm_uart_nr;
+ ret = uart_register_driver(&cpm_reg);
+
+ if (ret)
+ return ret;
+
+ for (i = 0; i < cpm_uart_nr; i++) {
+ int con = cpm_uart_port_map[i];
+ cpm_uart_ports[con].port.line = i;
+ cpm_uart_ports[con].port.flags = UPF_BOOT_AUTOCONF;
+ uart_add_one_port(&cpm_reg, &cpm_uart_ports[con].port);
+ }
+
+ return ret;
+}
+
+static void __exit cpm_uart_exit(void)
+{
+ int i;
+
+ for (i = 0; i < cpm_uart_nr; i++) {
+ int con = cpm_uart_port_map[i];
+ uart_remove_one_port(&cpm_reg, &cpm_uart_ports[con].port);
+ }
+
+ uart_unregister_driver(&cpm_reg);
+}
+
+module_init(cpm_uart_init);
+module_exit(cpm_uart_exit);
+
+MODULE_AUTHOR("Kumar Gala/Antoniou Pantelis");
+MODULE_DESCRIPTION("CPM SCC/SMC port driver $Revision: 0.01 $");
+MODULE_LICENSE("GPL");
+MODULE_ALIAS_CHARDEV(SERIAL_CPM_MAJOR, SERIAL_CPM_MINOR);
--- /dev/null
+/*
+ * linux/drivers/serial/cpm_uart.c
+ *
+ * Driver for CPM (SCC/SMC) serial ports; CPM1 definitions
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com) (CPM2)
+ * Pantelis Antoniou (panto@intracom.gr) (CPM1)
+ *
+ * Copyright (C) 2004 Freescale Semiconductor, Inc.
+ * (C) 2004 Intracom, S.A.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/tty.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/device.h>
+#include <linux/bootmem.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#include <linux/serial_core.h>
+#include <linux/kernel.h>
+
+#include "cpm_uart.h"
+
+/**************************************************************/
+
+void cpm_line_cr_cmd(int line, int cmd)
+{
+ ushort val;
+ volatile cpm8xx_t *cp = cpmp;
+
+ switch (line) {
+ case UART_SMC1:
+ val = mk_cr_cmd(CPM_CR_CH_SMC1, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SMC2:
+ val = mk_cr_cmd(CPM_CR_CH_SMC2, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC1:
+ val = mk_cr_cmd(CPM_CR_CH_SCC1, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC2:
+ val = mk_cr_cmd(CPM_CR_CH_SCC2, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC3:
+ val = mk_cr_cmd(CPM_CR_CH_SCC3, cmd) | CPM_CR_FLG;
+ break;
+ case UART_SCC4:
+ val = mk_cr_cmd(CPM_CR_CH_SCC4, cmd) | CPM_CR_FLG;
+ break;
+ default:
+ return;
+
+ }
+ cp->cp_cpcr = val;
+ while (cp->cp_cpcr & CPM_CR_FLG) ;
+}
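+
+/*
+ * Example use (editorial): cpm_line_cr_cmd(UART_SMC1, CPM_CR_INIT_TRX)
+ * issues an "init rx/tx parameters" command to the SMC1 channel and
+ * spins until the CP clears CPM_CR_FLG to acknowledge completion.
+ */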
+
+void smc1_lineif(struct uart_cpm_port *pinfo)
+{
+ volatile cpm8xx_t *cp = cpmp;
+ unsigned int iobits = 0x000000c0;
+
+ if (!pinfo->is_portb) {
+ cp->cp_pbpar |= iobits;
+ cp->cp_pbdir &= ~iobits;
+ cp->cp_pbodr &= ~iobits;
+ } else {
+ ((immap_t *)IMAP_ADDR)->im_ioport.iop_papar |= iobits;
+ ((immap_t *)IMAP_ADDR)->im_ioport.iop_padir &= ~iobits;
+ ((immap_t *)IMAP_ADDR)->im_ioport.iop_paodr &= ~iobits;
+ }
+
+ pinfo->brg = 1;
+}
+
+void smc2_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SMC2: insert port configuration here */
+ pinfo->brg = 2;
+}
+
+void scc1_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC1: insert port configuration here */
+ pinfo->brg = 1;
+}
+
+void scc2_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC2: insert port configuration here */
+ pinfo->brg = 2;
+}
+
+void scc3_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC3: insert port configuration here */
+ pinfo->brg = 3;
+}
+
+void scc4_lineif(struct uart_cpm_port *pinfo)
+{
+ /* XXX SCC4: insert port configuration here */
+ pinfo->brg = 4;
+}
+
+/*
+ * Allocate DP-Ram and memory buffers. We need to allocate a transmit and
+ * receive buffer descriptors from dual port ram, and a character
+ * buffer area from host mem. If we are allocating for the console we need
+ * to do it from bootmem
+ */
+int cpm_uart_allocbuf(struct uart_cpm_port *pinfo, unsigned int is_con)
+{
+ int dpmemsz, memsz;
+ u8 *dp_mem;
+ uint dp_offset;
+ u8 *mem_addr;
+ dma_addr_t dma_addr = 0;
+
+ pr_debug("CPM uart[%d]:allocbuf\n", pinfo->port.line);
+
+ dpmemsz = sizeof(cbd_t) * (pinfo->rx_nrfifos + pinfo->tx_nrfifos);
+ dp_offset = cpm_dpalloc(dpmemsz, 8);
+ if (IS_DPERR(dp_offset)) {
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate buffer descriptors\n");
+ return -ENOMEM;
+ }
+ dp_mem = cpm_dpram_addr(dp_offset);
+
+ memsz = L1_CACHE_ALIGN(pinfo->rx_nrfifos * pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos * pinfo->tx_fifosize);
+ if (is_con) {
+ mem_addr = (u8 *) m8xx_cpm_hostalloc(memsz);
+ dma_addr = 0;
+ } else
+ mem_addr = dma_alloc_coherent(NULL, memsz, &dma_addr,
+ GFP_KERNEL);
+
+ if (mem_addr == NULL) {
+ cpm_dpfree(dp_offset);
+ printk(KERN_ERR
+ "cpm_uart_cpm1.c: could not allocate coherent memory\n");
+ return -ENOMEM;
+ }
+
+ pinfo->dp_addr = dp_offset;
+ pinfo->mem_addr = mem_addr;
+ pinfo->dma_addr = dma_addr;
+
+ pinfo->rx_buf = mem_addr;
+ pinfo->tx_buf = pinfo->rx_buf + L1_CACHE_ALIGN(pinfo->rx_nrfifos
+ * pinfo->rx_fifosize);
+
+ pinfo->rx_bd_base = (volatile cbd_t *)dp_mem;
+ pinfo->tx_bd_base = pinfo->rx_bd_base + pinfo->rx_nrfifos;
+
+ return 0;
+}
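+
+/*
+ * Illustrative sizing (default geometry, 4 x 32 bytes each way):
+ * dpmemsz = sizeof(cbd_t) * 8 descriptors taken from dual-port RAM,
+ * and memsz = L1_CACHE_ALIGN(128) + L1_CACHE_ALIGN(128) = 256 bytes
+ * of host memory, assuming a cache line size that divides 128.
+ */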
+
+void cpm_uart_freebuf(struct uart_cpm_port *pinfo)
+{
+ dma_free_coherent(NULL, L1_CACHE_ALIGN(pinfo->rx_nrfifos *
+ pinfo->rx_fifosize) +
+ L1_CACHE_ALIGN(pinfo->tx_nrfifos *
+ pinfo->tx_fifosize), pinfo->mem_addr,
+ pinfo->dma_addr);
+
+ cpm_dpfree(pinfo->dp_addr);
+}
+
+/* Setup any dynamic params in the uart desc */
+int cpm_uart_init_portdesc(void)
+{
+ pr_debug("CPM uart[-]:init portdesc\n");
+
+ cpm_uart_nr = 0;
+#ifdef CONFIG_SERIAL_CPM_SMC1
+ cpm_uart_ports[UART_SMC1].smcp = &cpmp->cp_smc[0];
+/*
+ * Is SMC1 being relocated?
+ */
+# ifdef CONFIG_I2C_SPI_SMC1_UCODE_PATCH
+ cpm_uart_ports[UART_SMC1].smcup =
+ (smc_uart_t *) & cpmp->cp_dparam[0x3C0];
+# else
+ cpm_uart_ports[UART_SMC1].smcup =
+ (smc_uart_t *) & cpmp->cp_dparam[PROFF_SMC1];
+# endif
+ cpm_uart_ports[UART_SMC1].port.mapbase =
+ (unsigned long)&cpmp->cp_smc[0];
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC1].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SMC2
+ cpm_uart_ports[UART_SMC2].smcp = &cpmp->cp_smc[1];
+ cpm_uart_ports[UART_SMC2].smcup =
+ (smc_uart_t *) & cpmp->cp_dparam[PROFF_SMC2];
+ cpm_uart_ports[UART_SMC2].port.mapbase =
+ (unsigned long)&cpmp->cp_smc[1];
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcm |= (SMCM_RX | SMCM_TX);
+ cpm_uart_ports[UART_SMC2].smcp->smc_smcmr &= ~(SMCMR_REN | SMCMR_TEN);
+ cpm_uart_ports[UART_SMC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SMC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC1
+ cpm_uart_ports[UART_SCC1].sccp = &cpmp->cp_scc[0];
+ cpm_uart_ports[UART_SCC1].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC1];
+ cpm_uart_ports[UART_SCC1].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[0];
+ cpm_uart_ports[UART_SCC1].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC1].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC1].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC1;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC2
+ cpm_uart_ports[UART_SCC2].sccp = &cpmp->cp_scc[1];
+ cpm_uart_ports[UART_SCC2].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC2];
+ cpm_uart_ports[UART_SCC2].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[1];
+ cpm_uart_ports[UART_SCC2].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC2].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC2].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC2;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC3
+ cpm_uart_ports[UART_SCC3].sccp = &cpmp->cp_scc[2];
+ cpm_uart_ports[UART_SCC3].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC3];
+ cpm_uart_ports[UART_SCC3].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[2];
+ cpm_uart_ports[UART_SCC3].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC3].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC3].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC3;
+#endif
+
+#ifdef CONFIG_SERIAL_CPM_SCC4
+ cpm_uart_ports[UART_SCC4].sccp = &cpmp->cp_scc[3];
+ cpm_uart_ports[UART_SCC4].sccup =
+ (scc_uart_t *) & cpmp->cp_dparam[PROFF_SCC4];
+ cpm_uart_ports[UART_SCC4].port.mapbase =
+ (unsigned long)&cpmp->cp_scc[3];
+ cpm_uart_ports[UART_SCC4].sccp->scc_sccm &=
+ ~(UART_SCCM_TX | UART_SCCM_RX);
+ cpm_uart_ports[UART_SCC4].sccp->scc_gsmrl &=
+ ~(SCC_GSMRL_ENR | SCC_GSMRL_ENT);
+ cpm_uart_ports[UART_SCC4].port.uartclk = (((bd_t *) __res)->bi_intfreq);
+ cpm_uart_port_map[cpm_uart_nr++] = UART_SCC4;
+#endif
+ return 0;
+}
--- /dev/null
+/* drivers/serial/serial_lh7a40x.c
+ *
+ * Copyright (C) 2004 Coastal Environmental Systems
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ */
+
+/* Driver for Sharp LH7A40X embedded serial ports
+ *
+ * Based on drivers/char/serial.c, by Linus Torvalds, Theodore Ts'o.
+ * Based on drivers/serial/amba.c, by Deep Blue Solutions Ltd.
+ *
+ * ---
+ *
+ * This driver supports the embedded UARTs of the Sharp LH7A40X series
+ * CPUs. While similar to the 16550 and other UART chips, there is
+ * nothing close to register compatibility. Moreover, some of the
+ * modem control lines are not available, either in the chip itself
+ * or in the board-level implementation.
+ *
+ * - Use of SIRDIS
+ * For simplicity, we disable the IR functions of any UART whenever
+ * we enable it.
+ *
+ */
+
+#include <linux/config.h>
+
+#if defined(CONFIG_SERIAL_LH7A40X_CONSOLE) && defined(CONFIG_MAGIC_SYSRQ)
+#define SUPPORT_SYSRQ
+#endif
+
+#include <linux/module.h>
+#include <linux/ioport.h>
+#include <linux/init.h>
+#include <linux/console.h>
+#include <linux/sysrq.h>
+#include <linux/tty.h>
+#include <linux/tty_flip.h>
+#include <linux/serial_core.h>
+#include <linux/serial.h>
+
+#include <asm/io.h>
+#include <asm/irq.h>
+
+#define DEV_MAJOR 204
+#define DEV_MINOR 16
+#define DEV_NR 3
+
+#define ISR_LOOP_LIMIT 256
+
+#define UR(p,o) _UR ((p)->membase, o)
+#define _UR(b,o) (*((volatile unsigned int*)(((unsigned char*) b) + (o))))
+#define BIT_CLR(p,o,m) UR(p,o) = UR(p,o) & (~(unsigned int)m)
+#define BIT_SET(p,o,m) UR(p,o) = UR(p,o) | ( (unsigned int)m)
+
+#define UART_REG_SIZE 32
+
+#define UART_R_DATA (0x00)
+#define UART_R_FCON (0x04)
+#define UART_R_BRCON (0x08)
+#define UART_R_CON (0x0c)
+#define UART_R_STATUS (0x10)
+#define UART_R_RAWISR (0x14)
+#define UART_R_INTEN (0x18)
+#define UART_R_ISR (0x1c)
+
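+/*
+ * For example (editorial note), UR(port, UART_R_STATUS) expands to a
+ * volatile 32-bit read of membase + 0x10, and
+ * BIT_SET(port, UART_R_INTEN, TxInt) performs a read-modify-write of
+ * the interrupt enable register at offset 0x18.
+ */
+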
+#define UARTEN (0x01) /* UART enable */
+#define SIRDIS (0x02) /* Serial IR disable (UART1 only) */
+
+#define RxEmpty (0x10)
+#define TxEmpty (0x80)
+#define TxFull (0x20)
+#define nRxRdy RxEmpty
+#define nTxRdy TxFull
+#define TxBusy (0x08)
+
+#define RxBreak (0x0800)
+#define RxOverrunError (0x0400)
+#define RxParityError (0x0200)
+#define RxFramingError (0x0100)
+#define RxError (RxBreak | RxOverrunError | RxParityError | RxFramingError)
+
+#define DCD (0x04)
+#define DSR (0x02)
+#define CTS (0x01)
+
+#define RxInt (0x01)
+#define TxInt (0x02)
+#define ModemInt (0x04)
+#define RxTimeoutInt (0x08)
+
+#define MSEOI (0x10)
+
+#define WLEN_8 (0x60)
+#define WLEN_7 (0x40)
+#define WLEN_6 (0x20)
+#define WLEN_5 (0x00)
+#define WLEN (0x60) /* Mask for all word-length bits */
+#define STP2 (0x08)
+#define PEN (0x02) /* Parity Enable */
+#define EPS (0x04) /* Even Parity Set */
+#define FEN (0x10) /* FIFO Enable */
+#define BRK (0x01) /* Send Break */
+
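+/*
+ * Illustrative composition (sketch only): an 8N1 line with FIFOs
+ * enabled would program UART_R_FCON with (WLEN_8 | FEN); a second
+ * stop bit or even parity would OR in STP2 or (PEN | EPS)
+ * respectively.
+ */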
+
+struct uart_port_lh7a40x {
+ struct uart_port port;
+ unsigned int statusPrev; /* Most recently read modem status */
+};
+
+static void lh7a40xuart_stop_tx (struct uart_port* port, unsigned int tty_stop)
+{
+ BIT_CLR (port, UART_R_INTEN, TxInt);
+}
+
+static void lh7a40xuart_start_tx (struct uart_port* port,
+ unsigned int tty_start)
+{
+ BIT_SET (port, UART_R_INTEN, TxInt);
+
+ /* *** FIXME: do I need to check for startup of the
+ transmitter? The old driver did, but AMBA
+ doesn't. */
+}
+
+static void lh7a40xuart_stop_rx (struct uart_port* port)
+{
+ BIT_SET (port, UART_R_INTEN, RxTimeoutInt | RxInt);
+}
+
+static void lh7a40xuart_enable_ms (struct uart_port* port)
+{
+ BIT_SET (port, UART_R_INTEN, ModemInt);
+}
+
+static void
+#ifdef SUPPORT_SYSRQ
+lh7a40xuart_rx_chars (struct uart_port* port, struct pt_regs* regs)
+#else
+lh7a40xuart_rx_chars (struct uart_port* port)
+#endif
+{
+ struct tty_struct* tty = port->info->tty;
+ int cbRxMax = 256; /* (Gross) limit on receive */
+ unsigned int data, flag;/* Received data and status */
+
+ while (!(UR (port, UART_R_STATUS) & nRxRdy) && --cbRxMax) {
+ if (tty->flip.count >= TTY_FLIPBUF_SIZE) {
+ if (tty->low_latency)
+ tty_flip_buffer_push(tty);
+ /*
+ * If this failed then we will throw away the
+ * bytes but must do so to clear interrupts
+ */
+ }
+
+ data = UR (port, UART_R_DATA);
+ flag = TTY_NORMAL;
+ ++port->icount.rx;
+
+ if (data & RxError) { /* Quick check, short-circuit */
+ if (data & RxBreak) {
+ data &= ~(RxFramingError | RxParityError);
+ ++port->icount.brk;
+ if (uart_handle_break (port))
+ continue;
+ }
+ else if (data & RxParityError)
+ ++port->icount.parity;
+ else if (data & RxFramingError)
+ ++port->icount.frame;
+ if (data & RxOverrunError)
+ ++port->icount.overrun;
+
+ /* Mask by termios, leave Rx'd byte */
+ data &= port->read_status_mask | 0xff;
+
+ if (data & RxBreak)
+ flag = TTY_BREAK;
+ else if (data & RxParityError)
+ flag = TTY_PARITY;
+ else if (data & RxFramingError)
+ flag = TTY_FRAME;
+ }
+
+ if (uart_handle_sysrq_char (port, (unsigned char) data, regs))
+ continue;
+
+ if ((data & port->ignore_status_mask) == 0) {
+ tty_insert_flip_char(tty, data, flag);
+ }
+ if ((data & RxOverrunError)
+ && tty->flip.count < TTY_FLIPBUF_SIZE) {
+ /*
+ * Overrun is special, since it's reported
+ * immediately, and doesn't affect the current
+ * character
+ */
+ tty_insert_flip_char(tty, 0, TTY_OVERRUN);
+ }
+ }
+ tty_flip_buffer_push (tty);
+ return;
+}
+
+static void lh7a40xuart_tx_chars (struct uart_port* port)
+{
+ struct circ_buf* xmit = &port->info->xmit;
+ int cbTxMax = port->fifosize;
+
+ if (port->x_char) {
+ UR (port, UART_R_DATA) = port->x_char;
+ ++port->icount.tx;
+ port->x_char = 0;
+ return;
+ }
+ if (uart_circ_empty (xmit) || uart_tx_stopped (port)) {
+ lh7a40xuart_stop_tx (port, 0);
+ return;
+ }
+
+ /* Unlike the AMBA UART, the lh7a40x UART does not guarantee
+ that at least half of the FIFO is empty. Instead, we check
+ status for every character. Using the AMBA method causes
+ the transmitter to drop characters. */
+
+ do {
+ UR (port, UART_R_DATA) = xmit->buf[xmit->tail];
+ xmit->tail = (xmit->tail + 1) & (UART_XMIT_SIZE - 1);
+ ++port->icount.tx;
+ if (uart_circ_empty(xmit))
+ break;
+ } while (!(UR (port, UART_R_STATUS) & nTxRdy)
+ && cbTxMax--);
+
+ if (uart_circ_chars_pending (xmit) < WAKEUP_CHARS)
+ uart_write_wakeup (port);
+
+ if (uart_circ_empty (xmit))
+ lh7a40xuart_stop_tx (port, 0);
+}
+
+static void lh7a40xuart_modem_status (struct uart_port* port)
+{
+ unsigned int status = UR (port, UART_R_STATUS);
+ unsigned int delta
+ = status ^ ((struct uart_port_lh7a40x*) port)->statusPrev;
+
+ BIT_SET (port, UART_R_RAWISR, MSEOI); /* Clear modem status intr */
+
+ if (!delta) /* Only happens if we missed 2 transitions */
+ return;
+
+ ((struct uart_port_lh7a40x*) port)->statusPrev = status;
+
+ if (delta & DCD)
+ uart_handle_dcd_change (port, status & DCD);
+
+ if (delta & DSR)
+ ++port->icount.dsr;
+
+ if (delta & CTS)
+ uart_handle_cts_change (port, status & CTS);
+
+ wake_up_interruptible (&port->info->delta_msr_wait);
+}
+
+static irqreturn_t lh7a40xuart_int (int irq, void* dev_id,
+ struct pt_regs* regs)
+{
+ struct uart_port* port = dev_id;
+ unsigned int cLoopLimit = ISR_LOOP_LIMIT;
+ unsigned int isr = UR (port, UART_R_ISR);
+
+
+ do {
+ if (isr & (RxInt | RxTimeoutInt))
+#ifdef SUPPORT_SYSRQ
+ lh7a40xuart_rx_chars(port, regs);
+#else
+ lh7a40xuart_rx_chars(port);
+#endif
+ if (isr & ModemInt)
+ lh7a40xuart_modem_status (port);
+ if (isr & TxInt)
+ lh7a40xuart_tx_chars (port);
+
+ if (--cLoopLimit == 0)
+ break;
+
+ isr = UR (port, UART_R_ISR);
+ } while (isr & (RxInt | TxInt | RxTimeoutInt));
+
+ return IRQ_HANDLED;
+}
+
+static unsigned int lh7a40xuart_tx_empty (struct uart_port* port)
+{
+ return (UR (port, UART_R_STATUS) & TxEmpty) ? TIOCSER_TEMT : 0;
+}
+
+static unsigned int lh7a40xuart_get_mctrl (struct uart_port* port)
+{
+ unsigned int result = 0;
+ unsigned int status = UR (port, UART_R_STATUS);
+
+ if (status & DCD)
+ result |= TIOCM_CAR;
+ if (status & DSR)
+ result |= TIOCM_DSR;
+ if (status & CTS)
+ result |= TIOCM_CTS;
+
+ return result;
+}
+
+static void lh7a40xuart_set_mctrl (struct uart_port* port, unsigned int mctrl)
+{
+ /* None of the ports supports DTR. UART1 supports RTS through GPIO. */
+ /* Note, kernel appears to be setting DTR and RTS on console. */
+
+ /* *** FIXME: this deserves more work. There's some work in
+ tracing all of the IO pins. */
+#if 0
+ if( port->mapbase == UART1_PHYS) {
+ gpioRegs_t *gpio = (gpioRegs_t *)IO_ADDRESS(GPIO_PHYS);
+
+ if (mctrl & TIOCM_RTS)
+ gpio->pbdr &= ~GPIOB_UART1_RTS;
+ else
+ gpio->pbdr |= GPIOB_UART1_RTS;
+ }
+#endif
+}
+
+static void lh7a40xuart_break_ctl (struct uart_port* port, int break_state)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&port->lock, flags);
+ if (break_state == -1)
+ BIT_SET (port, UART_R_FCON, BRK); /* Assert break */
+ else
+ BIT_CLR (port, UART_R_FCON, BRK); /* Deassert break */
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static int lh7a40xuart_startup (struct uart_port* port)
+{
+ int retval;
+
+ retval = request_irq (port->irq, lh7a40xuart_int, 0,
+ "serial_lh7a40x", port);
+ if (retval)
+ return retval;
+
+ /* Initial modem control-line settings */
+ ((struct uart_port_lh7a40x*) port)->statusPrev
+ = UR (port, UART_R_STATUS);
+
+ /* There is presently no configuration option to enable IR.
+ Thus, we always disable it. */
+
+ BIT_SET (port, UART_R_CON, UARTEN | SIRDIS);
+ BIT_SET (port, UART_R_INTEN, RxTimeoutInt | RxInt);
+
+ return 0;
+}
+
+static void lh7a40xuart_shutdown (struct uart_port* port)
+{
+ free_irq (port->irq, port);
+ BIT_CLR (port, UART_R_FCON, BRK | FEN);
+ BIT_CLR (port, UART_R_CON, UARTEN);
+}
+
+static void lh7a40xuart_set_termios (struct uart_port* port,
+ struct termios* termios,
+ struct termios* old)
+{
+ unsigned int con;
+ unsigned int inten;
+ unsigned int fcon;
+ unsigned long flags;
+ unsigned int baud;
+ unsigned int quot;
+
+ baud = uart_get_baud_rate (port, termios, old, 8, port->uartclk/16);
+	quot = uart_get_divisor (port, baud);	/* BRCON takes quot - 1, applied below */
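+	/* Worked example (illustrative): with uartclk = 7372800 (14.7456 MHz
+	   divided by two, as in lh7a40x_ports below) and baud = 115200,
+	   quot = 7372800 / (16 * 115200) = 4, and BRCON is programmed
+	   further down with quot - 1 = 3. */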
+
+ switch (termios->c_cflag & CSIZE) {
+ case CS5:
+ fcon = WLEN_5;
+ break;
+ case CS6:
+ fcon = WLEN_6;
+ break;
+ case CS7:
+ fcon = WLEN_7;
+ break;
+ case CS8:
+ default:
+ fcon = WLEN_8;
+ break;
+ }
+ if (termios->c_cflag & CSTOPB)
+ fcon |= STP2;
+ if (termios->c_cflag & PARENB) {
+ fcon |= PEN;
+ if (!(termios->c_cflag & PARODD))
+ fcon |= EPS;
+ }
+ if (port->fifosize > 1)
+ fcon |= FEN;
+
+ spin_lock_irqsave (&port->lock, flags);
+
+ uart_update_timeout (port, termios->c_cflag, baud);
+
+ port->read_status_mask = RxOverrunError;
+ if (termios->c_iflag & INPCK)
+ port->read_status_mask |= RxFramingError | RxParityError;
+ if (termios->c_iflag & (BRKINT | PARMRK))
+ port->read_status_mask |= RxBreak;
+
+ /* Figure mask for status we ignore */
+ port->ignore_status_mask = 0;
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= RxFramingError | RxParityError;
+ if (termios->c_iflag & IGNBRK) {
+ port->ignore_status_mask |= RxBreak;
+		/* Ignore overrun when ignoring parity */
+ /* *** FIXME: is this in the right place? */
+ if (termios->c_iflag & IGNPAR)
+ port->ignore_status_mask |= RxOverrunError;
+ }
+
+ /* Ignore all receive errors when receive disabled */
+ if ((termios->c_cflag & CREAD) == 0)
+ port->ignore_status_mask |= RxError;
+
+ con = UR (port, UART_R_CON);
+ inten = (UR (port, UART_R_INTEN) & ~ModemInt);
+
+ if (UART_ENABLE_MS (port, termios->c_cflag))
+ inten |= ModemInt;
+
+ BIT_CLR (port, UART_R_CON, UARTEN); /* Disable UART */
+ UR (port, UART_R_INTEN) = 0; /* Disable interrupts */
+ UR (port, UART_R_BRCON) = quot - 1; /* Set baud rate divisor */
+ UR (port, UART_R_FCON) = fcon; /* Set FIFO and frame ctrl */
+ UR (port, UART_R_INTEN) = inten; /* Enable interrupts */
+ UR (port, UART_R_CON) = con; /* Restore UART mode */
+
+ spin_unlock_irqrestore(&port->lock, flags);
+}
+
+static const char* lh7a40xuart_type (struct uart_port* port)
+{
+ return port->type == PORT_LH7A40X ? "LH7A40X" : NULL;
+}
+
+static void lh7a40xuart_release_port (struct uart_port* port)
+{
+ release_mem_region (port->mapbase, UART_REG_SIZE);
+}
+
+static int lh7a40xuart_request_port (struct uart_port* port)
+{
+ return request_mem_region (port->mapbase, UART_REG_SIZE,
+ "serial_lh7a40x") != NULL
+ ? 0 : -EBUSY;
+}
+
+static void lh7a40xuart_config_port (struct uart_port* port, int flags)
+{
+ if (flags & UART_CONFIG_TYPE) {
+ port->type = PORT_LH7A40X;
+ lh7a40xuart_request_port (port);
+ }
+}
+
+static int lh7a40xuart_verify_port (struct uart_port* port,
+ struct serial_struct* ser)
+{
+ int ret = 0;
+
+ if (ser->type != PORT_UNKNOWN && ser->type != PORT_LH7A40X)
+ ret = -EINVAL;
+ if (ser->irq < 0 || ser->irq >= NR_IRQS)
+ ret = -EINVAL;
+ if (ser->baud_base < 9600) /* *** FIXME: is this true? */
+ ret = -EINVAL;
+ return ret;
+}
+
+static struct uart_ops lh7a40x_uart_ops = {
+ .tx_empty = lh7a40xuart_tx_empty,
+ .set_mctrl = lh7a40xuart_set_mctrl,
+ .get_mctrl = lh7a40xuart_get_mctrl,
+ .stop_tx = lh7a40xuart_stop_tx,
+ .start_tx = lh7a40xuart_start_tx,
+ .stop_rx = lh7a40xuart_stop_rx,
+ .enable_ms = lh7a40xuart_enable_ms,
+ .break_ctl = lh7a40xuart_break_ctl,
+ .startup = lh7a40xuart_startup,
+ .shutdown = lh7a40xuart_shutdown,
+ .set_termios = lh7a40xuart_set_termios,
+ .type = lh7a40xuart_type,
+ .release_port = lh7a40xuart_release_port,
+ .request_port = lh7a40xuart_request_port,
+ .config_port = lh7a40xuart_config_port,
+ .verify_port = lh7a40xuart_verify_port,
+};
+
+static struct uart_port_lh7a40x lh7a40x_ports[DEV_NR] = {
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART1_PHYS),
+ .mapbase = UART1_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART1INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 0,
+ },
+ },
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART2_PHYS),
+ .mapbase = UART2_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART2INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 1,
+ },
+ },
+ {
+ .port = {
+ .membase = (void*) io_p2v (UART3_PHYS),
+ .mapbase = UART3_PHYS,
+ .iotype = SERIAL_IO_MEM,
+ .irq = IRQ_UART3INTR,
+ .uartclk = 14745600/2,
+ .fifosize = 16,
+ .ops = &lh7a40x_uart_ops,
+ .flags = ASYNC_BOOT_AUTOCONF,
+ .line = 2,
+ },
+ },
+};
+
+#ifndef CONFIG_SERIAL_LH7A40X_CONSOLE
+# define LH7A40X_CONSOLE NULL
+#else
+# define LH7A40X_CONSOLE &lh7a40x_console
+
+
+static void lh7a40xuart_console_write (struct console* co,
+ const char* s,
+ unsigned int count)
+{
+ struct uart_port* port = &lh7a40x_ports[co->index].port;
+ unsigned int con = UR (port, UART_R_CON);
+ unsigned int inten = UR (port, UART_R_INTEN);
+
+
+ UR (port, UART_R_INTEN) = 0; /* Disable all interrupts */
+ BIT_SET (port, UART_R_CON, UARTEN | SIRDIS); /* Enable UART */
+
+ for (; count-- > 0; ++s) {
+ while (UR (port, UART_R_STATUS) & nTxRdy)
+ ;
+ UR (port, UART_R_DATA) = *s;
+ if (*s == '\n') {
+ while ((UR (port, UART_R_STATUS) & TxBusy))
+ ;
+ UR (port, UART_R_DATA) = '\r';
+ }
+ }
+
+ /* Wait until all characters are sent */
+ while (UR (port, UART_R_STATUS) & TxBusy)
+ ;
+
+ /* Restore control and interrupt mask */
+ UR (port, UART_R_CON) = con;
+ UR (port, UART_R_INTEN) = inten;
+}
+
+static void __init lh7a40xuart_console_get_options (struct uart_port* port,
+ int* baud,
+ int* parity,
+ int* bits)
+{
+ if (UR (port, UART_R_CON) & UARTEN) {
+ unsigned int fcon = UR (port, UART_R_FCON);
+ unsigned int quot = UR (port, UART_R_BRCON) + 1;
+
+ switch (fcon & (PEN | EPS)) {
+ default: *parity = 'n'; break;
+ case PEN: *parity = 'o'; break;
+ case PEN | EPS: *parity = 'e'; break;
+ }
+
+ switch (fcon & WLEN) {
+ default:
+ case WLEN_8: *bits = 8; break;
+ case WLEN_7: *bits = 7; break;
+ case WLEN_6: *bits = 6; break;
+ case WLEN_5: *bits = 5; break;
+ }
+
+ *baud = port->uartclk/(16*quot);
+ }
+}
+
+static int __init lh7a40xuart_console_setup (struct console* co, char* options)
+{
+ struct uart_port* port;
+ int baud = 38400;
+ int bits = 8;
+ int parity = 'n';
+ int flow = 'n';
+
+ if (co->index >= DEV_NR) /* Bounds check on device number */
+ co->index = 0;
+ port = &lh7a40x_ports[co->index].port;
+
+ if (options)
+ uart_parse_options (options, &baud, &parity, &bits, &flow);
+ else
+ lh7a40xuart_console_get_options (port, &baud, &parity, &bits);
+
+ return uart_set_options (port, co, baud, parity, bits, flow);
+}
+
+extern struct uart_driver lh7a40x_reg;
+static struct console lh7a40x_console = {
+ .name = "ttyAM",
+ .write = lh7a40xuart_console_write,
+ .device = uart_console_device,
+ .setup = lh7a40xuart_console_setup,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+ .data = &lh7a40x_reg,
+};
+
+static int __init lh7a40xuart_console_init(void)
+{
+ register_console (&lh7a40x_console);
+ return 0;
+}
+
+console_initcall (lh7a40xuart_console_init);
+
+#endif
+
+static struct uart_driver lh7a40x_reg = {
+ .owner = THIS_MODULE,
+ .driver_name = "ttyAM",
+ .dev_name = "ttyAM",
+ .major = DEV_MAJOR,
+ .minor = DEV_MINOR,
+ .nr = DEV_NR,
+ .cons = LH7A40X_CONSOLE,
+};
+
+static int __init lh7a40xuart_init(void)
+{
+ int ret;
+
+ printk (KERN_INFO "serial: LH7A40X serial driver\n");
+
+ ret = uart_register_driver (&lh7a40x_reg);
+
+ if (ret == 0) {
+ int i;
+
+ for (i = 0; i < DEV_NR; i++)
+ uart_add_one_port (&lh7a40x_reg,
+ &lh7a40x_ports[i].port);
+ }
+ return ret;
+}
+
+static void __exit lh7a40xuart_exit(void)
+{
+ int i;
+
+ for (i = 0; i < DEV_NR; i++)
+ uart_remove_one_port (&lh7a40x_reg, &lh7a40x_ports[i].port);
+
+ uart_unregister_driver (&lh7a40x_reg);
+}
+
+module_init (lh7a40xuart_init);
+module_exit (lh7a40xuart_exit);
+
+MODULE_AUTHOR ("Marc Singer");
+MODULE_DESCRIPTION ("Sharp LH7A40X serial port driver");
+MODULE_LICENSE ("GPL");
--- /dev/null
+/*
+ * C-Brick Serial Port (and console) driver for SGI Altix machines.
+ *
+ * This driver is NOT suitable for talking to the l1-controller for
+ * anything other than 'console activities' --- please use the l1
+ * driver for that.
+ *
+ *
+ * Copyright (c) 2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; if not, write the Free Software
+ * Foundation, Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1500 Crittenden Lane,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/NoticeExplan
+ */
+
+#include <linux/config.h>
+#include <linux/interrupt.h>
+#include <linux/tty.h>
+#include <linux/serial.h>
+#include <linux/console.h>
+#include <linux/module.h>
+#include <linux/sysrq.h>
+#include <linux/circ_buf.h>
+#include <linux/serial_reg.h>
+#include <linux/delay.h> /* for mdelay */
+#include <linux/miscdevice.h>
+#include <linux/serial_core.h>
+
+#include <asm/io.h>
+#include <asm/sn/simulator.h>
+#include <asm/sn/sn_sal.h>
+
+/* number of characters we can transmit to the SAL console at a time */
+#define SN_SAL_MAX_CHARS 120
+
+/* 64K: when we're asynchronous, this must be at least printk's LOG_BUF_LEN
+ * to avoid losing characters (and always has to be a power of 2) */
+#define SN_SAL_BUFFER_SIZE (64 * (1 << 10))
+
+#define SN_SAL_UART_FIFO_DEPTH 16
+#define SN_SAL_UART_FIFO_SPEED_CPS 9600/10
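+/* 9600 baud at roughly 10 bits per character (8N1 plus start bit) = 960 cps */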
+
+/* sn_transmit_chars() calling args */
+#define TRANSMIT_BUFFERED 0
+#define TRANSMIT_RAW 1
+
+/* To use dynamic device numbers instead of the assigned major and minor,
+ * define the following: */
+/* #define USE_DYNAMIC_MINOR 1 */	/* use dynamic minor number */
+#define USE_DYNAMIC_MINOR 0 /* Don't rely on misc_register dynamic minor */
+
+/* Device name we're using */
+#define DEVICE_NAME "ttySG"
+#define DEVICE_NAME_DYNAMIC "ttySG0" /* need full name for misc_register */
+/* The major/minor we are using, ignored for USE_DYNAMIC_MINOR */
+#define DEVICE_MAJOR 204
+#define DEVICE_MINOR 40
+
+#ifdef CONFIG_MAGIC_SYSRQ
+static char sysrq_serial_str[] = "\eSYS";
+static char *sysrq_serial_ptr = sysrq_serial_str;
+static unsigned long sysrq_requested;
+#endif /* CONFIG_MAGIC_SYSRQ */
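+/* Illustrative note (not in the original source): receiving ESC 'S' 'Y' 'S'
+ * on the console arms sysrq, and the next character received within 5
+ * seconds is then handled as the sysrq command; see sn_receive_chars(). */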
+
+/*
+ * Port definition - this kinda drives it all
+ */
+struct sn_cons_port {
+ struct timer_list sc_timer;
+ struct uart_port sc_port;
+ struct sn_sal_ops {
+ int (*sal_puts_raw) (const char *s, int len);
+ int (*sal_puts) (const char *s, int len);
+ int (*sal_getc) (void);
+ int (*sal_input_pending) (void);
+ void (*sal_wakeup_transmit) (struct sn_cons_port *, int);
+ } *sc_ops;
+ unsigned long sc_interrupt_timeout;
+ int sc_is_asynch;
+};
+
+static struct sn_cons_port sal_console_port;
+
+/* Only used if USE_DYNAMIC_MINOR is set to 1 */
+static struct miscdevice misc; /* used with misc_register for dynamic */
+
+extern void early_sn_setup(void);
+
+#undef DEBUG
+#ifdef DEBUG
+static int sn_debug_printf(const char *fmt, ...);
+#define DPRINTF(x...) sn_debug_printf(x)
+#else
+#define DPRINTF(x...) do { } while (0)
+#endif
+
+/* Prototypes */
+static int snt_hw_puts_raw(const char *, int);
+static int snt_hw_puts_buffered(const char *, int);
+static int snt_poll_getc(void);
+static int snt_poll_input_pending(void);
+static int snt_intr_getc(void);
+static int snt_intr_input_pending(void);
+static void sn_transmit_chars(struct sn_cons_port *, int);
+
+/* A table for polling:
+ */
+static struct sn_sal_ops poll_ops = {
+ .sal_puts_raw = snt_hw_puts_raw,
+ .sal_puts = snt_hw_puts_raw,
+ .sal_getc = snt_poll_getc,
+ .sal_input_pending = snt_poll_input_pending
+};
+
+/* A table for interrupts enabled */
+static struct sn_sal_ops intr_ops = {
+ .sal_puts_raw = snt_hw_puts_raw,
+ .sal_puts = snt_hw_puts_buffered,
+ .sal_getc = snt_intr_getc,
+ .sal_input_pending = snt_intr_input_pending,
+ .sal_wakeup_transmit = sn_transmit_chars
+};
+
+/* the console does output in two distinctly different ways:
+ * synchronous (raw) and asynchronous (buffered). initially, early_printk
+ * does synchronous output. any data written goes directly to the SAL
+ * to be output (incidentally, it is internally buffered by the SAL)
+ * after interrupts and timers are initialized and available for use,
+ * the console init code switches to asynchronous output. this is
+ * also the earliest opportunity to begin polling for console input.
+ * after console initialization, console output and tty (serial port)
+ * output is buffered and sent to the SAL asynchronously (either by
+ * timer callback or by UART interrupt) */
+
+/* routines for running the console in polling mode */
+
+/**
+ * snt_poll_getc - Get a character from the console in polling mode
+ *
+ */
+static int snt_poll_getc(void)
+{
+ int ch;
+
+ ia64_sn_console_getc(&ch);
+ return ch;
+}
+
+/**
+ * snt_poll_input_pending - Check if any input is waiting - polling mode.
+ *
+ */
+static int snt_poll_input_pending(void)
+{
+ int status, input;
+
+ status = ia64_sn_console_check(&input);
+ return !status && input;
+}
+
+/* routines for an interrupt driven console (normal) */
+
+/**
+ * snt_intr_getc - Get a character from the console, interrupt mode
+ *
+ */
+static int snt_intr_getc(void)
+{
+ return ia64_sn_console_readc();
+}
+
+/**
+ * snt_intr_input_pending - Check if input is pending, interrupt mode
+ *
+ */
+static int snt_intr_input_pending(void)
+{
+ return ia64_sn_console_intr_status() & SAL_CONSOLE_INTR_RECV;
+}
+
+/* these functions are used in both polled and interrupt-driven modes */
+
+/**
+ * snt_hw_puts_raw - Send raw string to the console, polled or interrupt mode
+ * @s: String
+ * @len: Length
+ *
+ */
+static int snt_hw_puts_raw(const char *s, int len)
+{
+ /* this will call the PROM and not return until this is done */
+ return ia64_sn_console_putb(s, len);
+}
+
+/**
+ * snt_hw_puts_buffered - Send string to console, polled or interrupt mode
+ * @s: String
+ * @len: Length
+ *
+ */
+static int snt_hw_puts_buffered(const char *s, int len)
+{
+ /* queue data to the PROM */
+ return ia64_sn_console_xmit_chars((char *)s, len);
+}
+
+/* uart interface structs
+ * These functions are associated with the uart_port that the serial core
+ * infrastructure calls.
+ *
+ * Note: Due to how the console works, many routines are no-ops.
+ */
+
+/**
+ * snp_type - What type of console are we?
+ * @port: Port to operate with (we ignore since we only have one port)
+ *
+ */
+static const char *snp_type(struct uart_port *port)
+{
+ return ("SGI SN L1");
+}
+
+/**
+ * snp_tx_empty - Is the transmitter empty? We pretend we're always empty
+ * @port: Port to operate on (we ignore since we only have one port)
+ *
+ */
+static unsigned int snp_tx_empty(struct uart_port *port)
+{
+ return 1;
+}
+
+/**
+ * snp_stop_tx - stop the transmitter - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ * @tty_stop: Set to 1 if called via uart_stop
+ *
+ */
+static void snp_stop_tx(struct uart_port *port, unsigned int tty_stop)
+{
+}
+
+/**
+ * snp_release_port - Free i/o and resources for port - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ *
+ */
+static void snp_release_port(struct uart_port *port)
+{
+}
+
+/**
+ * snp_enable_ms - Force modem status interrupts on - no-op for us
+ * @port: Port to operate on - we ignore - no-op function
+ *
+ */
+static void snp_enable_ms(struct uart_port *port)
+{
+}
+
+/**
+ * snp_shutdown - shut down the port - free irq and disable - no-op for us
+ * @port: Port to shut down - we ignore
+ *
+ */
+static void snp_shutdown(struct uart_port *port)
+{
+}
+
+/**
+ * snp_set_mctrl - set control lines (dtr, rts, etc) - no-op for our console
+ * @port: Port to operate on - we ignore
+ * @mctrl: Lines to set/unset - we ignore
+ *
+ */
+static void snp_set_mctrl(struct uart_port *port, unsigned int mctrl)
+{
+}
+
+/**
+ * snp_get_mctrl - get control line info, we just return a static value
+ * @port: port to operate on - we only have one port so we ignore this
+ *
+ */
+static unsigned int snp_get_mctrl(struct uart_port *port)
+{
+ return TIOCM_CAR | TIOCM_RNG | TIOCM_DSR | TIOCM_CTS;
+}
+
+/**
+ * snp_stop_rx - Stop the receiver - we ignore this
+ * @port: Port to operate on - we ignore
+ *
+ */
+static void snp_stop_rx(struct uart_port *port)
+{
+}
+
+/**
+ * snp_start_tx - Start transmitter
+ * @port: Port to operate on
+ * @tty_stop: Set to 1 if called via uart_start
+ *
+ */
+static void snp_start_tx(struct uart_port *port, unsigned int tty_stop)
+{
+ if (sal_console_port.sc_ops->sal_wakeup_transmit)
+ sal_console_port.sc_ops->sal_wakeup_transmit(&sal_console_port,
+ TRANSMIT_BUFFERED);
+
+}
+
+/**
+ * snp_break_ctl - handle breaks - ignored by us
+ * @port: Port to operate on
+ * @break_state: Break state
+ *
+ */
+static void snp_break_ctl(struct uart_port *port, int break_state)
+{
+}
+
+/**
+ * snp_startup - Start up the serial port - always return 0 (We're always on)
+ * @port: Port to operate on
+ *
+ */
+static int snp_startup(struct uart_port *port)
+{
+ return 0;
+}
+
+/**
+ * snp_set_termios - set termios stuff - we ignore these
+ * @port: port to operate on
+ * @termios: New settings
+ * @old: Old settings
+ *
+ */
+static void
+snp_set_termios(struct uart_port *port, struct termios *termios,
+ struct termios *old)
+{
+}
+
+/**
+ * snp_request_port - allocate resources for port - ignored by us
+ * @port: port to operate on
+ *
+ */
+static int snp_request_port(struct uart_port *port)
+{
+ return 0;
+}
+
+/**
+ * snp_config_port - allocate resources, set up - we ignore, we're always on
+ * @port: Port to operate on
+ * @flags: flags used for port setup
+ *
+ */
+static void snp_config_port(struct uart_port *port, int flags)
+{
+}
+
+/* Associate the uart functions above - given to serial core */
+
+static struct uart_ops sn_console_ops = {
+ .tx_empty = snp_tx_empty,
+ .set_mctrl = snp_set_mctrl,
+ .get_mctrl = snp_get_mctrl,
+ .stop_tx = snp_stop_tx,
+ .start_tx = snp_start_tx,
+ .stop_rx = snp_stop_rx,
+ .enable_ms = snp_enable_ms,
+ .break_ctl = snp_break_ctl,
+ .startup = snp_startup,
+ .shutdown = snp_shutdown,
+ .set_termios = snp_set_termios,
+ .pm = NULL,
+ .type = snp_type,
+ .release_port = snp_release_port,
+ .request_port = snp_request_port,
+ .config_port = snp_config_port,
+ .verify_port = NULL,
+};
+
+/* End of uart struct functions and defines */
+
+#ifdef DEBUG
+
+/**
+ * sn_debug_printf - close to hardware debugging printf
+ * @fmt: printf format
+ *
+ * This is as "close to the metal" as we can get, used when the driver
+ * itself may be broken.
+ *
+ */
+static int sn_debug_printf(const char *fmt, ...)
+{
+ static char printk_buf[1024];
+ int printed_len;
+ va_list args;
+
+ va_start(args, fmt);
+ printed_len = vsnprintf(printk_buf, sizeof(printk_buf), fmt, args);
+
+ if (!sal_console_port.sc_ops) {
+ sal_console_port.sc_ops = &poll_ops;
+ early_sn_setup();
+ }
+ sal_console_port.sc_ops->sal_puts_raw(printk_buf, printed_len);
+
+ va_end(args);
+ return printed_len;
+}
+#endif /* DEBUG */
+
+/*
+ * Interrupt handling routines.
+ */
+
+/**
+ * sn_receive_chars - Grab characters, pass them to tty layer
+ * @port: Port to operate on
+ * @regs: Saved registers (needed by uart_handle_sysrq_char)
+ * @flags: irq flags
+ *
+ * Note: If we're not registered with the serial core infrastructure yet,
+ * we don't try to send characters to it...
+ *
+ */
+static void
+sn_receive_chars(struct sn_cons_port *port, struct pt_regs *regs,
+ unsigned long flags)
+{
+ int ch;
+ struct tty_struct *tty;
+
+ if (!port) {
+		printk(KERN_ERR "sn_receive_chars - port NULL so can't receive\n");
+ return;
+ }
+
+ if (!port->sc_ops) {
+		printk(KERN_ERR "sn_receive_chars - port->sc_ops NULL so can't receive\n");
+ return;
+ }
+
+ if (port->sc_port.info) {
+		/* The serial core is initialized; use it */
+ tty = port->sc_port.info->tty;
+ }
+ else {
+ /* Not registered yet - can't pass to tty layer. */
+ tty = NULL;
+ }
+
+ while (port->sc_ops->sal_input_pending()) {
+ ch = port->sc_ops->sal_getc();
+ if (ch < 0) {
+			printk(KERN_ERR "sn_console: An error occurred while "
+ "obtaining data from the console (0x%0x)\n", ch);
+ break;
+ }
+#ifdef CONFIG_MAGIC_SYSRQ
+ if (sysrq_requested) {
+ unsigned long sysrq_timeout = sysrq_requested + HZ*5;
+
+ sysrq_requested = 0;
+ if (ch && time_before(jiffies, sysrq_timeout)) {
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ handle_sysrq(ch, regs, NULL);
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ /* ignore actual sysrq command char */
+ continue;
+ }
+ }
+ if (ch == *sysrq_serial_ptr) {
+ if (!(*++sysrq_serial_ptr)) {
+ sysrq_requested = jiffies;
+ sysrq_serial_ptr = sysrq_serial_str;
+ }
+ /*
+ * ignore the whole sysrq string except for the
+ * leading escape
+ */
+ if (ch != '\e')
+ continue;
+ }
+ else
+ sysrq_serial_ptr = sysrq_serial_str;
+#endif /* CONFIG_MAGIC_SYSRQ */
+
+ /* record the character to pass up to the tty layer */
+ if (tty) {
+ *tty->flip.char_buf_ptr = ch;
+ *tty->flip.flag_buf_ptr = TTY_NORMAL;
+ tty->flip.char_buf_ptr++;
+ tty->flip.count++;
+ if (tty->flip.count == TTY_FLIPBUF_SIZE)
+ break;
+ }
+ port->sc_port.icount.rx++;
+ }
+
+ if (tty)
+ tty_flip_buffer_push(tty);
+}
+
+/**
+ * sn_transmit_chars - grab characters from serial core, send off
+ * @port: Port to operate on
+ * @raw: Transmit raw or buffered
+ *
+ * Note: If we're early, before we're registered with serial core, the
+ * writes go through sn_sal_console_write because that's how
+ * register_console has been set up. We may still get asynchronous
+ * polls calling this function via sn_sal_switch_to_asynch, but we can
+ * ignore them until we register with the serial core.
+ *
+ */
+static void sn_transmit_chars(struct sn_cons_port *port, int raw)
+{
+ int xmit_count, tail, head, loops, ii;
+ int result;
+ char *start;
+ struct circ_buf *xmit;
+
+ if (!port)
+ return;
+
+ BUG_ON(!port->sc_is_asynch);
+
+ if (port->sc_port.info) {
+		/* We're initialized, using the serial core infrastructure */
+ xmit = &port->sc_port.info->xmit;
+ } else {
+ /* Probably sn_sal_switch_to_asynch has been run but serial core isn't
+		 * initialized yet. Just return. Writes are going through
+ * sn_sal_console_write (due to register_console) at this time.
+ */
+ return;
+ }
+
+ if (uart_circ_empty(xmit) || uart_tx_stopped(&port->sc_port)) {
+ /* Nothing to do. */
+ return;
+ }
+
+ head = xmit->head;
+ tail = xmit->tail;
+ start = &xmit->buf[tail];
+
+ /* twice around gets the tail to the end of the buffer and
+ * then to the head, if needed */
+ loops = (head < tail) ? 2 : 1;
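+	/* Worked example (illustrative): with UART_XMIT_SIZE = 4096,
+	 * tail = 4000 and head = 10, pass one sends bytes 4000..4095
+	 * (96 bytes), the tail wraps to 0, and pass two sends bytes 0..9. */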
+
+ for (ii = 0; ii < loops; ii++) {
+ xmit_count = (head < tail) ?
+ (UART_XMIT_SIZE - tail) : (head - tail);
+
+ if (xmit_count > 0) {
+ if (raw == TRANSMIT_RAW)
+ result =
+ port->sc_ops->sal_puts_raw(start,
+ xmit_count);
+ else
+ result =
+ port->sc_ops->sal_puts(start, xmit_count);
+#ifdef DEBUG
+ if (!result)
+ DPRINTF("`");
+#endif
+ if (result > 0) {
+ xmit_count -= result;
+ port->sc_port.icount.tx += result;
+ tail += result;
+ tail &= UART_XMIT_SIZE - 1;
+ xmit->tail = tail;
+ start = &xmit->buf[tail];
+ }
+ }
+ }
+
+ if (uart_circ_chars_pending(xmit) < WAKEUP_CHARS)
+ uart_write_wakeup(&port->sc_port);
+
+ if (uart_circ_empty(xmit))
+ snp_stop_tx(&port->sc_port, 0); /* no-op for us */
+}
+
+/**
+ * sn_sal_interrupt - Handle console interrupts
+ * @irq: irq #, useful for debug statements
+ * @dev_id: our pointer to our port (sn_cons_port which contains the uart port)
+ * @regs: Saved registers, used by sn_receive_chars for uart_handle_sysrq_char
+ *
+ */
+static irqreturn_t sn_sal_interrupt(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct sn_cons_port *port = (struct sn_cons_port *)dev_id;
+ unsigned long flags;
+ int status = ia64_sn_console_intr_status();
+
+ if (!port)
+ return IRQ_NONE;
+
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ if (status & SAL_CONSOLE_INTR_RECV) {
+ sn_receive_chars(port, regs, flags);
+ }
+ if (status & SAL_CONSOLE_INTR_XMIT) {
+ sn_transmit_chars(port, TRANSMIT_BUFFERED);
+ }
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ return IRQ_HANDLED;
+}
+
+/**
+ * sn_sal_connect_interrupt - Request interrupt, handled by sn_sal_interrupt
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * returns the console irq if interrupt is successfully registered, else 0
+ *
+ */
+static int sn_sal_connect_interrupt(struct sn_cons_port *port)
+{
+ if (request_irq(SGI_UART_VECTOR, sn_sal_interrupt,
+ SA_INTERRUPT | SA_SHIRQ,
+ "SAL console driver", port) >= 0) {
+ return SGI_UART_VECTOR;
+ }
+
+ printk(KERN_INFO "sn_console: console proceeding in polled mode\n");
+ return 0;
+}
+
+/**
+ * sn_sal_timer_poll - this function handles polled console mode
+ * @data: A pointer to our sn_cons_port (which contains the uart port)
+ *
+ * data is the pointer that init_timer will store for us. This function is
+ * associated with init_timer to see if there is any console traffic.
+ * Obviously not used in interrupt mode
+ *
+ */
+static void sn_sal_timer_poll(unsigned long data)
+{
+ struct sn_cons_port *port = (struct sn_cons_port *)data;
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ if (!port->sc_port.irq) {
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+ sn_receive_chars(port, NULL, flags);
+ sn_transmit_chars(port, TRANSMIT_RAW);
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+ mod_timer(&port->sc_timer,
+ jiffies + port->sc_interrupt_timeout);
+ }
+}
+
+/*
+ * Boot-time initialization code
+ */
+
+/**
+ * sn_sal_switch_to_asynch - Switch to async mode (as opposed to synch)
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * So this is used by sn_sal_serial_console_init (early on, before we're
+ * registered with serial core). It's also used by sn_sal_module_init
+ * right after we've registered with serial core. The latter only happens
+ * if we didn't already come through here via sn_sal_serial_console_init.
+ *
+ */
+static void __init sn_sal_switch_to_asynch(struct sn_cons_port *port)
+{
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ DPRINTF("sn_console: about to switch to asynchronous console\n");
+
+ /* without early_printk, we may be invoked late enough to race
+ * with other cpus doing console IO at this point, however
+ * console interrupts will never be enabled */
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+
+ /* early_printk invocation may have done this for us */
+ if (!port->sc_ops)
+ port->sc_ops = &poll_ops;
+
+ /* we can't turn on the console interrupt (as request_irq
+ * calls kmalloc, which isn't set up yet), so we rely on a
+ * timer to poll for input and push data from the console
+ * buffer.
+ */
+ init_timer(&port->sc_timer);
+ port->sc_timer.function = sn_sal_timer_poll;
+ port->sc_timer.data = (unsigned long)port;
+
+ if (IS_RUNNING_ON_SIMULATOR())
+ port->sc_interrupt_timeout = 6;
+ else {
+		/* 960 cps / 16-char FIFO = 60 Hz, i.e.
+		 * HZ / (SN_SAL_UART_FIFO_SPEED_CPS / SN_SAL_UART_FIFO_DEPTH) */
+ port->sc_interrupt_timeout =
+ HZ * SN_SAL_UART_FIFO_DEPTH / SN_SAL_UART_FIFO_SPEED_CPS;
+ }
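+	/* Illustrative: with HZ = 1000 the non-simulator case works out to
+	 * 1000 * 16 / 960 = 16 jiffies, so the poll timer fires roughly
+	 * 60 times per second. */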
+ mod_timer(&port->sc_timer, jiffies + port->sc_interrupt_timeout);
+
+ port->sc_is_asynch = 1;
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+}
+
+/**
+ * sn_sal_switch_to_interrupts - Switch to interrupt driven mode
+ * @port: Our sn_cons_port (which contains the uart port)
+ *
+ * In sn_sal_module_init, after we're registered with serial core and
+ * the port is added, this function is called to switch us to interrupt
+ * mode. We were previously in asynch/polling mode (using init_timer).
+ *
+ * We attempt to switch to interrupt mode here by calling
+ * sn_sal_connect_interrupt. If that works out, we enable receive interrupts.
+ */
+static void __init sn_sal_switch_to_interrupts(struct sn_cons_port *port)
+{
+ int irq;
+ unsigned long flags;
+
+ if (!port)
+ return;
+
+ DPRINTF("sn_console: switching to interrupt driven console\n");
+
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+
+ irq = sn_sal_connect_interrupt(port);
+
+ if (irq) {
+ port->sc_port.irq = irq;
+ port->sc_ops = &intr_ops;
+
+ /* turn on receive interrupts */
+ ia64_sn_console_intr_enable(SAL_CONSOLE_INTR_RECV);
+ }
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+}
+
+/*
+ * Kernel console definitions
+ */
+
+static void sn_sal_console_write(struct console *, const char *, unsigned);
+static int __init sn_sal_console_setup(struct console *, char *);
+extern struct uart_driver sal_console_uart;
+extern struct tty_driver *uart_console_device(struct console *, int *);
+
+static struct console sal_console = {
+ .name = DEVICE_NAME,
+ .write = sn_sal_console_write,
+ .device = uart_console_device,
+ .setup = sn_sal_console_setup,
+ .index = -1, /* unspecified */
+ .data = &sal_console_uart,
+};
+
+#define SAL_CONSOLE &sal_console
+
+static struct uart_driver sal_console_uart = {
+ .owner = THIS_MODULE,
+ .driver_name = "sn_console",
+ .dev_name = DEVICE_NAME,
+ .major = 0, /* major/minor set at registration time per USE_DYNAMIC_MINOR */
+ .minor = 0,
+ .nr = 1, /* one port */
+ .cons = SAL_CONSOLE,
+};
+
+/**
+ * sn_sal_module_init - When the kernel loads us, get us rolling w/ serial core
+ *
+ * Before this is called, we've been printing kernel messages in a special
+ * early mode not making use of the serial core infrastructure. When our
+ * driver is loaded for real, we register the driver and port with serial
+ * core and try to enable interrupt driven mode.
+ *
+ */
+static int __init sn_sal_module_init(void)
+{
+ int retval;
+
+ if (!ia64_platform_is("sn2"))
+ return -ENODEV;
+
+ printk(KERN_INFO "sn_console: Console driver init\n");
+
+ if (USE_DYNAMIC_MINOR == 1) {
+ misc.minor = MISC_DYNAMIC_MINOR;
+ misc.name = DEVICE_NAME_DYNAMIC;
+ retval = misc_register(&misc);
+ if (retval != 0) {
+ printk
+ ("Failed to register console device using misc_register.\n");
+ return -ENODEV;
+ }
+ sal_console_uart.major = MISC_MAJOR;
+ sal_console_uart.minor = misc.minor;
+ } else {
+ sal_console_uart.major = DEVICE_MAJOR;
+ sal_console_uart.minor = DEVICE_MINOR;
+ }
+
+ /* We register the driver and the port before switching to interrupts
+ * or async above so the proper uart structures are populated */
+
+ if (uart_register_driver(&sal_console_uart) < 0) {
+ printk
+ ("ERROR sn_sal_module_init failed uart_register_driver, line %d\n",
+ __LINE__);
+ return -ENODEV;
+ }
+
+ sal_console_port.sc_port.lock = SPIN_LOCK_UNLOCKED;
+
+ /* Setup the port struct with the minimum needed */
+ sal_console_port.sc_port.membase = (char *)1; /* just needs to be non-zero */
+ sal_console_port.sc_port.type = PORT_16550A;
+ sal_console_port.sc_port.fifosize = SN_SAL_MAX_CHARS;
+ sal_console_port.sc_port.ops = &sn_console_ops;
+ sal_console_port.sc_port.line = 0;
+
+ if (uart_add_one_port(&sal_console_uart, &sal_console_port.sc_port) < 0) {
+ /* error - not sure what I'd do - so I'll do nothing */
+ printk(KERN_ERR "%s: unable to add port\n", __FUNCTION__);
+ }
+
+ /* when this driver is compiled in, the console initialization
+ * will have already switched us into asynchronous operation
+ * before we get here through the module initcalls */
+ if (!sal_console_port.sc_is_asynch) {
+ sn_sal_switch_to_asynch(&sal_console_port);
+ }
+
+ /* at this point (module_init) we can try to turn on interrupts */
+ if (!IS_RUNNING_ON_SIMULATOR()) {
+ sn_sal_switch_to_interrupts(&sal_console_port);
+ }
+ return 0;
+}
+
+/**
+ * sn_sal_module_exit - When we're unloaded, remove the driver/port
+ *
+ */
+static void __exit sn_sal_module_exit(void)
+{
+ del_timer_sync(&sal_console_port.sc_timer);
+ uart_remove_one_port(&sal_console_uart, &sal_console_port.sc_port);
+ uart_unregister_driver(&sal_console_uart);
+	if (USE_DYNAMIC_MINOR == 1)	/* only registered in that case */
+		misc_deregister(&misc);
+}
+
+module_init(sn_sal_module_init);
+module_exit(sn_sal_module_exit);
+
+/**
+ * puts_raw_fixed - sn_sal_console_write helper for adding \r's as required
+ * @puts_raw: puts function to do the writing
+ * @s: input string
+ * @count: length
+ *
+ * We need a \r ahead of every \n for direct writes through
+ * ia64_sn_console_putb (what sal_puts_raw below actually does).
+ *
+ */
+
+static void puts_raw_fixed(int (*puts_raw) (const char *s, int len),
+ const char *s, int count)
+{
+ const char *s1;
+
+ /* Output '\r' before each '\n' */
+ while ((s1 = memchr(s, '\n', count)) != NULL) {
+ puts_raw(s, s1 - s);
+ puts_raw("\r\n", 2);
+ count -= s1 + 1 - s;
+ s = s1 + 1;
+ }
+ puts_raw(s, count);
+}
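+
+/* Illustrative (not in the original source): puts_raw_fixed(puts, "ab\ncd", 5)
+ * results in the calls puts("ab", 2), puts("\r\n", 2) and puts("cd", 2). */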
+
+/**
+ * sn_sal_console_write - Print statements before serial core available
+ * @console: Console to operate on - we ignore since we have just one
+ * @s: String to send
+ * @count: length
+ *
+ * This is referenced in the console struct. It is used for early
+ * console printing before we register with serial core and for things
+ * such as kdb. The console_lock must be held when we get here.
+ *
+ * This function has some code for trying to print output even if the lock
+ * is held. We try to cover the case where a lock holder could have died.
+ * We don't use this special case code if we're not registered with serial
+ * core yet. After we're registered with serial core, the only time this
+ * function would be used is for high level kernel output like magic sys req,
+ * kdb, and printk's.
+ */
+static void
+sn_sal_console_write(struct console *co, const char *s, unsigned count)
+{
+ unsigned long flags = 0;
+ struct sn_cons_port *port = &sal_console_port;
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ static int stole_lock = 0;
+#endif
+
+ BUG_ON(!port->sc_is_asynch);
+
+ /* We can't look at the xmit buffer if we're not registered with serial core
+ * yet. So only do the fancy recovery after registering
+ */
+ if (port->sc_port.info) {
+
+ /* somebody really wants this output, might be an
+ * oops, kdb, panic, etc. make sure they get it. */
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ if (spin_is_locked(&port->sc_port.lock)) {
+ int lhead = port->sc_port.info->xmit.head;
+ int ltail = port->sc_port.info->xmit.tail;
+ int counter, got_lock = 0;
+
+			/*
+			 * We attempt to determine if someone has died with the
+			 * lock. We wait ~20 secs after the head and tail ptrs
+			 * stop moving and assume the lock holder is not functional
+			 * and plow ahead. If the lock is freed within the timeout
+			 * period we re-get the lock and go ahead normally. We also
+			 * remember if we have plowed ahead so that we don't have
+			 * to wait out the timeout period again - the assumption
+			 * is that we will time out again.
+			 */
+
+ for (counter = 0; counter < 150; mdelay(125), counter++) {
+ if (!spin_is_locked(&port->sc_port.lock)
+ || stole_lock) {
+ if (!stole_lock) {
+ spin_lock_irqsave(&port->
+ sc_port.lock,
+ flags);
+ got_lock = 1;
+ }
+ break;
+ } else {
+ /* still locked */
+ if ((lhead !=
+ port->sc_port.info->xmit.head)
+ || (ltail !=
+ port->sc_port.info->xmit.
+ tail)) {
+ lhead =
+ port->sc_port.info->xmit.
+ head;
+ ltail =
+ port->sc_port.info->xmit.
+ tail;
+ counter = 0;
+ }
+ }
+ }
+ /* flush anything in the serial core xmit buffer, raw */
+			sn_transmit_chars(port, TRANSMIT_RAW);
+ if (got_lock) {
+ spin_unlock_irqrestore(&port->sc_port.lock,
+ flags);
+ stole_lock = 0;
+ } else {
+ /* fell thru */
+ stole_lock = 1;
+ }
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+ } else {
+ stole_lock = 0;
+#endif
+ spin_lock_irqsave(&port->sc_port.lock, flags);
+			sn_transmit_chars(port, TRANSMIT_RAW);
+ spin_unlock_irqrestore(&port->sc_port.lock, flags);
+
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
+ }
+#endif
+ }
+ else {
+ /* Not yet registered with serial core - simple case */
+ puts_raw_fixed(port->sc_ops->sal_puts_raw, s, count);
+ }
+}
+
+
+/**
+ * sn_sal_console_setup - Set up console for early printing
+ * @co: Console to work with
+ * @options: Options to set
+ *
+ * Altix console doesn't do anything with baud rates, etc, anyway.
+ *
+ * This isn't required since not providing the setup function in the
+ * console struct is ok. However, other patches like KDB plop something
+ * here so providing it is easier.
+ *
+ */
+static int __init sn_sal_console_setup(struct console *co, char *options)
+{
+ return 0;
+}
+
+/**
+ * sn_sal_console_write_early - simple early output routine
+ * @co: console struct
+ * @s: string to print
+ * @count: count
+ *
+ * Simple function to provide early output, before even
+ * sn_sal_serial_console_init is called. Referenced in the
+ * console struct registered in sn_serial_console_early_setup.
+ *
+ */
+static void __init
+sn_sal_console_write_early(struct console *co, const char *s, unsigned count)
+{
+ puts_raw_fixed(sal_console_port.sc_ops->sal_puts_raw, s, count);
+}
+
+/* Used for very early console printing - again, before
+ * sn_sal_serial_console_init is run */
+static struct console sal_console_early __initdata = {
+ .name = "sn_sal",
+ .write = sn_sal_console_write_early,
+ .flags = CON_PRINTBUFFER,
+ .index = -1,
+};
+
+/**
+ * sn_serial_console_early_setup - Sets up early console output support
+ *
+ * Register a console early on... This is for output before even
+ * sn_sal_serial_console_init is called. This function is called from
+ * setup.c. This allows us to do really early polled writes. When
+ * sn_sal_serial_console_init is called, this console is unregistered
+ * and a new one registered.
+ */
+int __init sn_serial_console_early_setup(void)
+{
+ if (!ia64_platform_is("sn2"))
+ return -1;
+
+ sal_console_port.sc_ops = &poll_ops;
+ early_sn_setup(); /* Find SAL entry points */
+ register_console(&sal_console_early);
+
+ return 0;
+}
+
+/**
+ * sn_sal_serial_console_init - Early console output - set up for register
+ *
+ * This function is called when regular console init happens. Because we
+ * support even earlier console output with sn_serial_console_early_setup
+ * (called from setup.c directly), this function unregisters the really
+ * early console.
+ *
+ * Note: Even if setup.c doesn't register sal_console_early, unregistering
+ * it here doesn't hurt anything.
+ *
+ */
+static int __init sn_sal_serial_console_init(void)
+{
+ if (ia64_platform_is("sn2")) {
+ sn_sal_switch_to_asynch(&sal_console_port);
+ DPRINTF("sn_sal_serial_console_init : register console\n");
+ register_console(&sal_console);
+ unregister_console(&sal_console_early);
+ }
+ return 0;
+}
+
+console_initcall(sn_sal_serial_console_init);
--- /dev/null
+#
+# USB ATM driver configuration
+#
+comment "USB ATM/DSL drivers"
+ depends on USB
+
+config USB_ATM
+ tristate "Generic USB ATM/DSL core I/O support"
+ depends on USB && ATM
+ select CRC32
+ default n
+ help
+ This provides a library which is used for packet I/O by USB DSL
+ modems, such as the SpeedTouch driver below.
+
+ To compile this driver as a module, choose M here: the
+ module will be called usb_atm.
+
+config USB_SPEEDTOUCH
+ tristate "Alcatel Speedtouch USB support"
+ depends on USB && ATM
+ select USB_ATM
+ help
+ Say Y here if you have an Alcatel SpeedTouch USB or SpeedTouch 330
+ modem. In order to use your modem you will need to install the
+ two parts of the firmware, extracted by the user space tools; see
+ <http://www.linux-usb.org/SpeedTouch/> for details.
+
+ To compile this driver as a module, choose M here: the
+ module will be called speedtch.
--- /dev/null
+#
+# Makefile for the rest of the USB drivers
+# (the ones that don't fit into any other categories)
+#
+
+obj-$(CONFIG_USB_ATM) += usb_atm.o
+obj-$(CONFIG_USB_SPEEDTOUCH) += speedtch.o
--- /dev/null
+/******************************************************************************
+ * speedtch.c - Alcatel SpeedTouch USB xDSL modem driver
+ *
+ * Copyright (C) 2001, Alcatel
+ * Copyright (C) 2003, Duncan Sands
+ * Copyright (C) 2004, David Woodhouse
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ ******************************************************************************/
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/gfp.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/proc_fs.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <asm/processor.h>
+#include <asm/uaccess.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/atm.h>
+#include <linux/atmdev.h>
+#include <linux/crc32.h>
+#include <linux/init.h>
+#include <linux/firmware.h>
+
+#include "usb_atm.h"
+
+/*
+#define DEBUG
+#define VERBOSE_DEBUG
+*/
+
+#if !defined (DEBUG) && defined (CONFIG_USB_DEBUG)
+# define DEBUG
+#endif
+
+#include <linux/usb.h>
+
+#if defined(CONFIG_FW_LOADER) || defined(CONFIG_FW_LOADER_MODULE)
+# define USE_FW_LOADER
+#endif
+
+#ifdef VERBOSE_DEBUG
+static int udsl_print_packet(const unsigned char *data, int len);
+#define PACKETDEBUG(arg...) udsl_print_packet (arg)
+#define vdbg(arg...) dbg (arg)
+#else
+#define PACKETDEBUG(arg...)
+#define vdbg(arg...)
+#endif
+
+#define DRIVER_AUTHOR "Johan Verrept, Duncan Sands <duncan.sands@free.fr>"
+#define DRIVER_VERSION "1.8"
+#define DRIVER_DESC "Alcatel SpeedTouch USB driver version " DRIVER_VERSION
+
+static const char speedtch_driver_name[] = "speedtch";
+
+#define SPEEDTOUCH_VENDORID 0x06b9
+#define SPEEDTOUCH_PRODUCTID 0x4061
+
+/* Timeout in jiffies */
+#define CTRL_TIMEOUT (2*HZ)
+#define DATA_TIMEOUT (2*HZ)
+
+#define OFFSET_7 0 /* size 1 */
+#define OFFSET_b 1 /* size 8 */
+#define OFFSET_d 9 /* size 4 */
+#define OFFSET_e 13 /* size 1 */
+#define OFFSET_f 14 /* size 1 */
+#define TOTAL 15
+
+#define SIZE_7 1
+#define SIZE_b 8
+#define SIZE_d 4
+#define SIZE_e 1
+#define SIZE_f 1
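+
+/* Layout (illustrative) of the TOTAL-byte status buffer filled in by
+ * speedtch_get_status() below: one byte for message 7, eight for message b,
+ * four for message d and one each for messages e and f (1+8+4+1+1 = 15). */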
+
+static int dl_512_first = 0;
+static int sw_buffering = 0;
+
+module_param(dl_512_first, bool, 0444);
+MODULE_PARM_DESC(dl_512_first, "Read 512 bytes before sending firmware");
+
+module_param(sw_buffering, uint, 0444);
+MODULE_PARM_DESC(sw_buffering, "Enable software buffering");
+
+#define UDSL_IOCTL_LINE_UP 1
+#define UDSL_IOCTL_LINE_DOWN 2
+
+#define SPEEDTCH_ENDPOINT_INT 0x81
+#define SPEEDTCH_ENDPOINT_DATA 0x07
+#define SPEEDTCH_ENDPOINT_FIRMWARE 0x05
+
+#define hex2int(c) ( (c >= '0') && (c <= '9') ? (c - '0') : ((c & 0xf) + 9) )
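+/* Illustrative: hex2int('7') = 7, hex2int('a') = (0x61 & 0xf) + 9 = 10,
+ * hex2int('F') = (0x46 & 0xf) + 9 = 15. Assumes a valid hex digit. */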
+
+static struct usb_device_id speedtch_usb_ids[] = {
+ {USB_DEVICE(SPEEDTOUCH_VENDORID, SPEEDTOUCH_PRODUCTID)},
+ {}
+};
+
+MODULE_DEVICE_TABLE(usb, speedtch_usb_ids);
+
+struct speedtch_instance_data {
+ struct udsl_instance_data u;
+
+ /* Status */
+ struct urb *int_urb;
+ unsigned char int_data[16];
+ struct work_struct poll_work;
+ struct timer_list poll_timer;
+};
+/* USB */
+
+static int speedtch_usb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id);
+static void speedtch_usb_disconnect(struct usb_interface *intf);
+static int speedtch_usb_ioctl(struct usb_interface *intf, unsigned int code,
+ void *user_data);
+static void speedtch_handle_int(struct urb *urb, struct pt_regs *regs);
+static void speedtch_poll_status(struct speedtch_instance_data *instance);
+
+static struct usb_driver speedtch_usb_driver = {
+ .owner = THIS_MODULE,
+ .name = speedtch_driver_name,
+ .probe = speedtch_usb_probe,
+ .disconnect = speedtch_usb_disconnect,
+ .ioctl = speedtch_usb_ioctl,
+ .id_table = speedtch_usb_ids,
+};
+
+/***************
+** firmware **
+***************/
+
+static void speedtch_got_firmware(struct speedtch_instance_data *instance,
+ int got_it)
+{
+ int err;
+ struct usb_interface *intf;
+
+ down(&instance->u.serialize); /* vs self, speedtch_firmware_start */
+ if (instance->u.status == UDSL_LOADED_FIRMWARE)
+ goto out;
+ if (!got_it) {
+ instance->u.status = UDSL_NO_FIRMWARE;
+ goto out;
+ }
+ if ((err = usb_set_interface(instance->u.usb_dev, 1, 1)) < 0) {
+ dbg("speedtch_got_firmware: usb_set_interface returned %d!", err);
+ instance->u.status = UDSL_NO_FIRMWARE;
+ goto out;
+ }
+
+ /* Set up interrupt endpoint */
+ intf = usb_ifnum_to_if(instance->u.usb_dev, 0);
+ if (intf && !usb_driver_claim_interface(&speedtch_usb_driver, intf, NULL)) {
+
+ instance->int_urb = usb_alloc_urb(0, GFP_KERNEL);
+ if (instance->int_urb) {
+
+ usb_fill_int_urb(instance->int_urb, instance->u.usb_dev,
+ usb_rcvintpipe(instance->u.usb_dev, SPEEDTCH_ENDPOINT_INT),
+ instance->int_data,
+ sizeof(instance->int_data),
+ speedtch_handle_int, instance, 50);
+ err = usb_submit_urb(instance->int_urb, GFP_KERNEL);
+ if (err) {
+ /* Doesn't matter; we'll poll anyway */
+ dbg("speedtch_got_firmware: Submission of interrupt URB failed %d", err);
+ usb_free_urb(instance->int_urb);
+ instance->int_urb = NULL;
+ usb_driver_release_interface(&speedtch_usb_driver, intf);
+ }
+ }
+ }
+ /* Start status polling */
+ mod_timer(&instance->poll_timer, jiffies + (1 * HZ));
+
+ instance->u.status = UDSL_LOADED_FIRMWARE;
+ tasklet_schedule(&instance->u.receive_tasklet);
+ out:
+ up(&instance->u.serialize);
+ wake_up_interruptible(&instance->u.firmware_waiters);
+}
+
+static int speedtch_set_swbuff(struct speedtch_instance_data *instance,
+ int state)
+{
+ struct usb_device *dev = instance->u.usb_dev;
+ int ret;
+
+ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 0x32, 0x40, state ? 0x01 : 0x00,
+ 0x00, NULL, 0, 100);
+ if (ret < 0) {
+		printk(KERN_WARNING "%sabling SW buffering: usb_control_msg returned %d\n",
+ state ? "En" : "Dis", ret);
+ return ret;
+ }
+
+ dbg("speedtch_set_swbuff: %sbled SW buffering", state ? "En" : "Dis");
+ return 0;
+}
+
+static void speedtch_test_sequence(struct speedtch_instance_data *instance)
+{
+ struct usb_device *dev = instance->u.usb_dev;
+ unsigned char buf[10];
+ int ret;
+
+ /* URB 147 */
+ buf[0] = 0x1c;
+ buf[1] = 0x50;
+ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 0x01, 0x40, 0x0b, 0x00, buf, 2, 100);
+ if (ret < 0)
+ printk(KERN_WARNING "%s failed on URB147: %d\n", __func__, ret);
+
+ /* URB 148 */
+ buf[0] = 0x32;
+ buf[1] = 0x00;
+ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 0x01, 0x40, 0x02, 0x00, buf, 2, 100);
+ if (ret < 0)
+ printk(KERN_WARNING "%s failed on URB148: %d\n", __func__, ret);
+
+ /* URB 149 */
+ buf[0] = 0x01;
+ buf[1] = 0x00;
+ buf[2] = 0x01;
+ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 0x01, 0x40, 0x03, 0x00, buf, 3, 100);
+ if (ret < 0)
+ printk(KERN_WARNING "%s failed on URB149: %d\n", __func__, ret);
+
+ /* URB 150 */
+ buf[0] = 0x01;
+ buf[1] = 0x00;
+ buf[2] = 0x01;
+ ret = usb_control_msg(dev, usb_sndctrlpipe(dev, 0),
+ 0x01, 0x40, 0x04, 0x00, buf, 3, 100);
+ if (ret < 0)
+ printk(KERN_WARNING "%s failed on URB150: %d\n", __func__, ret);
+}
+
+static int speedtch_start_synchro(struct speedtch_instance_data *instance)
+{
+ struct usb_device *dev = instance->u.usb_dev;
+ unsigned char buf[2];
+ int ret;
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x12, 0xc0, 0x04, 0x00,
+ buf, sizeof(buf), CTRL_TIMEOUT);
+ if (ret < 0) {
+ printk(KERN_WARNING "SpeedTouch: Failed to start ADSL synchronisation: %d\n", ret);
+ return ret;
+ }
+
+ dbg("speedtch_start_synchro: modem prodded. %d Bytes returned: %02x %02x", ret, buf[0], buf[1]);
+ return 0;
+}
+
+static void speedtch_handle_int(struct urb *urb, struct pt_regs *regs)
+{
+ struct speedtch_instance_data *instance = urb->context;
+ unsigned int count = urb->actual_length;
+ int ret;
+
+ /* The magic interrupt for "up state" */
+	static const unsigned char up_int[6] = { 0xa1, 0x00, 0x01, 0x00, 0x00, 0x00 };
+	/* The magic interrupt for "down state" */
+	static const unsigned char down_int[6] = { 0xa1, 0x00, 0x00, 0x00, 0x00, 0x00 };
+
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ /* this urb is terminated; clean up */
+ dbg("%s - urb shutting down with status: %d", __func__, urb->status);
+ return;
+ default:
+ dbg("%s - nonzero urb status received: %d", __func__, urb->status);
+ goto exit;
+ }
+
+ if (count < 6) {
+ dbg("%s - int packet too short", __func__);
+ goto exit;
+ }
+
+ if (!memcmp(up_int, instance->int_data, 6)) {
+ del_timer(&instance->poll_timer);
+ printk(KERN_NOTICE "DSL line goes up\n");
+ } else if (!memcmp(down_int, instance->int_data, 6)) {
+ printk(KERN_NOTICE "DSL line goes down\n");
+ } else {
+ int i;
+
+ printk(KERN_DEBUG "Unknown interrupt packet of %d bytes:", count);
+ for (i = 0; i < count; i++)
+ printk(" %02x", instance->int_data[i]);
+ printk("\n");
+ }
+ schedule_work(&instance->poll_work);
+
+ exit:
+ rmb();
+ if (!instance->int_urb)
+ return;
+
+ ret = usb_submit_urb(urb, GFP_ATOMIC);
+ if (ret)
+ err("%s - usb_submit_urb failed with result %d", __func__, ret);
+}
+
+static int speedtch_get_status(struct speedtch_instance_data *instance,
+ unsigned char *buf)
+{
+ struct usb_device *dev = instance->u.usb_dev;
+ int ret;
+
+ memset(buf, 0, TOTAL);
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x12, 0xc0, 0x07, 0x00, buf + OFFSET_7, SIZE_7,
+ CTRL_TIMEOUT);
+ if (ret < 0) {
+ dbg("MSG 7 failed");
+ return ret;
+ }
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x12, 0xc0, 0x0b, 0x00, buf + OFFSET_b, SIZE_b,
+ CTRL_TIMEOUT);
+ if (ret < 0) {
+ dbg("MSG B failed");
+ return ret;
+ }
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x12, 0xc0, 0x0d, 0x00, buf + OFFSET_d, SIZE_d,
+ CTRL_TIMEOUT);
+ if (ret < 0) {
+ dbg("MSG D failed");
+ return ret;
+ }
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x01, 0xc0, 0x0e, 0x00, buf + OFFSET_e, SIZE_e,
+ CTRL_TIMEOUT);
+ if (ret < 0) {
+ dbg("MSG E failed");
+ return ret;
+ }
+
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x01, 0xc0, 0x0f, 0x00, buf + OFFSET_f, SIZE_f,
+ CTRL_TIMEOUT);
+ if (ret < 0) {
+ dbg("MSG F failed");
+ return ret;
+ }
+
+ return 0;
+}
+
+static void speedtch_poll_status(struct speedtch_instance_data *instance)
+{
+ unsigned char buf[TOTAL];
+ int ret;
+
+ ret = speedtch_get_status(instance, buf);
+ if (ret) {
+ printk(KERN_WARNING
+ "SpeedTouch: Error %d fetching device status\n", ret);
+ return;
+ }
+
+ dbg("Line state %02x", buf[OFFSET_7]);
+
+ switch (buf[OFFSET_7]) {
+ case 0:
+ if (instance->u.atm_dev->signal != ATM_PHY_SIG_LOST) {
+ instance->u.atm_dev->signal = ATM_PHY_SIG_LOST;
+ printk(KERN_NOTICE "ADSL line is down\n");
+ }
+ break;
+
+ case 0x08:
+ if (instance->u.atm_dev->signal != ATM_PHY_SIG_UNKNOWN) {
+ instance->u.atm_dev->signal = ATM_PHY_SIG_UNKNOWN;
+ printk(KERN_NOTICE "ADSL line is blocked?\n");
+ }
+ break;
+
+ case 0x10:
+ if (instance->u.atm_dev->signal != ATM_PHY_SIG_LOST) {
+ instance->u.atm_dev->signal = ATM_PHY_SIG_LOST;
+ printk(KERN_NOTICE "ADSL line is synchronising\n");
+ }
+ break;
+
+ case 0x20:
+ if (instance->u.atm_dev->signal != ATM_PHY_SIG_FOUND) {
+ int down_speed = buf[OFFSET_b] | (buf[OFFSET_b + 1] << 8)
+ | (buf[OFFSET_b + 2] << 16) | (buf[OFFSET_b + 3] << 24);
+ int up_speed = buf[OFFSET_b + 4] | (buf[OFFSET_b + 5] << 8)
+ | (buf[OFFSET_b + 6] << 16) | (buf[OFFSET_b + 7] << 24);
+
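+ /* Some firmware revisions apparently report the rates in the
+ upper 16 bits; if both low halves are zero, shift them down */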
+ if (!(down_speed & 0x0000ffff) &&
+ !(up_speed & 0x0000ffff)) {
+ down_speed >>= 16;
+ up_speed >>= 16;
+ }
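+ /* ATM rates are counted in cells/s: 424 bits per 53-byte cell,
+ so kbit/s * 1000 / 424 gives the cell rate */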
+ instance->u.atm_dev->link_rate = down_speed * 1000 / 424;
+ instance->u.atm_dev->signal = ATM_PHY_SIG_FOUND;
+
+ printk(KERN_NOTICE
+ "ADSL line is up (%d Kib/s down | %d Kib/s up)\n",
+ down_speed, up_speed);
+ }
+ break;
+
+ default:
+ if (instance->u.atm_dev->signal != ATM_PHY_SIG_UNKNOWN) {
+ instance->u.atm_dev->signal = ATM_PHY_SIG_UNKNOWN;
+ printk(KERN_NOTICE "Unknown line state %02x\n", buf[OFFSET_7]);
+ }
+ break;
+ }
+}
+
+static void speedtch_timer_poll(unsigned long data)
+{
+ struct speedtch_instance_data *instance = (void *)data;
+
+ schedule_work(&instance->poll_work);
+ mod_timer(&instance->poll_timer, jiffies + (5 * HZ));
+}
+
+#ifdef USE_FW_LOADER
+static void speedtch_upload_firmware(struct speedtch_instance_data *instance,
+ const struct firmware *fw1,
+ const struct firmware *fw2)
+{
+ unsigned char *buffer;
+ struct usb_device *usb_dev = instance->u.usb_dev;
+ struct usb_interface *intf;
+ int actual_length, ret;
+ int offset;
+
+ dbg("speedtch_upload_firmware");
+
+ if (!(intf = usb_ifnum_to_if(usb_dev, 2))) {
+ dbg("speedtch_upload_firmware: interface not found!");
+ goto fail;
+ }
+
+ if (!(buffer = (unsigned char *)__get_free_page(GFP_KERNEL))) {
+ dbg("speedtch_upload_firmware: no memory for buffer!");
+ goto fail;
+ }
+
+ /* A user-space firmware loader may already have claimed interface #2 */
+ if ((ret =
+ usb_driver_claim_interface(&speedtch_usb_driver, intf, NULL)) < 0) {
+ dbg("speedtch_upload_firmware: interface in use (%d)!", ret);
+ goto fail_free;
+ }
+
+ /* URB 7 */
+ if (dl_512_first) { /* some modems need a read before writing the firmware */
+ ret = usb_bulk_msg(usb_dev, usb_rcvbulkpipe(usb_dev, SPEEDTCH_ENDPOINT_FIRMWARE),
+ buffer, 0x200, &actual_length, 2 * HZ);
+
+ if (ret < 0 && ret != -ETIMEDOUT)
+ dbg("speedtch_upload_firmware: read BLOCK0 from modem failed (%d)!", ret);
+ else
+ dbg("speedtch_upload_firmware: BLOCK0 downloaded (%d bytes)", ret);
+ }
+
+ /* URB 8 : both leds are static green */
+ for (offset = 0; offset < fw1->size; offset += PAGE_SIZE) {
+ int thislen = min_t(int, PAGE_SIZE, fw1->size - offset);
+ memcpy(buffer, fw1->data + offset, thislen);
+
+ ret = usb_bulk_msg(usb_dev, usb_sndbulkpipe(usb_dev, SPEEDTCH_ENDPOINT_FIRMWARE),
+ buffer, thislen, &actual_length, DATA_TIMEOUT);
+
+ if (ret < 0) {
+ dbg("speedtch_upload_firmware: write BLOCK1 to modem failed (%d)!", ret);
+ goto fail_release;
+ }
+ dbg("speedtch_upload_firmware: BLOCK1 uploaded (%d bytes)", fw1->size);
+ }
+
+ /* USB led blinking green, ADSL led off */
+
+ /* URB 11 */
+ ret = usb_bulk_msg(usb_dev, usb_rcvbulkpipe(usb_dev, SPEEDTCH_ENDPOINT_FIRMWARE),
+ buffer, 0x200, &actual_length, DATA_TIMEOUT);
+
+ if (ret < 0) {
+ dbg("speedtch_upload_firmware: read BLOCK2 from modem failed (%d)!", ret);
+ goto fail_release;
+ }
+ dbg("speedtch_upload_firmware: BLOCK2 downloaded (%d bytes)", actual_length);
+
+ /* URBs 12 to 139 - USB led blinking green, ADSL led off */
+ for (offset = 0; offset < fw2->size; offset += PAGE_SIZE) {
+ int thislen = min_t(int, PAGE_SIZE, fw2->size - offset);
+ memcpy(buffer, fw2->data + offset, thislen);
+
+ ret = usb_bulk_msg(usb_dev, usb_sndbulkpipe(usb_dev, SPEEDTCH_ENDPOINT_FIRMWARE),
+ buffer, thislen, &actual_length, DATA_TIMEOUT);
+
+ if (ret < 0) {
+ dbg("speedtch_upload_firmware: write BLOCK3 to modem failed (%d)!", ret);
+ goto fail_release;
+ }
+ }
+ dbg("speedtch_upload_firmware: BLOCK3 uploaded (%d bytes)", fw2->size);
+
+ /* USB led static green, ADSL led static red */
+
+ /* URB 142 */
+ ret = usb_bulk_msg(usb_dev, usb_rcvbulkpipe(usb_dev, SPEEDTCH_ENDPOINT_FIRMWARE),
+ buffer, 0x200, &actual_length, DATA_TIMEOUT);
+
+ if (ret < 0) {
+ dbg("speedtch_upload_firmware: read BLOCK4 from modem failed (%d)!", ret);
+ goto fail_release;
+ }
+
+ /* success */
+ dbg("speedtch_upload_firmware: BLOCK4 downloaded (%d bytes)", actual_length);
+
+ /* Delay to allow firmware to start up. We can do this here
+ because we're in our own kernel thread anyway. */
+ msleep(1000);
+
+ /* Enable software buffering, if requested */
+ if (sw_buffering)
+ speedtch_set_swbuff(instance, 1);
+
+ /* Magic spell; don't ask us what this does */
+ speedtch_test_sequence(instance);
+
+ /* Start modem synchronisation */
+ if (speedtch_start_synchro(instance))
+ dbg("speedtch_start_synchro: failed");
+
+ speedtch_got_firmware(instance, 1);
+
+ free_page((unsigned long)buffer);
+ return;
+
+ fail_release:
+ /* Only release interface #2 if uploading failed; we don't release it
+ if we succeeded. This prevents the userspace tools from trying to load
+ the firmware themselves */
+ usb_driver_release_interface(&speedtch_usb_driver, intf);
+ fail_free:
+ free_page((unsigned long)buffer);
+ fail:
+ speedtch_got_firmware(instance, 0);
+}
+
+static int speedtch_find_firmware(struct speedtch_instance_data
+ *instance, int phase,
+ const struct firmware **fw_p)
+{
+ char buf[24];
+ const u16 bcdDevice = instance->u.usb_dev->descriptor.bcdDevice;
+ const u8 major_revision = bcdDevice >> 8;
+ const u8 minor_revision = bcdDevice & 0xff;
+
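+ /* Try the most specific firmware name first and fall back:
+ speedtch-<phase>.bin.<major>.<minor>, then speedtch-<phase>.bin.<major>,
+ then plain speedtch-<phase>.bin */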
+ sprintf(buf, "speedtch-%d.bin.%x.%02x", phase, major_revision, minor_revision);
+ dbg("speedtch_find_firmware: looking for %s", buf);
+
+ if (request_firmware(fw_p, buf, &instance->u.usb_dev->dev)) {
+ sprintf(buf, "speedtch-%d.bin.%x", phase, major_revision);
+ dbg("speedtch_find_firmware: looking for %s", buf);
+
+ if (request_firmware(fw_p, buf, &instance->u.usb_dev->dev)) {
+ sprintf(buf, "speedtch-%d.bin", phase);
+ dbg("speedtch_find_firmware: looking for %s", buf);
+
+ if (request_firmware(fw_p, buf, &instance->u.usb_dev->dev)) {
+ dev_warn(&instance->u.usb_dev->dev, "no stage %d firmware found!\n", phase);
+ return -ENOENT;
+ }
+ }
+ }
+
+ dev_info(&instance->u.usb_dev->dev, "found stage %d firmware %s\n", phase, buf);
+
+ return 0;
+}
+
+static int speedtch_load_firmware(void *arg)
+{
+ const struct firmware *fw1, *fw2;
+ struct speedtch_instance_data *instance = arg;
+
+ BUG_ON(!instance);
+
+ daemonize("firmware/speedtch");
+
+ if (!speedtch_find_firmware(instance, 1, &fw1)) {
+ if (!speedtch_find_firmware(instance, 2, &fw2)) {
+ speedtch_upload_firmware(instance, fw1, fw2);
+ release_firmware(fw2);
+ }
+ release_firmware(fw1);
+ }
+
+ /* In case we failed, set state back to NO_FIRMWARE so that
+ another later attempt may work. Otherwise, we never actually
+ manage to recover if, for example, the firmware is on /usr and
+ we look for it too early. */
+ speedtch_got_firmware(instance, 0);
+
+ module_put(THIS_MODULE);
+ udsl_put_instance(&instance->u);
+ return 0;
+}
+#endif /* USE_FW_LOADER */
+
+static void speedtch_firmware_start(struct speedtch_instance_data *instance)
+{
+#ifdef USE_FW_LOADER
+ int ret;
+#endif
+
+ dbg("speedtch_firmware_start");
+
+ down(&instance->u.serialize); /* vs self, speedtch_got_firmware */
+
+ if (instance->u.status >= UDSL_LOADING_FIRMWARE) {
+ up(&instance->u.serialize);
+ return;
+ }
+
+ instance->u.status = UDSL_LOADING_FIRMWARE;
+ up(&instance->u.serialize);
+
+#ifdef USE_FW_LOADER
+ udsl_get_instance(&instance->u);
+ try_module_get(THIS_MODULE);
+
+ ret = kernel_thread(speedtch_load_firmware, instance,
+ CLONE_FS | CLONE_FILES);
+
+ if (ret >= 0)
+ return; /* OK */
+
+ dbg("speedtch_firmware_start: kernel_thread failed (%d)!", ret);
+
+ module_put(THIS_MODULE);
+ udsl_put_instance(&instance->u);
+ /* Just pretend it never happened... hope modem_run happens */
+#endif /* USE_FW_LOADER */
+
+ speedtch_got_firmware(instance, 0);
+}
+
+static int speedtch_firmware_wait(struct udsl_instance_data *instance)
+{
+ speedtch_firmware_start((void *)instance);
+
+ if (wait_event_interruptible(instance->firmware_waiters, instance->status != UDSL_LOADING_FIRMWARE) < 0)
+ return -ERESTARTSYS;
+
+ return (instance->status == UDSL_LOADED_FIRMWARE) ? 0 : -EAGAIN;
+}
+
+/**********
+** USB **
+**********/
+
+static int speedtch_usb_ioctl(struct usb_interface *intf, unsigned int code,
+ void *user_data)
+{
+ struct speedtch_instance_data *instance = usb_get_intfdata(intf);
+
+ dbg("speedtch_usb_ioctl entered");
+
+ if (!instance) {
+ dbg("speedtch_usb_ioctl: NULL instance!");
+ return -ENODEV;
+ }
+
+ switch (code) {
+ case UDSL_IOCTL_LINE_UP:
+ instance->u.atm_dev->signal = ATM_PHY_SIG_FOUND;
+ speedtch_got_firmware(instance, 1);
+ return (instance->u.status == UDSL_LOADED_FIRMWARE) ? 0 : -EIO;
+ case UDSL_IOCTL_LINE_DOWN:
+ instance->u.atm_dev->signal = ATM_PHY_SIG_LOST;
+ return 0;
+ default:
+ return -ENOTTY;
+ }
+}
+
+static int speedtch_usb_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+{
+ struct usb_device *dev = interface_to_usbdev(intf);
+ int ifnum = intf->altsetting->desc.bInterfaceNumber;
+ struct speedtch_instance_data *instance;
+ unsigned char mac_str[13];
+ int ret, i;
+ char buf7[SIZE_7];
+
+ dbg("speedtch_usb_probe: trying device with vendor=0x%x, product=0x%x, ifnum %d", dev->descriptor.idVendor, dev->descriptor.idProduct, ifnum);
+
+ if ((dev->descriptor.bDeviceClass != USB_CLASS_VENDOR_SPEC) ||
+ (dev->descriptor.idVendor != SPEEDTOUCH_VENDORID) ||
+ (dev->descriptor.idProduct != SPEEDTOUCH_PRODUCTID) || (ifnum != 1))
+ return -ENODEV;
+
+ dbg("speedtch_usb_probe: device accepted");
+
+ /* instance init */
+ instance = kmalloc(sizeof(*instance), GFP_KERNEL);
+ if (!instance) {
+ dbg("speedtch_usb_probe: no memory for instance data!");
+ return -ENOMEM;
+ }
+
+ memset(instance, 0, sizeof(struct speedtch_instance_data));
+
+ if ((ret = usb_set_interface(dev, 0, 0)) < 0)
+ goto fail;
+
+ if ((ret = usb_set_interface(dev, 2, 0)) < 0)
+ goto fail;
+
+ instance->u.data_endpoint = SPEEDTCH_ENDPOINT_DATA;
+ instance->u.firmware_wait = speedtch_firmware_wait;
+ instance->u.driver_name = speedtch_driver_name;
+
+ ret = udsl_instance_setup(dev, &instance->u);
+ if (ret)
+ goto fail;
+
+ init_timer(&instance->poll_timer);
+ instance->poll_timer.function = speedtch_timer_poll;
+ instance->poll_timer.data = (unsigned long)instance;
+
+ INIT_WORK(&instance->poll_work, (void *)speedtch_poll_status, instance);
+
+ /* set MAC address, it is stored in the serial number */
+ memset(instance->u.atm_dev->esi, 0, sizeof(instance->u.atm_dev->esi));
+ if (usb_string(dev, dev->descriptor.iSerialNumber, mac_str, sizeof(mac_str)) == 12) {
+ for (i = 0; i < 6; i++)
+ instance->u.atm_dev->esi[i] =
+ (hex2int(mac_str[i * 2]) * 16) + (hex2int(mac_str[i * 2 + 1]));
+ }
+
+ /* First check whether the modem already seems to be alive */
+ ret = usb_control_msg(dev, usb_rcvctrlpipe(dev, 0),
+ 0x12, 0xc0, 0x07, 0x00, buf7, SIZE_7, HZ / 2);
+
+ if (ret == SIZE_7) {
+ dbg("firmware appears to be already loaded");
+ speedtch_got_firmware(instance, 1);
+ speedtch_poll_status(instance);
+ } else {
+ speedtch_firmware_start(instance);
+ }
+
+ usb_set_intfdata(intf, instance);
+
+ return 0;
+
+ fail:
+ kfree(instance);
+
+ return ret;
+}
+
+static void speedtch_usb_disconnect(struct usb_interface *intf)
+{
+ struct speedtch_instance_data *instance = usb_get_intfdata(intf);
+
+ dbg("speedtch_usb_disconnect entered");
+
+ if (!instance) {
+ dbg("speedtch_usb_disconnect: NULL instance!");
+ return;
+ }
+
+/*QQ need to handle disconnects on interface #2 while uploading firmware */
+/*QQ and what about interface #1? */
+
+ if (instance->int_urb) {
+ struct urb *int_urb = instance->int_urb;
+ instance->int_urb = NULL;
+ wmb();
+ usb_unlink_urb(int_urb);
+ usb_free_urb(int_urb);
+ }
+
+ instance->int_data[0] = 1;
+ del_timer_sync(&instance->poll_timer);
+ wmb();
+ flush_scheduled_work();
+
+ udsl_instance_disconnect(&instance->u);
+
+ /* clean up */
+ usb_set_intfdata(intf, NULL);
+ udsl_put_instance(&instance->u);
+}
+
+/***********
+** init **
+***********/
+
+static int __init speedtch_usb_init(void)
+{
+ dbg("speedtch_usb_init: driver version " DRIVER_VERSION);
+
+ return usb_register(&speedtch_usb_driver);
+}
+
+static void __exit speedtch_usb_cleanup(void)
+{
+ dbg("speedtch_usb_cleanup entered");
+
+ usb_deregister(&speedtch_usb_driver);
+}
+
+module_init(speedtch_usb_init);
+module_exit(speedtch_usb_cleanup);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRIVER_VERSION);
--- /dev/null
+/******************************************************************************
+ * usb_atm.c - Generic USB xDSL driver core
+ *
+ * Copyright (C) 2001, Alcatel
+ * Copyright (C) 2003, Duncan Sands, SolNegro, Josep Comas
+ * Copyright (C) 2004, David Woodhouse
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ ******************************************************************************/
+
+/*
+ * Written by Johan Verrept, maintained by Duncan Sands (duncan.sands@free.fr)
+ *
+ * 1.7+: - See the check-in logs
+ *
+ * 1.6: - No longer opens a connection if the firmware is not loaded
+ * - Added support for the speedtouch 330
+ * - Removed the limit on the number of devices
+ * - Module now autoloads on device plugin
+ * - Merged relevant parts of sarlib
+ * - Replaced the kernel thread with a tasklet
+ * - New packet transmission code
+ * - Changed proc file contents
+ * - Fixed all known SMP races
+ * - Many fixes and cleanups
+ * - Various fixes by Oliver Neukum (oliver@neukum.name)
+ *
+ * 1.5A: - Version for inclusion in 2.5 series kernel
+ * - Modifications by Richard Purdie (rpurdie@rpsys.net)
+ * - made compatible with kernel 2.5.6 onwards by changing
+ * udsl_usb_send_data_context->urb to a pointer and adding code
+ * to alloc and free it
+ * - remove_wait_queue() added to udsl_atm_processqueue_thread()
+ *
+ * 1.5: - fixed memory leak when atmsar_decode_aal5 returned NULL.
+ * (reported by stephen.robinson@zen.co.uk)
+ *
+ * 1.4: - changed the spin_lock() under interrupt to spin_lock_irqsave()
+ * - unlink all active send urbs of a vcc that is being closed.
+ *
+ * 1.3.1: - added the version number
+ *
+ * 1.3: - Added multiple send urb support
+ * - fixed memory leak and vcc->tx_inuse starvation bug
+ * when not enough memory left in vcc.
+ *
+ * 1.2: - Fixed race condition in udsl_usb_send_data()
+ * 1.1: - Turned off packet debugging
+ *
+ */
+
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/timer.h>
+#include <linux/errno.h>
+#include <linux/proc_fs.h>
+#include <linux/slab.h>
+#include <linux/wait.h>
+#include <linux/list.h>
+#include <asm/uaccess.h>
+#include <linux/smp_lock.h>
+#include <linux/interrupt.h>
+#include <linux/atm.h>
+#include <linux/atmdev.h>
+#include <linux/crc32.h>
+#include <linux/init.h>
+#include <linux/firmware.h>
+
+#include "usb_atm.h"
+
+/*
+#define DEBUG
+#define VERBOSE_DEBUG
+*/
+
+#if !defined (DEBUG) && defined (CONFIG_USB_DEBUG)
+# define DEBUG
+#endif
+
+#include <linux/usb.h>
+
+#ifdef DEBUG
+#define UDSL_ASSERT(x) BUG_ON(!(x))
+#else
+#define UDSL_ASSERT(x) do { if (!(x)) warn("failed assertion '" #x "' at line %d", __LINE__); } while(0)
+#endif
+
+#ifdef VERBOSE_DEBUG
+static int udsl_print_packet(const unsigned char *data, int len);
+#define PACKETDEBUG(arg...) udsl_print_packet (arg)
+#define vdbg(arg...) dbg (arg)
+#else
+#define PACKETDEBUG(arg...)
+#define vdbg(arg...)
+#endif
+
+#define DRIVER_AUTHOR "Johan Verrept, Duncan Sands <duncan.sands@free.fr>"
+#define DRIVER_VERSION "1.8"
+#define DRIVER_DESC "Generic USB ATM/DSL I/O, version " DRIVER_VERSION
+
+static unsigned int num_rcv_urbs = UDSL_DEFAULT_RCV_URBS;
+static unsigned int num_snd_urbs = UDSL_DEFAULT_SND_URBS;
+static unsigned int num_rcv_bufs = UDSL_DEFAULT_RCV_BUFS;
+static unsigned int num_snd_bufs = UDSL_DEFAULT_SND_BUFS;
+static unsigned int rcv_buf_size = UDSL_DEFAULT_RCV_BUF_SIZE;
+static unsigned int snd_buf_size = UDSL_DEFAULT_SND_BUF_SIZE;
+
+module_param(num_rcv_urbs, uint, 0444);
+MODULE_PARM_DESC(num_rcv_urbs,
+ "Number of urbs used for reception (range: 0-"
+ __MODULE_STRING(UDSL_MAX_RCV_URBS) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_RCV_URBS) ")");
+
+module_param(num_snd_urbs, uint, 0444);
+MODULE_PARM_DESC(num_snd_urbs,
+ "Number of urbs used for transmission (range: 0-"
+ __MODULE_STRING(UDSL_MAX_SND_URBS) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_SND_URBS) ")");
+
+module_param(num_rcv_bufs, uint, 0444);
+MODULE_PARM_DESC(num_rcv_bufs,
+ "Number of buffers used for reception (range: 0-"
+ __MODULE_STRING(UDSL_MAX_RCV_BUFS) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_RCV_BUFS) ")");
+
+module_param(num_snd_bufs, uint, 0444);
+MODULE_PARM_DESC(num_snd_bufs,
+ "Number of buffers used for transmission (range: 0-"
+ __MODULE_STRING(UDSL_MAX_SND_BUFS) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_SND_BUFS) ")");
+
+module_param(rcv_buf_size, uint, 0444);
+MODULE_PARM_DESC(rcv_buf_size,
+ "Size of the buffers used for reception (range: 0-"
+ __MODULE_STRING(UDSL_MAX_RCV_BUF_SIZE) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_RCV_BUF_SIZE) ")");
+
+module_param(snd_buf_size, uint, 0444);
+MODULE_PARM_DESC(snd_buf_size,
+ "Size of the buffers used for transmission (range: 0-"
+ __MODULE_STRING(UDSL_MAX_SND_BUF_SIZE) ", default: "
+ __MODULE_STRING(UDSL_DEFAULT_SND_BUF_SIZE) ")");
+
+/* ATM */
+
+static void udsl_atm_dev_close(struct atm_dev *dev);
+static int udsl_atm_open(struct atm_vcc *vcc);
+static void udsl_atm_close(struct atm_vcc *vcc);
+static int udsl_atm_ioctl(struct atm_dev *dev, unsigned int cmd, void __user * arg);
+static int udsl_atm_send(struct atm_vcc *vcc, struct sk_buff *skb);
+static int udsl_atm_proc_read(struct atm_dev *atm_dev, loff_t * pos, char *page);
+
+static struct atmdev_ops udsl_atm_devops = {
+ .dev_close = udsl_atm_dev_close,
+ .open = udsl_atm_open,
+ .close = udsl_atm_close,
+ .ioctl = udsl_atm_ioctl,
+ .send = udsl_atm_send,
+ .proc_read = udsl_atm_proc_read,
+ .owner = THIS_MODULE,
+};
+
+/***********
+** misc **
+***********/
+
+static inline void udsl_pop(struct atm_vcc *vcc, struct sk_buff *skb)
+{
+ if (vcc->pop)
+ vcc->pop(vcc, skb);
+ else
+ dev_kfree_skb(skb);
+}
+
+/*************
+** decode **
+*************/
+
+static inline struct udsl_vcc_data *udsl_find_vcc(struct udsl_instance_data *instance,
+ short vpi, int vci)
+{
+ struct udsl_vcc_data *vcc;
+
+ list_for_each_entry(vcc, &instance->vcc_list, list)
+ if ((vcc->vci == vci) && (vcc->vpi == vpi))
+ return vcc;
+ return NULL;
+}
+
+static void udsl_extract_cells(struct udsl_instance_data *instance,
+ unsigned char *source, unsigned int howmany)
+{
+ struct udsl_vcc_data *cached_vcc = NULL;
+ struct atm_vcc *vcc;
+ struct sk_buff *sarb;
+ struct udsl_vcc_data *vcc_data;
+ int cached_vci = 0;
+ unsigned int i;
+ int pti;
+ int vci;
+ short cached_vpi = 0;
+ short vpi;
+
+ for (i = 0; i < howmany;
+ i++, source += ATM_CELL_SIZE + instance->rcv_padding) {
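+ /* ATM cell header: VPI spans bytes 0-1, VCI spans bytes 1-3;
+ bit 1 of byte 3 (in the PT field) marks the last cell of an
+ AAL5 PDU */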
+ vpi = ((source[0] & 0x0f) << 4) | (source[1] >> 4);
+ vci = ((source[1] & 0x0f) << 12) | (source[2] << 4) | (source[3] >> 4);
+ pti = (source[3] & 0x2) != 0;
+
+ vdbg("udsl_extract_cells: vpi %hd, vci %d, pti %d", vpi, vci, pti);
+
+ if (cached_vcc && (vci == cached_vci) && (vpi == cached_vpi))
+ vcc_data = cached_vcc;
+ else if ((vcc_data = udsl_find_vcc(instance, vpi, vci))) {
+ cached_vcc = vcc_data;
+ cached_vpi = vpi;
+ cached_vci = vci;
+ } else {
+ dbg("udsl_extract_cells: unknown vpi/vci (%hd/%d)!", vpi, vci);
+ continue;
+ }
+
+ vcc = vcc_data->vcc;
+ sarb = vcc_data->sarb;
+
+ if (sarb->tail + ATM_CELL_PAYLOAD > sarb->end) {
+ dbg("udsl_extract_cells: buffer overrun (sarb->len %u, vcc: 0x%p)!", sarb->len, vcc);
+ /* discard cells already received */
+ skb_trim(sarb, 0);
+ }
+
+ memcpy(sarb->tail, source + ATM_CELL_HEADER, ATM_CELL_PAYLOAD);
+ __skb_put(sarb, ATM_CELL_PAYLOAD);
+
+ if (pti) {
+ struct sk_buff *skb;
+ unsigned int length;
+ unsigned int pdu_length;
+
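+ /* the 16-bit AAL5 PDU length sits in trailer bytes 2-3, i.e.
+ bytes 47-48 of the 53-byte cell */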
+ length = (source[ATM_CELL_SIZE - 6] << 8) + source[ATM_CELL_SIZE - 5];
+
+ /* guard against overflow */
+ if (length > ATM_MAX_AAL5_PDU) {
+ dbg("udsl_extract_cells: bogus length %u (vcc: 0x%p)!", length, vcc);
+ atomic_inc(&vcc->stats->rx_err);
+ goto out;
+ }
+
+ pdu_length = UDSL_NUM_CELLS(length) * ATM_CELL_PAYLOAD;
+
+ if (sarb->len < pdu_length) {
+ dbg("udsl_extract_cells: bogus pdu_length %u (sarb->len: %u, vcc: 0x%p)!", pdu_length, sarb->len, vcc);
+ atomic_inc(&vcc->stats->rx_err);
+ goto out;
+ }
+
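+ /* a CRC-32 over the PDU including its stored CRC leaves the
+ fixed residue 0xc704dd7b if the data is intact */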
+ if (crc32_be(~0, sarb->tail - pdu_length, pdu_length) != 0xc704dd7b) {
+ dbg("udsl_extract_cells: packet failed crc check (vcc: 0x%p)!", vcc);
+ atomic_inc(&vcc->stats->rx_err);
+ goto out;
+ }
+
+ vdbg("udsl_extract_cells: got packet (length: %u, pdu_length: %u, vcc: 0x%p)", length, pdu_length, vcc);
+
+ if (!(skb = dev_alloc_skb(length))) {
+ dbg("udsl_extract_cells: no memory for skb (length: %u)!", length);
+ atomic_inc(&vcc->stats->rx_drop);
+ goto out;
+ }
+
+ vdbg("udsl_extract_cells: allocated new sk_buff (skb: 0x%p, skb->truesize: %u)", skb, skb->truesize);
+
+ if (!atm_charge(vcc, skb->truesize)) {
+ dbg("udsl_extract_cells: failed atm_charge (skb->truesize: %u)!", skb->truesize);
+ dev_kfree_skb(skb);
+ goto out; /* atm_charge increments rx_drop */
+ }
+
+ memcpy(skb->data, sarb->tail - pdu_length, length);
+ __skb_put(skb, length);
+
+ vdbg("udsl_extract_cells: sending skb 0x%p, skb->len %u, skb->truesize %u", skb, skb->len, skb->truesize);
+
+ PACKETDEBUG(skb->data, skb->len);
+
+ vcc->push(vcc, skb);
+
+ atomic_inc(&vcc->stats->rx);
+ out:
+ skb_trim(sarb, 0);
+ }
+ }
+}
+
+/*************
+** encode **
+*************/
+
+static inline void udsl_fill_cell_header(unsigned char *target, struct atm_vcc *vcc)
+{
+ target[0] = vcc->vpi >> 4;
+ target[1] = (vcc->vpi << 4) | (vcc->vci >> 12);
+ target[2] = vcc->vci >> 4;
+ target[3] = vcc->vci << 4;
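+ /* byte 4 is the HEC; the modem apparently ignores it, so a
+ fixed value suffices */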
+ target[4] = 0xec;
+}
+
+static const unsigned char zeros[ATM_CELL_PAYLOAD];
+
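+/* Precompute the segmentation of an skb: cell counts, zero padding,
+ * and the 8-byte AAL5 trailer (UU, CPI, 16-bit length, CRC-32 over
+ * payload + padding + the first four trailer bytes). */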
+static void udsl_groom_skb(struct atm_vcc *vcc, struct sk_buff *skb)
+{
+ struct udsl_control *ctrl = UDSL_SKB(skb);
+ unsigned int zero_padding;
+ u32 crc;
+
+ ctrl->atm_data.vcc = vcc;
+
+ ctrl->num_cells = UDSL_NUM_CELLS(skb->len);
+ ctrl->num_entire = skb->len / ATM_CELL_PAYLOAD;
+
+ zero_padding = ctrl->num_cells * ATM_CELL_PAYLOAD - skb->len - ATM_AAL5_TRAILER;
+
+ if (ctrl->num_entire + 1 < ctrl->num_cells)
+ ctrl->pdu_padding = zero_padding - (ATM_CELL_PAYLOAD - ATM_AAL5_TRAILER);
+ else
+ ctrl->pdu_padding = zero_padding;
+
+ ctrl->aal5_trailer[0] = 0; /* UU = 0 */
+ ctrl->aal5_trailer[1] = 0; /* CPI = 0 */
+ ctrl->aal5_trailer[2] = skb->len >> 8;
+ ctrl->aal5_trailer[3] = skb->len;
+
+ crc = crc32_be(~0, skb->data, skb->len);
+ crc = crc32_be(crc, zeros, zero_padding);
+ crc = crc32_be(crc, ctrl->aal5_trailer, 4);
+ crc = ~crc;
+
+ ctrl->aal5_trailer[4] = crc >> 24;
+ ctrl->aal5_trailer[5] = crc >> 16;
+ ctrl->aal5_trailer[6] = crc >> 8;
+ ctrl->aal5_trailer[7] = crc;
+}
+
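+/* Write at most howmany cells of skb into the buffer at *target_p:
+ * whole 48-byte payload cells first, then the final padded cell
+ * carrying the AAL5 trailer with the "last cell" PT bit set.
+ * Advances *target_p and returns the number of cells written. */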
+static unsigned int udsl_write_cells(struct udsl_instance_data *instance,
+ unsigned int howmany, struct sk_buff *skb,
+ unsigned char **target_p)
+{
+ struct udsl_control *ctrl = UDSL_SKB(skb);
+ unsigned char *target = *target_p;
+ unsigned int nc, ne, i;
+
+ vdbg("udsl_write_cells: howmany=%u, skb->len=%d, num_cells=%u, num_entire=%u, pdu_padding=%u", howmany, skb->len, ctrl->num_cells, ctrl->num_entire, ctrl->pdu_padding);
+
+ nc = ctrl->num_cells;
+ ne = min(howmany, ctrl->num_entire);
+
+ for (i = 0; i < ne; i++) {
+ udsl_fill_cell_header(target, ctrl->atm_data.vcc);
+ target += ATM_CELL_HEADER;
+ memcpy(target, skb->data, ATM_CELL_PAYLOAD);
+ target += ATM_CELL_PAYLOAD;
+ if (instance->snd_padding) {
+ memset(target, 0, instance->snd_padding);
+ target += instance->snd_padding;
+ }
+ __skb_pull(skb, ATM_CELL_PAYLOAD);
+ }
+
+ ctrl->num_entire -= ne;
+
+ if (!(ctrl->num_cells -= ne) || !(howmany -= ne))
+ goto out;
+
+ udsl_fill_cell_header(target, ctrl->atm_data.vcc);
+ target += ATM_CELL_HEADER;
+ memcpy(target, skb->data, skb->len);
+ target += skb->len;
+ __skb_pull(skb, skb->len);
+ memset(target, 0, ctrl->pdu_padding);
+ target += ctrl->pdu_padding;
+
+ if (--ctrl->num_cells) {
+ if (!--howmany) {
+ ctrl->pdu_padding = ATM_CELL_PAYLOAD - ATM_AAL5_TRAILER;
+ goto out;
+ }
+
+ if (instance->snd_padding) {
+ memset(target, 0, instance->snd_padding);
+ target += instance->snd_padding;
+ }
+ udsl_fill_cell_header(target, ctrl->atm_data.vcc);
+ target += ATM_CELL_HEADER;
+ memset(target, 0, ATM_CELL_PAYLOAD - ATM_AAL5_TRAILER);
+ target += ATM_CELL_PAYLOAD - ATM_AAL5_TRAILER;
+
+ --ctrl->num_cells;
+ UDSL_ASSERT(!ctrl->num_cells);
+ }
+
+ memcpy(target, ctrl->aal5_trailer, ATM_AAL5_TRAILER);
+ target += ATM_AAL5_TRAILER;
+ /* set pti bit in last cell */
+ *(target + 3 - ATM_CELL_SIZE) |= 0x2;
+ if (instance->snd_padding) {
+ memset(target, 0, instance->snd_padding);
+ target += instance->snd_padding;
+ }
+ out:
+ *target_p = target;
+ return nc - ctrl->num_cells;
+}
+
+/**************
+** receive **
+**************/
+
+static void udsl_complete_receive(struct urb *urb, struct pt_regs *regs)
+{
+ struct udsl_receive_buffer *buf;
+ struct udsl_instance_data *instance;
+ struct udsl_receiver *rcv;
+ unsigned long flags;
+
+ if (!urb || !(rcv = urb->context)) {
+ dbg("udsl_complete_receive: bad urb!");
+ return;
+ }
+
+ instance = rcv->instance;
+ buf = rcv->buffer;
+
+ buf->filled_cells = urb->actual_length / (ATM_CELL_SIZE + instance->rcv_padding);
+
+ vdbg("udsl_complete_receive: urb 0x%p, status %d, actual_length %d, filled_cells %u, rcv 0x%p, buf 0x%p", urb, urb->status, urb->actual_length, buf->filled_cells, rcv, buf);
+
+ UDSL_ASSERT(buf->filled_cells <= rcv_buf_size);
+
+ /* may not be in_interrupt() */
+ spin_lock_irqsave(&instance->receive_lock, flags);
+ list_add(&rcv->list, &instance->spare_receivers);
+ list_add_tail(&buf->list, &instance->filled_receive_buffers);
+ if (likely(!urb->status))
+ tasklet_schedule(&instance->receive_tasklet);
+ spin_unlock_irqrestore(&instance->receive_lock, flags);
+}
+
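+/* Receive tasklet: resubmit as many spare urbs as there are spare
+ * buffers, then take one filled buffer, extract its cells, recycle
+ * it and loop until no further progress can be made. */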
+static void udsl_process_receive(unsigned long data)
+{
+ struct udsl_receive_buffer *buf;
+ struct udsl_instance_data *instance = (struct udsl_instance_data *)data;
+ struct udsl_receiver *rcv;
+ int err;
+
+ made_progress:
+ while (!list_empty(&instance->spare_receive_buffers)) {
+ spin_lock_irq(&instance->receive_lock);
+ if (list_empty(&instance->spare_receivers)) {
+ spin_unlock_irq(&instance->receive_lock);
+ break;
+ }
+ rcv = list_entry(instance->spare_receivers.next,
+ struct udsl_receiver, list);
+ list_del(&rcv->list);
+ spin_unlock_irq(&instance->receive_lock);
+
+ buf = list_entry(instance->spare_receive_buffers.next,
+ struct udsl_receive_buffer, list);
+ list_del(&buf->list);
+
+ rcv->buffer = buf;
+
+ usb_fill_bulk_urb(rcv->urb, instance->usb_dev,
+ usb_rcvbulkpipe(instance->usb_dev, instance->data_endpoint),
+ buf->base,
+ rcv_buf_size * (ATM_CELL_SIZE + instance->rcv_padding),
+ udsl_complete_receive, rcv);
+
+ vdbg("udsl_process_receive: sending urb 0x%p, rcv 0x%p, buf 0x%p",
+ rcv->urb, rcv, buf);
+
+ if ((err = usb_submit_urb(rcv->urb, GFP_ATOMIC)) < 0) {
+ dbg("udsl_process_receive: urb submission failed (%d)!", err);
+ list_add(&buf->list, &instance->spare_receive_buffers);
+ spin_lock_irq(&instance->receive_lock);
+ list_add(&rcv->list, &instance->spare_receivers);
+ spin_unlock_irq(&instance->receive_lock);
+ break;
+ }
+ }
+
+ spin_lock_irq(&instance->receive_lock);
+ if (list_empty(&instance->filled_receive_buffers)) {
+ spin_unlock_irq(&instance->receive_lock);
+ return; /* done - no more buffers */
+ }
+ buf = list_entry(instance->filled_receive_buffers.next,
+ struct udsl_receive_buffer, list);
+ list_del(&buf->list);
+ spin_unlock_irq(&instance->receive_lock);
+
+ vdbg("udsl_process_receive: processing buf 0x%p", buf);
+ udsl_extract_cells(instance, buf->base, buf->filled_cells);
+ list_add(&buf->list, &instance->spare_receive_buffers);
+ goto made_progress;
+}
+
+/***********
+** send **
+***********/
+
+static void udsl_complete_send(struct urb *urb, struct pt_regs *regs)
+{
+ struct udsl_instance_data *instance;
+ struct udsl_sender *snd;
+ unsigned long flags;
+
+ if (!urb || !(snd = urb->context) || !(instance = snd->instance)) {
+ dbg("udsl_complete_send: bad urb!");
+ return;
+ }
+
+ vdbg("udsl_complete_send: urb 0x%p, status %d, snd 0x%p, buf 0x%p", urb,
+ urb->status, snd, snd->buffer);
+
+ /* may not be in_interrupt() */
+ spin_lock_irqsave(&instance->send_lock, flags);
+ list_add(&snd->list, &instance->spare_senders);
+ list_add(&snd->buffer->list, &instance->spare_send_buffers);
+ tasklet_schedule(&instance->send_tasklet);
+ spin_unlock_irqrestore(&instance->send_lock, flags);
+}
+
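+/* Send tasklet: put filled buffers on spare urbs, then keep packing
+ * cells from the queued skbs into the current buffer until urbs,
+ * buffers or skbs run out. */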
+static void udsl_process_send(unsigned long data)
+{
+ struct udsl_send_buffer *buf;
+ struct udsl_instance_data *instance = (struct udsl_instance_data *)data;
+ struct sk_buff *skb;
+ struct udsl_sender *snd;
+ int err;
+ unsigned int num_written;
+
+ made_progress:
+ spin_lock_irq(&instance->send_lock);
+ while (!list_empty(&instance->spare_senders)) {
+ if (!list_empty(&instance->filled_send_buffers)) {
+ buf = list_entry(instance->filled_send_buffers.next,
+ struct udsl_send_buffer, list);
+ list_del(&buf->list);
+ } else if ((buf = instance->current_buffer)) {
+ instance->current_buffer = NULL;
+ } else /* all buffers empty */
+ break;
+
+ snd = list_entry(instance->spare_senders.next,
+ struct udsl_sender, list);
+ list_del(&snd->list);
+ spin_unlock_irq(&instance->send_lock);
+
+ snd->buffer = buf;
+ usb_fill_bulk_urb(snd->urb, instance->usb_dev,
+ usb_sndbulkpipe(instance->usb_dev, instance->data_endpoint),
+ buf->base,
+ (snd_buf_size - buf->free_cells) * (ATM_CELL_SIZE + instance->snd_padding),
+ udsl_complete_send, snd);
+
+ vdbg("udsl_process_send: submitting urb 0x%p (%d cells), snd 0x%p, buf 0x%p",
+ snd->urb, snd_buf_size - buf->free_cells, snd, buf);
+
+ if ((err = usb_submit_urb(snd->urb, GFP_ATOMIC)) < 0) {
+ dbg("udsl_process_send: urb submission failed (%d)!", err);
+ spin_lock_irq(&instance->send_lock);
+ list_add(&snd->list, &instance->spare_senders);
+ spin_unlock_irq(&instance->send_lock);
+ list_add(&buf->list, &instance->filled_send_buffers);
+ return; /* bail out */
+ }
+
+ spin_lock_irq(&instance->send_lock);
+ } /* while */
+ spin_unlock_irq(&instance->send_lock);
+
+ if (!instance->current_skb)
+ instance->current_skb = skb_dequeue(&instance->sndqueue);
+ if (!instance->current_skb)
+ return; /* done - no more skbs */
+
+ skb = instance->current_skb;
+
+ if (!(buf = instance->current_buffer)) {
+ spin_lock_irq(&instance->send_lock);
+ if (list_empty(&instance->spare_send_buffers)) {
+ instance->current_buffer = NULL;
+ spin_unlock_irq(&instance->send_lock);
+ return; /* done - no more buffers */
+ }
+ buf = list_entry(instance->spare_send_buffers.next,
+ struct udsl_send_buffer, list);
+ list_del(&buf->list);
+ spin_unlock_irq(&instance->send_lock);
+
+ buf->free_start = buf->base;
+ buf->free_cells = snd_buf_size;
+
+ instance->current_buffer = buf;
+ }
+
+ num_written = udsl_write_cells(instance, buf->free_cells, skb, &buf->free_start);
+
+ vdbg("udsl_process_send: wrote %u cells from skb 0x%p to buffer 0x%p",
+ num_written, skb, buf);
+
+ if (!(buf->free_cells -= num_written)) {
+ list_add_tail(&buf->list, &instance->filled_send_buffers);
+ instance->current_buffer = NULL;
+ }
+
+ vdbg("udsl_process_send: buffer contains %d cells, %d left",
+ snd_buf_size - buf->free_cells, buf->free_cells);
+
+ if (!UDSL_SKB(skb)->num_cells) {
+ struct atm_vcc *vcc = UDSL_SKB(skb)->atm_data.vcc;
+
+ udsl_pop(vcc, skb);
+ instance->current_skb = NULL;
+
+ atomic_inc(&vcc->stats->tx);
+ }
+
+ goto made_progress;
+}
+
+static void udsl_cancel_send(struct udsl_instance_data *instance,
+ struct atm_vcc *vcc)
+{
+ struct sk_buff *skb, *n;
+
+ dbg("udsl_cancel_send entered");
+ spin_lock_irq(&instance->sndqueue.lock);
+ for (skb = instance->sndqueue.next, n = skb->next;
+ skb != (struct sk_buff *)&instance->sndqueue;
+ skb = n, n = skb->next)
+ if (UDSL_SKB(skb)->atm_data.vcc == vcc) {
+ dbg("udsl_cancel_send: popping skb 0x%p", skb);
+ __skb_unlink(skb, &instance->sndqueue);
+ udsl_pop(vcc, skb);
+ }
+ spin_unlock_irq(&instance->sndqueue.lock);
+
+ tasklet_disable(&instance->send_tasklet);
+ if ((skb = instance->current_skb) && (UDSL_SKB(skb)->atm_data.vcc == vcc)) {
+ dbg("udsl_cancel_send: popping current skb (0x%p)", skb);
+ instance->current_skb = NULL;
+ udsl_pop(vcc, skb);
+ }
+ tasklet_enable(&instance->send_tasklet);
+ dbg("udsl_cancel_send done");
+}
+
+static int udsl_atm_send(struct atm_vcc *vcc, struct sk_buff *skb)
+{
+ struct udsl_instance_data *instance = vcc->dev->dev_data;
+ int err;
+
+ vdbg("udsl_atm_send called (skb 0x%p, len %u)", skb, skb->len);
+
+ if (!instance) {
+ dbg("udsl_atm_send: NULL data!");
+ err = -ENODEV;
+ goto fail;
+ }
+
+ if (vcc->qos.aal != ATM_AAL5) {
+ dbg("udsl_atm_send: unsupported ATM type %d!", vcc->qos.aal);
+ err = -EINVAL;
+ goto fail;
+ }
+
+ if (skb->len > ATM_MAX_AAL5_PDU) {
+ dbg("udsl_atm_send: packet too long (%d vs %d)!", skb->len,
+ ATM_MAX_AAL5_PDU);
+ err = -EINVAL;
+ goto fail;
+ }
+
+ PACKETDEBUG(skb->data, skb->len);
+
+ udsl_groom_skb(vcc, skb);
+ skb_queue_tail(&instance->sndqueue, skb);
+ tasklet_schedule(&instance->send_tasklet);
+
+ return 0;
+
+ fail:
+ udsl_pop(vcc, skb);
+ return err;
+}
+
+/********************
+** bean counting **
+********************/
+
+static void udsl_destroy_instance(struct kref *kref)
+{
+ struct udsl_instance_data *instance =
+ container_of(kref, struct udsl_instance_data, refcount);
+
+ tasklet_kill(&instance->receive_tasklet);
+ tasklet_kill(&instance->send_tasklet);
+ usb_put_dev(instance->usb_dev);
+ kfree(instance);
+}
+
+void udsl_get_instance(struct udsl_instance_data *instance)
+{
+ kref_get(&instance->refcount);
+}
+
+void udsl_put_instance(struct udsl_instance_data *instance)
+{
+ kref_put(&instance->refcount, udsl_destroy_instance);
+}
+
+/**********
+** ATM **
+**********/
+
+static void udsl_atm_dev_close(struct atm_dev *dev)
+{
+ struct udsl_instance_data *instance = dev->dev_data;
+
+ dev->dev_data = NULL;
+ udsl_put_instance(instance);
+}
+
+static int udsl_atm_proc_read(struct atm_dev *atm_dev, loff_t * pos, char *page)
+{
+ struct udsl_instance_data *instance = atm_dev->dev_data;
+ int left = *pos;
+
+ if (!instance) {
+ dbg("udsl_atm_proc_read: NULL instance!");
+ return -ENODEV;
+ }
+
+ if (!left--)
+ return sprintf(page, "%s\n", instance->description);
+
+ if (!left--)
+ return sprintf(page, "MAC: %02x:%02x:%02x:%02x:%02x:%02x\n",
+ atm_dev->esi[0], atm_dev->esi[1],
+ atm_dev->esi[2], atm_dev->esi[3],
+ atm_dev->esi[4], atm_dev->esi[5]);
+
+ if (!left--)
+ return sprintf(page,
+ "AAL5: tx %d ( %d err ), rx %d ( %d err, %d drop )\n",
+ atomic_read(&atm_dev->stats.aal5.tx),
+ atomic_read(&atm_dev->stats.aal5.tx_err),
+ atomic_read(&atm_dev->stats.aal5.rx),
+ atomic_read(&atm_dev->stats.aal5.rx_err),
+ atomic_read(&atm_dev->stats.aal5.rx_drop));
+
+ if (!left--) {
+ switch (atm_dev->signal) {
+ case ATM_PHY_SIG_FOUND:
+ sprintf(page, "Line up");
+ break;
+ case ATM_PHY_SIG_LOST:
+ sprintf(page, "Line down");
+ break;
+ default:
+ sprintf(page, "Line state unknown");
+ break;
+ }
+
+ if (instance->usb_dev->state == USB_STATE_NOTATTACHED)
+ strcat(page, ", disconnected\n");
+ else {
+ if (instance->status == UDSL_LOADED_FIRMWARE)
+ strcat(page, ", firmware loaded\n");
+ else if (instance->status == UDSL_LOADING_FIRMWARE)
+ strcat(page, ", firmware loading\n");
+ else
+ strcat(page, ", no firmware\n");
+ }
+
+ return strlen(page);
+ }
+
+ return 0;
+}
+
+static int udsl_atm_open(struct atm_vcc *vcc)
+{
+ struct udsl_instance_data *instance = vcc->dev->dev_data;
+ struct udsl_vcc_data *new;
+ unsigned int max_pdu;
+ int vci = vcc->vci;
+ short vpi = vcc->vpi;
+ int err;
+
+ dbg("udsl_atm_open: vpi %hd, vci %d", vpi, vci);
+
+ if (!instance) {
+ dbg("udsl_atm_open: NULL data!");
+ return -ENODEV;
+ }
+
+ /* only support AAL5 */
+ if ((vcc->qos.aal != ATM_AAL5) || (vcc->qos.rxtp.max_sdu < 0)
+ || (vcc->qos.rxtp.max_sdu > ATM_MAX_AAL5_PDU)) {
+ dbg("udsl_atm_open: unsupported ATM type %d!", vcc->qos.aal);
+ return -EINVAL;
+ }
+
+ if (instance->firmware_wait &&
+ (err = instance->firmware_wait(instance)) < 0) {
+ dbg("udsl_atm_open: firmware not loaded (%d)!", err);
+ return err;
+ }
+
+ down(&instance->serialize); /* vs self, udsl_atm_close */
+
+ if (udsl_find_vcc(instance, vpi, vci)) {
+ dbg("udsl_atm_open: %hd/%d already in use!", vpi, vci);
+ up(&instance->serialize);
+ return -EADDRINUSE;
+ }
+
+ if (!(new = kmalloc(sizeof(struct udsl_vcc_data), GFP_KERNEL))) {
+ dbg("udsl_atm_open: no memory for vcc_data!");
+ up(&instance->serialize);
+ return -ENOMEM;
+ }
+
+ memset(new, 0, sizeof(struct udsl_vcc_data));
+ new->vcc = vcc;
+ new->vpi = vpi;
+ new->vci = vci;
+
+ /* udsl_extract_cells requires at least one cell */
+ max_pdu = max(1, UDSL_NUM_CELLS(vcc->qos.rxtp.max_sdu)) * ATM_CELL_PAYLOAD;
+ if (!(new->sarb = alloc_skb(max_pdu, GFP_KERNEL))) {
+ dbg("udsl_atm_open: no memory for SAR buffer!");
+ kfree(new);
+ up(&instance->serialize);
+ return -ENOMEM;
+ }
+
+ vcc->dev_data = new;
+
+ tasklet_disable(&instance->receive_tasklet);
+ list_add(&new->list, &instance->vcc_list);
+ tasklet_enable(&instance->receive_tasklet);
+
+ set_bit(ATM_VF_ADDR, &vcc->flags);
+ set_bit(ATM_VF_PARTIAL, &vcc->flags);
+ set_bit(ATM_VF_READY, &vcc->flags);
+
+ up(&instance->serialize);
+
+ tasklet_schedule(&instance->receive_tasklet);
+
+ dbg("udsl_atm_open: allocated vcc data 0x%p (max_pdu: %u)", new, max_pdu);
+
+ return 0;
+}
+
+static void udsl_atm_close(struct atm_vcc *vcc)
+{
+ struct udsl_instance_data *instance = vcc->dev->dev_data;
+ struct udsl_vcc_data *vcc_data = vcc->dev_data;
+
+ dbg("udsl_atm_close called");
+
+ if (!instance || !vcc_data) {
+ dbg("udsl_atm_close: NULL data!");
+ return;
+ }
+
+ dbg("udsl_atm_close: deallocating vcc 0x%p with vpi %d vci %d",
+ vcc_data, vcc_data->vpi, vcc_data->vci);
+
+ udsl_cancel_send(instance, vcc);
+
+ down(&instance->serialize); /* vs self, udsl_atm_open */
+
+ tasklet_disable(&instance->receive_tasklet);
+ list_del(&vcc_data->list);
+ tasklet_enable(&instance->receive_tasklet);
+
+ kfree_skb(vcc_data->sarb);
+ vcc_data->sarb = NULL;
+
+ kfree(vcc_data);
+ vcc->dev_data = NULL;
+
+ vcc->vpi = ATM_VPI_UNSPEC;
+ vcc->vci = ATM_VCI_UNSPEC;
+ clear_bit(ATM_VF_READY, &vcc->flags);
+ clear_bit(ATM_VF_PARTIAL, &vcc->flags);
+ clear_bit(ATM_VF_ADDR, &vcc->flags);
+
+ up(&instance->serialize);
+
+ dbg("udsl_atm_close successful");
+}
+
+static int udsl_atm_ioctl(struct atm_dev *dev, unsigned int cmd,
+ void __user * arg)
+{
+ switch (cmd) {
+ case ATM_QUERYLOOP:
+ return put_user(ATM_LM_NONE, (int __user *)arg) ? -EFAULT : 0;
+ default:
+ return -ENOIOCTLCMD;
+ }
+}
+
+/**********
+** USB **
+**********/
+
+int udsl_instance_setup(struct usb_device *dev,
+ struct udsl_instance_data *instance)
+{
+ char *buf;
+ int i, length;
+
+ kref_init(&instance->refcount); /* one for USB */
+ udsl_get_instance(instance); /* one for ATM */
+
+ init_MUTEX(&instance->serialize);
+
+ instance->usb_dev = dev;
+
+ INIT_LIST_HEAD(&instance->vcc_list);
+
+ instance->status = UDSL_NO_FIRMWARE;
+ init_waitqueue_head(&instance->firmware_waiters);
+
+ spin_lock_init(&instance->receive_lock);
+ INIT_LIST_HEAD(&instance->spare_receivers);
+ INIT_LIST_HEAD(&instance->filled_receive_buffers);
+
+ tasklet_init(&instance->receive_tasklet, udsl_process_receive, (unsigned long)instance);
+ INIT_LIST_HEAD(&instance->spare_receive_buffers);
+
+ skb_queue_head_init(&instance->sndqueue);
+
+ spin_lock_init(&instance->send_lock);
+ INIT_LIST_HEAD(&instance->spare_senders);
+ INIT_LIST_HEAD(&instance->spare_send_buffers);
+
+ tasklet_init(&instance->send_tasklet, udsl_process_send,
+ (unsigned long)instance);
+ INIT_LIST_HEAD(&instance->filled_send_buffers);
+
+ /* receive init */
+ for (i = 0; i < num_rcv_urbs; i++) {
+ struct udsl_receiver *rcv = &(instance->receivers[i]);
+
+ if (!(rcv->urb = usb_alloc_urb(0, GFP_KERNEL))) {
+ dbg("udsl_usb_probe: no memory for receive urb %d!", i);
+ goto fail;
+ }
+
+ rcv->instance = instance;
+
+ list_add(&rcv->list, &instance->spare_receivers);
+ }
+
+ for (i = 0; i < num_rcv_bufs; i++) {
+ struct udsl_receive_buffer *buf =
+ &(instance->receive_buffers[i]);
+
+ buf->base = kmalloc(rcv_buf_size * (ATM_CELL_SIZE + instance->rcv_padding),
+ GFP_KERNEL);
+ if (!buf->base) {
+ dbg("udsl_usb_probe: no memory for receive buffer %d!", i);
+ goto fail;
+ }
+
+ list_add(&buf->list, &instance->spare_receive_buffers);
+ }
+
+ /* send init */
+ for (i = 0; i < num_snd_urbs; i++) {
+ struct udsl_sender *snd = &(instance->senders[i]);
+
+ if (!(snd->urb = usb_alloc_urb(0, GFP_KERNEL))) {
+ dbg("udsl_usb_probe: no memory for send urb %d!", i);
+ goto fail;
+ }
+
+ snd->instance = instance;
+
+ list_add(&snd->list, &instance->spare_senders);
+ }
+
+ for (i = 0; i < num_snd_bufs; i++) {
+ struct udsl_send_buffer *buf = &(instance->send_buffers[i]);
+
+ buf->base = kmalloc(snd_buf_size * (ATM_CELL_SIZE + instance->snd_padding),
+ GFP_KERNEL);
+ if (!buf->base) {
+ dbg("udsl_usb_probe: no memory for send buffer %d!", i);
+ goto fail;
+ }
+
+ list_add(&buf->list, &instance->spare_send_buffers);
+ }
+
+ /* ATM init */
+ instance->atm_dev = atm_dev_register(instance->driver_name,
+ &udsl_atm_devops, -1, NULL);
+ if (!instance->atm_dev) {
+ dbg("udsl_usb_probe: failed to register ATM device!");
+ goto fail;
+ }
+
+ instance->atm_dev->ci_range.vpi_bits = ATM_CI_MAX;
+ instance->atm_dev->ci_range.vci_bits = ATM_CI_MAX;
+ instance->atm_dev->signal = ATM_PHY_SIG_UNKNOWN;
+
+ /* temp init ATM device, set to 128kbit */
+ instance->atm_dev->link_rate = 128 * 1000 / 424;
+
+ /* device description */
+ buf = instance->description;
+ length = sizeof(instance->description);
+
+ if ((i = usb_string(dev, dev->descriptor.iProduct, buf, length)) < 0)
+ goto finish;
+
+ buf += i;
+ length -= i;
+
+ i = scnprintf(buf, length, " (");
+ buf += i;
+ length -= i;
+
+ if (length <= 0 || (i = usb_make_path(dev, buf, length)) < 0)
+ goto finish;
+
+ buf += i;
+ length -= i;
+
+ snprintf(buf, length, ")");
+
+ finish:
+ /* ready for ATM callbacks */
+ wmb();
+ instance->atm_dev->dev_data = instance;
+
+ usb_get_dev(dev);
+
+ return 0;
+
+ fail:
+ for (i = 0; i < num_snd_bufs; i++)
+ kfree(instance->send_buffers[i].base);
+
+ for (i = 0; i < num_snd_urbs; i++)
+ usb_free_urb(instance->senders[i].urb);
+
+ for (i = 0; i < num_rcv_bufs; i++)
+ kfree(instance->receive_buffers[i].base);
+
+ for (i = 0; i < num_rcv_urbs; i++)
+ usb_free_urb(instance->receivers[i].urb);
+
+ return -ENOMEM;
+}
+
+void udsl_instance_disconnect(struct udsl_instance_data *instance)
+{
+ int i;
+
+ dbg("udsl_instance_disconnect entered");
+
+ if (!instance) {
+ dbg("udsl_instance_disconnect: NULL instance!");
+ return;
+ }
+
+ /* receive finalize */
+ tasklet_disable(&instance->receive_tasklet);
+
+ for (i = 0; i < num_rcv_urbs; i++)
+ usb_kill_urb(instance->receivers[i].urb);
+
+ /* no need to take the spinlock */
+ INIT_LIST_HEAD(&instance->filled_receive_buffers);
+ INIT_LIST_HEAD(&instance->spare_receive_buffers);
+
+ tasklet_enable(&instance->receive_tasklet);
+
+ for (i = 0; i < num_rcv_urbs; i++)
+ usb_free_urb(instance->receivers[i].urb);
+
+ for (i = 0; i < num_rcv_bufs; i++)
+ kfree(instance->receive_buffers[i].base);
+
+ /* send finalize */
+ tasklet_disable(&instance->send_tasklet);
+
+ for (i = 0; i < num_snd_urbs; i++)
+ usb_kill_urb(instance->senders[i].urb);
+
+ /* no need to take the spinlock */
+ INIT_LIST_HEAD(&instance->spare_senders);
+ INIT_LIST_HEAD(&instance->spare_send_buffers);
+ instance->current_buffer = NULL;
+
+ tasklet_enable(&instance->send_tasklet);
+
+ for (i = 0; i < num_snd_urbs; i++)
+ usb_free_urb(instance->senders[i].urb);
+
+ for (i = 0; i < num_snd_bufs; i++)
+ kfree(instance->send_buffers[i].base);
+
+ /* ATM finalize */
+ shutdown_atm_dev(instance->atm_dev);
+}
+
+EXPORT_SYMBOL_GPL(udsl_get_instance);
+EXPORT_SYMBOL_GPL(udsl_put_instance);
+EXPORT_SYMBOL_GPL(udsl_instance_setup);
+EXPORT_SYMBOL_GPL(udsl_instance_disconnect);
+
+/***********
+** init **
+***********/
+
+static int __init udsl_usb_init(void)
+{
+ dbg("udsl_usb_init: driver version " DRIVER_VERSION);
+
+ if (sizeof(struct udsl_control) > sizeof(((struct sk_buff *) 0)->cb)) {
+ printk(KERN_ERR __FILE__ ": unusable with this kernel!\n");
+ return -EIO;
+ }
+
+ if ((num_rcv_urbs > UDSL_MAX_RCV_URBS)
+ || (num_snd_urbs > UDSL_MAX_SND_URBS)
+ || (num_rcv_bufs > UDSL_MAX_RCV_BUFS)
+ || (num_snd_bufs > UDSL_MAX_SND_BUFS)
+ || (rcv_buf_size > UDSL_MAX_RCV_BUF_SIZE)
+ || (snd_buf_size > UDSL_MAX_SND_BUF_SIZE))
+ return -EINVAL;
+
+ return 0;
+}
+
+static void __exit udsl_usb_exit(void)
+{
+}
+
+module_init(udsl_usb_init);
+module_exit(udsl_usb_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRIVER_VERSION);
+
+/************
+** debug **
+************/
+
+#ifdef VERBOSE_DEBUG
+static int udsl_print_packet(const unsigned char *data, int len)
+{
+ char buffer[256];
+ char *p;
+ int i = 0, j = 0;
+
+ /* build each line with an advancing write pointer: passing the
+ buffer as both source and destination of sprintf is undefined
+ behaviour */
+ for (i = 0; i < len;) {
+ p = buffer;
+ p += sprintf(p, "%.3d :", i);
+ for (j = 0; (j < 16) && (i < len); j++, i++)
+ p += sprintf(p, " %2.2x", data[i]);
+ dbg("%s", buffer);
+ }
+ return i;
+}
+#endif
--- /dev/null
+/******************************************************************************
+ * usb_atm.h - Generic USB xDSL driver core
+ *
+ * Copyright (C) 2001, Alcatel
+ * Copyright (C) 2003, Duncan Sands, SolNegro, Josep Comas
+ * Copyright (C) 2004, David Woodhouse
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+ *
+ ******************************************************************************/
+
+#include <linux/list.h>
+#include <linux/usb.h>
+#include <linux/kref.h>
+#include <linux/atm.h>
+#include <linux/atmdev.h>
+#include <asm/semaphore.h>
+
+#define UDSL_MAX_RCV_URBS 4
+#define UDSL_MAX_SND_URBS 4
+#define UDSL_MAX_RCV_BUFS 8
+#define UDSL_MAX_SND_BUFS 8
+#define UDSL_MAX_RCV_BUF_SIZE 1024 /* ATM cells */
+#define UDSL_MAX_SND_BUF_SIZE 1024 /* ATM cells */
+#define UDSL_DEFAULT_RCV_URBS 2
+#define UDSL_DEFAULT_SND_URBS 2
+#define UDSL_DEFAULT_RCV_BUFS 4
+#define UDSL_DEFAULT_SND_BUFS 4
+#define UDSL_DEFAULT_RCV_BUF_SIZE 64 /* ATM cells */
+#define UDSL_DEFAULT_SND_BUF_SIZE 64 /* ATM cells */
+
+#define ATM_CELL_HEADER (ATM_CELL_SIZE - ATM_CELL_PAYLOAD)
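+/* number of cells needed for an x-byte PDU plus the AAL5 trailer:
+ a round-up division by the 48-byte cell payload */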
+#define UDSL_NUM_CELLS(x) (((x) + ATM_AAL5_TRAILER + ATM_CELL_PAYLOAD - 1) / ATM_CELL_PAYLOAD)
+
+/* receive */
+
+struct udsl_receive_buffer {
+ struct list_head list;
+ unsigned char *base;
+ unsigned int filled_cells;
+};
+
+struct udsl_receiver {
+ struct list_head list;
+ struct udsl_receive_buffer *buffer;
+ struct urb *urb;
+ struct udsl_instance_data *instance;
+};
+
+struct udsl_vcc_data {
+ /* vpi/vci lookup */
+ struct list_head list;
+ short vpi;
+ int vci;
+ struct atm_vcc *vcc;
+
+ /* raw cell reassembly */
+ struct sk_buff *sarb;
+};
+
+/* send */
+
+struct udsl_send_buffer {
+ struct list_head list;
+ unsigned char *base;
+ unsigned char *free_start;
+ unsigned int free_cells;
+};
+
+struct udsl_sender {
+ struct list_head list;
+ struct udsl_send_buffer *buffer;
+ struct urb *urb;
+ struct udsl_instance_data *instance;
+};
+
+struct udsl_control {
+ struct atm_skb_data atm_data;
+ unsigned int num_cells;
+ unsigned int num_entire;
+ unsigned int pdu_padding;
+ unsigned char aal5_trailer[ATM_AAL5_TRAILER];
+};
+
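+/* per-skb segmentation state, kept in skb->cb; udsl_usb_init()
+ checks at load time that it fits */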
+#define UDSL_SKB(x) ((struct udsl_control *)(x)->cb)
+
+/* main driver data */
+
+enum udsl_status {
+ UDSL_NO_FIRMWARE,
+ UDSL_LOADING_FIRMWARE,
+ UDSL_LOADED_FIRMWARE
+};
+
+struct udsl_instance_data {
+ struct kref refcount;
+ struct semaphore serialize;
+
+ /* USB device part */
+ struct usb_device *usb_dev;
+ char description[64];
+ int data_endpoint;
+ int snd_padding;
+ int rcv_padding;
+ const char *driver_name;
+
+ /* ATM device part */
+ struct atm_dev *atm_dev;
+ struct list_head vcc_list;
+
+ /* firmware */
+ int (*firmware_wait) (struct udsl_instance_data *);
+ enum udsl_status status;
+ wait_queue_head_t firmware_waiters;
+
+ /* receive */
+ struct udsl_receiver receivers[UDSL_MAX_RCV_URBS];
+ struct udsl_receive_buffer receive_buffers[UDSL_MAX_RCV_BUFS];
+
+ spinlock_t receive_lock;
+ struct list_head spare_receivers;
+ struct list_head filled_receive_buffers;
+
+ struct tasklet_struct receive_tasklet;
+ struct list_head spare_receive_buffers;
+
+ /* send */
+ struct udsl_sender senders[UDSL_MAX_SND_URBS];
+ struct udsl_send_buffer send_buffers[UDSL_MAX_SND_BUFS];
+
+ struct sk_buff_head sndqueue;
+
+ spinlock_t send_lock;
+ struct list_head spare_senders;
+ struct list_head spare_send_buffers;
+
+ struct tasklet_struct send_tasklet;
+ struct sk_buff *current_skb; /* being emptied */
+ struct udsl_send_buffer *current_buffer; /* being filled */
+ struct list_head filled_send_buffers;
+};
+
+extern int udsl_instance_setup(struct usb_device *dev,
+ struct udsl_instance_data *instance);
+extern void udsl_instance_disconnect(struct udsl_instance_data *instance);
+extern void udsl_get_instance(struct udsl_instance_data *instance);
+extern void udsl_put_instance(struct udsl_instance_data *instance);
--- /dev/null
+/*
+ *
+ * Includes for cdc-acm.c
+ *
+ * Mainly taken from usbnet's cdc-ether part
+ *
+ */
+
+/*
+ * CMSPAR, some architectures can't have space and mark parity.
+ */
+
+#ifndef CMSPAR
+#define CMSPAR 0
+#endif
+
+/*
+ * Major and minor numbers.
+ */
+
+#define ACM_TTY_MAJOR 166
+#define ACM_TTY_MINORS 32
+
+/*
+ * Requests.
+ */
+
+#define USB_RT_ACM (USB_TYPE_CLASS | USB_RECIP_INTERFACE)
+
+#define ACM_REQ_COMMAND 0x00
+#define ACM_REQ_RESPONSE 0x01
+#define ACM_REQ_SET_FEATURE 0x02
+#define ACM_REQ_GET_FEATURE 0x03
+#define ACM_REQ_CLEAR_FEATURE 0x04
+
+#define ACM_REQ_SET_LINE 0x20
+#define ACM_REQ_GET_LINE 0x21
+#define ACM_REQ_SET_CONTROL 0x22
+#define ACM_REQ_SEND_BREAK 0x23
+
+/*
+ * IRQs.
+ */
+
+#define ACM_IRQ_NETWORK 0x00
+#define ACM_IRQ_LINE_STATE 0x20
+
+/*
+ * Output control lines.
+ */
+
+#define ACM_CTRL_DTR 0x01
+#define ACM_CTRL_RTS 0x02
+
+/*
+ * Input control lines and line errors.
+ */
+
+#define ACM_CTRL_DCD 0x01
+#define ACM_CTRL_DSR 0x02
+#define ACM_CTRL_BRK 0x04
+#define ACM_CTRL_RI 0x08
+
+#define ACM_CTRL_FRAMING 0x10
+#define ACM_CTRL_PARITY 0x20
+#define ACM_CTRL_OVERRUN 0x40
+
+/*
+ * Line speed and character encoding.
+ */
+
+struct acm_line {
+ __le32 speed;
+ __u8 stopbits;
+ __u8 parity;
+ __u8 databits;
+} __attribute__ ((packed));
+
+/*
+ * Internal driver structures.
+ */
+
+struct acm {
+ struct usb_device *dev; /* the corresponding usb device */
+ struct usb_interface *control; /* control interface */
+ struct usb_interface *data; /* data interface */
+ struct tty_struct *tty; /* the corresponding tty */
+ struct urb *ctrlurb, *readurb, *writeurb; /* urbs */
+ u8 *ctrl_buffer, *read_buffer, *write_buffer; /* buffers of urbs */
+ dma_addr_t ctrl_dma, read_dma, write_dma; /* dma handles of buffers */
+ struct acm_line line; /* line coding (bits, stop, parity) */
+ struct work_struct work; /* work queue entry for waking up the line discipline */
+ struct tasklet_struct bh; /* rx processing */
+ spinlock_t throttle_lock; /* synchronize throttling and the read callback */
+ unsigned int ctrlin; /* input control lines (DCD, DSR, RI, break, overruns) */
+ unsigned int ctrlout; /* output control lines (DTR, RTS) */
+ unsigned int writesize; /* max packet size for the output bulk endpoint */
+ unsigned int readsize, ctrlsize; /* buffer sizes for freeing */
+ unsigned int used; /* someone has this acm's device open */
+ unsigned int minor; /* acm minor number */
+ unsigned char throttle; /* throttled by tty layer */
+ unsigned char clocal; /* termios CLOCAL */
+ unsigned char ready_for_write; /* write urb can be used */
+ unsigned char resubmit_to_unthrottle; /* throttling has disabled the read urb */
+ unsigned int ctrl_caps; /* control capabilities from the class specific header */
+};
+
+/* "Union Functional Descriptor" from CDC spec 5.2.3.X */
+struct union_desc {
+ u8 bLength;
+ u8 bDescriptorType;
+ u8 bDescriptorSubType;
+
+ u8 bMasterInterface0;
+ u8 bSlaveInterface0;
+ /* ... and there could be other slave interfaces */
+} __attribute__ ((packed));
+
+/* class specific descriptor types */
+#define CDC_HEADER_TYPE 0x00
+#define CDC_CALL_MANAGEMENT_TYPE 0x01
+#define CDC_AC_MANAGEMENT_TYPE 0x02
+#define CDC_UNION_TYPE 0x06
+#define CDC_COUNTRY_TYPE 0x07
+
+#define CDC_DATA_INTERFACE_TYPE 0x0a
+
+
--- /dev/null
+/*
+ * drivers/usb/core/sysfs.c
+ *
+ * (C) Copyright 2002 David Brownell
+ * (C) Copyright 2002,2004 Greg Kroah-Hartman
+ * (C) Copyright 2002,2004 IBM Corp.
+ *
+ * All of the sysfs file attributes for usb devices and interfaces.
+ *
+ */
+
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+
+#ifdef CONFIG_USB_DEBUG
+ #define DEBUG
+#else
+ #undef DEBUG
+#endif
+#include <linux/usb.h>
+
+#include "usb.h"
+
+/* Active configuration fields */
+#define usb_actconfig_show(field, multiplier, format_string) \
+static ssize_t show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ struct usb_host_config *actconfig; \
+ \
+ udev = to_usb_device (dev); \
+ actconfig = udev->actconfig; \
+ if (actconfig) \
+ return sprintf (buf, format_string, \
+ actconfig->desc.field * multiplier); \
+ else \
+ return 0; \
+} \
+
+#define usb_actconfig_attr(field, multiplier, format_string) \
+usb_actconfig_show(field, multiplier, format_string) \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_actconfig_attr (bNumInterfaces, 1, "%2d\n")
+usb_actconfig_attr (bmAttributes, 1, "%2x\n")
+usb_actconfig_attr (bMaxPower, 2, "%3dmA\n")
+
+#define usb_actconfig_str(name, field) \
+static ssize_t show_##name(struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ struct usb_host_config *actconfig; \
+ int len; \
+ \
+ udev = to_usb_device (dev); \
+ actconfig = udev->actconfig; \
+ if (!actconfig) \
+ return 0; \
+ len = usb_string(udev, actconfig->desc.field, buf, PAGE_SIZE); \
+ if (len < 0) \
+ return 0; \
+ buf[len] = '\n'; \
+ buf[len+1] = 0; \
+ return len+1; \
+} \
+static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL);
+
+usb_actconfig_str (configuration, iConfiguration)
+
+/* configuration value is always present, and r/w */
+usb_actconfig_show(bConfigurationValue, 1, "%u\n");
+
+static ssize_t
+set_bConfigurationValue (struct device *dev, const char *buf, size_t count)
+{
+ struct usb_device *udev = to_usb_device (dev);
+ int config, value;
+
+ if (sscanf (buf, "%u", &config) != 1 || config > 255)
+ return -EINVAL;
+ usb_lock_device(udev);
+ value = usb_set_configuration (udev, config);
+ usb_unlock_device(udev);
+ return (value < 0) ? value : count;
+}
+
+static DEVICE_ATTR(bConfigurationValue, S_IRUGO | S_IWUSR,
+ show_bConfigurationValue, set_bConfigurationValue);
+
+/* String fields */
+#define usb_string_attr(name, field) \
+static ssize_t show_##name(struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ int len; \
+ \
+ udev = to_usb_device (dev); \
+ len = usb_string(udev, udev->descriptor.field, buf, PAGE_SIZE); \
+ if (len < 0) \
+ return 0; \
+ buf[len] = '\n'; \
+ buf[len+1] = 0; \
+ return len+1; \
+} \
+static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL);
+
+usb_string_attr(product, iProduct);
+usb_string_attr(manufacturer, iManufacturer);
+usb_string_attr(serial, iSerialNumber);
+
+static ssize_t
+show_speed (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+ char *speed;
+
+ udev = to_usb_device (dev);
+
+ switch (udev->speed) {
+ case USB_SPEED_LOW:
+ speed = "1.5";
+ break;
+ case USB_SPEED_UNKNOWN:
+ case USB_SPEED_FULL:
+ speed = "12";
+ break;
+ case USB_SPEED_HIGH:
+ speed = "480";
+ break;
+ default:
+ speed = "unknown";
+ }
+ return sprintf (buf, "%s\n", speed);
+}
+static DEVICE_ATTR(speed, S_IRUGO, show_speed, NULL);
+
+static ssize_t
+show_devnum (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%d\n", udev->devnum);
+}
+static DEVICE_ATTR(devnum, S_IRUGO, show_devnum, NULL);
+
+static ssize_t
+show_version (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%2x.%02x\n", udev->descriptor.bcdUSB >> 8,
+ udev->descriptor.bcdUSB & 0xff);
+}
+static DEVICE_ATTR(version, S_IRUGO, show_version, NULL);
+
+static ssize_t
+show_maxchild (struct device *dev, char *buf)
+{
+ struct usb_device *udev;
+
+ udev = to_usb_device (dev);
+ return sprintf (buf, "%d\n", udev->maxchild);
+}
+static DEVICE_ATTR(maxchild, S_IRUGO, show_maxchild, NULL);
+
+/* Descriptor fields */
+#define usb_descriptor_attr(field, format_string) \
+static ssize_t \
+show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_device *udev; \
+ \
+ udev = to_usb_device (dev); \
+ return sprintf (buf, format_string, udev->descriptor.field); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_descriptor_attr (idVendor, "%04x\n")
+usb_descriptor_attr (idProduct, "%04x\n")
+usb_descriptor_attr (bcdDevice, "%04x\n")
+usb_descriptor_attr (bDeviceClass, "%02x\n")
+usb_descriptor_attr (bDeviceSubClass, "%02x\n")
+usb_descriptor_attr (bDeviceProtocol, "%02x\n")
+usb_descriptor_attr (bNumConfigurations, "%d\n")
+
+static struct attribute *dev_attrs[] = {
+ /* current configuration's attributes */
+ &dev_attr_bNumInterfaces.attr,
+ &dev_attr_bConfigurationValue.attr,
+ &dev_attr_bmAttributes.attr,
+ &dev_attr_bMaxPower.attr,
+ /* device attributes */
+ &dev_attr_idVendor.attr,
+ &dev_attr_idProduct.attr,
+ &dev_attr_bcdDevice.attr,
+ &dev_attr_bDeviceClass.attr,
+ &dev_attr_bDeviceSubClass.attr,
+ &dev_attr_bDeviceProtocol.attr,
+ &dev_attr_bNumConfigurations.attr,
+ &dev_attr_speed.attr,
+ &dev_attr_devnum.attr,
+ &dev_attr_version.attr,
+ &dev_attr_maxchild.attr,
+ NULL,
+};
+static struct attribute_group dev_attr_grp = {
+ .attrs = dev_attrs,
+};
+
+void usb_create_sysfs_dev_files (struct usb_device *udev)
+{
+ struct device *dev = &udev->dev;
+
+ sysfs_create_group(&dev->kobj, &dev_attr_grp);
+
+ if (udev->descriptor.iManufacturer)
+ device_create_file (dev, &dev_attr_manufacturer);
+ if (udev->descriptor.iProduct)
+ device_create_file (dev, &dev_attr_product);
+ if (udev->descriptor.iSerialNumber)
+ device_create_file (dev, &dev_attr_serial);
+ device_create_file (dev, &dev_attr_configuration);
+}
+
+void usb_remove_sysfs_dev_files (struct usb_device *udev)
+{
+ struct device *dev = &udev->dev;
+
+ sysfs_remove_group(&dev->kobj, &dev_attr_grp);
+
+ if (udev->descriptor.iManufacturer)
+ device_remove_file(dev, &dev_attr_manufacturer);
+ if (udev->descriptor.iProduct)
+ device_remove_file(dev, &dev_attr_product);
+ if (udev->descriptor.iSerialNumber)
+ device_remove_file(dev, &dev_attr_serial);
+ device_remove_file (dev, &dev_attr_configuration);
+}
+
+/* Interface fields */
+#define usb_intf_attr(field, format_string) \
+static ssize_t \
+show_##field (struct device *dev, char *buf) \
+{ \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ \
+ return sprintf (buf, format_string, intf->cur_altsetting->desc.field); \
+} \
+static DEVICE_ATTR(field, S_IRUGO, show_##field, NULL);
+
+usb_intf_attr (bInterfaceNumber, "%02x\n")
+usb_intf_attr (bAlternateSetting, "%2d\n")
+usb_intf_attr (bNumEndpoints, "%02x\n")
+usb_intf_attr (bInterfaceClass, "%02x\n")
+usb_intf_attr (bInterfaceSubClass, "%02x\n")
+usb_intf_attr (bInterfaceProtocol, "%02x\n")
+
+#define usb_intf_str(name, field) \
+static ssize_t show_##name(struct device *dev, char *buf) \
+{ \
+ struct usb_interface *intf; \
+ struct usb_device *udev; \
+ int len; \
+ \
+ intf = to_usb_interface (dev); \
+ udev = interface_to_usbdev (intf); \
+ len = usb_string(udev, intf->cur_altsetting->desc.field, buf, PAGE_SIZE);\
+ if (len < 0) \
+ return 0; \
+ buf[len] = '\n'; \
+ buf[len+1] = 0; \
+ return len+1; \
+} \
+static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL);
+
+usb_intf_str (interface, iInterface);
+
+static struct attribute *intf_attrs[] = {
+ &dev_attr_bInterfaceNumber.attr,
+ &dev_attr_bAlternateSetting.attr,
+ &dev_attr_bNumEndpoints.attr,
+ &dev_attr_bInterfaceClass.attr,
+ &dev_attr_bInterfaceSubClass.attr,
+ &dev_attr_bInterfaceProtocol.attr,
+ NULL,
+};
+static struct attribute_group intf_attr_grp = {
+ .attrs = intf_attrs,
+};
+
+void usb_create_sysfs_intf_files (struct usb_interface *intf)
+{
+ sysfs_create_group(&intf->dev.kobj, &intf_attr_grp);
+
+ if (intf->cur_altsetting->desc.iInterface)
+ device_create_file(&intf->dev, &dev_attr_interface);
+
+}
+
+void usb_remove_sysfs_intf_files (struct usb_interface *intf)
+{
+ sysfs_remove_group(&intf->dev.kobj, &intf_attr_grp);
+
+ if (intf->cur_altsetting->desc.iInterface)
+ device_remove_file(&intf->dev, &dev_attr_interface);
+
+}
--- /dev/null
+/*
+ * OHCI HCD (Host Controller Driver) for USB.
+ *
+ * (C) Copyright 1999 Roman Weissgaerber <weissg@vienna.at>
+ * (C) Copyright 2000-2002 David Brownell <dbrownell@users.sourceforge.net>
+ * (C) Copyright 2002 Hewlett-Packard Company
+ *
+ * Bus Glue for Sharp LH7A404
+ *
+ * Written by Christopher Hoover <ch@hpl.hp.com>
+ * Based on fragments of previous driver by Russell King et al.
+ *
+ * Modified for LH7A404 from ohci-sa1111.c
+ * by Durgesh Pattamatta <pattamattad@sharpsec.com>
+ *
+ * This file is licensed under the GPL.
+ */
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <asm/arch/hardware.h>
+
+
+extern int usb_disabled(void);
+
+/*-------------------------------------------------------------------------*/
+
+static void lh7a404_start_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": starting LH7A404 OHCI USB Controller\n");
+
+ /*
+ * Now, carefully enable the USB clock, and take
+ * the USB host controller out of reset.
+ */
+ CSC_PWRCNT |= CSC_PWRCNT_USBH_EN; /* Enable clock */
+ udelay(1000);
+ USBH_CMDSTATUS = OHCI_HCR;
+
+ printk(KERN_DEBUG __FILE__
+		": Clock to USB host has been enabled\n");
+}
+
+static void lh7a404_stop_hc(struct platform_device *dev)
+{
+ printk(KERN_DEBUG __FILE__
+ ": stopping LH7A404 OHCI USB Controller\n");
+
+ CSC_PWRCNT &= ~CSC_PWRCNT_USBH_EN; /* Disable clock */
+}
+
+
+/*-------------------------------------------------------------------------*/
+
+
+static irqreturn_t usb_hcd_lh7a404_hcim_irq (int irq, void *__hcd,
+ struct pt_regs * r)
+{
+ struct usb_hcd *hcd = __hcd;
+
+ return usb_hcd_irq(irq, hcd, r);
+}
+
+/*-------------------------------------------------------------------------*/
+
+void usb_hcd_lh7a404_remove (struct usb_hcd *, struct platform_device *);
+
+/* configure so an HC device and id are always provided */
+/* always called with process context; sleeping is OK */
+
+
+/**
+ * usb_hcd_lh7a404_probe - initialize LH7A404-based HCDs
+ * Context: !in_interrupt()
+ *
+ * Allocates basic resources for this USB host controller, and
+ * then invokes the start() method for the HCD associated with it
+ * through the hotplug entry's driver_data.
+ *
+ */
+int usb_hcd_lh7a404_probe (const struct hc_driver *driver,
+ struct usb_hcd **hcd_out,
+ struct platform_device *dev)
+{
+ int retval;
+	struct usb_hcd *hcd = NULL;
+
+ unsigned int *addr = NULL;
+
+ if (!request_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1, hcd_name)) {
+ pr_debug("request_mem_region failed");
+ return -EBUSY;
+ }
+
+
+ lh7a404_start_hc(dev);
+
+ addr = ioremap(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+ if (!addr) {
+ pr_debug("ioremap failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+
+ hcd = driver->hcd_alloc ();
+	if (hcd == NULL) {
+ pr_debug ("hcd_alloc failed");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+	if (dev->resource[1].flags != IORESOURCE_IRQ) {
+ pr_debug ("resource[1] is not IORESOURCE_IRQ");
+ retval = -ENOMEM;
+ goto err1;
+ }
+
+ hcd->driver = (struct hc_driver *) driver;
+ hcd->description = driver->description;
+ hcd->irq = dev->resource[1].start;
+ hcd->regs = addr;
+ hcd->self.controller = &dev->dev;
+
+ retval = hcd_buffer_create (hcd);
+ if (retval != 0) {
+ pr_debug ("pool alloc fail");
+ goto err1;
+ }
+
+ retval = request_irq (hcd->irq, usb_hcd_lh7a404_hcim_irq, SA_INTERRUPT,
+ hcd->description, hcd);
+ if (retval != 0) {
+ pr_debug("request_irq failed");
+ retval = -EBUSY;
+ goto err2;
+ }
+
+ pr_debug ("%s (LH7A404) at 0x%p, irq %d",
+ hcd->description, hcd->regs, hcd->irq);
+
+ usb_bus_init (&hcd->self);
+ hcd->self.op = &usb_hcd_operations;
+ hcd->self.release = &usb_hcd_release;
+ hcd->self.hcpriv = (void *) hcd;
+ hcd->self.bus_name = "lh7a404";
+ hcd->product_desc = "LH7A404 OHCI";
+
+ INIT_LIST_HEAD (&hcd->dev_list);
+
+ usb_register_bus (&hcd->self);
+
+ if ((retval = driver->start (hcd)) < 0)
+ {
+ usb_hcd_lh7a404_remove(hcd, dev);
+ return retval;
+ }
+
+ *hcd_out = hcd;
+ return 0;
+
+ err2:
+ hcd_buffer_destroy (hcd);
+ err1:
+ kfree(hcd);
+ lh7a404_stop_hc(dev);
+ release_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+ return retval;
+}
+
+
+/* may be called without controller electrically present */
+/* may be called with controller, bus, and devices active */
+
+/**
+ * usb_hcd_lh7a404_remove - shutdown processing for LH7A404-based HCDs
+ * @dev: USB Host Controller being removed
+ * Context: !in_interrupt()
+ *
+ * Reverses the effect of usb_hcd_lh7a404_probe(), first invoking
+ * the HCD's stop() method. It is always called from a thread
+ * context, normally "rmmod", "apmd", or something similar.
+ *
+ */
+void usb_hcd_lh7a404_remove (struct usb_hcd *hcd, struct platform_device *dev)
+{
+ pr_debug ("remove: %s, state %x", hcd->self.bus_name, hcd->state);
+
+ if (in_interrupt ())
+ BUG ();
+
+ hcd->state = USB_STATE_QUIESCING;
+
+ pr_debug ("%s: roothub graceful disconnect", hcd->self.bus_name);
+ usb_disconnect (&hcd->self.root_hub);
+
+ hcd->driver->stop (hcd);
+ hcd->state = USB_STATE_HALT;
+
+ free_irq (hcd->irq, hcd);
+ hcd_buffer_destroy (hcd);
+
+ usb_deregister_bus (&hcd->self);
+
+ lh7a404_stop_hc(dev);
+ release_mem_region(dev->resource[0].start,
+ dev->resource[0].end
+ - dev->resource[0].start + 1);
+}
+
+/*-------------------------------------------------------------------------*/
+
+static int __devinit
+ohci_lh7a404_start (struct usb_hcd *hcd)
+{
+ struct ohci_hcd *ohci = hcd_to_ohci (hcd);
+ int ret;
+
+ ohci_dbg (ohci, "ohci_lh7a404_start, ohci:%p", ohci);
+ if ((ret = ohci_init(ohci)) < 0)
+ return ret;
+
+ if ((ret = ohci_run (ohci)) < 0) {
+ err ("can't start %s", ohci->hcd.self.bus_name);
+ ohci_stop (hcd);
+ return ret;
+ }
+ return 0;
+}
+
+/*-------------------------------------------------------------------------*/
+
+static const struct hc_driver ohci_lh7a404_hc_driver = {
+ .description = hcd_name,
+
+ /*
+ * generic hardware linkage
+ */
+ .irq = ohci_irq,
+ .flags = HCD_USB11,
+
+ /*
+ * basic lifecycle operations
+ */
+ .start = ohci_lh7a404_start,
+#ifdef CONFIG_PM
+ /* suspend: ohci_lh7a404_suspend, -- tbd */
+ /* resume: ohci_lh7a404_resume, -- tbd */
+#endif /*CONFIG_PM*/
+ .stop = ohci_stop,
+
+ /*
+ * memory lifecycle (except per-request)
+ */
+ .hcd_alloc = ohci_hcd_alloc,
+
+ /*
+ * managing i/o requests and associated device resources
+ */
+ .urb_enqueue = ohci_urb_enqueue,
+ .urb_dequeue = ohci_urb_dequeue,
+ .endpoint_disable = ohci_endpoint_disable,
+
+ /*
+ * scheduling support
+ */
+ .get_frame_number = ohci_get_frame,
+
+ /*
+ * root hub support
+ */
+ .hub_status_data = ohci_hub_status_data,
+ .hub_control = ohci_hub_control,
+};
+
+/*-------------------------------------------------------------------------*/
+
+static int ohci_hcd_lh7a404_drv_probe(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = NULL;
+ int ret;
+
+ pr_debug ("In ohci_hcd_lh7a404_drv_probe");
+
+ if (usb_disabled())
+ return -ENODEV;
+
+ ret = usb_hcd_lh7a404_probe(&ohci_lh7a404_hc_driver, &hcd, pdev);
+
+ if (ret == 0)
+ dev_set_drvdata(dev, hcd);
+
+ return ret;
+}
+
+static int ohci_hcd_lh7a404_drv_remove(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ usb_hcd_lh7a404_remove(hcd, pdev);
+ dev_set_drvdata(dev, NULL);
+ return 0;
+}
+ /*TBD*/
+/*static int ohci_hcd_lh7a404_drv_suspend(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+ return 0;
+}
+static int ohci_hcd_lh7a404_drv_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct usb_hcd *hcd = dev_get_drvdata(dev);
+
+
+ return 0;
+}
+*/
+
+static struct device_driver ohci_hcd_lh7a404_driver = {
+ .name = "lh7a404-ohci",
+ .bus = &platform_bus_type,
+ .probe = ohci_hcd_lh7a404_drv_probe,
+ .remove = ohci_hcd_lh7a404_drv_remove,
+ /*.suspend = ohci_hcd_lh7a404_drv_suspend, */
+ /*.resume = ohci_hcd_lh7a404_drv_resume, */
+};
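+/*
+ * Note: the platform bus matches drivers to devices by name, so the board
+ * support code must register a platform device called "lh7a404-ohci" for
+ * this driver's probe() to run.
+ */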
+
+static int __init ohci_hcd_lh7a404_init (void)
+{
+ pr_debug (DRIVER_INFO " (LH7A404)");
+ pr_debug ("block sizes: ed %d td %d\n",
+ sizeof (struct ed), sizeof (struct td));
+
+ return driver_register(&ohci_hcd_lh7a404_driver);
+}
+
+static void __exit ohci_hcd_lh7a404_cleanup (void)
+{
+ driver_unregister(&ohci_hcd_lh7a404_driver);
+}
+
+module_init (ohci_hcd_lh7a404_init);
+module_exit (ohci_hcd_lh7a404_cleanup);
--- /dev/null
+/******************************************************************************
+ * touchkitusb.c -- Driver for eGalax TouchKit USB Touchscreens
+ *
+ * Copyright (C) 2004 by Daniel Ritz
+ * Copyright (C) by Todd E. Johnson (mtouchusb.c)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ * Based upon mtouchusb.c
+ *
+ *****************************************************************************/
+
+//#define DEBUG
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/input.h>
+#include <linux/module.h>
+#include <linux/init.h>
+
+#if !defined(DEBUG) && defined(CONFIG_USB_DEBUG)
+#define DEBUG
+#endif
+#include <linux/usb.h>
+
+
+#define TOUCHKIT_MIN_XC 0x0
+#define TOUCHKIT_MAX_XC 0x07ff
+#define TOUCHKIT_XC_FUZZ 0x0
+#define TOUCHKIT_XC_FLAT 0x0
+#define TOUCHKIT_MIN_YC 0x0
+#define TOUCHKIT_MAX_YC 0x07ff
+#define TOUCHKIT_YC_FUZZ 0x0
+#define TOUCHKIT_YC_FLAT 0x0
+#define TOUCHKIT_REPORT_DATA_SIZE 8
+
+#define TOUCHKIT_DOWN 0x01
+#define TOUCHKIT_POINT_TOUCH 0x81
+#define TOUCHKIT_POINT_NOTOUCH 0x80
+
+#define TOUCHKIT_GET_TOUCHED(dat) ((((dat)[0]) & TOUCHKIT_DOWN) ? 1 : 0)
+#define TOUCHKIT_GET_X(dat) (((dat)[3] << 7) | (dat)[4])
+#define TOUCHKIT_GET_Y(dat) (((dat)[1] << 7) | (dat)[2])
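+/*
+ * Report layout, as decoded above: bit 0 of byte 0 is the touch-down flag;
+ * X is ((dat[3] << 7) | dat[4]) and Y is ((dat[1] << 7) | dat[2]), with a
+ * 7-bit low half, giving the 0..0x07ff coordinate range. E.g. dat[3] = 0x05
+ * and dat[4] = 0x20 decode to X = (0x05 << 7) | 0x20 = 0x2a0.
+ */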
+
+#define DRIVER_VERSION "v0.1"
+#define DRIVER_AUTHOR "Daniel Ritz <daniel.ritz@gmx.ch>"
+#define DRIVER_DESC "eGalax TouchKit USB HID Touchscreen Driver"
+
+static int swap_xy;
+module_param(swap_xy, bool, 0644);
+MODULE_PARM_DESC(swap_xy, "If set X and Y axes are swapped.");
+
+struct touchkit_usb {
+ unsigned char *data;
+ dma_addr_t data_dma;
+ struct urb *irq;
+ struct usb_device *udev;
+ struct input_dev input;
+ int open;
+ char name[128];
+ char phys[64];
+};
+
+static struct usb_device_id touchkit_devices[] = {
+ {USB_DEVICE(0x3823, 0x0001)},
+ {USB_DEVICE(0x0eef, 0x0001)},
+ {}
+};
+
+static void touchkit_irq(struct urb *urb, struct pt_regs *regs)
+{
+ struct touchkit_usb *touchkit = urb->context;
+ int retval;
+ int x, y;
+
+ switch (urb->status) {
+ case 0:
+ /* success */
+ break;
+ case -ETIMEDOUT:
+ /* this urb is timing out */
+ dbg("%s - urb timed out - was the device unplugged?",
+ __FUNCTION__);
+ return;
+ case -ECONNRESET:
+ case -ENOENT:
+ case -ESHUTDOWN:
+ /* this urb is terminated, clean up */
+ dbg("%s - urb shutting down with status: %d",
+ __FUNCTION__, urb->status);
+ return;
+ default:
+ dbg("%s - nonzero urb status received: %d",
+ __FUNCTION__, urb->status);
+ goto exit;
+ }
+
+ if (swap_xy) {
+ y = TOUCHKIT_GET_X(touchkit->data);
+ x = TOUCHKIT_GET_Y(touchkit->data);
+ } else {
+ x = TOUCHKIT_GET_X(touchkit->data);
+ y = TOUCHKIT_GET_Y(touchkit->data);
+ }
+
+ input_regs(&touchkit->input, regs);
+ input_report_key(&touchkit->input, BTN_TOUCH,
+ TOUCHKIT_GET_TOUCHED(touchkit->data));
+ input_report_abs(&touchkit->input, ABS_X, x);
+ input_report_abs(&touchkit->input, ABS_Y, y);
+ input_sync(&touchkit->input);
+
+exit:
+ retval = usb_submit_urb(urb, GFP_ATOMIC);
+ if (retval)
+ err("%s - usb_submit_urb failed with result: %d",
+ __FUNCTION__, retval);
+}
+
+static int touchkit_open(struct input_dev *input)
+{
+ struct touchkit_usb *touchkit = input->private;
+
+ if (touchkit->open++)
+ return 0;
+
+ touchkit->irq->dev = touchkit->udev;
+
+ if (usb_submit_urb(touchkit->irq, GFP_ATOMIC)) {
+ touchkit->open--;
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static void touchkit_close(struct input_dev *input)
+{
+ struct touchkit_usb *touchkit = input->private;
+
+ if (!--touchkit->open)
+ usb_kill_urb(touchkit->irq);
+}
+
+static int touchkit_alloc_buffers(struct usb_device *udev,
+ struct touchkit_usb *touchkit)
+{
+ touchkit->data = usb_buffer_alloc(udev, TOUCHKIT_REPORT_DATA_SIZE,
+ SLAB_ATOMIC, &touchkit->data_dma);
+
+ if (!touchkit->data)
+ return -1;
+
+ return 0;
+}
+
+static void touchkit_free_buffers(struct usb_device *udev,
+ struct touchkit_usb *touchkit)
+{
+ if (touchkit->data)
+ usb_buffer_free(udev, TOUCHKIT_REPORT_DATA_SIZE,
+ touchkit->data, touchkit->data_dma);
+}
+
+static int touchkit_probe(struct usb_interface *intf,
+ const struct usb_device_id *id)
+{
+ int ret;
+ struct touchkit_usb *touchkit;
+ struct usb_host_interface *interface;
+ struct usb_endpoint_descriptor *endpoint;
+ struct usb_device *udev = interface_to_usbdev(intf);
+ char path[64];
+ char *buf;
+
+ interface = intf->cur_altsetting;
+ endpoint = &interface->endpoint[0].desc;
+
+ touchkit = kmalloc(sizeof(struct touchkit_usb), GFP_KERNEL);
+ if (!touchkit)
+ return -ENOMEM;
+
+ memset(touchkit, 0, sizeof(struct touchkit_usb));
+ touchkit->udev = udev;
+
+ if (touchkit_alloc_buffers(udev, touchkit)) {
+ ret = -ENOMEM;
+ goto out_free;
+ }
+
+ touchkit->input.private = touchkit;
+ touchkit->input.open = touchkit_open;
+ touchkit->input.close = touchkit_close;
+
+ usb_make_path(udev, path, 64);
+ sprintf(touchkit->phys, "%s/input0", path);
+
+ touchkit->input.name = touchkit->name;
+ touchkit->input.phys = touchkit->phys;
+ touchkit->input.id.bustype = BUS_USB;
+ touchkit->input.id.vendor = udev->descriptor.idVendor;
+ touchkit->input.id.product = udev->descriptor.idProduct;
+ touchkit->input.id.version = udev->descriptor.bcdDevice;
+ touchkit->input.dev = &intf->dev;
+
+ touchkit->input.evbit[0] = BIT(EV_KEY) | BIT(EV_ABS);
+ touchkit->input.absbit[0] = BIT(ABS_X) | BIT(ABS_Y);
+ touchkit->input.keybit[LONG(BTN_TOUCH)] = BIT(BTN_TOUCH);
+
+ /* Used to Scale Compensated Data */
+ touchkit->input.absmin[ABS_X] = TOUCHKIT_MIN_XC;
+ touchkit->input.absmax[ABS_X] = TOUCHKIT_MAX_XC;
+ touchkit->input.absfuzz[ABS_X] = TOUCHKIT_XC_FUZZ;
+ touchkit->input.absflat[ABS_X] = TOUCHKIT_XC_FLAT;
+ touchkit->input.absmin[ABS_Y] = TOUCHKIT_MIN_YC;
+ touchkit->input.absmax[ABS_Y] = TOUCHKIT_MAX_YC;
+ touchkit->input.absfuzz[ABS_Y] = TOUCHKIT_YC_FUZZ;
+ touchkit->input.absflat[ABS_Y] = TOUCHKIT_YC_FLAT;
+
+ buf = kmalloc(63, GFP_KERNEL);
+ if (!buf) {
+ ret = -ENOMEM;
+ goto out_free_buffers;
+ }
+
+ if (udev->descriptor.iManufacturer &&
+ usb_string(udev, udev->descriptor.iManufacturer, buf, 63) > 0)
+ strcat(touchkit->name, buf);
+	if (udev->descriptor.iProduct &&
+	    usb_string(udev, udev->descriptor.iProduct, buf, 63) > 0) {
+		/* append; sprintf() from a buffer into itself is undefined */
+		strcat(touchkit->name, " ");
+		strcat(touchkit->name, buf);
+	}
+
+ if (!strlen(touchkit->name))
+ sprintf(touchkit->name, "USB Touchscreen %04x:%04x",
+ touchkit->input.id.vendor, touchkit->input.id.product);
+
+ kfree(buf);
+
+ touchkit->irq = usb_alloc_urb(0, GFP_KERNEL);
+ if (!touchkit->irq) {
+ dbg("%s - usb_alloc_urb failed: touchkit->irq", __FUNCTION__);
+ ret = -ENOMEM;
+ goto out_free_buffers;
+ }
+
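+	/* The interrupt IN endpoint is hardwired as 0x81 (endpoint 1, IN)
+	   here; only its polling interval is taken from the first endpoint
+	   descriptor fetched in probe. */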
+ usb_fill_int_urb(touchkit->irq, touchkit->udev,
+ usb_rcvintpipe(touchkit->udev, 0x81),
+ touchkit->data, TOUCHKIT_REPORT_DATA_SIZE,
+ touchkit_irq, touchkit, endpoint->bInterval);
+
+ input_register_device(&touchkit->input);
+
+ printk(KERN_INFO "input: %s on %s\n", touchkit->name, path);
+ usb_set_intfdata(intf, touchkit);
+
+ return 0;
+
+out_free_buffers:
+ touchkit_free_buffers(udev, touchkit);
+out_free:
+ kfree(touchkit);
+ return ret;
+}
+
+static void touchkit_disconnect(struct usb_interface *intf)
+{
+ struct touchkit_usb *touchkit = usb_get_intfdata(intf);
+
+ dbg("%s - called", __FUNCTION__);
+
+ if (!touchkit)
+ return;
+
+ dbg("%s - touchkit is initialized, cleaning up", __FUNCTION__);
+ usb_set_intfdata(intf, NULL);
+ input_unregister_device(&touchkit->input);
+ usb_kill_urb(touchkit->irq);
+ usb_free_urb(touchkit->irq);
+ touchkit_free_buffers(interface_to_usbdev(intf), touchkit);
+ kfree(touchkit);
+}
+
+MODULE_DEVICE_TABLE(usb, touchkit_devices);
+
+static struct usb_driver touchkit_driver = {
+ .owner = THIS_MODULE,
+ .name = "touchkitusb",
+ .probe = touchkit_probe,
+ .disconnect = touchkit_disconnect,
+ .id_table = touchkit_devices,
+};
+
+static int __init touchkit_init(void)
+{
+ return usb_register(&touchkit_driver);
+}
+
+static void __exit touchkit_cleanup(void)
+{
+ usb_deregister(&touchkit_driver);
+}
+
+module_init(touchkit_init);
+module_exit(touchkit_cleanup);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
--- /dev/null
+9.0.2
+
+* Adding #ifdef to compile PWC before and after 2.6.5
+
+9.0.1
+
+9.0
+
+
+8.12
+
+* Implement motorized pan/tilt feature for Logitech QuickCam Orbit/Sphere.
+
+8.11.1
+
+* Fix for PCVC720/40, would not be able to set videomode
+* Fix for Samsung MPC models, apparently they are based on a newer chipset
+
+8.11
+
+* 20 dev_hints (per request)
+* Hot unplugging should be better, no more dangling pointers or memory leaks
+* Added reserved Logitech webcam IDs
+* Device now remembers size & fps between close()/open()
+* Removed palette stuff altogether
+
+8.10.1
+
+* Added IDs for PCVC720K/40 and Creative Labs Webcam Pro
+
+8.10
+
+* Fixed ID for QuickCam Notebook pro
+* Added GREALSIZE ioctl() call
+* Fixed bug in case PWCX was not loaded and invalid size was set
+
+8.9
+
+* Merging with kernel 2.5.49
+* Adding IDs for QuickCam Zoom & QuickCam Notebook
+
+8.8
+
+* Fixing 'leds' parameter
+* Adding IDs for Logitech QuickCam Pro 4000
+* Making URB init/cleanup a little nicer
+
+8.7
+
+* Incorporating changes in ioctl() parameter passing
+* Also changes to URB mechanism
+
+8.6
+
+* Added IDs for Visionite VCS UM100 and UC300
+* Removed YUV420-interlaced palette altogether (was confusing)
+* Removed MIRROR stuff as it didn't work anyway
+* Fixed a problem with the 'leds' parameter (wouldn't blink)
+* Added ioctl()s for advanced features: 'extended' whitebalance ioctl()s,
+ CONTOUR, BACKLIGHT, FLICKER, DYNNOISE.
+* VIDIOCGCAP.name now contains real camera model name instead of
+ 'Philips xxx webcam'
+* Added PROBE ioctl (see previous point & API doc)
+
+8.5
+
+* Adding IDs for Creative Labs Webcam 5
+* Adding IDs for SOTEC CMS-001 webcam
+* Solving possible hang in VIDIOCSYNC when unplugging the cam
+* Forgot to return structure in VIDIOCPWCGAWB, oops
+* Time intervals for the LEDs are now in milliseconds
+
+8.4
+
+* Fixing power_save option for Vesta range
+* Handling new error codes in ISOC callback
+* Adding dev_hint module parameter, to specify /dev/videoX device nodes
+
+8.3
+
+* Adding Samsung C10 and C30 cameras
+* Removing palette module parameter
+* Fixed typo in ID of QuickCam 3000 Pro
+* Adding LED settings (blinking while in use) for ToUCam cameras.
+* Turns LED off when camera is not in use.
+
+8.2
+
+* Making module more silent when trace = 0
+* Adding QuickCam 3000 Pro IDs
+* Chrominance control for the Vesta cameras
+* Hopefully fixed problems on machines with BIGMEM and > 1GB of RAM
+* Included Oliver Neukem's lock_kernel() patch
+* Allocates less memory for image buffers
+* Adds ioctl()s for the whitebalancing
+
+8.1
+
+* Adding support for 750
+* Adding V4L GAUDIO/SAUDIO/UNIT ioctl() calls
+
+8.0
+* 'damage control' after inclusion in 2.4.5.
+* Changed wait-queue mechanism in read/mmap/poll according to the book.
+* Included YUV420P palette.
+* Changed interface to decompressor module.
+* Cleaned up pwc structure a bit.
+
+7.0
+
+* Fixed bug in vcvt_420i_yuyv; extra variables on stack were misaligned.
+* There is now a clear error message when an image size is selected that
+ is only supported using the decompressor, and the decompressor isn't
+ loaded.
+* When the decompressor wasn't loaded, selecting large image size
+ would create skewed or double images.
+
+6.3
+
+* Introduced spinlocks for the buffer pointer manipulation; a number of
+ reports seem to suggest the down()/up() semaphores were the cause of
+ lockups, since they are not suitable for interrupt/user locking.
+* Separated decompressor and core code into 2 modules.
+
+6.2
+
+* Non-integral image sizes are now padded with gray or black.
+* Added SHUTTERSPEED ioctl().
+* Fixed buglet in VIDIOCPWCSAGC; the function would always return an error,
+ even though the call succeeded.
+* Added hotplug support for 2.4.*.
+* Memory: the 645/646 uses less memory now.
+
+6.1
+
+* VIDIOCSPICT returns -EINVAL with invalid palettes.
+* Added saturation control.
+* Split decompressors from rest.
+* Fixed bug that would reset the framerate to the default framerate if
+  the rate field was set to 0 (which is not what I intended, namely: do not
+  change the framerate!).
+* VIDIOCPWCSCQUAL (setting compression quality) now takes effect immediately.
+* Workaround for a bug in the 730 sensor.
--- /dev/null
+ifneq ($(KERNELRELEASE),)
+
+pwc-objs := pwc-if.o pwc-misc.o pwc-ctrl.o pwc-uncompress.o pwc-dec1.o pwc-dec23.o pwc-kiara.o pwc-timon.o
+
+obj-$(CONFIG_USB_PWC) += pwc.o
+
+else
+
+KDIR := /lib/modules/$(shell uname -r)/build
+PWD := $(shell pwd)
+
+default:
+ $(MAKE) -C $(KDIR) SUBDIRS=$(PWD) modules
+
+endif
+
+clean:
+ rm -f *.[oas] .*.flags *.ko .*.cmd .*.d .*.tmp *.mod.c
+ rm -rf .tmp_versions
+
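+
+# Typical out-of-tree build (this assumes headers for the running kernel
+# are installed under /lib/modules/$(uname -r)/build and that CONFIG_USB_PWC=m
+# is set in that kernel's configuration):
+#   make          # builds pwc.ko through the kernel build system
+#   make clean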
--- /dev/null
+This file contains some additional information for the Philips and OEM webcams.
+E-mail: webcam@smcc.demon.nl Last updated: 2004-01-19
+Site: http://www.smcc.demon.nl/webcam/
+
+As of this moment, the following cameras are supported:
+ * Philips PCA645
+ * Philips PCA646
+ * Philips PCVC675
+ * Philips PCVC680
+ * Philips PCVC690
+ * Philips PCVC720/40
+ * Philips PCVC730
+ * Philips PCVC740
+ * Philips PCVC750
+ * Askey VC010
+ * Creative Labs Webcam 5
+ * Creative Labs Webcam Pro Ex
+ * Logitech QuickCam 3000 Pro
+ * Logitech QuickCam 4000 Pro
+ * Logitech QuickCam Notebook Pro
+ * Logitech QuickCam Zoom
+ * Logitech QuickCam Orbit
+ * Logitech QuickCam Sphere
+ * Samsung MPC-C10
+ * Samsung MPC-C30
+ * Sotec Afina Eye
+ * AME CU-001
+ * Visionite VCS-UM100
+ * Visionite VCS-UC300
+
+The main webpage for the Philips driver is at the address above. It contains
+a lot of extra information, a FAQ, and the binary plugin 'PWCX'. This plugin
+contains decompression routines that allow you to use higher image sizes and
+framerates; in addition the webcam uses less bandwidth on the USB bus (handy
+if you want to run more than 1 camera simultaneously). These routines fall
+under a NDA, and may therefor not be distributed as source; however, its use
+is completely optional.
+
+You can build this code either into your kernel, or as a module. I recommend
+the latter, since it makes troubleshooting a lot easier. The built-in
+microphone is supported through the USB Audio class.
+
+When you load the module you can set some default settings for the
+camera; some programs depend on a particular image-size or -format and
+don't know how to set it properly in the driver. The options are:
+
+size
+ Can be one of 'sqcif', 'qsif', 'qcif', 'sif', 'cif' or
+   'vga', for an image size of 128x96, 160x120, 176x144,
+   320x240, 352x288 and 640x480 respectively (of course, only for those
+   cameras that support these resolutions).
+
+fps
+   Specifies the desired framerate, an integer in the range 4-30.
+
+fbufs
+   This parameter specifies the number of internal buffers to use for storing
+   frames from the cam. This will help if the process that reads images from
+   the cam is a bit slow or momentarily busy. However, on slow machines it
+ only introduces lag, so choose carefully. The default is 3, which is
+ reasonable. You can set it between 2 and 5.
+
+mbufs
+ This is an integer between 1 and 10. It will tell the module the number of
+ buffers to reserve for mmap(), VIDIOCCGMBUF, VIDIOCMCAPTURE and friends.
+ The default is 2, which is adequate for most applications (double
+ buffering).
+
+ Should you experience a lot of 'Dumping frame...' messages during
+   grabbing with a tool that uses mmap(), you might want to increase it.
+ However, it doesn't really buffer images, it just gives you a bit more
+ slack when your program is behind. But you need a multi-threaded or
+ forked program to really take advantage of these buffers.
+
+ The absolute maximum is 10, but don't set it too high! Every buffer takes
+   up 460 KB of RAM, so unless you have a lot of memory, setting this to
+ something more than 4 is an absolute waste. This memory is only
+ allocated during open(), so nothing is wasted when the camera is not in
+ use.
+
+power_save
+ When power_save is enabled (set to 1), the module will try to shut down
+ the cam on close() and re-activate on open(). This will save power and
+ turn off the LED. Not all cameras support this though (the 645 and 646
+ don't have power saving at all), and some models don't work either (they
+ will shut down, but never wake up). Consider this experimental. By
+ default this option is disabled.
+
+compression (only useful with the plugin)
+ With this option you can control the compression factor that the camera
+ uses to squeeze the image through the USB bus. You can set the
+ parameter between 0 and 3:
+ 0 = prefer uncompressed images; if the requested mode is not available
+ in an uncompressed format, the driver will silently switch to low
+ compression.
+ 1 = low compression.
+ 2 = medium compression.
+ 3 = high compression.
+
+ High compression takes less bandwidth of course, but it could also
+ introduce some unwanted artefacts. The default is 2, medium compression.
+ See the FAQ on the website for an overview of which modes require
+ compression.
+
+ The compression parameter does not apply to the 645 and 646 cameras
+ and OEM models derived from those (only a few). Most cams honour this
+ parameter.
+
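+   For example, to prefer uncompressed modes wherever possible:
+
+      # modprobe pwc compression=0
+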
+leds
+   This setting takes 2 integers that define the on/off time for the LED
+ (in milliseconds). One of the interesting things that you can do with
+ this is let the LED blink while the camera is in use. This:
+
+ leds=500,500
+
+ will blink the LED once every second. But with:
+
+ leds=0,0
+
+ the LED never goes on, making it suitable for silent surveillance.
+
+ By default the camera's LED is on solid while in use, and turned off
+ when the camera is not used anymore.
+
+ This parameter works only with the ToUCam range of cameras (720, 730, 740,
+ 750) and OEMs. For other cameras this command is silently ignored, and
+ the LED cannot be controlled.
+
+   Finally: this parameter does not take effect UNTIL the first time you
+ open the camera device. Until then, the LED remains on.
+
+dev_hint
+ A long standing problem with USB devices is their dynamic nature: you
+ never know what device a camera gets assigned; it depends on module load
+ order, the hub configuration, the order in which devices are plugged in,
+ and the phase of the moon (i.e. it can be random). With this option you
+ can give the driver a hint as to what video device node (/dev/videoX) it
+ should use with a specific camera. This is also handy if you have two
+ cameras of the same model.
+
+ A camera is specified by its type (the number from the camera model,
+ like PCA645, PCVC750VC, etc) and optionally the serial number (visible
+ in /proc/bus/usb/devices). A hint consists of a string with the following
+ format:
+
+ [type[.serialnumber]:]node
+
+ The square brackets mean that both the type and the serialnumber are
+ optional, but a serialnumber cannot be specified without a type (which
+ would be rather pointless). The serialnumber is separated from the type
+ by a '.'; the node number by a ':'.
+
+ This somewhat cryptic syntax is best explained by a few examples:
+
+ dev_hint=3,5 The first detected cam gets assigned
+ /dev/video3, the second /dev/video5. Any
+ other cameras will get the first free
+ available slot (see below).
+
+ dev_hint=645:1,680:2 The PCA645 camera will get /dev/video1,
+ and a PCVC680 /dev/video2.
+
+ dev_hint=645.0123:3,645.4567:0 The PCA645 camera with serialnumber
+ 0123 goes to /dev/video3, the same
+ camera model with the 4567 serial
+ gets /dev/video0.
+
+ dev_hint=750:1,4,5,6 The PCVC750 camera will get /dev/video1, the
+ next 3 Philips cams will use /dev/video4
+ through /dev/video6.
+
+ Some points worth knowing:
+  - Serialnumbers are case sensitive and must be written in full, including
+ leading zeroes (it's treated as a string).
+ - If a device node is already occupied, registration will fail and
+ the webcam is not available.
+ - You can have up to 64 video devices; be sure to make enough device
+ nodes in /dev if you want to spread the numbers (this does not apply
+ to devfs). After /dev/video9 comes /dev/video10 (not /dev/videoA).
+ - If a camera does not match any dev_hint, it will simply get assigned
+ the first available device node, just as it used to be.
+
+trace
+ In order to better detect problems, it is now possible to turn on a
+ 'trace' of some of the calls the module makes; it logs all items in your
+ kernel log at debug level.
+
+ The trace variable is a bitmask; each bit represents a certain feature.
+ If you want to trace something, look up the bit value(s) in the table
+ below, add the values together and supply that to the trace variable.
+
+ Value Value Description Default
+ (dec) (hex)
+ 1 0x1 Module initialization; this will log messages On
+ while loading and unloading the module
+
+ 2 0x2 probe() and disconnect() traces On
+
+ 4 0x4 Trace open() and close() calls Off
+
+ 8 0x8 read(), mmap() and associated ioctl() calls Off
+
+ 16 0x10 Memory allocation of buffers, etc. Off
+
+ 32 0x20 Showing underflow, overflow and Dumping frame On
+ messages
+
+ 64 0x40 Show viewport and image sizes Off
+
+ 128 0x80 PWCX debugging Off
+
+   For example, to trace the open() & read() functions, sum 8 + 4 = 12,
+ so you would supply trace=12 during insmod or modprobe. If
+ you want to turn the initialization and probing tracing off, set trace=0.
+ The default value for trace is 35 (0x23).
+
+
+
+Example:
+
+ # modprobe pwc size=cif fps=15 power_save=1
+
+The fbufs, mbufs and trace parameters are global and apply to all connected
+cameras. Each camera has its own set of buffers.
+
+size and fps only specify defaults when you open() the device; this is to
+accommodate some tools that don't set the size. You can change these
+settings after open() with the Video4Linux ioctl() calls. The default of
+defaults is QCIF size at 10 fps.
+
+The compression parameter is semiglobal; it sets the initial compression
+preference for all cameras, but this parameter can be set per camera with
+the VIDIOCPWCSCQUAL ioctl() call.
+
+All parameters are optional.
+
--- /dev/null
+/* Driver for Philips webcam
+ Functions that send various control messages to the webcam, including
+ video modes.
+ (C) 1999-2003 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pcwx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/*
+ Changes
+ 2001/08/03 Alvarado Added methods for changing white balance and
+ red/green gains
+ */
+
+/* Control functions for the cam; brightness, contrast, video mode, etc. */
+
+#ifdef __KERNEL__
+#include <asm/uaccess.h>
+#endif
+#include <asm/errno.h>
+#include <linux/version.h>
+
+#include "pwc.h"
+#include "pwc-ioctl.h"
+#include "pwc-uncompress.h"
+#include "pwc-kiara.h"
+#include "pwc-timon.h"
+#include "pwc-dec1.h"
+#include "pwc-dec23.h"
+
+/* Request types: video */
+#define SET_LUM_CTL 0x01
+#define GET_LUM_CTL 0x02
+#define SET_CHROM_CTL 0x03
+#define GET_CHROM_CTL 0x04
+#define SET_STATUS_CTL 0x05
+#define GET_STATUS_CTL 0x06
+#define SET_EP_STREAM_CTL 0x07
+#define GET_EP_STREAM_CTL 0x08
+#define SET_MPT_CTL 0x0D
+#define GET_MPT_CTL 0x0E
+
+/* Selectors for the Luminance controls [GS]ET_LUM_CTL */
+#define AGC_MODE_FORMATTER 0x2000
+#define PRESET_AGC_FORMATTER 0x2100
+#define SHUTTER_MODE_FORMATTER 0x2200
+#define PRESET_SHUTTER_FORMATTER 0x2300
+#define PRESET_CONTOUR_FORMATTER 0x2400
+#define AUTO_CONTOUR_FORMATTER 0x2500
+#define BACK_LIGHT_COMPENSATION_FORMATTER 0x2600
+#define CONTRAST_FORMATTER 0x2700
+#define DYNAMIC_NOISE_CONTROL_FORMATTER 0x2800
+#define FLICKERLESS_MODE_FORMATTER 0x2900
+#define AE_CONTROL_SPEED 0x2A00
+#define BRIGHTNESS_FORMATTER 0x2B00
+#define GAMMA_FORMATTER 0x2C00
+
+/* Selectors for the Chrominance controls [GS]ET_CHROM_CTL */
+#define WB_MODE_FORMATTER 0x1000
+#define AWB_CONTROL_SPEED_FORMATTER 0x1100
+#define AWB_CONTROL_DELAY_FORMATTER 0x1200
+#define PRESET_MANUAL_RED_GAIN_FORMATTER 0x1300
+#define PRESET_MANUAL_BLUE_GAIN_FORMATTER 0x1400
+#define COLOUR_MODE_FORMATTER 0x1500
+#define SATURATION_MODE_FORMATTER1 0x1600
+#define SATURATION_MODE_FORMATTER2 0x1700
+
+/* Selectors for the Status controls [GS]ET_STATUS_CTL */
+#define SAVE_USER_DEFAULTS_FORMATTER 0x0200
+#define RESTORE_USER_DEFAULTS_FORMATTER 0x0300
+#define RESTORE_FACTORY_DEFAULTS_FORMATTER 0x0400
+#define READ_AGC_FORMATTER 0x0500
+#define READ_SHUTTER_FORMATTER 0x0600
+#define READ_RED_GAIN_FORMATTER 0x0700
+#define READ_BLUE_GAIN_FORMATTER 0x0800
+#define SENSOR_TYPE_FORMATTER1 0x0C00
+#define READ_RAW_Y_MEAN_FORMATTER 0x3100
+#define SET_POWER_SAVE_MODE_FORMATTER 0x3200
+#define MIRROR_IMAGE_FORMATTER 0x3300
+#define LED_FORMATTER 0x3400
+#define SENSOR_TYPE_FORMATTER2 0x3700
+
+/* Formatters for the Video Endpoint controls [GS]ET_EP_STREAM_CTL */
+#define VIDEO_OUTPUT_CONTROL_FORMATTER 0x0100
+
+/* Formatters for the motorized pan & tilt [GS]ET_MPT_CTL */
+#define PT_RELATIVE_CONTROL_FORMATTER 0x01
+#define PT_RESET_CONTROL_FORMATTER 0x02
+#define PT_STATUS_FORMATTER 0x03
+
+static char *size2name[PSZ_MAX] =
+{
+ "subQCIF",
+ "QSIF",
+ "QCIF",
+ "SIF",
+ "CIF",
+ "VGA",
+};
+
+/********/
+
+/* Entries for the Nala (645/646) camera; the Nala doesn't have compression
+ preferences, so you either get compressed or non-compressed streams.
+
+ An alternate value of 0 means this mode is not available at all.
+ */
+
+struct Nala_table_entry {
+ char alternate; /* USB alternate setting */
+ int compressed; /* Compressed yes/no */
+
+ unsigned char mode[3]; /* precomputed mode table */
+};
+
+static struct Nala_table_entry Nala_table[PSZ_MAX][8] =
+{
+#include "pwc-nala.h"
+};
+
+
+/****************************************************************************/
+
+
+#define SendControlMsg(request, value, buflen) \
+ usb_control_msg(pdev->udev, usb_sndctrlpipe(pdev->udev, 0), \
+ request, \
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE, \
+ value, \
+ pdev->vcinterface, \
+ &buf, buflen, HZ / 2)
+
+#define RecvControlMsg(request, value, buflen) \
+ usb_control_msg(pdev->udev, usb_rcvctrlpipe(pdev->udev, 0), \
+ request, \
+ USB_DIR_IN | USB_TYPE_VENDOR | USB_RECIP_DEVICE, \
+ value, \
+ pdev->vcinterface, \
+ &buf, buflen, HZ / 2)
+
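+/*
+ * Note that both macros quietly rely on variables named 'pdev' and 'buf'
+ * in the calling scope: e.g. pwc_get_brightness() below declares a one-byte
+ * 'buf' and then issues RecvControlMsg(GET_LUM_CTL, BRIGHTNESS_FORMATTER, 1)
+ * to read the current setting into it.
+ */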
+
+#if PWC_DEBUG
+void pwc_hexdump(void *p, int len)
+{
+ int i;
+ unsigned char *s;
+ char buf[100], *d;
+
+ s = (unsigned char *)p;
+ d = buf;
+ *d = '\0';
+ Debug("Doing hexdump @ %p, %d bytes.\n", p, len);
+ for (i = 0; i < len; i++) {
+ d += sprintf(d, "%02X ", *s++);
+ if ((i & 0xF) == 0xF) {
+ Debug("%s\n", buf);
+ d = buf;
+ *d = '\0';
+ }
+ }
+ if ((i & 0xF) != 0)
+ Debug("%s\n", buf);
+}
+#endif
+
+static inline int send_video_command(struct usb_device *udev, int index, void *buf, int buflen)
+{
+ return usb_control_msg(udev,
+ usb_sndctrlpipe(udev, 0),
+ SET_EP_STREAM_CTL,
+ USB_DIR_OUT | USB_TYPE_VENDOR | USB_RECIP_DEVICE,
+ VIDEO_OUTPUT_CONTROL_FORMATTER,
+ index,
+ buf, buflen, HZ);
+}
+
+
+
+static inline int set_video_mode_Nala(struct pwc_device *pdev, int size, int frames)
+{
+ unsigned char buf[3];
+ int ret, fps;
+ struct Nala_table_entry *pEntry;
+ int frames2frames[31] =
+ { /* closest match of framerate */
+ 0, 0, 0, 0, 4, /* 0-4 */
+ 5, 5, 7, 7, 10, /* 5-9 */
+ 10, 10, 12, 12, 15, /* 10-14 */
+ 15, 15, 15, 20, 20, /* 15-19 */
+ 20, 20, 20, 24, 24, /* 20-24 */
+ 24, 24, 24, 24, 24, /* 25-29 */
+ 24 /* 30 */
+ };
+ int frames2table[31] =
+ { 0, 0, 0, 0, 0, /* 0-4 */
+ 1, 1, 1, 2, 2, /* 5-9 */
+ 3, 3, 4, 4, 4, /* 10-14 */
+ 5, 5, 5, 5, 5, /* 15-19 */
+ 6, 6, 6, 6, 7, /* 20-24 */
+ 7, 7, 7, 7, 7, /* 25-29 */
+ 7 /* 30 */
+ };
+
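+	/* Example: a request for 9 fps is first rounded to the nearest
+	   supported rate, frames2frames[9] = 10, and frames2table[10] = 3
+	   then selects the matching column of Nala_table. */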
+ if (size < 0 || size > PSZ_CIF || frames < 4 || frames > 25)
+ return -EINVAL;
+ frames = frames2frames[frames];
+ fps = frames2table[frames];
+ pEntry = &Nala_table[size][fps];
+ if (pEntry->alternate == 0)
+ return -EINVAL;
+
+ if (pEntry->compressed)
+ return -ENOENT; /* Not supported. */
+
+ memcpy(buf, pEntry->mode, 3);
+ ret = send_video_command(pdev->udev, pdev->vendpoint, buf, 3);
+ if (ret < 0) {
+ Debug("Failed to send video command... %d\n", ret);
+ return ret;
+ }
+ if (pEntry->compressed && pdev->vpalette != VIDEO_PALETTE_RAW)
+ {
+ switch(pdev->type) {
+ case 645:
+ case 646:
+ pwc_dec1_init(pdev->type, pdev->release, buf, pdev->decompress_data);
+ break;
+
+ case 675:
+ case 680:
+ case 690:
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ pwc_dec23_init(pdev->type, pdev->release, buf, pdev->decompress_data);
+ break;
+ }
+ }
+
+ pdev->cmd_len = 3;
+ memcpy(pdev->cmd_buf, buf, 3);
+
+ /* Set various parameters */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->valternate = pEntry->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 3) / 2;
+ if (pEntry->compressed) {
+ if (pdev->release < 5) { /* 4 fold compression */
+ pdev->vbandlength = 528;
+ pdev->frame_size /= 4;
+ }
+ else {
+ pdev->vbandlength = 704;
+ pdev->frame_size /= 3;
+ }
+ }
+ else
+ pdev->vbandlength = 0;
+ return 0;
+}
+
+
+static inline int set_video_mode_Timon(struct pwc_device *pdev, int size, int frames, int compression, int snapshot)
+{
+ unsigned char buf[13];
+ const struct Timon_table_entry *pChoose;
+ int ret, fps;
+
+ if (size >= PSZ_MAX || frames < 5 || frames > 30 || compression < 0 || compression > 3)
+ return -EINVAL;
+ if (size == PSZ_VGA && frames > 15)
+ return -EINVAL;
+ fps = (frames / 5) - 1;
+
+ /* Find a supported framerate with progressively higher compression ratios
+ if the preferred ratio is not available.
+ */
+ pChoose = NULL;
+ while (compression <= 3) {
+ pChoose = &Timon_table[size][fps][compression];
+ if (pChoose->alternate != 0)
+ break;
+ compression++;
+ }
+ if (pChoose == NULL || pChoose->alternate == 0)
+ return -ENOENT; /* Not supported. */
+
+ memcpy(buf, pChoose->mode, 13);
+ if (snapshot)
+ buf[0] |= 0x80;
+ ret = send_video_command(pdev->udev, pdev->vendpoint, buf, 13);
+ if (ret < 0)
+ return ret;
+
+ if (pChoose->bandlength > 0 && pdev->vpalette != VIDEO_PALETTE_RAW)
+ pwc_dec23_init(pdev->type, pdev->release, buf, pdev->decompress_data);
+
+ pdev->cmd_len = 13;
+ memcpy(pdev->cmd_buf, buf, 13);
+
+ /* Set various parameters */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->vsnapshot = snapshot;
+ pdev->valternate = pChoose->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->vbandlength = pChoose->bandlength;
+ if (pChoose->bandlength > 0)
+ pdev->frame_size = (pChoose->bandlength * pdev->image.y) / 4;
+ else
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 12) / 8;
+ return 0;
+}
+
+
+static inline int set_video_mode_Kiara(struct pwc_device *pdev, int size, int frames, int compression, int snapshot)
+{
+	const struct Kiara_table_entry *pChoose = NULL;
+ int fps, ret;
+ unsigned char buf[12];
+ struct Kiara_table_entry RawEntry = {6, 773, 1272, {0xAD, 0xF4, 0x10, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x03, 0x80}};
+
+ if (size >= PSZ_MAX || frames < 5 || frames > 30 || compression < 0 || compression > 3)
+ return -EINVAL;
+ if (size == PSZ_VGA && frames > 15)
+ return -EINVAL;
+ fps = (frames / 5) - 1;
+
+ /* special case: VGA @ 5 fps and snapshot is raw bayer mode */
+ if (size == PSZ_VGA && frames == 5 && snapshot)
+ {
+ /* Only available in case the raw palette is selected or
+ we have the decompressor available. This mode is
+ only available in compressed form
+ */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ {
+ Info("Choosing VGA/5 BAYER mode (%d).\n", pdev->vpalette);
+ pChoose = &RawEntry;
+ }
+ else
+ {
+ Info("VGA/5 BAYER mode _must_ have a decompressor available, or use RAW palette.\n");
+ }
+ }
+ else
+ {
+ /* Find a supported framerate with progressively higher compression ratios
+ if the preferred ratio is not available.
+ Skip this step when using RAW modes.
+ */
+ while (compression <= 3) {
+ pChoose = &Kiara_table[size][fps][compression];
+ if (pChoose->alternate != 0)
+ break;
+ compression++;
+ }
+ }
+ if (pChoose == NULL || pChoose->alternate == 0)
+ return -ENOENT; /* Not supported. */
+
+ Debug("Using alternate setting %d.\n", pChoose->alternate);
+
+	/* usb_control_msg won't take statically allocated arrays as argument?? */
+ memcpy(buf, pChoose->mode, 12);
+ if (snapshot)
+ buf[0] |= 0x80;
+
+ /* Firmware bug: video endpoint is 5, but commands are sent to endpoint 4 */
+ ret = send_video_command(pdev->udev, 4 /* pdev->vendpoint */, buf, 12);
+ if (ret < 0)
+ return ret;
+
+ if (pChoose->bandlength > 0 && pdev->vpalette != VIDEO_PALETTE_RAW)
+ pwc_dec23_init(pdev->type, pdev->release, buf, pdev->decompress_data);
+
+ pdev->cmd_len = 12;
+ memcpy(pdev->cmd_buf, buf, 12);
+ /* All set and go */
+ pdev->vframes = frames;
+ pdev->vsize = size;
+ pdev->vsnapshot = snapshot;
+ pdev->valternate = pChoose->alternate;
+ pdev->image = pwc_image_sizes[size];
+ pdev->vbandlength = pChoose->bandlength;
+ if (pdev->vbandlength > 0)
+ pdev->frame_size = (pdev->vbandlength * pdev->image.y) / 4;
+ else
+ pdev->frame_size = (pdev->image.x * pdev->image.y * 12) / 8;
+ return 0;
+}
+
+
+
+/**
+ @pdev: device structure
+ @width: viewport width
+ @height: viewport height
+ @frame: framerate, in fps
+ @compression: preferred compression ratio
+ @snapshot: snapshot mode or streaming
+ */
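+/* For illustration (a hypothetical call, not from the original source):
+   pwc_set_video_mode(pdev, 352, 288, 10, 2, 0) requests a CIF (352x288)
+   viewport at 10 fps with medium compression, in streaming mode. */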
+int pwc_set_video_mode(struct pwc_device *pdev, int width, int height, int frames, int compression, int snapshot)
+{
+ int ret, size;
+
+ Trace(TRACE_FLOW, "set_video_mode(%dx%d @ %d, palette %d).\n", width, height, frames, pdev->vpalette);
+ size = pwc_decode_size(pdev, width, height);
+ if (size < 0) {
+ Debug("Could not find suitable size.\n");
+ return -ERANGE;
+ }
+ Debug("decode_size = %d.\n", size);
+
+ ret = -EINVAL;
+ switch(pdev->type) {
+ case 645:
+ case 646:
+ ret = set_video_mode_Nala(pdev, size, frames);
+ break;
+
+ case 675:
+ case 680:
+ case 690:
+ ret = set_video_mode_Timon(pdev, size, frames, compression, snapshot);
+ break;
+
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ ret = set_video_mode_Kiara(pdev, size, frames, compression, snapshot);
+ break;
+ }
+ if (ret < 0) {
+ if (ret == -ENOENT)
+ Info("Video mode %s@%d fps is only supported with the decompressor module (pwcx).\n", size2name[size], frames);
+ else {
+ Err("Failed to set video mode %s@%d fps; return code = %d\n", size2name[size], frames, ret);
+ }
+ return ret;
+ }
+ pdev->view.x = width;
+ pdev->view.y = height;
+ pdev->frame_total_size = pdev->frame_size + pdev->frame_header_size + pdev->frame_trailer_size;
+ pwc_set_image_buffer_size(pdev);
+ Trace(TRACE_SIZE, "Set viewport to %dx%d, image size is %dx%d.\n", width, height, pwc_image_sizes[size].x, pwc_image_sizes[size].y);
+ return 0;
+}
+
+
+void pwc_set_image_buffer_size(struct pwc_device *pdev)
+{
+ int i, factor = 0, filler = 0;
+
+ /* for PALETTE_YUV420P */
+ switch(pdev->vpalette)
+ {
+ case VIDEO_PALETTE_YUV420P:
+ factor = 6;
+ filler = 128;
+ break;
+ case VIDEO_PALETTE_RAW:
+ factor = 6; /* can be uncompressed YUV420P */
+ filler = 0;
+ break;
+ }
+
+ /* Set sizes in bytes */
+ pdev->image.size = pdev->image.x * pdev->image.y * factor / 4;
+ pdev->view.size = pdev->view.x * pdev->view.y * factor / 4;
+
+ /* Align offset, or you'll get some very weird results in
+ YUV420 mode... x must be multiple of 4 (to get the Y's in
+ place), and y even (or you'll mixup U & V). This is less of a
+ problem for YUV420P.
+ */
+ pdev->offset.x = ((pdev->view.x - pdev->image.x) / 2) & 0xFFFC;
+ pdev->offset.y = ((pdev->view.y - pdev->image.y) / 2) & 0xFFFE;
+
+ /* Fill buffers with gray or black */
+ for (i = 0; i < MAX_IMAGES; i++) {
+ if (pdev->image_ptr[i] != NULL)
+ memset(pdev->image_ptr[i], filler, pdev->view.size);
+ }
+}
+
+
+
+/* BRIGHTNESS */
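+/* The camera keeps brightness as a 7-bit value (0..0x7f) while the V4L
+   interface uses a 16-bit range (0..0xffff), hence the shifts by 9 below;
+   e.g. a V4L value of 0x8000 maps to a camera setting of 0x40. */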
+
+int pwc_get_brightness(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, BRIGHTNESS_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 9;
+}
+
+int pwc_set_brightness(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 9) & 0x7f;
+ return SendControlMsg(SET_LUM_CTL, BRIGHTNESS_FORMATTER, 1);
+}
+
+/* CONTRAST */
+
+int pwc_get_contrast(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, CONTRAST_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 10;
+}
+
+int pwc_set_contrast(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 10) & 0x3f;
+ return SendControlMsg(SET_LUM_CTL, CONTRAST_FORMATTER, 1);
+}
+
+/* GAMMA */
+
+int pwc_get_gamma(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, GAMMA_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return buf << 11;
+}
+
+int pwc_set_gamma(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 11) & 0x1f;
+ return SendControlMsg(SET_LUM_CTL, GAMMA_FORMATTER, 1);
+}
+
+
+/* SATURATION */
+
+int pwc_get_saturation(struct pwc_device *pdev)
+{
+ char buf;
+ int ret;
+
+ if (pdev->type < 675)
+ return -1;
+ ret = RecvControlMsg(GET_CHROM_CTL, pdev->type < 730 ? SATURATION_MODE_FORMATTER2 : SATURATION_MODE_FORMATTER1, 1);
+ if (ret < 0)
+ return ret;
+ return 32768 + buf * 327;
+}
+
+int pwc_set_saturation(struct pwc_device *pdev, int value)
+{
+ char buf;
+
+ if (pdev->type < 675)
+ return -EINVAL;
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* saturation ranges from -100 to +100 */
+ buf = (value - 32768) / 327;
+ return SendControlMsg(SET_CHROM_CTL, pdev->type < 730 ? SATURATION_MODE_FORMATTER2 : SATURATION_MODE_FORMATTER1, 1);
+}
+
+/* AGC */
+
+static inline int pwc_set_agc(struct pwc_device *pdev, int mode, int value)
+{
+ char buf;
+ int ret;
+
+ if (mode)
+ buf = 0x0; /* auto */
+ else
+ buf = 0xff; /* fixed */
+
+ ret = SendControlMsg(SET_LUM_CTL, AGC_MODE_FORMATTER, 1);
+
+ if (!mode && ret >= 0) {
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ buf = (value >> 10) & 0x3F;
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_AGC_FORMATTER, 1);
+ }
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_agc(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, AGC_MODE_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (buf != 0) { /* fixed */
+ ret = RecvControlMsg(GET_LUM_CTL, PRESET_AGC_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ if (buf > 0x3F)
+ buf = 0x3F;
+ *value = (buf << 10);
+ }
+ else { /* auto */
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_AGC_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ /* Gah... this value ranges from 0x00 ... 0x9F */
+ if (buf > 0x9F)
+ buf = 0x9F;
+ *value = -(48 + buf * 409);
+ }
+
+ return 0;
+}
+
+static inline int pwc_set_shutter_speed(struct pwc_device *pdev, int mode, int value)
+{
+ char buf[2];
+ int speed, ret;
+
+
+ if (mode)
+ buf[0] = 0x0; /* auto */
+ else
+ buf[0] = 0xff; /* fixed */
+
+ ret = SendControlMsg(SET_LUM_CTL, SHUTTER_MODE_FORMATTER, 1);
+
+ if (!mode && ret >= 0) {
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ switch(pdev->type) {
+ case 675:
+ case 680:
+ case 690:
+ /* speed ranges from 0x0 to 0x290 (656) */
+ speed = (value / 100);
+ buf[1] = speed >> 8;
+ buf[0] = speed & 0xff;
+ break;
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ /* speed seems to range from 0x0 to 0xff */
+ buf[1] = 0;
+ buf[0] = value >> 8;
+ break;
+ }
+
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_SHUTTER_FORMATTER, 2);
+ }
+ return ret;
+}
+
+
+/* POWER */
+
+int pwc_camera_power(struct pwc_device *pdev, int power)
+{
+ char buf;
+
+ if (pdev->type < 675 || (pdev->type < 730 && pdev->release < 6))
+ return 0; /* Not supported by Nala or Timon < release 6 */
+
+ if (power)
+ buf = 0x00; /* active */
+ else
+ buf = 0xFF; /* power save */
+ return SendControlMsg(SET_STATUS_CTL, SET_POWER_SAVE_MODE_FORMATTER, 1);
+}
+
+
+
+/* private calls */
+
+static inline int pwc_restore_user(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, RESTORE_USER_DEFAULTS_FORMATTER, 0);
+}
+
+static inline int pwc_save_user(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, SAVE_USER_DEFAULTS_FORMATTER, 0);
+}
+
+static inline int pwc_restore_factory(struct pwc_device *pdev)
+{
+ char buf; /* dummy */
+ return SendControlMsg(SET_STATUS_CTL, RESTORE_FACTORY_DEFAULTS_FORMATTER, 0);
+}
+
+ /* ************************************************* */
+ /* Patch by Alvarado: (not in the original version */
+
+ /*
+ * the camera recognizes modes from 0 to 4:
+ *
+	 * 00: indoor (incandescent lighting)
+ * 01: outdoor (sunlight)
+ * 02: fluorescent lighting
+ * 03: manual
+ * 04: auto
+ */
+static inline int pwc_set_awb(struct pwc_device *pdev, int mode)
+{
+ char buf;
+ int ret;
+
+ if (mode < 0)
+ mode = 0;
+
+ if (mode > 4)
+ mode = 4;
+
+ buf = mode & 0x07; /* just the lowest three bits */
+
+ ret = SendControlMsg(SET_CHROM_CTL, WB_MODE_FORMATTER, 1);
+
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_awb(struct pwc_device *pdev)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, WB_MODE_FORMATTER, 1);
+
+ if (ret < 0)
+ return ret;
+ return buf;
+}
+
+static inline int pwc_set_red_gain(struct pwc_device *pdev, int value)
+{
+ unsigned char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* only the msb is considered */
+ buf = value >> 8;
+ return SendControlMsg(SET_CHROM_CTL, PRESET_MANUAL_RED_GAIN_FORMATTER, 1);
+}
+
+static inline int pwc_get_red_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, PRESET_MANUAL_RED_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+static inline int pwc_set_blue_gain(struct pwc_device *pdev, int value)
+{
+ unsigned char buf;
+
+ if (value < 0)
+ value = 0;
+ if (value > 0xffff)
+ value = 0xffff;
+ /* only the msb is considered */
+ buf = value >> 8;
+ return SendControlMsg(SET_CHROM_CTL, PRESET_MANUAL_BLUE_GAIN_FORMATTER, 1);
+}
+
+static inline int pwc_get_blue_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, PRESET_MANUAL_BLUE_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+/* The following two functions are different, since they only read the
+ internal red/blue gains, which may be different from the manual
+ gains set or read above.
+ */
+static inline int pwc_read_red_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_RED_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+static inline int pwc_read_blue_gain(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, READ_BLUE_GAIN_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 8;
+ return 0;
+}
+
+
+static inline int pwc_set_wb_speed(struct pwc_device *pdev, int speed)
+{
+ unsigned char buf;
+
+ /* useful range is 0x01..0x20 */
+ buf = speed / 0x7f0;
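+ /* e.g. a full-scale speed of 0xffff gives 0xffff / 0x7f0 = 32 = 0x20,
+ the top of the useful range noted above; pwc_get_wb_speed() below
+ multiplies by 0x7f0 to map back. */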
+ return SendControlMsg(SET_CHROM_CTL, AWB_CONTROL_SPEED_FORMATTER, 1);
+}
+
+static inline int pwc_get_wb_speed(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, AWB_CONTROL_SPEED_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf * 0x7f0;
+ return 0;
+}
+
+
+static inline int pwc_set_wb_delay(struct pwc_device *pdev, int delay)
+{
+ unsigned char buf;
+
+ /* useful range is 0x01..0x3F */
+ buf = (delay >> 10);
+ return SendControlMsg(SET_CHROM_CTL, AWB_CONTROL_DELAY_FORMATTER, 1);
+}
+
+static inline int pwc_get_wb_delay(struct pwc_device *pdev, int *value)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_CHROM_CTL, AWB_CONTROL_DELAY_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *value = buf << 10;
+ return 0;
+}
+
+
+int pwc_set_leds(struct pwc_device *pdev, int on_value, int off_value)
+{
+ unsigned char buf[2];
+
+ if (pdev->type < 730)
+ return 0;
+ on_value /= 100;
+ off_value /= 100;
+ if (on_value < 0)
+ on_value = 0;
+ if (on_value > 0xff)
+ on_value = 0xff;
+ if (off_value < 0)
+ off_value = 0;
+ if (off_value > 0xff)
+ off_value = 0xff;
+
+ buf[0] = on_value;
+ buf[1] = off_value;
+
+ return SendControlMsg(SET_STATUS_CTL, LED_FORMATTER, 2);
+}
+
+int pwc_get_leds(struct pwc_device *pdev, int *on_value, int *off_value)
+{
+ unsigned char buf[2];
+ int ret;
+
+ if (pdev->type < 730) {
+ *on_value = -1;
+ *off_value = -1;
+ return 0;
+ }
+
+ ret = RecvControlMsg(GET_STATUS_CTL, LED_FORMATTER, 2);
+ if (ret < 0)
+ return ret;
+ *on_value = buf[0] * 100;
+ *off_value = buf[1] * 100;
+ return 0;
+}
+
+static inline int pwc_set_contour(struct pwc_device *pdev, int contour)
+{
+ unsigned char buf;
+ int ret;
+
+ if (contour < 0)
+ buf = 0xff; /* auto contour on */
+ else
+ buf = 0x0; /* auto contour off */
+ ret = SendControlMsg(SET_LUM_CTL, AUTO_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (contour < 0)
+ return 0;
+ if (contour > 0xffff)
+ contour = 0xffff;
+
+ buf = (contour >> 10); /* contour preset is [0..3f] */
+ ret = SendControlMsg(SET_LUM_CTL, PRESET_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ return 0;
+}
+
+static inline int pwc_get_contour(struct pwc_device *pdev, int *contour)
+{
+ unsigned char buf;
+ int ret;
+
+ ret = RecvControlMsg(GET_LUM_CTL, AUTO_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+
+ if (buf == 0) {
+ /* auto mode off, query current preset value */
+ ret = RecvControlMsg(GET_LUM_CTL, PRESET_CONTOUR_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *contour = buf << 10;
+ }
+ else
+ *contour = -1;
+ return 0;
+}
+
+
+static inline int pwc_set_backlight(struct pwc_device *pdev, int backlight)
+{
+ unsigned char buf;
+
+ if (backlight)
+ buf = 0xff;
+ else
+ buf = 0x0;
+ return SendControlMsg(SET_LUM_CTL, BACK_LIGHT_COMPENSATION_FORMATTER, 1);
+}
+
+static inline int pwc_get_backlight(struct pwc_device *pdev, int *backlight)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, BACK_LIGHT_COMPENSATION_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *backlight = buf;
+ return 0;
+}
+
+
+static inline int pwc_set_flicker(struct pwc_device *pdev, int flicker)
+{
+ unsigned char buf;
+
+ if (flicker)
+ buf = 0xff;
+ else
+ buf = 0x0;
+ return SendControlMsg(SET_LUM_CTL, FLICKERLESS_MODE_FORMATTER, 1);
+}
+
+static inline int pwc_get_flicker(struct pwc_device *pdev, int *flicker)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, FLICKERLESS_MODE_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *flicker = buf;
+ return 0;
+}
+
+
+static inline int pwc_set_dynamic_noise(struct pwc_device *pdev, int noise)
+{
+ unsigned char buf;
+
+ if (noise < 0)
+ noise = 0;
+ if (noise > 3)
+ noise = 3;
+ buf = noise;
+ return SendControlMsg(SET_LUM_CTL, DYNAMIC_NOISE_CONTROL_FORMATTER, 1);
+}
+
+static inline int pwc_get_dynamic_noise(struct pwc_device *pdev, int *noise)
+{
+ int ret;
+ unsigned char buf;
+
+ ret = RecvControlMsg(GET_LUM_CTL, DYNAMIC_NOISE_CONTROL_FORMATTER, 1);
+ if (ret < 0)
+ return ret;
+ *noise = buf;
+ return 0;
+}
+
+int pwc_mpt_reset(struct pwc_device *pdev, int flags)
+{
+ unsigned char buf;
+
+ buf = flags & 0x03; // only lower two bits are currently used
+ return SendControlMsg(SET_MPT_CTL, PT_RESET_CONTROL_FORMATTER, 1);
+}
+
+static inline int pwc_mpt_set_angle(struct pwc_device *pdev, int pan, int tilt)
+{
+ unsigned char buf[4];
+
+ /* set new relative angle; angles are expressed in degrees * 100,
+ but the cam has .5 degree resolution, hence divide by 200. Also
+ the angle must be multiplied by 64 before it's sent to
+ the cam (??)
+ */
+ pan = 64 * pan / 100;
+ tilt = -64 * tilt / 100; /* positive tilt is down, which is not what the user would expect */
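+ /* e.g. a requested pan of 1000 (10.00 degrees) becomes
+ 64 * 1000 / 100 = 640 camera units; a tilt of 1000 likewise
+ becomes -640 because of the inverted tilt axis. */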
+ buf[0] = pan & 0xFF;
+ buf[1] = (pan >> 8) & 0xFF;
+ buf[2] = tilt & 0xFF;
+ buf[3] = (tilt >> 8) & 0xFF;
+ return SendControlMsg(SET_MPT_CTL, PT_RELATIVE_CONTROL_FORMATTER, 4);
+}
+
+static inline int pwc_mpt_get_status(struct pwc_device *pdev, struct pwc_mpt_status *status)
+{
+ int ret;
+ unsigned char buf[5];
+
+ ret = RecvControlMsg(GET_MPT_CTL, PT_STATUS_FORMATTER, 5);
+ if (ret < 0)
+ return ret;
+ status->status = buf[0] & 0x7; // 3 bits are used for reporting
+ status->time_pan = (buf[1] << 8) + buf[2];
+ status->time_tilt = (buf[3] << 8) + buf[4];
+ return 0;
+}
+
+
+int pwc_get_cmos_sensor(struct pwc_device *pdev, int *sensor)
+{
+ unsigned char buf;
+ int ret = -1, request;
+
+ if (pdev->type < 675)
+ request = SENSOR_TYPE_FORMATTER1;
+ else if (pdev->type < 730)
+ return -1; /* The Vesta series doesn't have this call */
+ else
+ request = SENSOR_TYPE_FORMATTER2;
+
+ ret = RecvControlMsg(GET_STATUS_CTL, request, 1);
+ if (ret < 0)
+ return ret;
+ if (pdev->type < 675)
+ *sensor = buf | 0x100;
+ else
+ *sensor = buf;
+ return 0;
+}
+
+
+ /* End of Add-Ons */
+ /* ************************************************* */
+
+/* Linux 2.5.something and 2.6 pass direct pointers to arguments of
+ ioctl() calls. With 2.4, you have to do tedious copy_from_user()
+ and copy_to_user() calls. With these macros we circumvent this,
+ and let me maintain only one source file. The functionality is
+ exactly the same otherwise.
+ */
+
+#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 0)
+
+/* define local variable for arg */
+#define ARG_DEF(ARG_type, ARG_name)\
+ ARG_type *ARG_name = arg;
+/* copy arg to local variable */
+#define ARG_IN(ARG_name) /* nothing */
+/* argument itself (referenced) */
+#define ARGR(ARG_name) (*ARG_name)
+/* argument address */
+#define ARGA(ARG_name) ARG_name
+/* copy local variable to arg */
+#define ARG_OUT(ARG_name) /* nothing */
+
+#else
+
+#define ARG_DEF(ARG_type, ARG_name)\
+ ARG_type ARG_name;
+#define ARG_IN(ARG_name)\
+ if (copy_from_user(&ARG_name, arg, sizeof(ARG_name))) {\
+ ret = -EFAULT;\
+ break;\
+ }
+#define ARGR(ARG_name) ARG_name
+#define ARGA(ARG_name) &ARG_name
+#define ARG_OUT(ARG_name)\
+ if (copy_to_user(arg, &ARG_name, sizeof(ARG_name))) {\
+ ret = -EFAULT;\
+ break;\
+ }
+
+#endif
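+
+/* For illustration: under a 2.4 kernel the VIDIOCPWCSCQUAL case below
+ effectively expands to the following (hand-expanded from the macros
+ above; 'qual' is the macro-generated local):
+
+ int qual;
+ if (copy_from_user(&qual, arg, sizeof(qual))) {
+ ret = -EFAULT;
+ break;
+ }
+ if (qual < 0 || qual > 3)
+ ret = -EINVAL;
+ ...
+
+ Under 2.6, ARG_DEF makes 'qual' a pointer straight into the argument
+ that the V4L layer already copied in, so the copy steps vanish. */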
+
+int pwc_ioctl(struct pwc_device *pdev, unsigned int cmd, void *arg)
+{
+ int ret = 0;
+
+ switch(cmd) {
+ case VIDIOCPWCRUSER:
+ {
+ if (pwc_restore_user(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCSUSER:
+ {
+ if (pwc_save_user(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCFACTORY:
+ {
+ if (pwc_restore_factory(pdev))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCSCQUAL:
+ {
+ ARG_DEF(int, qual)
+
+ ARG_IN(qual)
+ if (ARGR(qual) < 0 || ARGR(qual) > 3)
+ ret = -EINVAL;
+ else
+ ret = pwc_try_video_mode(pdev, pdev->view.x, pdev->view.y, pdev->vframes, ARGR(qual), pdev->vsnapshot);
+ if (ret >= 0)
+ pdev->vcompression = ARGR(qual);
+ break;
+ }
+
+ case VIDIOCPWCGCQUAL:
+ {
+ ARG_DEF(int, qual)
+
+ ARGR(qual) = pdev->vcompression;
+ ARG_OUT(qual)
+ break;
+ }
+
+ case VIDIOCPWCPROBE:
+ {
+ ARG_DEF(struct pwc_probe, probe)
+
+ strcpy(ARGR(probe).name, pdev->vdev->name);
+ ARGR(probe).type = pdev->type;
+ ARG_OUT(probe)
+ break;
+ }
+
+ case VIDIOCPWCGSERIAL:
+ {
+ ARG_DEF(struct pwc_serial, serial)
+
+ strcpy(ARGR(serial).serial, pdev->serial);
+ ARG_OUT(serial)
+ break;
+ }
+
+ case VIDIOCPWCSAGC:
+ {
+ ARG_DEF(int, agc)
+
+ ARG_IN(agc)
+ if (pwc_set_agc(pdev, ARGR(agc) < 0 ? 1 : 0, ARGR(agc)))
+ ret = -EINVAL;
+ break;
+ }
+
+ case VIDIOCPWCGAGC:
+ {
+ ARG_DEF(int, agc)
+
+ if (pwc_get_agc(pdev, ARGA(agc)))
+ ret = -EINVAL;
+ ARG_OUT(agc)
+ break;
+ }
+
+ case VIDIOCPWCSSHUTTER:
+ {
+ ARG_DEF(int, shutter_speed)
+
+ ARG_IN(shutter_speed)
+ ret = pwc_set_shutter_speed(pdev, ARGR(shutter_speed) < 0 ? 1 : 0, ARGR(shutter_speed));
+ break;
+ }
+
+ case VIDIOCPWCSAWB:
+ {
+ ARG_DEF(struct pwc_whitebalance, wb)
+
+ ARG_IN(wb)
+ ret = pwc_set_awb(pdev, ARGR(wb).mode);
+ if (ret >= 0 && ARGR(wb).mode == PWC_WB_MANUAL) {
+ pwc_set_red_gain(pdev, ARGR(wb).manual_red);
+ pwc_set_blue_gain(pdev, ARGR(wb).manual_blue);
+ }
+ break;
+ }
+
+ case VIDIOCPWCGAWB:
+ {
+ ARG_DEF(struct pwc_whitebalance, wb)
+
+ memset(ARGA(wb), 0, sizeof(struct pwc_whitebalance));
+ ARGR(wb).mode = pwc_get_awb(pdev);
+ if (ARGR(wb).mode < 0)
+ ret = -EINVAL;
+ else {
+ if (ARGR(wb).mode == PWC_WB_MANUAL) {
+ ret = pwc_get_red_gain(pdev, &ARGR(wb).manual_red);
+ if (ret < 0)
+ break;
+ ret = pwc_get_blue_gain(pdev, &ARGR(wb).manual_blue);
+ if (ret < 0)
+ break;
+ }
+ if (ARGR(wb).mode == PWC_WB_AUTO) {
+ ret = pwc_read_red_gain(pdev, &ARGR(wb).read_red);
+ if (ret < 0)
+ break;
+ ret = pwc_read_blue_gain(pdev, &ARGR(wb).read_blue);
+ if (ret < 0)
+ break;
+ }
+ }
+ ARG_OUT(wb)
+ break;
+ }
+
+ case VIDIOCPWCSAWBSPEED:
+ {
+ ARG_DEF(struct pwc_wb_speed, wbs)
+
+ if (ARGR(wbs).control_speed > 0) {
+ ret = pwc_set_wb_speed(pdev, ARGR(wbs).control_speed);
+ }
+ if (ARGR(wbs).control_delay > 0) {
+ ret = pwc_set_wb_delay(pdev, ARGR(wbs).control_delay);
+ }
+ break;
+ }
+
+ case VIDIOCPWCGAWBSPEED:
+ {
+ ARG_DEF(struct pwc_wb_speed, wbs)
+
+ ret = pwc_get_wb_speed(pdev, &ARGR(wbs).control_speed);
+ if (ret < 0)
+ break;
+ ret = pwc_get_wb_delay(pdev, &ARGR(wbs).control_delay);
+ if (ret < 0)
+ break;
+ ARG_OUT(wbs)
+ break;
+ }
+
+ case VIDIOCPWCSLED:
+ {
+ ARG_DEF(struct pwc_leds, leds)
+
+ ARG_IN(leds)
+ ret = pwc_set_leds(pdev, ARGR(leds).led_on, ARGR(leds).led_off);
+ break;
+ }
+
+
+ case VIDIOCPWCGLED:
+ {
+ ARG_DEF(struct pwc_leds, leds)
+
+ ret = pwc_get_leds(pdev, &ARGR(leds).led_on, &ARGR(leds).led_off);
+ ARG_OUT(leds)
+ break;
+ }
+
+ case VIDIOCPWCSCONTOUR:
+ {
+ ARG_DEF(int, contour)
+
+ ARG_IN(contour)
+ ret = pwc_set_contour(pdev, ARGR(contour));
+ break;
+ }
+
+ case VIDIOCPWCGCONTOUR:
+ {
+ ARG_DEF(int, contour)
+
+ ret = pwc_get_contour(pdev, ARGA(contour));
+ ARG_OUT(contour)
+ break;
+ }
+
+ case VIDIOCPWCSBACKLIGHT:
+ {
+ ARG_DEF(int, backlight)
+
+ ARG_IN(backlight)
+ ret = pwc_set_backlight(pdev, ARGR(backlight));
+ break;
+ }
+
+ case VIDIOCPWCGBACKLIGHT:
+ {
+ ARG_DEF(int, backlight)
+
+ ret = pwc_get_backlight(pdev, ARGA(backlight));
+ ARG_OUT(backlight)
+ break;
+ }
+
+ case VIDIOCPWCSFLICKER:
+ {
+ ARG_DEF(int, flicker)
+
+ ARG_IN(flicker)
+ ret = pwc_set_flicker(pdev, ARGR(flicker));
+ break;
+ }
+
+ case VIDIOCPWCGFLICKER:
+ {
+ ARG_DEF(int, flicker)
+
+ ret = pwc_get_flicker(pdev, ARGA(flicker));
+ ARG_OUT(flicker)
+ break;
+ }
+
+ case VIDIOCPWCSDYNNOISE:
+ {
+ ARG_DEF(int, dynnoise)
+
+ ARG_IN(dynnoise)
+ ret = pwc_set_dynamic_noise(pdev, ARGR(dynnoise));
+ break;
+ }
+
+ case VIDIOCPWCGDYNNOISE:
+ {
+ ARG_DEF(int, dynnoise)
+
+ ret = pwc_get_dynamic_noise(pdev, ARGA(dynnoise));
+ ARG_OUT(dynnoise)
+ break;
+ }
+
+ case VIDIOCPWCGREALSIZE:
+ {
+ ARG_DEF(struct pwc_imagesize, size)
+
+ ARGR(size).width = pdev->image.x;
+ ARGR(size).height = pdev->image.y;
+ ARG_OUT(size)
+ break;
+ }
+
+ case VIDIOCPWCMPTRESET:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(int, flags)
+
+ ARG_IN(flags)
+ ret = pwc_mpt_reset(pdev, ARGR(flags));
+ if (ret >= 0)
+ {
+ pdev->pan_angle = 0;
+ pdev->tilt_angle = 0;
+ }
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTGRANGE:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_range, range)
+
+ ARGR(range) = pdev->angle_range;
+ ARG_OUT(range)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTSANGLE:
+ {
+ int new_pan, new_tilt;
+
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_angles, angles)
+
+ ARG_IN(angles)
+ /* The camera can only set relative angles, so we have to
+ do some calculations when an absolute angle is requested.
+ */
+ if (ARGR(angles).absolute)
+ {
+ new_pan = ARGR(angles).pan;
+ new_tilt = ARGR(angles).tilt;
+ }
+ else
+ {
+ new_pan = pdev->pan_angle + ARGR(angles).pan;
+ new_tilt = pdev->tilt_angle + ARGR(angles).tilt;
+ }
+ /* check absolute ranges */
+ if (new_pan < pdev->angle_range.pan_min ||
+ new_pan > pdev->angle_range.pan_max ||
+ new_tilt < pdev->angle_range.tilt_min ||
+ new_tilt > pdev->angle_range.tilt_max)
+ {
+ ret = -ERANGE;
+ }
+ else
+ {
+ /* go to relative range, check again */
+ new_pan -= pdev->pan_angle;
+ new_tilt -= pdev->tilt_angle;
+ /* angles are specified in degrees * 100, thus the limit = 36000 */
+ if (new_pan < -36000 || new_pan > 36000 || new_tilt < -36000 || new_tilt > 36000)
+ ret = -ERANGE;
+ }
+ if (ret == 0) /* no errors so far */
+ {
+ ret = pwc_mpt_set_angle(pdev, new_pan, new_tilt);
+ if (ret >= 0)
+ {
+ pdev->pan_angle += new_pan;
+ pdev->tilt_angle += new_tilt;
+ }
+ if (ret == -EPIPE) /* stall -> out of range */
+ ret = -ERANGE;
+ }
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTGANGLE:
+ {
+
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_angles, angles)
+
+ ARGR(angles).absolute = 1;
+ ARGR(angles).pan = pdev->pan_angle;
+ ARGR(angles).tilt = pdev->tilt_angle;
+ ARG_OUT(angles)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCMPTSTATUS:
+ {
+ if (pdev->features & FEATURE_MOTOR_PANTILT)
+ {
+ ARG_DEF(struct pwc_mpt_status, status)
+
+ ret = pwc_mpt_get_status(pdev, ARGA(status));
+ ARG_OUT(status)
+ }
+ else
+ {
+ ret = -ENXIO;
+ }
+ break;
+ }
+
+ case VIDIOCPWCGVIDCMD:
+ {
+ ARG_DEF(struct pwc_video_command, cmd)
+
+ ARGR(cmd).type = pdev->type;
+ ARGR(cmd).release = pdev->release;
+ ARGR(cmd).command_len = pdev->cmd_len;
+ memcpy(&ARGR(cmd).command_buf, pdev->cmd_buf, pdev->cmd_len);
+ ARGR(cmd).bandlength = pdev->vbandlength;
+ ARGR(cmd).frame_size = pdev->frame_size;
+ ARG_OUT(cmd)
+ break;
+ }
+ /*
+ case VIDIOCPWCGVIDTABLE:
+ {
+ ARG_DEF(struct pwc_table_init_buffer, table);
+ ARGR(table).len = pdev->cmd_len;
+ memcpy(&ARGR(table).buffer, pdev->decompress_data, pdev->decompressor->table_size);
+ ARG_OUT(table)
+ break;
+ }
+ */
+
+ default:
+ ret = -ENOIOCTLCMD;
+ break;
+ }
+
+ if (ret > 0)
+ return 0;
+ return ret;
+}
+
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ Decompression for chipset version 1
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+
+
+#include "pwc-dec1.h"
+
+
+void pwc_dec1_init(int type, int release, void *buffer, void *table)
+{
+ /* not implemented yet: no decompression tables for chipset version 1 */
+}
+
+void pwc_dec1_exit(void)
+{
+ /* nothing to release */
+}
+
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+
+
+#ifndef PWC_DEC1_H
+#define PWC_DEC1_H
+
+void pwc_dec1_init(int type, int release, void *buffer, void *private_data);
+void pwc_dec1_exit(void);
+
+#endif
+
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ Decompression for chipset versions 2 and 3
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#include "pwc-timon.h"
+#include "pwc-kiara.h"
+#include "pwc-dec23.h"
+#include "pwc-ioctl.h"
+
+#include <linux/string.h>
+
+/****
+ *
+ *
+ *
+ */
+
+
+static void fill_table_a000(unsigned int *p)
+{
+ static unsigned int initial_values[12] = {
+ 0xFFAD9B00, 0xFFDDEE00, 0x00221200, 0x00526500,
+ 0xFFC21E00, 0x003DE200, 0xFF924B80, 0xFFD2A300,
+ 0x002D5D00, 0x006DB480, 0xFFED3E00, 0x0012C200
+ };
+ static unsigned int values_derivated[12] = {
+ 0x0000A4CA, 0x00004424, 0xFFFFBBDC, 0xFFFF5B36,
+ 0x00007BC4, 0xFFFF843C, 0x0000DB69, 0x00005ABA,
+ 0xFFFFA546, 0xFFFF2497, 0x00002584, 0xFFFFDA7C
+ };
+ unsigned int temp_values[12];
+ int i,j;
+
+ memcpy(temp_values,initial_values,sizeof(initial_values));
+ for (i=0;i<256;i++)
+ {
+ for (j=0;j<12;j++)
+ {
+ *p++ = temp_values[j];
+ temp_values[j] += values_derivated[j];
+ }
+ }
+}
+
+static void fill_table_d000(unsigned char *p)
+{
+ int bit,byte;
+
+ for (bit=0; bit<8; bit++)
+ {
+ unsigned char bitpower = 1<<bit;
+ unsigned char mask = bitpower-1;
+ for (byte=0; byte<256; byte++)
+ {
+ if (byte & bitpower)
+ *p++ = -(byte & mask);
+ else
+ *p++ = (byte & mask);
+ }
+ }
+}
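+
+/* A small worked example of the table built above: for bit = 3 we have
+ bitpower = 8 and mask = 7, so byte 0x0A (bit 3 set) stores
+ -(0x0A & 7) = -2, while byte 0x05 (bit 3 clear) stores +5; i.e. the
+ table turns (bit position, byte) pairs into signed offsets. */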
+
+/*
+ *
+ * Kiara: 0 <= ver <= 7
+ * Timon: 0 <= ver <= 15
+ *
+ */
+void fill_table_color(unsigned int version, const unsigned int *romtable,
+ unsigned char *p0004,
+ unsigned char *p8004)
+{
+ const unsigned int *table;
+ unsigned char *p0, *p8;
+ int i,j,k;
+ int dl,bit,pw;
+
+ romtable += version*256;
+
+ for (i=0; i<2; i++)
+ {
+ table = romtable + i*128;
+
+ for (dl=0; dl<16; dl++)
+ {
+ p0 = p0004 + (i<<14) + (dl<<10);
+ p8 = p8004 + (i<<12) + (dl<<8);
+
+ for (j=0; j<8; j++ , table++, p0+=128)
+ {
+ for (k=0; k<16; k++)
+ {
+ if (k==0)
+ bit=1;
+ else if (k>=1 && k<3)
+ bit=(table[0]>>15)&7;
+ else if (k>=3 && k<6)
+ bit=(table[0]>>12)&7;
+ else if (k>=6 && k<10)
+ bit=(table[0]>>9)&7;
+ else if (k>=10 && k<13)
+ bit=(table[0]>>6)&7;
+ else if (k>=13 && k<15)
+ bit=(table[0]>>3)&7;
+ else
+ bit=(table[0])&7;
+ if (k == 0)
+ *(unsigned char *)p8++ = 8;
+ else
+ *(unsigned char *)p8++ = j - bit;
+ *(unsigned char *)p8++ = bit;
+
+ pw = 1<<bit;
+ p0[k+0x00] = (1*pw) + 0x80;
+ p0[k+0x10] = (2*pw) + 0x80;
+ p0[k+0x20] = (3*pw) + 0x80;
+ p0[k+0x30] = (4*pw) + 0x80;
+ p0[k+0x40] = (-pw) + 0x80;
+ p0[k+0x50] = (2*-pw) + 0x80;
+ p0[k+0x60] = (3*-pw) + 0x80;
+ p0[k+0x70] = (4*-pw) + 0x80;
+ } /* end of for (k=0; k<16; k++, p8++) */
+ } /* end of for (j=0; j<8; j++ , table++) */
+ } /* end of for (dl=0; dl<16; dl++) */
+ } /* end of for (i=0; i<2; i++) */
+}
+
+/*
+ * precision = (pdev->xx + pdev->yy)
+ *
+ */
+void fill_table_dc00_d800(unsigned int precision, unsigned int *pdc00, unsigned int *pd800)
+{
+ int i;
+ unsigned int offset1, offset2;
+
+ for(i=0,offset1=0x4000, offset2=0; i<256 ; i++,offset1+=0x7BC4, offset2+=0x7BC4)
+ {
+ unsigned int msb = offset1 >> 15;
+
+ if ( msb > 255)
+ {
+ if (msb)
+ msb=0;
+ else
+ msb=255;
+ }
+
+ *pdc00++ = msb << precision;
+ *pd800++ = offset2;
+ }
+
+}
+
+/*
+ * struct {
+ * unsigned char op; // operation to execute
+ * unsigned char bits; // bits use to perform operation
+ * unsigned char offset1; // offset to add to access in the table_0004 % 16
+ * unsigned char offset2; // offset to add to access in the table_0004
+ * }
+ *
+ */
+static unsigned int table_ops[] = {
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x10, 0x00,0x06,0x01,0x30,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x01,0x20, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x50, 0x00,0x05,0x02,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x03,0x00, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x10, 0x00,0x06,0x02,0x10,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x01,0x60, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x50, 0x00,0x05,0x02,0x40,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x03,0x40, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x10, 0x00,0x06,0x01,0x70,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x01,0x20, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x50, 0x00,0x05,0x02,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x03,0x00, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x10, 0x00,0x06,0x02,0x50,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x01,0x60, 0x01,0x00,0x00,0x00,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x00, 0x00,0x04,0x01,0x50, 0x00,0x05,0x02,0x40,
+0x02,0x00,0x00,0x00, 0x00,0x03,0x01,0x40, 0x00,0x05,0x03,0x40, 0x01,0x00,0x00,0x00
+};
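+
+/* Reading the table: each group of four values is one entry
+ { op, bits, offset1, offset2 }, indexed by the low 6 bits of the bit
+ reservoir in DecompressBand23() below. For example, index 0 is
+ { 0x02, 0, 0, 0 }: op 2 terminates the block; index 1 is
+ { 0x00, 3, 1, 0x00 }: op 0 consumes 3 bits, advances offset1 by 1
+ and reads table_0004 at offset1 + 0x00. */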
+
+/*
+ * TODO: multiply by 4 all values
+ *
+ */
+static unsigned int MulIdx[256] = {
+ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
+ 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, 2, 3,
+ 0, 0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3,
+ 4, 4, 4, 4, 5, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4,
+ 6, 7, 8, 9, 7,10,11, 8, 8,11,10, 7, 9, 8, 7, 6,
+ 4, 5, 5, 4, 4, 5, 5, 4, 4, 5, 5, 4, 4, 5, 5, 4,
+ 1, 3, 0, 2, 1, 3, 0, 2, 1, 3, 0, 2, 1, 3, 0, 2,
+ 0, 3, 3, 0, 1, 2, 2, 1, 2, 1, 1, 2, 3, 0, 0, 3,
+ 0, 1, 2, 3, 3, 2, 1, 0, 3, 2, 1, 0, 0, 1, 2, 3,
+ 1, 1, 1, 1, 3, 3, 3, 3, 0, 0, 0, 0, 2, 2, 2, 2,
+ 7,10,11, 8, 9, 8, 7, 6, 6, 7, 8, 9, 8,11,10, 7,
+ 4, 5, 5, 4, 5, 4, 4, 5, 5, 4, 4, 5, 4, 5, 5, 4,
+ 7, 9, 6, 8,10, 8, 7,11,11, 7, 8,10, 8, 6, 9, 7,
+ 1, 3, 0, 2, 2, 0, 3, 1, 2, 0, 3, 1, 1, 3, 0, 2,
+ 1, 2, 2, 1, 3, 0, 0, 3, 0, 3, 3, 0, 2, 1, 1, 2,
+10, 8, 7,11, 8, 6, 9, 7, 7, 9, 6, 8,11, 7, 8,10
+};
+
+
+
+void pwc_dec23_init(int type, int release, unsigned char *mode, void *data)
+{
+ int flags;
+ struct pwc_dec23_private *pdev = data;
+ release = release; /* intentional self-assignment: the parameter is currently unused */
+
+ switch (type)
+ {
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ flags = mode[2]&0x18; /* our: flags = 8, mode[2]==e8 */
+ if (flags==8)
+ pdev->zz = 7;
+ else if (flags==0x10)
+ pdev->zz = 8;
+ else
+ pdev->zz = 6;
+ flags = mode[2]>>5; /* our: 7 */
+
+ fill_table_color(flags, (unsigned int *)KiaraRomTable, pdev->table_0004, pdev->table_8004);
+ break;
+
+
+ case 675:
+ case 680:
+ case 690:
+ flags = mode[2]&6;
+ if (flags==2)
+ pdev->zz = 7;
+ else if (flags==4)
+ pdev->zz = 8;
+ else
+ pdev->zz = 6;
+ flags = mode[2]>>3;
+
+ fill_table_color(flags, (unsigned int *)TimonRomTable, pdev->table_0004, pdev->table_8004);
+ break;
+
+ default:
+ /* Not supported */
+ return;
+ }
+
+ /* * * * ** */
+ pdev->xx = 8 - pdev->zz;
+ pdev->yy = 15 - pdev->xx;
+ pdev->zzmask = 0xFF>>pdev->xx;
+ //pdev->zzmask = (1U<<pdev->zz)-1;
+
+
+ fill_table_dc00_d800(pdev->xx + pdev->yy, pdev->table_dc00, pdev->table_d800);
+ fill_table_a000(pdev->table_a004);
+ fill_table_d000(pdev->table_d004);
+}
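+
+/* Worked example of the mode-byte decoding above, using the Kiara value
+ noted inline (mode[2] == 0xe8): flags = 0xe8 & 0x18 = 0x08, so zz = 7;
+ the table version passed to fill_table_color() is then 0xe8 >> 5 = 7. */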
+
+
+/*
+ * To manage the stream we keep the next bits of the stream in a
+ * 32-bit variable, the 'reservoir'. fill_nbits() adds bytes to the
+ * reservoir until at least the wanted number of bits is available.
+ */
+#define fill_nbits(reservoir,nbits_in_reservoir,stream,nbits_wanted) do { \
+ while (nbits_in_reservoir<nbits_wanted) \
+ { \
+ reservoir |= (*(stream)++) << nbits_in_reservoir; \
+ nbits_in_reservoir+=8; \
+ } \
+} while(0);
+
+#define get_nbits(reservoir,nbits_in_reservoir,stream,nbits_wanted,result) do { \
+ fill_nbits(reservoir,nbits_in_reservoir,stream,nbits_wanted); \
+ result = (reservoir) & ((1U<<nbits_wanted)-1); \
+ reservoir >>= nbits_wanted; \
+ nbits_in_reservoir -= nbits_wanted; \
+} while(0);
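+
+/* A worked example (assuming an empty reservoir and a stream starting
+ with byte 0xB4): get_nbits(..., 4, result) first fills the reservoir
+ with 0xB4 (8 bits), then yields result = 0xB4 & 0xF = 4 and leaves
+ reservoir = 0xB with 4 bits remaining for the next call. */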
+
+
+
+static void DecompressBand23(const struct pwc_dec23_private *pdev,
+ const unsigned char *rawyuv,
+ unsigned char *planar_y,
+ unsigned char *planar_u,
+ unsigned char *planar_v,
+ unsigned int image_x, /* aka number of pixels wanted ??? */
+ unsigned int pixels_per_line, /* aka number of pixels per line */
+ int flags)
+{
+
+
+ unsigned int reservoir, nbits_in_reservoir;
+ int first_4_bits;
+ unsigned int bytes_per_channel;
+ int line_size; /* size of the line (4Y+U+V) */
+ int passes;
+ const unsigned char *ptable0004, *ptable8004;
+
+ int even_line;
+ unsigned int temp_colors[16];
+ int nblocks;
+
+ const unsigned char *stream;
+ unsigned char *dest_y, *dest_u=NULL, *dest_v=NULL;
+ unsigned int offset_to_plane_u, offset_to_plane_v;
+
+ int i;
+
+
+ reservoir = 0;
+ nbits_in_reservoir = 0;
+ stream = rawyuv+1; /* The first byte of the stream is skipped */
+ even_line = 1;
+
+ get_nbits(reservoir,nbits_in_reservoir,stream,4,first_4_bits);
+
+ line_size = pixels_per_line*3;
+
+ for (passes=0;passes<2;passes++)
+ {
+ if (passes==0)
+ {
+ bytes_per_channel = pixels_per_line;
+ dest_y = planar_y;
+ nblocks = image_x/4;
+ }
+ else
+ {
+ /* Format planar: All Y, then all U, then all V */
+ bytes_per_channel = pixels_per_line/2;
+ dest_u = planar_u;
+ dest_v = planar_v;
+ dest_y = dest_u;
+ nblocks = image_x/8;
+ }
+
+ offset_to_plane_u = bytes_per_channel*2;
+ offset_to_plane_v = bytes_per_channel*3;
+ /*
+ printf("bytes_per_channel = %d\n",bytes_per_channel);
+ printf("offset_to_plane_u = %d\n",offset_to_plane_u);
+ printf("offset_to_plane_v = %d\n",offset_to_plane_v);
+ */
+
+ while (nblocks-->0)
+ {
+ unsigned int gray_index;
+
+ fill_nbits(reservoir,nbits_in_reservoir,stream,16);
+ gray_index = reservoir & pdev->zzmask;
+ reservoir >>= pdev->zz;
+ nbits_in_reservoir -= pdev->zz;
+
+ fill_nbits(reservoir,nbits_in_reservoir,stream,2);
+
+ if ( (reservoir & 3) == 0)
+ {
+ reservoir>>=2;
+ nbits_in_reservoir-=2;
+ for (i=0;i<16;i++)
+ temp_colors[i] = pdev->table_dc00[gray_index];
+
+ }
+ else
+ {
+ unsigned int channel_v, offset1;
+
+ /* swap bit 0 and 2 of offset_OR */
+ channel_v = ((reservoir & 1) << 2) | (reservoir & 2) | ((reservoir & 4)>>2);
+ reservoir>>=3;
+ nbits_in_reservoir-=3;
+
+ for (i=0;i<16;i++)
+ temp_colors[i] = pdev->table_d800[gray_index];
+
+ ptable0004 = pdev->table_0004 + (passes*16384) + (first_4_bits*1024) + (channel_v*128);
+ ptable8004 = pdev->table_8004 + (passes*4096) + (first_4_bits*256) + (channel_v*32);
+
+ offset1 = 0;
+ while(1)
+ {
+ unsigned int index_in_table_ops, op, rows=0;
+ fill_nbits(reservoir,nbits_in_reservoir,stream,16);
+
+ /* op is 0, 1 or 2 */
+ index_in_table_ops = (reservoir&0x3F);
+ op = table_ops[ index_in_table_ops*4 ];
+ if (op == 2)
+ {
+ reservoir >>= 2;
+ nbits_in_reservoir -= 2;
+ break; /* exit the while(1) */
+ }
+ if (op == 0)
+ {
+ unsigned int shift;
+
+ offset1 = (offset1 + table_ops[index_in_table_ops*4+2]) & 0x0F;
+ shift = table_ops[ index_in_table_ops*4+1 ];
+ reservoir >>= shift;
+ nbits_in_reservoir -= shift;
+ rows = ptable0004[ offset1 + table_ops[index_in_table_ops*4+3] ];
+ }
+ if (op == 1)
+ {
+ /* 10bits [ xxxx xxxx yyyy 000 ]
+ * yyy => offset in the table8004
+ * xxx => offset in the tabled004
+ */
+ unsigned int mask, shift;
+ unsigned int col1, row1, total_bits;
+
+ offset1 = (offset1 + ((reservoir>>3)&0x0F)+1) & 0x0F;
+
+ col1 = (reservoir>>7) & 0xFF;
+ row1 = ptable8004 [ offset1*2 ];
+
+ /* Bit mask table */
+ mask = pdev->table_d004[ (row1<<8) + col1 ];
+ shift = ptable8004 [ offset1*2 + 1];
+ rows = ((mask << shift) + 0x80) & 0xFF;
+
+ total_bits = row1 + 8;
+ reservoir >>= total_bits;
+ nbits_in_reservoir -= total_bits;
+ }
+ {
+ const unsigned int *table_a004 = pdev->table_a004 + rows*12;
+ unsigned int *poffset = MulIdx + offset1*16; /* 64/4 (int) */
+ for (i=0;i<16;i++)
+ {
+ temp_colors[i] += table_a004[ *poffset ];
+ poffset++;
+ }
+ }
+ }
+ }
+#define USE_SIGNED_INT_FOR_COLOR
+#ifdef USE_SIGNED_INT_FOR_COLOR
+# define CLAMP(x) ((x)>255?255:((x)<0?0:x))
+#else
+# define CLAMP(x) ((x)>255?255:x)
+#endif
+
+ if (passes == 0)
+ {
+#ifdef USE_SIGNED_INT_FOR_COLOR
+ const int *c = temp_colors;
+#else
+ const unsigned int *c = temp_colors;
+#endif
+ unsigned char *d;
+
+ d = dest_y;
+ for (i=0;i<4;i++,c++)
+ *d++ = CLAMP((*c) >> pdev->yy);
+
+ d = dest_y + bytes_per_channel;
+ for (i=0;i<4;i++,c++)
+ *d++ = CLAMP((*c) >> pdev->yy);
+
+ d = dest_y + offset_to_plane_u;
+ for (i=0;i<4;i++,c++)
+ *d++ = CLAMP((*c) >> pdev->yy);
+
+ d = dest_y + offset_to_plane_v;
+ for (i=0;i<4;i++,c++)
+ *d++ = CLAMP((*c) >> pdev->yy);
+
+ dest_y += 4;
+ }
+ else if (passes == 1)
+ {
+#ifdef USE_SIGNED_INT_FOR_COLOR
+ int *c1 = temp_colors;
+ int *c2 = temp_colors+4;
+#else
+ unsigned int *c1 = temp_colors;
+ unsigned int *c2 = temp_colors+4;
+#endif
+ unsigned char *d;
+
+ d = dest_y;
+ for (i=0;i<4;i++,c1++,c2++)
+ {
+ *d++ = CLAMP((*c1) >> pdev->yy);
+ *d++ = CLAMP((*c2) >> pdev->yy);
+ }
+ c1 = temp_colors+12;
+ //c2 = temp_colors+8;
+ d = dest_y + bytes_per_channel;
+ for (i=0;i<4;i++,c1++,c2++)
+ {
+ *d++ = CLAMP((*c1) >> pdev->yy);
+ *d++ = CLAMP((*c2) >> pdev->yy);
+ }
+
+ if (even_line) /* Each line, swap u/v */
+ {
+ even_line=0;
+ dest_y = dest_v;
+ dest_u += 8;
+ }
+ else
+ {
+ even_line=1;
+ dest_y = dest_u;
+ dest_v += 8;
+ }
+ }
+
+ } /* end of while (nblocks-->0) */
+
+ } /* end of for (passes=0;passes<2;passes++) */
+
+}
+
+
+/**
+ *
+ * image: size of the image wanted
+ * view : size of the image returned by the camera
+ * offset: (x,y) at which to display the image in the view
+ *
+ * src: raw data
+ * dst: image output
+ * flags: PWCX_FLAG_PLANAR
+ * pdev: private buffer
+ * bandlength:
+ *
+ */
+void pwc_dec23_decompress(const struct pwc_coord *image,
+ const struct pwc_coord *view,
+ const struct pwc_coord *offset,
+ const void *src,
+ void *dst,
+ int flags,
+ const void *data,
+ int bandlength)
+{
+ const struct pwc_dec23_private *pdev = data;
+ unsigned char *pout, *pout_planar_y=NULL, *pout_planar_u=NULL, *pout_planar_v=NULL;
+ int i,n,stride,pixel_size;
+
+
+ if (flags & PWCX_FLAG_BAYER)
+ {
+ pout = dst + (view->x * offset->y) + offset->x;
+ pixel_size = view->x * 4;
+ }
+ else
+ {
+ n = view->x * view->y;
+
+ /* offset in Y plane */
+ stride = view->x * offset->y;
+ pout_planar_y = dst + stride + offset->x;
+
+ /* offsets in U/V planes */
+ stride = (view->x * offset->y)/4 + offset->x/2;
+ pout_planar_u = dst + n + stride;
+ pout_planar_v = dst + n + n/4 + stride;
+
+ pixel_size = view->x * 4;
+ }
+
+
+ for (i=0;i<image->y;i+=4)
+ {
+ if (flags & PWCX_FLAG_BAYER)
+ {
+ //TODO:
+ //DecompressBandBayer(pdev,src,pout,image.x,view->x,flags);
+ src += bandlength;
+ pout += pixel_size;
+ }
+ else
+ {
+ DecompressBand23(pdev,src,pout_planar_y,pout_planar_u,pout_planar_v,image->x,view->x,flags);
+ src += bandlength;
+ pout_planar_y += pixel_size;
+ pout_planar_u += view->x;
+ pout_planar_v += view->x;
+ }
+ }
+}
+
+void pwc_dec23_exit(void)
+{
+ /* Do nothing */
+
+}
+
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef PWC_DEC23_H
+#define PWC_DEC23_H
+
+struct pwc_dec23_private
+{
+ unsigned char xx,yy,zz,zzmask;
+
+ unsigned char table_0004[2*0x4000];
+ unsigned char table_8004[2*0x1000];
+ unsigned int table_a004[256*12];
+
+ unsigned char table_d004[8*256];
+ unsigned int table_d800[256];
+ unsigned int table_dc00[256];
+};
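+
+/* Rough size check (summing the members above, with 4-byte ints):
+ 2*0x4000 + 2*0x1000 + 8*256 bytes of char tables plus
+ (256*12 + 256 + 256) ints comes to 57344 bytes, i.e. 56 KB, which is
+ why pwc_allocate_buffers() kmalloc()s this structure rather than
+ keeping it on the stack. */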
+
+
+void pwc_dec23_init(int type, int release, unsigned char *buffer, void *private_data);
+void pwc_dec23_exit(void);
+void pwc_dec23_decompress(const struct pwc_coord *image,
+ const struct pwc_coord *view,
+ const struct pwc_coord *offset,
+ const void *src,
+ void *dst,
+ int flags,
+ const void *data,
+ int bandlength);
+
+
+
+#endif
+
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ USB and Video4Linux interface part.
+ (C) 1999-2004 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+*/
+
+/*
+ This code forms the interface between the USB layers and the Philips
+ specific stuff. Some advanced functionality of the driver falls under an
+ NDA, signed between me and Philips B.V., Eindhoven, the Netherlands, and
+ is thus not distributed in source form. The binary pwcx.o module
+ contains the code that falls under the NDA.
+
+ In case you're wondering: 'pwc' stands for "Philips WebCam", but
+ I really didn't want to type 'philips_web_cam' every time (I'm lazy as
+ any Linux kernel hacker, but I don't like incomprehensible abbreviations
+ without explanation).
+
+ Oh yes, convention: to distinguish between all the various pointers to
+ device-structures, I use these names for the pointer variables:
+ udev: struct usb_device *
+ vdev: struct video_device *
+ pdev: struct pwc_device *
+*/
+
+/* Contributors:
+ - Alvarado: adding whitebalance code
+ - Alistar Moire: QuickCam 3000 Pro device/product ID
+ - Tony Hoyle: Creative Labs Webcam 5 device/product ID
+ - Mark Burazin: solving hang in VIDIOCSYNC when camera gets unplugged
+ - Jk Fang: Sotec Afina Eye ID
+ - Xavier Roche: QuickCam Pro 4000 ID
+ - Jens Knudsen: QuickCam Zoom ID
+ - J. Debert: QuickCam for Notebooks ID
+*/
+
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+#include <linux/poll.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <asm/io.h>
+
+#include "pwc.h"
+#include "pwc-ioctl.h"
+#include "pwc-kiara.h"
+#include "pwc-timon.h"
+#include "pwc-dec23.h"
+#include "pwc-dec1.h"
+#include "pwc-uncompress.h"
+
+/* Function prototypes and driver templates */
+
+/* hotplug device table support */
+static struct usb_device_id pwc_device_table [] = {
+ { USB_DEVICE(0x0471, 0x0302) }, /* Philips models */
+ { USB_DEVICE(0x0471, 0x0303) },
+ { USB_DEVICE(0x0471, 0x0304) },
+ { USB_DEVICE(0x0471, 0x0307) },
+ { USB_DEVICE(0x0471, 0x0308) },
+ { USB_DEVICE(0x0471, 0x030C) },
+ { USB_DEVICE(0x0471, 0x0310) },
+ { USB_DEVICE(0x0471, 0x0311) },
+ { USB_DEVICE(0x0471, 0x0312) },
+ { USB_DEVICE(0x0471, 0x0313) }, /* the 'new' 720K */
+ { USB_DEVICE(0x069A, 0x0001) }, /* Askey */
+ { USB_DEVICE(0x046D, 0x08B0) }, /* Logitech QuickCam Pro 3000 */
+ { USB_DEVICE(0x046D, 0x08B1) }, /* Logitech QuickCam Notebook Pro */
+ { USB_DEVICE(0x046D, 0x08B2) }, /* Logitech QuickCam Pro 4000 */
+ { USB_DEVICE(0x046D, 0x08B3) }, /* Logitech QuickCam Zoom (old model) */
+ { USB_DEVICE(0x046D, 0x08B4) }, /* Logitech QuickCam Zoom (new model) */
+ { USB_DEVICE(0x046D, 0x08B5) }, /* Logitech QuickCam Orbit/Sphere */
+ { USB_DEVICE(0x046D, 0x08B6) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x046D, 0x08B7) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x046D, 0x08B8) }, /* Logitech (reserved) */
+ { USB_DEVICE(0x055D, 0x9000) }, /* Samsung */
+ { USB_DEVICE(0x055D, 0x9001) },
+ { USB_DEVICE(0x041E, 0x400C) }, /* Creative Webcam 5 */
+ { USB_DEVICE(0x041E, 0x4011) }, /* Creative Webcam Pro Ex */
+ { USB_DEVICE(0x04CC, 0x8116) }, /* Afina Eye */
+ { USB_DEVICE(0x06BE, 0x8116) }, /* new Afina Eye */
+ { USB_DEVICE(0x0d81, 0x1910) }, /* Visionite */
+ { USB_DEVICE(0x0d81, 0x1900) },
+ { }
+};
+MODULE_DEVICE_TABLE(usb, pwc_device_table);
+
+static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id *id);
+static void usb_pwc_disconnect(struct usb_interface *intf);
+
+static struct usb_driver pwc_driver = {
+ .owner = THIS_MODULE,
+ .name = "Philips webcam", /* name */
+ .id_table = pwc_device_table,
+ .probe = usb_pwc_probe, /* probe() */
+ .disconnect = usb_pwc_disconnect, /* disconnect() */
+};
+
+#define MAX_DEV_HINTS 20
+#define MAX_ISOC_ERRORS 20
+
+static int default_size = PSZ_QCIF;
+static int default_fps = 10;
+static int default_fbufs = 3; /* Default number of frame buffers */
+static int default_mbufs = 2; /* Default number of mmap() buffers */
+ int pwc_trace = TRACE_MODULE | TRACE_FLOW | TRACE_PWCX;
+static int power_save = 0;
+static int led_on = 100, led_off = 0; /* defaults to LED that is on while in use */
+ int pwc_preferred_compression = 2; /* 0..3 = uncompressed..high */
+static struct {
+ int type;
+ char serial_number[30];
+ int device_node;
+ struct pwc_device *pdev;
+} device_hint[MAX_DEV_HINTS];
+
+/***/
+
+static int pwc_video_open(struct inode *inode, struct file *file);
+static int pwc_video_close(struct inode *inode, struct file *file);
+static ssize_t pwc_video_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos);
+static unsigned int pwc_video_poll(struct file *file, poll_table *wait);
+static int pwc_video_ioctl(struct inode *inode, struct file *file,
+ unsigned int ioctlnr, unsigned long arg);
+static int pwc_video_mmap(struct file *file, struct vm_area_struct *vma);
+
+static struct file_operations pwc_fops = {
+ .owner = THIS_MODULE,
+ .open = pwc_video_open,
+ .release = pwc_video_close,
+ .read = pwc_video_read,
+ .poll = pwc_video_poll,
+ .mmap = pwc_video_mmap,
+ .ioctl = pwc_video_ioctl,
+ .llseek = no_llseek,
+};
+static struct video_device pwc_template = {
+ .owner = THIS_MODULE,
+ .name = "Philips Webcam", /* Filled in later */
+ .type = VID_TYPE_CAPTURE,
+ .hardware = VID_HARDWARE_PWC,
+ .release = video_device_release,
+ .fops = &pwc_fops,
+ .minor = -1,
+};
+
+/***************************************************************************/
+
+/* Okay, this is some magic that I worked out and the reasoning behind it...
+
+ The biggest problem with any USB device is of course: "what to do
+ when the user unplugs the device while it is in use by an application?"
+ We have several options:
+ 1) Curse them with the 7 plagues when they do (requires divine intervention)
+ 2) Tell them not to (won't work: they'll do it anyway)
+ 3) Oops the kernel (this will have a negative effect on a user's uptime)
+ 4) Do something sensible.
+
+ Of course, we go for option 4.
+
+ It happens that this device will be linked to two times, once from
+ usb_device and once from the video_device in their respective 'private'
+ pointers. This is done when the device is probed() and all initialization
+ succeeded. The pwc_device struct links back to both structures.
+
+ When a device is unplugged while in use it will be removed from the
+ list of known USB devices; I also de-register it as a V4L device, but
+ unfortunately I can't free the memory since the struct is still in use
+ by the file descriptor. This freeing is then deferred until the first
+ opportunity. Crude, but it works.
+
+ A small 'advantage' is that if a user unplugs the cam and plugs it back
+ in, it should get assigned the same video device minor, but unfortunately
+ it's non-trivial to re-link the cam back to the video device... (that
+ would surely be magic! :))
+*/
+
+/***************************************************************************/
+/* Private functions */
+
+/* Here we want the physical address of the memory.
+ * This is used when initializing the contents of the area.
+ */
+static inline unsigned long kvirt_to_pa(unsigned long adr)
+{
+ unsigned long kva, ret;
+
+ kva = (unsigned long) page_address(vmalloc_to_page((void *)adr));
+ kva |= adr & (PAGE_SIZE-1); /* restore the offset */
+ ret = __pa(kva);
+ return ret;
+}
+
+static void * rvmalloc(unsigned long size)
+{
+ void * mem;
+ unsigned long adr;
+
+ size=PAGE_ALIGN(size);
+ mem=vmalloc_32(size);
+ if (mem)
+ {
+ memset(mem, 0, size); /* Clear the ram out, no junk to the user */
+ adr=(unsigned long) mem;
+ while (size > 0)
+ {
+ SetPageReserved(vmalloc_to_page((void *)adr));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ }
+ return mem;
+}
+
+static void rvfree(void * mem, unsigned long size)
+{
+ unsigned long adr;
+
+ if (mem)
+ {
+ adr=(unsigned long) mem;
+ while ((long) size > 0)
+ {
+ ClearPageReserved(vmalloc_to_page((void *)adr));
+ adr+=PAGE_SIZE;
+ size-=PAGE_SIZE;
+ }
+ vfree(mem);
+ }
+}
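+
+/* Note on the pair above: SetPageReserved()/ClearPageReserved() mark the
+ vmalloc()ed pages as reserved so they can safely be mapped into user
+ space via pwc_video_mmap(); pwc_allocate_buffers() below obtains the
+ double-buffered image area this way, with
+ rvmalloc(default_mbufs * pdev->len_per_image). */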
+
+
+
+
+static int pwc_allocate_buffers(struct pwc_device *pdev)
+{
+ int i;
+ void *kbuf;
+
+ Trace(TRACE_MEMORY, ">> pwc_allocate_buffers(pdev = 0x%p)\n", pdev);
+
+ if (pdev == NULL)
+ return -ENXIO;
+
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("allocate_buffers(): magic failed.\n");
+ return -ENXIO;
+ }
+#endif
+ /* Allocate Isochronous pipe buffers */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ if (pdev->sbuf[i].data == NULL) {
+ kbuf = kmalloc(ISO_BUFFER_SIZE, GFP_KERNEL);
+ if (kbuf == NULL) {
+ Err("Failed to allocate iso buffer %d.\n", i);
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated iso buffer at %p.\n", kbuf);
+ pdev->sbuf[i].data = kbuf;
+ memset(kbuf, 0, ISO_BUFFER_SIZE);
+ }
+ }
+
+ /* Allocate frame buffer structure */
+ if (pdev->fbuf == NULL) {
+ kbuf = kmalloc(default_fbufs * sizeof(struct pwc_frame_buf), GFP_KERNEL);
+ if (kbuf == NULL) {
+ Err("Failed to allocate frame buffer structure.\n");
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated frame buffer structure at %p.\n", kbuf);
+ pdev->fbuf = kbuf;
+ memset(kbuf, 0, default_fbufs * sizeof(struct pwc_frame_buf));
+ }
+ /* create frame buffers, and make circular ring */
+ for (i = 0; i < default_fbufs; i++) {
+ if (pdev->fbuf[i].data == NULL) {
+ kbuf = vmalloc(PWC_FRAME_SIZE); /* need vmalloc since frame buffer > 128K */
+ if (kbuf == NULL) {
+ Err("Failed to allocate frame buffer %d.\n", i);
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated frame buffer %d at %p.\n", i, kbuf);
+ pdev->fbuf[i].data = kbuf;
+ memset(kbuf, 128, PWC_FRAME_SIZE);
+ }
+ }
+
+ /* Allocate decompressor table space */
+ kbuf = NULL;
+ switch (pdev->type)
+ {
+ case 675:
+ case 680:
+ case 690:
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ Trace(TRACE_MEMORY,"private_data(%d)\n",sizeof(struct pwc_dec23_private));
+ kbuf = kmalloc(sizeof(struct pwc_dec23_private), GFP_KERNEL); /* Timon & Kiara */
+ break;
+ case 645:
+ case 646:
+ /* TODO & FIXME */
+ kbuf = kmalloc(sizeof(struct pwc_dec23_private), GFP_KERNEL);
+ break;
+ }
+ if (kbuf == NULL) {
+ Err("Failed to allocate decompress table.\n");
+ return -ENOMEM;
+ }
+ pdev->decompress_data = kbuf;
+
+ /* Allocate image buffer; double buffer for mmap() */
+ kbuf = rvmalloc(default_mbufs * pdev->len_per_image);
+ if (kbuf == NULL) {
+ Err("Failed to allocate image buffer(s). needed (%d)\n",default_mbufs * pdev->len_per_image);
+ return -ENOMEM;
+ }
+ Trace(TRACE_MEMORY, "Allocated image buffer at %p.\n", kbuf);
+ pdev->image_data = kbuf;
+ for (i = 0; i < default_mbufs; i++)
+ pdev->image_ptr[i] = kbuf + i * pdev->len_per_image;
+ for (; i < MAX_IMAGES; i++)
+ pdev->image_ptr[i] = NULL;
+
+ kbuf = NULL;
+
+ Trace(TRACE_MEMORY, "<< pwc_allocate_buffers()\n");
+ return 0;
+}
+
+static void pwc_free_buffers(struct pwc_device *pdev)
+{
+ int i;
+
+ Trace(TRACE_MEMORY, "Entering free_buffers(%p).\n", pdev);
+
+ if (pdev == NULL)
+ return;
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("free_buffers(): magic failed.\n");
+ return;
+ }
+#endif
+
+ /* Release Iso-pipe buffers */
+ for (i = 0; i < MAX_ISO_BUFS; i++)
+ if (pdev->sbuf[i].data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing ISO buffer at %p.\n", pdev->sbuf[i].data);
+ kfree(pdev->sbuf[i].data);
+ pdev->sbuf[i].data = NULL;
+ }
+
+ /* The same for frame buffers */
+ if (pdev->fbuf != NULL) {
+ for (i = 0; i < default_fbufs; i++) {
+ if (pdev->fbuf[i].data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing frame buffer %d at %p.\n", i, pdev->fbuf[i].data);
+ vfree(pdev->fbuf[i].data);
+ pdev->fbuf[i].data = NULL;
+ }
+ }
+ kfree(pdev->fbuf);
+ pdev->fbuf = NULL;
+ }
+
+ /* Intermediate decompression buffer & tables */
+ if (pdev->decompress_data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing decompression buffer at %p.\n", pdev->decompress_data);
+ kfree(pdev->decompress_data);
+ pdev->decompress_data = NULL;
+ }
+ pdev->decompressor = NULL;
+
+ /* Release image buffers */
+ if (pdev->image_data != NULL) {
+ Trace(TRACE_MEMORY, "Freeing image buffer at %p.\n", pdev->image_data);
+ rvfree(pdev->image_data, default_mbufs * pdev->len_per_image);
+ }
+ pdev->image_data = NULL;
+
+ Trace(TRACE_MEMORY, "Leaving free_buffers().\n");
+}
+
+/* The frame & image buffer mess.
+
+ Yes, this is a mess. Well, it used to be simple, but alas... In this
+ module, 3 buffers schemes are used to get the data from the USB bus to
+ the user program. The first scheme involves the ISO buffers (called thus
+ since they transport ISO data from the USB controller), and not really
+ interesting. Suffices to say the data from this buffer is quickly
+ gathered in an interrupt handler (pwc_isoc_handler) and placed into the
+ frame buffer.
+
+ The frame buffer is the second scheme, and is the central element here.
+ It collects the data from a single frame from the camera (hence, the
+ name). Frames are delimited by the USB camera with a short USB packet,
+ so that's easy to detect. The frame buffers form a list that is filled
+ by the camera+USB controller and drained by the user process through
+ either read() or mmap().
+
+ The image buffer is the third scheme, in which frames are decompressed
+ and converted into planar format. For mmap() there is more than
+ one image buffer available.
+
+ The frame buffers provide the image buffering. In case the user process
+ is a bit slow, this introduces lag and some undesired side-effects.
+ The problem arises when the frame buffer is full. I used to drop the last
+ frame, which makes the data in the queue stale very quickly. But dropping
+ the frame at the head of the queue proved to be a little bit more difficult.
+ I tried a circular linked scheme, but this introduced more problems than
+ it solved.
+
+ Because filling and draining are completely asynchronous processes, this
+ requires some fiddling with pointers and mutexes.
+
+ Eventually, I came up with a system with 2 lists: an 'empty' frame list
+ and a 'full' frame list:
+ * Initially, all frame buffers but one are on the 'empty' list; the one
+ remaining buffer is our initial fill frame.
+ * If a frame is needed for filling, we try to take it from the 'empty'
+ list, unless that list is empty, in which case we take the buffer at
+ the head of the 'full' list.
+ * When our fill buffer has been filled, it is appended to the 'full'
+ list.
+ * If a frame is needed by read() or mmap(), it is taken from the head of
+ the 'full' list, handled, and then appended to the 'empty' list. If no
+ buffer is present on the 'full' list, we wait.
+ The advantage is that the buffer that is currently being decompressed/
+ converted, is on neither list, and thus not in our way (any other scheme
+ I tried had the problem of old data lingering in the queue).
+
+ Whatever strategy you choose, it always remains a tradeoff: with more
+ frame buffers the chances of a missed frame are reduced. On the other
+ hand, on slower machines it introduces lag because the queue will
+ always be full.
+ */
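+
+/* In short, the life cycle of one frame buffer under the scheme above
+ (a sketch; the functions below implement it):
+
+ empty_frames -> fill_frame -> full_frames -> read_frame -> empty_frames
+
+ pwc_isoc_handler() fills 'fill_frame'; pwc_next_fill_frame() appends it
+ to the 'full' list and picks a new one, stealing the oldest full frame
+ if the 'empty' list is exhausted; pwc_handle_frame() detaches the head
+ of the 'full' list as 'read_frame', decompresses it outside the lock,
+ and then appends it to the tail of the 'empty' list. */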
+
+/**
+ \brief Find next frame buffer to fill. Take from empty or full list, whichever comes first.
+ */
+static inline int pwc_next_fill_frame(struct pwc_device *pdev)
+{
+ int ret;
+ unsigned long flags;
+
+ ret = 0;
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ if (pdev->fill_frame != NULL) {
+ /* append to 'full' list */
+ if (pdev->full_frames == NULL) {
+ pdev->full_frames = pdev->fill_frame;
+ pdev->full_frames_tail = pdev->full_frames;
+ }
+ else {
+ pdev->full_frames_tail->next = pdev->fill_frame;
+ pdev->full_frames_tail = pdev->fill_frame;
+ }
+ }
+ if (pdev->empty_frames != NULL) {
+ /* We have empty frames available. That's easy */
+ pdev->fill_frame = pdev->empty_frames;
+ pdev->empty_frames = pdev->empty_frames->next;
+ }
+ else {
+ /* Hmm. Take it from the full list */
+#if PWC_DEBUG
+ /* sanity check */
+ if (pdev->full_frames == NULL) {
+ Err("Neither empty or full frames available!\n");
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return -EINVAL;
+ }
+#endif
+ pdev->fill_frame = pdev->full_frames;
+ pdev->full_frames = pdev->full_frames->next;
+ ret = 1;
+ }
+ pdev->fill_frame->next = NULL;
+#if PWC_DEBUG
+ Trace(TRACE_SEQUENCE, "Assigning sequence number %d.\n", pdev->sequence);
+ pdev->fill_frame->sequence = pdev->sequence++;
+#endif
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return ret;
+}
+
+
+/**
+ \brief Reset all buffers, pointers and lists, except for the image_used[] buffer.
+
+ If the image_used[] buffer is cleared too, mmap()/VIDIOCSYNC will run into trouble.
+ */
+static void pwc_reset_buffers(struct pwc_device *pdev)
+{
+ int i;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ pdev->full_frames = NULL;
+ pdev->full_frames_tail = NULL;
+ for (i = 0; i < default_fbufs; i++) {
+ pdev->fbuf[i].filled = 0;
+ if (i > 0)
+ pdev->fbuf[i].next = &pdev->fbuf[i - 1];
+ else
+ pdev->fbuf->next = NULL;
+ }
+ pdev->empty_frames = &pdev->fbuf[default_fbufs - 1];
+ pdev->empty_frames_tail = pdev->fbuf;
+ pdev->read_frame = NULL;
+ pdev->fill_frame = pdev->empty_frames;
+ pdev->empty_frames = pdev->empty_frames->next;
+
+ pdev->image_read_pos = 0;
+ pdev->fill_image = 0;
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+}
+
+
+/**
+ \brief Do all the handling for getting one frame: get pointer, decompress, advance pointers.
+ */
+static int pwc_handle_frame(struct pwc_device *pdev)
+{
+ int ret = 0;
+ unsigned long flags;
+
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+ /* First grab our read_frame; this is removed from all lists, so
+ we can release the lock after this without problems */
+ if (pdev->read_frame != NULL) {
+ /* This can't theoretically happen */
+ Err("Huh? Read frame still in use?\n");
+ }
+ else {
+ if (pdev->full_frames == NULL) {
+ Err("Woops. No frames ready.\n");
+ }
+ else {
+ pdev->read_frame = pdev->full_frames;
+ pdev->full_frames = pdev->full_frames->next;
+ pdev->read_frame->next = NULL;
+ }
+
+ if (pdev->read_frame != NULL) {
+#if PWC_DEBUG
+ Trace(TRACE_SEQUENCE, "Decompressing frame %d\n", pdev->read_frame->sequence);
+#endif
+ /* Decompression is a lengthy process, so it's outside of the lock.
+ This gives the isoc_handler the opportunity to fill more frames
+ in the mean time.
+ */
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ ret = pwc_decompress(pdev);
+ spin_lock_irqsave(&pdev->ptrlock, flags);
+
+ /* We're done with read_buffer, tack it to the end of the empty buffer list */
+ if (pdev->empty_frames == NULL) {
+ pdev->empty_frames = pdev->read_frame;
+ pdev->empty_frames_tail = pdev->empty_frames;
+ }
+ else {
+ pdev->empty_frames_tail->next = pdev->read_frame;
+ pdev->empty_frames_tail = pdev->read_frame;
+ }
+ pdev->read_frame = NULL;
+ }
+ }
+ spin_unlock_irqrestore(&pdev->ptrlock, flags);
+ return ret;
+}
+
+/**
+ \brief Advance pointers of image buffer (after each user request)
+*/
+static inline void pwc_next_image(struct pwc_device *pdev)
+{
+ pdev->image_used[pdev->fill_image] = 0;
+ pdev->fill_image = (pdev->fill_image + 1) % default_mbufs;
+}
+
+
+/* This gets called for the Isochronous pipe (video). This is done in
+ * interrupt time, so it has to be fast, not crash, and not stall. Neat.
+ */
+static void pwc_isoc_handler(struct urb *urb, struct pt_regs *regs)
+{
+ struct pwc_device *pdev;
+ int i, fst, flen;
+ int awake;
+ struct pwc_frame_buf *fbuf;
+ unsigned char *fillptr = NULL, *iso_buf = NULL;
+
+ awake = 0;
+ pdev = (struct pwc_device *)urb->context;
+ if (pdev == NULL) {
+ Err("isoc_handler() called with NULL device?!\n");
+ return;
+ }
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("isoc_handler() called with bad magic!\n");
+ return;
+ }
+#endif
+ if (urb->status == -ENOENT || urb->status == -ECONNRESET) {
+ Trace(TRACE_OPEN, "pwc_isoc_handler(): URB (%p) unlinked %ssynchronuously.\n", urb, urb->status == -ENOENT ? "" : "a");
+ return;
+ }
+ if (urb->status != -EINPROGRESS && urb->status != 0) {
+ const char *errmsg;
+
+ errmsg = "Unknown";
+ switch(urb->status) {
+ case -ENOSR: errmsg = "Buffer error (overrun)"; break;
+ case -EPIPE: errmsg = "Stalled (device not responding)"; break;
+ case -EOVERFLOW: errmsg = "Babble (bad cable?)"; break;
+ case -EPROTO: errmsg = "Bit-stuff error (bad cable?)"; break;
+ case -EILSEQ: errmsg = "CRC/Timeout (could be anything)"; break;
+ case -ETIMEDOUT: errmsg = "NAK (device does not respond)"; break;
+ }
+ Trace(TRACE_FLOW, "pwc_isoc_handler() called with status %d [%s].\n", urb->status, errmsg);
+ /* Give up after a number of consecutive errors on the USB bus.
+ Apparently something is wrong, so we simulate an unplug event.
+ */
+ if (++pdev->visoc_errors > MAX_ISOC_ERRORS) {
+ Info("Too many ISOC errors, bailing out.\n");
+ pdev->error_status = EIO;
+ awake = 1;
+ wake_up_interruptible(&pdev->frameq);
+ }
+ goto handler_end; // ugly, but practical
+ }
+
+ fbuf = pdev->fill_frame;
+ if (fbuf == NULL) {
+ Err("pwc_isoc_handler without valid fill frame.\n");
+ awake = 1;
+ goto handler_end;
+ }
+ else {
+ fillptr = fbuf->data + fbuf->filled;
+ }
+
+ /* Reset ISOC error counter. We did get here, after all. */
+ pdev->visoc_errors = 0;
+
+ /* vsync: 0 = don't copy data
+ 1 = sync-hunt
+ 2 = synched
+ */
+ /* Compact data */
+ for (i = 0; i < urb->number_of_packets; i++) {
+ fst = urb->iso_frame_desc[i].status;
+ flen = urb->iso_frame_desc[i].actual_length;
+ iso_buf = urb->transfer_buffer + urb->iso_frame_desc[i].offset;
+ if (fst == 0) {
+ if (flen > 0) { /* if valid data... */
+ if (pdev->vsync > 0) { /* ...and we are not sync-hunting... */
+ pdev->vsync = 2;
+
+ /* ...copy data to frame buffer, if possible */
+ if (flen + fbuf->filled > pdev->frame_total_size) {
+ Trace(TRACE_FLOW, "Frame buffer overflow (flen = %d, frame_total_size = %d).\n", flen, pdev->frame_total_size);
+ pdev->vsync = 0; /* Hmm, let's wait for an EOF (end-of-frame) */
+ pdev->vframes_error++;
+ }
+ else {
+ memmove(fillptr, iso_buf, flen);
+ fillptr += flen;
+ }
+ }
+ fbuf->filled += flen;
+ } /* ..flen > 0 */
+
+ if (flen < pdev->vlast_packet_size) {
+ /* Shorter packet... We probably have the end of an image-frame;
+ wake up read() process and let select()/poll() do something.
+ Decompression is done in user time over there.
+ */
+ if (pdev->vsync == 2) {
+ /* The ToUCam Fun CMOS sensor causes the firmware to send 2 or 3 bogus
+ frames on the USB wire after an exposure change. This condition is,
+ however, detected in the cam and a bit is set in the header.
+ */
+ if (pdev->type == 730) {
+ unsigned char *ptr = (unsigned char *)fbuf->data;
+
+ if (ptr[1] == 1 && (ptr[0] & 0x10)) {
+#if PWC_DEBUG
+ Debug("Hyundai CMOS sensor bug. Dropping frame %d.\n", fbuf->sequence);
+#endif
+ pdev->drop_frames += 2;
+ pdev->vframes_error++;
+ }
+ if ((ptr[0] ^ pdev->vmirror) & 0x01) {
+ if (ptr[0] & 0x01)
+ Info("Snapshot button pressed.\n");
+ else
+ Info("Snapshot button released.\n");
+ }
+ if ((ptr[0] ^ pdev->vmirror) & 0x02) {
+ if (ptr[0] & 0x02)
+ Info("Image is mirrored.\n");
+ else
+ Info("Image is normal.\n");
+ }
+ pdev->vmirror = ptr[0] & 0x03;
+ /* Sometimes the trailer of the 730 is still sent as a 4 byte packet
+ after a short frame; this condition is filtered out specifically. A 4 byte
+ frame doesn't make sense anyway.
+ So we get either this sequence:
+ drop_bit set -> 4 byte frame -> short frame -> good frame
+ Or this one:
+ drop_bit set -> short frame -> good frame
+ So we drop either 3 or 2 frames in total!
+ */
+ if (fbuf->filled == 4)
+ pdev->drop_frames++;
+ }
+
+ /* In case we were instructed to drop the frame, do so silently.
+ The buffer pointers are not updated either (but the counters are reset below).
+ */
+ if (pdev->drop_frames > 0)
+ pdev->drop_frames--;
+ else {
+ /* Check for underflow first */
+ if (fbuf->filled < pdev->frame_total_size) {
+ Trace(TRACE_FLOW, "Frame buffer underflow (%d bytes); discarded.\n", fbuf->filled);
+ pdev->vframes_error++;
+ }
+ else {
+ /* Send only once per EOF */
+ awake = 1; /* delay wake_ups */
+
+ /* Find our next frame to fill. This will always succeed, since we
+ * nick a frame from either empty or full list, but if we had to
+ * take it from the full list, it means a frame got dropped.
+ */
+ if (pwc_next_fill_frame(pdev)) {
+ pdev->vframes_dumped++;
+ if ((pdev->vframe_count > FRAME_LOWMARK) && (pwc_trace & TRACE_FLOW)) {
+ if (pdev->vframes_dumped < 20)
+ Trace(TRACE_FLOW, "Dumping frame %d.\n", pdev->vframe_count);
+ if (pdev->vframes_dumped == 20)
+ Trace(TRACE_FLOW, "Dumping frame %d (last message).\n", pdev->vframe_count);
+ }
+ }
+ fbuf = pdev->fill_frame;
+ }
+ } /* !drop_frames */
+ pdev->vframe_count++;
+ }
+ fbuf->filled = 0;
+ fillptr = fbuf->data;
+ pdev->vsync = 1;
+ } /* .. flen < last_packet_size */
+ pdev->vlast_packet_size = flen;
+ } /* ..status == 0 */
+#if PWC_DEBUG
+ /* This is normally not interesting to the user, unless you are really debugging something */
+ else {
+ static int iso_error = 0;
+ iso_error++;
+ if (iso_error < 20)
+ Trace(TRACE_FLOW, "Iso frame %d of USB has error %d\n", i, fst);
+ }
+#endif
+ }
+
+handler_end:
+ if (awake)
+ wake_up_interruptible(&pdev->frameq);
+
+ urb->dev = pdev->udev;
+ i = usb_submit_urb(urb, GFP_ATOMIC);
+ if (i != 0)
+ Err("Error (%d) re-submitting urb in pwc_isoc_handler.\n", i);
+}
+
+
+static int pwc_isoc_init(struct pwc_device *pdev)
+{
+ struct usb_device *udev;
+ struct urb *urb;
+ int i, j, ret;
+
+ struct usb_interface *intf;
+ struct usb_host_interface *idesc = NULL;
+
+ if (pdev == NULL)
+ return -EFAULT;
+ if (pdev->iso_init)
+ return 0;
+ pdev->vsync = 0;
+ udev = pdev->udev;
+
+ /* Get the current alternate interface, adjust packet size */
+ if (!udev->actconfig)
+ return -EFAULT;
+#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,5)
+ idesc = &udev->actconfig->interface[0]->altsetting[pdev->valternate];
+#else
+ intf = usb_ifnum_to_if(udev, 0);
+ if (intf)
+ idesc = usb_altnum_to_altsetting(intf, pdev->valternate);
+#endif
+
+ if (!idesc)
+ return -EFAULT;
+
+ /* Search video endpoint */
+ pdev->vmax_packet_size = -1;
+ for (i = 0; i < idesc->desc.bNumEndpoints; i++)
+ if ((idesc->endpoint[i].desc.bEndpointAddress & 0xF) == pdev->vendpoint) {
+ pdev->vmax_packet_size = idesc->endpoint[i].desc.wMaxPacketSize;
+ break;
+ }
+
+ if (pdev->vmax_packet_size < 0 || pdev->vmax_packet_size > ISO_MAX_FRAME_SIZE) {
+ Err("Failed to find packet size for video endpoint in current alternate setting.\n");
+ return -ENFILE; /* Odd error, that should be noticeable */
+ }
+
+ /* Set alternate interface */
+ ret = 0;
+ Trace(TRACE_OPEN, "Setting alternate interface %d\n", pdev->valternate);
+ ret = usb_set_interface(pdev->udev, 0, pdev->valternate);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ urb = usb_alloc_urb(ISO_FRAMES_PER_DESC, GFP_KERNEL);
+ if (urb == NULL) {
+ Err("Failed to allocate urb %d\n", i);
+ ret = -ENOMEM;
+ break;
+ }
+ pdev->sbuf[i].urb = urb;
+ Trace(TRACE_MEMORY, "Allocated URB at 0x%p\n", urb);
+ }
+ if (ret) {
+ /* De-allocate in reverse order */
+ while (i >= 0) {
+ if (pdev->sbuf[i].urb != NULL)
+ usb_free_urb(pdev->sbuf[i].urb);
+ pdev->sbuf[i].urb = NULL;
+ i--;
+ }
+ return ret;
+ }
+
+ /* init URB structure */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ urb = pdev->sbuf[i].urb;
+
+ urb->interval = 1; // devik
+ urb->dev = udev;
+ urb->pipe = usb_rcvisocpipe(udev, pdev->vendpoint);
+ urb->transfer_flags = URB_ISO_ASAP;
+ urb->transfer_buffer = pdev->sbuf[i].data;
+ urb->transfer_buffer_length = ISO_BUFFER_SIZE;
+ urb->complete = pwc_isoc_handler;
+ urb->context = pdev;
+ urb->start_frame = 0;
+ urb->number_of_packets = ISO_FRAMES_PER_DESC;
+ for (j = 0; j < ISO_FRAMES_PER_DESC; j++) {
+ urb->iso_frame_desc[j].offset = j * ISO_MAX_FRAME_SIZE;
+ urb->iso_frame_desc[j].length = pdev->vmax_packet_size;
+ }
+ }
+
+ /* link */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ ret = usb_submit_urb(pdev->sbuf[i].urb, GFP_KERNEL);
+ if (ret)
+ Err("isoc_init() submit_urb %d failed with error %d\n", i, ret);
+ else
+ Trace(TRACE_MEMORY, "URB 0x%p submitted.\n", pdev->sbuf[i].urb);
+ }
+
+ /* All is done... */
+ pdev->iso_init = 1;
+ Trace(TRACE_OPEN, "<< pwc_isoc_init()\n");
+ return 0;
+}
+
+static void pwc_isoc_cleanup(struct pwc_device *pdev)
+{
+ int i;
+
+ Trace(TRACE_OPEN, ">> pwc_isoc_cleanup()\n");
+ if (pdev == NULL)
+ return;
+
+ /* Unlinking ISOC buffers one by one */
+ for (i = 0; i < MAX_ISO_BUFS; i++) {
+ struct urb *urb;
+
+ urb = pdev->sbuf[i].urb;
+ if (urb != NULL) {
+ if (pdev->iso_init) {
+ Trace(TRACE_MEMORY, "Unlinking URB %p\n", urb);
+ usb_kill_urb(urb);
+ }
+ Trace(TRACE_MEMORY, "Freeing URB\n");
+ usb_free_urb(urb);
+ pdev->sbuf[i].urb = NULL;
+ }
+ }
+
+ /* Stop camera, but only if we are sure the camera is still there (unplug
+ is signalled by EPIPE)
+ */
+ if (pdev->error_status && pdev->error_status != EPIPE) {
+ Trace(TRACE_OPEN, "Setting alternate interface 0.\n");
+ usb_set_interface(pdev->udev, 0, 0);
+ }
+
+ pdev->iso_init = 0;
+ Trace(TRACE_OPEN, "<< pwc_isoc_cleanup()\n");
+}
+
+int pwc_try_video_mode(struct pwc_device *pdev, int width, int height, int new_fps, int new_compression, int new_snapshot)
+{
+ int ret, start;
+
+ /* Stop isoc stuff */
+ pwc_isoc_cleanup(pdev);
+ /* Reset parameters */
+ pwc_reset_buffers(pdev);
+ /* Try to set video mode... */
+ start = ret = pwc_set_video_mode(pdev, width, height, new_fps, new_compression, new_snapshot);
+ if (ret) {
+ Trace(TRACE_FLOW, "pwc_set_video_mode attempt 1 failed.\n");
+ /* That failed... restore old mode (we know that worked) */
+ start = pwc_set_video_mode(pdev, pdev->view.x, pdev->view.y, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ if (start) {
+ Trace(TRACE_FLOW, "pwc_set_video_mode attempt 2 failed.\n");
+ }
+ }
+ if (start == 0)
+ {
+ if (pwc_isoc_init(pdev) < 0)
+ {
+ Info("Failed to restart ISOC transfers in pwc_try_video_mode.\n");
+ ret = -EAGAIN; /* let's try again, who knows if it works a second time */
+ }
+ }
+ pdev->drop_frames++; /* try to avoid garbage during switch */
+ return ret; /* Return original error code */
+}
+
+
+/***************************************************************************/
+/* Video4Linux functions */
+
+static int pwc_video_open(struct inode *inode, struct file *file)
+{
+ int i;
+ struct video_device *vdev = video_devdata(file);
+ struct pwc_device *pdev;
+
+ Trace(TRACE_OPEN, ">> video_open called(vdev = 0x%p).\n", vdev);
+
+ pdev = (struct pwc_device *)vdev->priv;
+ if (pdev == NULL)
+ BUG();
+ if (pdev->vopen)
+ return -EBUSY;
+
+ down(&pdev->modlock);
+ if (!pdev->usb_init) {
+ Trace(TRACE_OPEN, "Doing first time initialization.\n");
+ pdev->usb_init = 1;
+
+ if (pwc_trace & TRACE_OPEN)
+ {
+ /* Query sensor type */
+ const char *sensor_type = NULL;
+ int ret;
+
+ ret = pwc_get_cmos_sensor(pdev, &i);
+ if (ret >= 0)
+ {
+ switch(i) {
+ case 0x00: sensor_type = "Hyundai CMOS sensor"; break;
+ case 0x20: sensor_type = "Sony CCD sensor + TDA8787"; break;
+ case 0x2E: sensor_type = "Sony CCD sensor + Exas 98L59"; break;
+ case 0x2F: sensor_type = "Sony CCD sensor + ADI 9804"; break;
+ case 0x30: sensor_type = "Sharp CCD sensor + TDA8787"; break;
+ case 0x3E: sensor_type = "Sharp CCD sensor + Exas 98L59"; break;
+ case 0x3F: sensor_type = "Sharp CCD sensor + ADI 9804"; break;
+ case 0x40: sensor_type = "UPA 1021 sensor"; break;
+ case 0x100: sensor_type = "VGA sensor"; break;
+ case 0x101: sensor_type = "PAL MR sensor"; break;
+ default: sensor_type = "unknown type of sensor"; break;
+ }
+ }
+ if (sensor_type != NULL)
+ Info("This %s camera is equipped with a %s (%d).\n", pdev->vdev->name, sensor_type, i);
+ }
+ }
+
+ /* Turn on camera */
+ if (power_save) {
+ i = pwc_camera_power(pdev, 1);
+ if (i < 0)
+ Info("Failed to restore power to the camera! (%d)\n", i);
+ }
+ /* Set LED on/off time */
+ if (pwc_set_leds(pdev, led_on, led_off) < 0)
+ Info("Failed to set LED on/off time.\n");
+
+ pwc_construct(pdev); /* set min/max sizes correct */
+
+ /* So far, so good. Allocate memory. */
+ i = pwc_allocate_buffers(pdev);
+ if (i < 0) {
+ Trace(TRACE_OPEN, "Failed to allocate buffer memory.\n");
+ up(&pdev->modlock);
+ return i;
+ }
+
+ /* Reset buffers & parameters */
+ pwc_reset_buffers(pdev);
+ for (i = 0; i < default_mbufs; i++)
+ pdev->image_used[i] = 0;
+ pdev->vframe_count = 0;
+ pdev->vframes_dumped = 0;
+ pdev->vframes_error = 0;
+ pdev->visoc_errors = 0;
+ pdev->error_status = 0;
+#if PWC_DEBUG
+ pdev->sequence = 0;
+#endif
+
+ /* Set some defaults */
+ pdev->vsnapshot = 0;
+
+ /* Start iso pipe for video; first try the last used video size
+ (or the default one); if that fails try QCIF/10 or QSIF/10;
+ if that fails too, give up.
+ */
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[pdev->vsize].x, pwc_image_sizes[pdev->vsize].y, pdev->vframes, pdev->vcompression, 0);
+ if (i) {
+ Trace(TRACE_OPEN, "First attempt at set_video_mode failed.\n");
+ if (pdev->type == 730 || pdev->type == 740 || pdev->type == 750)
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[PSZ_QSIF].x, pwc_image_sizes[PSZ_QSIF].y, 10, pdev->vcompression, 0);
+ else
+ i = pwc_set_video_mode(pdev, pwc_image_sizes[PSZ_QCIF].x, pwc_image_sizes[PSZ_QCIF].y, 10, pdev->vcompression, 0);
+ }
+ if (i) {
+ Trace(TRACE_OPEN, "Second attempt at set_video_mode failed.\n");
+ up(&pdev->modlock);
+ return i;
+ }
+
+ i = pwc_isoc_init(pdev);
+ if (i) {
+ Trace(TRACE_OPEN, "Failed to init ISOC stuff = %d.\n", i);
+ up(&pdev->modlock);
+ return i;
+ }
+
+ pdev->vopen++;
+ file->private_data = vdev;
+ up(&pdev->modlock);
+ Trace(TRACE_OPEN, "<< video_open() returns 0.\n");
+ return 0;
+}
+
+/* Note that all cleanup is done in the reverse order as in _open */
+static int pwc_video_close(struct inode *inode, struct file *file)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ int i;
+
+ Trace(TRACE_OPEN, ">> video_close called(vdev = 0x%p).\n", vdev);
+
+ pdev = (struct pwc_device *)vdev->priv;
+ if (pdev->vopen == 0)
+ Info("video_close() called on closed device?\n");
+
+ /* Dump statistics, but only if a reasonable amount of frames were
+ processed (to prevent endless log-entries in case of snap-shot
+ programs)
+ */
+ if (pdev->vframe_count > 20)
+ Info("Closing video device: %d frames received, dumped %d frames, %d frames with errors.\n", pdev->vframe_count, pdev->vframes_dumped, pdev->vframes_error);
+
+ switch (pdev->type)
+ {
+ case 675:
+ case 680:
+ case 690:
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ pwc_dec23_exit(); /* Timon & Kiara */
+ break;
+ case 645:
+ case 646:
+ pwc_dec1_exit();
+ break;
+ }
+
+ pwc_isoc_cleanup(pdev);
+ pwc_free_buffers(pdev);
+
+ /* Turn off LEDS and power down camera, but only when not unplugged */
+ if (pdev->error_status != EPIPE) {
+ /* Turn LEDs off */
+ if (pwc_set_leds(pdev, 0, 0) < 0)
+ Info("Failed to set LED on/off time.\n");
+ if (power_save) {
+ i = pwc_camera_power(pdev, 0);
+ if (i < 0)
+ Err("Failed to power down camera (%d)\n", i);
+ }
+ }
+ pdev->vopen = 0;
+ Trace(TRACE_OPEN, "<< video_close()\n");
+ return 0;
+}
+
+/*
+ * FIXME: what about two parallel reads ????
+ * ANSWER: Not supported. You can't open the device more than once,
+ despite what the V4L1 interface says. First, I don't see
+ the need, second there's no mechanism of alerting the
+ 2nd/3rd/... process of events like changing image size.
+ And I don't see the point of blocking that for the
+ 2nd/3rd/... process.
+ In multi-threaded environments reading parallel from any
+ device is tricky anyhow.
+ */
+
+static ssize_t pwc_video_read(struct file *file, char *buf,
+ size_t count, loff_t *ppos)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ int noblock = file->f_flags & O_NONBLOCK;
+ DECLARE_WAITQUEUE(wait, current);
+ int bytes_to_read;
+
+ Trace(TRACE_READ, "video_read(0x%p, %p, %d) called.\n", vdev, buf, count);
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+ if (pdev->error_status)
+ return -pdev->error_status; /* Something happened, report what. */
+
+ /* In case we're doing partial reads, we don't have to wait for a frame */
+ if (pdev->image_read_pos == 0) {
+ /* Do wait queueing according to the (doc)book */
+ add_wait_queue(&pdev->frameq, &wait);
+ while (pdev->full_frames == NULL) {
+ /* Check for unplugged/etc. here */
+ if (pdev->error_status) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -pdev->error_status;
+ }
+ if (noblock) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -EWOULDBLOCK;
+ }
+ if (signal_pending(current)) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -ERESTARTSYS;
+ }
+ schedule();
+ set_current_state(TASK_INTERRUPTIBLE);
+ }
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+
+ /* Decompress and release frame */
+ if (pwc_handle_frame(pdev))
+ return -EFAULT;
+ }
+
+ Trace(TRACE_READ, "Copying data to user space.\n");
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ bytes_to_read = pdev->frame_size;
+ else
+ bytes_to_read = pdev->view.size;
+
+ /* copy bytes to user space; we allow for partial reads */
+ if (count + pdev->image_read_pos > bytes_to_read)
+ count = bytes_to_read - pdev->image_read_pos;
+ if (copy_to_user(buf, pdev->image_ptr[pdev->fill_image] + pdev->image_read_pos, count))
+ return -EFAULT;
+ pdev->image_read_pos += count;
+ if (pdev->image_read_pos >= bytes_to_read) { /* All data has been read */
+ pdev->image_read_pos = 0;
+ pwc_next_image(pdev);
+ }
+ return count;
+}
+
+static unsigned int pwc_video_poll(struct file *file, poll_table *wait)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+
+ poll_wait(file, &pdev->frameq, wait);
+ if (pdev->error_status)
+ return POLLERR;
+ if (pdev->full_frames != NULL) /* we have frames waiting */
+ return (POLLIN | POLLRDNORM);
+
+ return 0;
+}
+
+static int pwc_video_do_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, void *arg)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ DECLARE_WAITQUEUE(wait, current);
+
+ if (vdev == NULL)
+ return -EFAULT;
+ pdev = vdev->priv;
+ if (pdev == NULL)
+ return -EFAULT;
+
+ switch (cmd) {
+ /* Query capabilities */
+ case VIDIOCGCAP:
+ {
+ struct video_capability *caps = arg;
+
+ strcpy(caps->name, vdev->name);
+ caps->type = VID_TYPE_CAPTURE;
+ caps->channels = 1;
+ caps->audios = 1;
+ caps->minwidth = pdev->view_min.x;
+ caps->minheight = pdev->view_min.y;
+ caps->maxwidth = pdev->view_max.x;
+ caps->maxheight = pdev->view_max.y;
+ break;
+ }
+
+ /* Channel functions (simulate 1 channel) */
+ case VIDIOCGCHAN:
+ {
+ struct video_channel *v = arg;
+
+ if (v->channel != 0)
+ return -EINVAL;
+ v->flags = 0;
+ v->tuners = 0;
+ v->type = VIDEO_TYPE_CAMERA;
+ strcpy(v->name, "Webcam");
+ return 0;
+ }
+
+ case VIDIOCSCHAN:
+ {
+ /* The spec says the argument is an integer, but
+ the bttv driver uses a video_channel arg, which
+ makes sense because it also has the norm flag.
+ */
+ struct video_channel *v = arg;
+ if (v->channel != 0)
+ return -EINVAL;
+ return 0;
+ }
+
+
+ /* Picture functions; contrast etc. */
+ case VIDIOCGPICT:
+ {
+ struct video_picture *p = arg;
+ int val;
+
+ val = pwc_get_brightness(pdev);
+ if (val >= 0)
+ p->brightness = val;
+ else
+ p->brightness = 0xffff;
+ val = pwc_get_contrast(pdev);
+ if (val >= 0)
+ p->contrast = val;
+ else
+ p->contrast = 0xffff;
+ /* Gamma, Whiteness, what's the difference? :) */
+ val = pwc_get_gamma(pdev);
+ if (val >= 0)
+ p->whiteness = val;
+ else
+ p->whiteness = 0xffff;
+ val = pwc_get_saturation(pdev);
+ if (val >= 0)
+ p->colour = val;
+ else
+ p->colour = 0xffff;
+ p->depth = 24;
+ p->palette = pdev->vpalette;
+ p->hue = 0xFFFF; /* N/A */
+ break;
+ }
+
+ case VIDIOCSPICT:
+ {
+ struct video_picture *p = arg;
+ /*
+ * FIXME: Suppose we are mid read
+ ANSWER: No problem: the firmware of the camera
+ can handle brightness/contrast/etc
+ changes at _any_ time, and the palette
+ is used exactly once in the uncompress
+ routine.
+ */
+ pwc_set_brightness(pdev, p->brightness);
+ pwc_set_contrast(pdev, p->contrast);
+ pwc_set_gamma(pdev, p->whiteness);
+ pwc_set_saturation(pdev, p->colour);
+ if (p->palette && p->palette != pdev->vpalette) {
+ switch (p->palette) {
+ case VIDEO_PALETTE_YUV420P:
+ case VIDEO_PALETTE_RAW:
+ pdev->vpalette = p->palette;
+ return pwc_try_video_mode(pdev, pdev->image.x, pdev->image.y, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ break;
+ default:
+ return -EINVAL;
+ break;
+ }
+ }
+ break;
+ }
+
+ /* Window/size parameters */
+ case VIDIOCGWIN:
+ {
+ struct video_window *vw = arg;
+
+ vw->x = 0;
+ vw->y = 0;
+ vw->width = pdev->view.x;
+ vw->height = pdev->view.y;
+ vw->chromakey = 0;
+ vw->flags = (pdev->vframes << PWC_FPS_SHIFT) |
+ (pdev->vsnapshot ? PWC_FPS_SNAPSHOT : 0);
+ break;
+ }
+
+ case VIDIOCSWIN:
+ {
+ struct video_window *vw = arg;
+ int fps, snapshot, ret;
+
+ fps = (vw->flags & PWC_FPS_FRMASK) >> PWC_FPS_SHIFT;
+ snapshot = vw->flags & PWC_FPS_SNAPSHOT;
+ if (fps == 0)
+ fps = pdev->vframes;
+ if (pdev->view.x == vw->width && pdev->view.y == vw->height && fps == pdev->vframes && snapshot == pdev->vsnapshot)
+ return 0;
+ ret = pwc_try_video_mode(pdev, vw->width, vw->height, fps, pdev->vcompression, snapshot);
+ if (ret)
+ return ret;
+ break;
+ }
+
+ /* We don't have overlay support (yet) */
+ case VIDIOCGFBUF:
+ {
+ struct video_buffer *vb = arg;
+
+ memset(vb, 0, sizeof(*vb));
+ break;
+ }
+
+ /* mmap() functions */
+ case VIDIOCGMBUF:
+ {
+ /* Tell the user program how much memory is needed for a mmap() */
+ struct video_mbuf *vm = arg;
+ int i;
+
+ memset(vm, 0, sizeof(*vm));
+ vm->size = default_mbufs * pdev->len_per_image;
+ vm->frames = default_mbufs; /* double buffering should be enough for most applications */
+ for (i = 0; i < default_mbufs; i++)
+ vm->offsets[i] = i * pdev->len_per_image;
+ break;
+ }
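+ /* A user-space sketch of the mmap() sequence this supports
+ (illustrative only; error handling omitted, 'fd' is assumed
+ to be the open video device):
+
+ struct video_mbuf vm;
+ unsigned char *base;
+
+ ioctl(fd, VIDIOCGMBUF, &vm);
+ base = mmap(NULL, vm.size, PROT_READ, MAP_SHARED, fd, 0);
+ // image i then lives at base + vm.offsets[i]
+ */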
+
+ case VIDIOCMCAPTURE:
+ {
+ /* Start capture into a given image buffer (called 'frame' in video_mmap structure) */
+ struct video_mmap *vm = arg;
+
+ Trace(TRACE_READ, "VIDIOCMCAPTURE: %dx%d, frame %d, format %d\n", vm->width, vm->height, vm->frame, vm->format);
+ if (vm->frame < 0 || vm->frame >= default_mbufs)
+ return -EINVAL;
+
+ /* xawtv is nasty. It probes the available palettes
+ by setting a very small image size and trying
+ various palettes... The driver doesn't support
+ such small images, so I'm working around it.
+ */
+ if (vm->format)
+ {
+ switch (vm->format)
+ {
+ case VIDEO_PALETTE_YUV420P:
+ case VIDEO_PALETTE_RAW:
+ break;
+ default:
+ return -EINVAL;
+ break;
+ }
+ }
+
+ if ((vm->width != pdev->view.x || vm->height != pdev->view.y) &&
+ (vm->width >= pdev->view_min.x && vm->height >= pdev->view_min.y)) {
+ int ret;
+
+ Trace(TRACE_OPEN, "VIDIOCMCAPTURE: changing size to please xawtv :-(.\n");
+ ret = pwc_try_video_mode(pdev, vm->width, vm->height, pdev->vframes, pdev->vcompression, pdev->vsnapshot);
+ if (ret)
+ return ret;
+ } /* ... size mismatch */
+
+ /* FIXME: should we lock here? */
+ if (pdev->image_used[vm->frame])
+ return -EBUSY; /* buffer wasn't available. Bummer */
+ pdev->image_used[vm->frame] = 1;
+
+ /* Okay, we're done here. In the SYNC call we wait until a
+ frame comes available, then expand image into the given
+ buffer.
+ In contrast to the CPiA cam the Philips cams deliver a
+ constant stream, almost like a grabber card. Also,
+ we have separate buffers for the rawdata and the image,
+ meaning we can nearly always expand into the requested buffer.
+ */
+ Trace(TRACE_READ, "VIDIOCMCAPTURE done.\n");
+ break;
+ }
+
+ case VIDIOCSYNC:
+ {
+ /* The doc says: "Whenever a buffer is used it should
+ call VIDIOCSYNC to free this frame up and continue."
+
+ The only odd thing about this whole procedure is
+ that MCAPTURE flags the buffer as "in use", and
+ SYNC immediately unmarks it, while it isn't until
+ after SYNC that you know that the buffer actually
+ got filled! So you had better not start a CAPTURE into
+ the same frame immediately (use double buffering;
+ a sketch follows after this comment).
+ This is not a problem for this cam, since it has
+ extra intermediate buffers, but a hardware
+ grabber card will then overwrite the buffer
+ you're working on.
+ */
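+ /* A minimal double-buffering sketch from user space (illustrative
+ only; error handling omitted, 'fd', 'width' and 'height' are
+ assumed, and at least 2 mmap() buffers are available):
+
+ struct video_mmap vm;
+ int done;
+
+ vm.width = width;
+ vm.height = height;
+ vm.format = VIDEO_PALETTE_YUV420P;
+ vm.frame = 0;
+ ioctl(fd, VIDIOCMCAPTURE, &vm); // queue buffer 0
+ for (;;) {
+ vm.frame ^= 1;
+ ioctl(fd, VIDIOCMCAPTURE, &vm); // queue the other buffer
+ done = vm.frame ^ 1;
+ ioctl(fd, VIDIOCSYNC, &done); // wait for the earlier one
+ // process the mmap()ed image in buffer 'done' here
+ }
+ */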
+ int *mbuf = arg;
+ int ret;
+
+ Trace(TRACE_READ, "VIDIOCSYNC called (%d).\n", *mbuf);
+
+ /* bounds check */
+ if (*mbuf < 0 || *mbuf >= default_mbufs)
+ return -EINVAL;
+ /* check if this buffer was requested anyway */
+ if (pdev->image_used[*mbuf] == 0)
+ return -EINVAL;
+
+ /* Add ourselves to the frame wait-queue.
+
+ FIXME: needs auditing for safety.
+ QUESTION: In what respect? I think that using the
+ frameq is safe now.
+ */
+ add_wait_queue(&pdev->frameq, &wait);
+ while (pdev->full_frames == NULL) {
+ if (pdev->error_status) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -pdev->error_status;
+ }
+
+ if (signal_pending(current)) {
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+ return -ERESTARTSYS;
+ }
+ schedule();
+ set_current_state(TASK_INTERRUPTIBLE);
+ }
+ remove_wait_queue(&pdev->frameq, &wait);
+ set_current_state(TASK_RUNNING);
+
+ /* The frame is ready. Expand in the image buffer
+ requested by the user. I don't care if you
+ mmap() 5 buffers and request data in this order:
+ buffer 4 2 3 0 1 2 3 0 4 3 1 . . .
+ Grabber hardware may not be so forgiving.
+ */
+ Trace(TRACE_READ, "VIDIOCSYNC: frame ready.\n");
+ pdev->fill_image = *mbuf; /* tell in which buffer we want the image to be expanded */
+ /* Decompress, etc */
+ ret = pwc_handle_frame(pdev);
+ pdev->image_used[*mbuf] = 0;
+ if (ret)
+ return -EFAULT;
+ break;
+ }
+
+ case VIDIOCGAUDIO:
+ {
+ struct video_audio *v = arg;
+
+ strcpy(v->name, "Microphone");
+ v->audio = -1; /* unknown audio minor */
+ v->flags = 0;
+ v->mode = VIDEO_SOUND_MONO;
+ v->volume = 0;
+ v->bass = 0;
+ v->treble = 0;
+ v->balance = 0x8000;
+ v->step = 1;
+ break;
+ }
+
+ case VIDIOCSAUDIO:
+ {
+ /* Dummy: nothing can be set */
+ break;
+ }
+
+ case VIDIOCGUNIT:
+ {
+ struct video_unit *vu = arg;
+
+ vu->video = pdev->vdev->minor & 0x3F;
+ vu->audio = -1; /* not known yet */
+ vu->vbi = -1;
+ vu->radio = -1;
+ vu->teletext = -1;
+ break;
+ }
+ default:
+ return pwc_ioctl(pdev, cmd, arg);
+ } /* ..switch */
+ return 0;
+}
+
+static int pwc_video_ioctl(struct inode *inode, struct file *file,
+ unsigned int cmd, unsigned long arg)
+{
+ return video_usercopy(inode, file, cmd, arg, pwc_video_do_ioctl);
+}
+
+
+static int pwc_video_mmap(struct file *file, struct vm_area_struct *vma)
+{
+ struct video_device *vdev = file->private_data;
+ struct pwc_device *pdev;
+ unsigned long start = vma->vm_start;
+ unsigned long size = vma->vm_end-vma->vm_start;
+ unsigned long page, pos;
+
+ Trace(TRACE_MEMORY, "mmap(0x%p, 0x%lx, %lu) called.\n", vdev, start, size);
+ pdev = vdev->priv;
+
+ vma->vm_flags |= VM_IO;
+
+ pos = (unsigned long)pdev->image_data;
+ while (size > 0) {
+ page = vmalloc_to_pfn((void *)pos);
+ if (remap_pfn_range(vma, start, page, PAGE_SIZE, PAGE_SHARED))
+ return -EAGAIN;
+
+ start += PAGE_SIZE;
+ pos += PAGE_SIZE;
+ if (size > PAGE_SIZE)
+ size -= PAGE_SIZE;
+ else
+ size = 0;
+ }
+
+ return 0;
+}
+
+/***************************************************************************/
+/* USB functions */
+
+/* This function gets called when a new device is plugged in or the usb core
+ * is loaded.
+ */
+
+static int usb_pwc_probe(struct usb_interface *intf, const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct pwc_device *pdev = NULL;
+ int vendor_id, product_id, type_id;
+ int i, hint;
+ int features = 0;
+ int video_nr = -1; /* default: use next available device */
+ char serial_number[30], *name;
+
+ /* Check if we can handle this device */
+ Trace(TRACE_PROBE, "probe() called [%04X %04X], if %d\n",
+ udev->descriptor.idVendor, udev->descriptor.idProduct,
+ intf->altsetting->desc.bInterfaceNumber);
+
+ /* the interfaces are probed one by one. We are only interested in the
+ video interface (0) now.
+ Interface 1 is the Audio Control, and interface 2 Audio itself.
+ */
+ if (intf->altsetting->desc.bInterfaceNumber > 0)
+ return -ENODEV;
+
+ vendor_id = udev->descriptor.idVendor;
+ product_id = udev->descriptor.idProduct;
+
+ if (vendor_id == 0x0471) {
+ switch (product_id) {
+ case 0x0302:
+ Info("Philips PCA645VC USB webcam detected.\n");
+ name = "Philips 645 webcam";
+ type_id = 645;
+ break;
+ case 0x0303:
+ Info("Philips PCA646VC USB webcam detected.\n");
+ name = "Philips 646 webcam";
+ type_id = 646;
+ break;
+ case 0x0304:
+ Info("Askey VC010 type 2 USB webcam detected.\n");
+ name = "Askey VC010 webcam";
+ type_id = 646;
+ break;
+ case 0x0307:
+ Info("Philips PCVC675K (Vesta) USB webcam detected.\n");
+ name = "Philips 675 webcam";
+ type_id = 675;
+ break;
+ case 0x0308:
+ Info("Philips PCVC680K (Vesta Pro) USB webcam detected.\n");
+ name = "Philips 680 webcam";
+ type_id = 680;
+ break;
+ case 0x030C:
+ Info("Philips PCVC690K (Vesta Pro Scan) USB webcam detected.\n");
+ name = "Philips 690 webcam";
+ type_id = 690;
+ break;
+ case 0x0310:
+ Info("Philips PCVC730K (ToUCam Fun)/PCVC830 (ToUCam II) USB webcam detected.\n");
+ name = "Philips 730 webcam";
+ type_id = 730;
+ break;
+ case 0x0311:
+ Info("Philips PCVC740K (ToUCam Pro)/PCVC840 (ToUCam II) USB webcam detected.\n");
+ name = "Philips 740 webcam";
+ type_id = 740;
+ break;
+ case 0x0312:
+ Info("Philips PCVC750K (ToUCam Pro Scan) USB webcam detected.\n");
+ name = "Philips 750 webcam";
+ type_id = 750;
+ break;
+ case 0x0313:
+ Info("Philips PCVC720K/40 (ToUCam XS) USB webcam detected.\n");
+ name = "Philips 720K/40 webcam";
+ type_id = 720;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x069A) {
+ switch(product_id) {
+ case 0x0001:
+ Info("Askey VC010 type 1 USB webcam detected.\n");
+ name = "Askey VC010 webcam";
+ type_id = 645;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x046d) {
+ switch(product_id) {
+ case 0x08b0:
+ Info("Logitech QuickCam Pro 3000 USB webcam detected.\n");
+ name = "Logitech QuickCam Pro 3000";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b1:
+ Info("Logitech QuickCam Notebook Pro USB webcam detected.\n");
+ name = "Logitech QuickCam Notebook Pro";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b2:
+ Info("Logitech QuickCam 4000 Pro USB webcam detected.\n");
+ name = "Logitech QuickCam Pro 4000";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b3:
+ Info("Logitech QuickCam Zoom USB webcam detected.\n");
+ name = "Logitech QuickCam Zoom";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08B4:
+ Info("Logitech QuickCam Zoom (new model) USB webcam detected.\n");
+ name = "Logitech QuickCam Zoom";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x08b5:
+ Info("Logitech QuickCam Orbit/Sphere USB webcam detected.\n");
+ name = "Logitech QuickCam Orbit";
+ type_id = 740; /* CCD sensor */
+ features |= FEATURE_MOTOR_PANTILT;
+ break;
+ case 0x08b6:
+ case 0x08b7:
+ case 0x08b8:
+ Info("Logitech QuickCam detected (reserved ID).\n");
+ name = "Logitech QuickCam (res.)";
+ type_id = 730; /* Assuming CMOS */
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x055d) {
+ /* I don't know the difference between the C10 and the C30;
+ I suppose the difference is the sensor, but both cameras
+ work equally well with a type_id of 675
+ */
+ switch(product_id) {
+ case 0x9000:
+ Info("Samsung MPC-C10 USB webcam detected.\n");
+ name = "Samsung MPC-C10";
+ type_id = 675;
+ break;
+ case 0x9001:
+ Info("Samsung MPC-C30 USB webcam detected.\n");
+ name = "Samsung MPC-C30";
+ type_id = 675;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x041e) {
+ switch(product_id) {
+ case 0x400c:
+ Info("Creative Labs Webcam 5 detected.\n");
+ name = "Creative Labs Webcam 5";
+ type_id = 730;
+ break;
+ case 0x4011:
+ Info("Creative Labs Webcam Pro Ex detected.\n");
+ name = "Creative Labs Webcam Pro Ex";
+ type_id = 740;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x04cc) {
+ switch(product_id) {
+ case 0x8116:
+ Info("Sotec Afina Eye USB webcam detected.\n");
+ name = "Sotec Afina Eye";
+ type_id = 730;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else if (vendor_id == 0x06be) {
+ switch(product_id) {
+ case 0x8116:
+ /* This is essentially the same cam as the Sotec Afina Eye */
+ Info("AME Co. Afina Eye USB webcam detected.\n");
+ name = "AME Co. Afina Eye";
+ type_id = 750;
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+
+ }
+ else if (vendor_id == 0x0d81) {
+ switch(product_id) {
+ case 0x1900:
+ Info("Visionite VCS-UC300 USB webcam detected.\n");
+ name = "Visionite VCS-UC300";
+ type_id = 740; /* CCD sensor */
+ break;
+ case 0x1910:
+ Info("Visionite VCS-UM100 USB webcam detected.\n");
+ name = "Visionite VCS-UM100";
+ type_id = 730; /* CMOS sensor */
+ break;
+ default:
+ return -ENODEV;
+ break;
+ }
+ }
+ else
+ return -ENODEV; /* Not any of the known types; but the list keeps growing. */
+
+ memset(serial_number, 0, 30);
+ usb_string(udev, udev->descriptor.iSerialNumber, serial_number, 29);
+ Trace(TRACE_PROBE, "Device serial number is %s\n", serial_number);
+
+ if (udev->descriptor.bNumConfigurations > 1)
+ Info("Warning: more than 1 configuration available.\n");
+
+ /* Allocate structure, initialize pointers, mutexes, etc. and link it to the usb_device */
+ pdev = kmalloc(sizeof(struct pwc_device), GFP_KERNEL);
+ if (pdev == NULL) {
+ Err("Oops, could not allocate memory for pwc_device.\n");
+ return -ENOMEM;
+ }
+ memset(pdev, 0, sizeof(struct pwc_device));
+ pdev->type = type_id;
+ pdev->vsize = default_size;
+ pdev->vframes = default_fps;
+ strcpy(pdev->serial, serial_number);
+ pdev->features = features;
+ if (vendor_id == 0x046D && product_id == 0x08B5)
+ {
+ /* Logitech QuickCam Orbit
+ The ranges have been determined experimentally; they may differ from cam to cam.
+ Also, the exact ranges left-right and up-down are different for my cam
+ */
+ pdev->angle_range.pan_min = -7000;
+ pdev->angle_range.pan_max = 7000;
+ pdev->angle_range.tilt_min = -3000;
+ pdev->angle_range.tilt_max = 2500;
+ }
+
+ init_MUTEX(&pdev->modlock);
+ spin_lock_init(&pdev->ptrlock);
+
+ pdev->udev = udev;
+ init_waitqueue_head(&pdev->frameq);
+ pdev->vcompression = pwc_preferred_compression;
+
+ /* Allocate video_device structure */
+ pdev->vdev = video_device_alloc();
+ if (pdev->vdev == NULL)
+ {
+ Err("Oops, cannot allocate video_device structure; failing probe.\n");
+ kfree(pdev);
+ return -ENOMEM;
+ }
+ memcpy(pdev->vdev, &pwc_template, sizeof(pwc_template));
+ strcpy(pdev->vdev->name, name);
+ pdev->vdev->owner = THIS_MODULE;
+ video_set_drvdata(pdev->vdev, pdev);
+
+ pdev->release = udev->descriptor.bcdDevice;
+ Trace(TRACE_PROBE, "Release: %04x\n", pdev->release);
+
+ /* Now search device_hint[] table for a match, so we can hint a node number. */
+ for (hint = 0; hint < MAX_DEV_HINTS; hint++) {
+ if (((device_hint[hint].type == -1) || (device_hint[hint].type == pdev->type)) &&
+ (device_hint[hint].pdev == NULL)) {
+ /* so far, so good... try serial number */
+ if ((device_hint[hint].serial_number[0] == '*') || !strcmp(device_hint[hint].serial_number, serial_number)) {
+ /* match! */
+ video_nr = device_hint[hint].device_node;
+ Trace(TRACE_PROBE, "Found hint, will try to register as /dev/video%d\n", video_nr);
+ break;
+ }
+ }
+ }
+
+ pdev->vdev->release = video_device_release;
+ i = video_register_device(pdev->vdev, VFL_TYPE_GRABBER, video_nr);
+ if (i < 0) {
+ Err("Failed to register as video device (%d).\n", i);
+ video_device_release(pdev->vdev); /* Drip... drip... drip... */
+ kfree(pdev); /* Oops, no memory leaks please */
+ return -EIO;
+ }
+ else {
+ Info("Registered as /dev/video%d.\n", pdev->vdev->minor & 0x3F);
+ }
+
+ /* occupy slot */
+ if (hint < MAX_DEV_HINTS)
+ device_hint[hint].pdev = pdev;
+
+ Trace(TRACE_PROBE, "probe() function returning struct at 0x%p.\n", pdev);
+ usb_set_intfdata (intf, pdev);
+ return 0;
+}
+
+/* The user yanked out the cable... */
+static void usb_pwc_disconnect(struct usb_interface *intf)
+{
+ struct pwc_device *pdev;
+ int hint;
+
+ lock_kernel();
+ pdev = usb_get_intfdata (intf);
+ usb_set_intfdata (intf, NULL);
+ if (pdev == NULL) {
+ Err("pwc_disconnect() Called without private pointer.\n");
+ goto disconnect_out;
+ }
+ if (pdev->udev == NULL) {
+ Err("pwc_disconnect() already called for %p\n", pdev);
+ goto disconnect_out;
+ }
+ if (pdev->udev != interface_to_usbdev(intf)) {
+ Err("pwc_disconnect() Woops: pointer mismatch udev/pdev.\n");
+ goto disconnect_out;
+ }
+#ifdef PWC_MAGIC
+ if (pdev->magic != PWC_MAGIC) {
+ Err("pwc_disconnect() Magic number failed. Consult your scrolls and try again.\n");
+ goto disconnect_out;
+ }
+#endif
+
+ /* We got unplugged; this is signalled by an EPIPE error code */
+ if (pdev->vopen) {
+ Info("Disconnected while webcam is in use!\n");
+ pdev->error_status = EPIPE;
+ }
+
+ /* Alert waiting processes */
+ wake_up_interruptible(&pdev->frameq);
+ /* Wait until device is closed */
+ while (pdev->vopen)
+ schedule();
+ /* Device is now closed, so we can safely unregister it */
+ Trace(TRACE_PROBE, "Unregistering video device in disconnect().\n");
+ video_unregister_device(pdev->vdev);
+
+ /* Free memory (don't set pdev to 0 just yet) */
+ kfree(pdev);
+
+disconnect_out:
+ /* search device_hint[] table if we occupy a slot, by any chance */
+ for (hint = 0; hint < MAX_DEV_HINTS; hint++)
+ if (device_hint[hint].pdev == pdev)
+ device_hint[hint].pdev = NULL;
+
+ unlock_kernel();
+}
+
+
+/* *grunt* We have to do atoi ourselves :-( */
+static int pwc_atoi(const char *s)
+{
+ int k = 0;
+
+ while (*s != '\0' && *s >= '0' && *s <= '9') {
+ k = 10 * k + (*s - '0');
+ s++;
+ }
+ return k;
+}
+
+
+/*
+ * Initialization code & module stuff
+ */
+
+static char *size = NULL;
+static int fps = 0;
+static int fbufs = 0;
+static int mbufs = 0;
+static int trace = -1;
+static int compression = -1;
+static int leds[2] = { -1, -1 };
+static char *dev_hint[MAX_DEV_HINTS] = { };
+
+MODULE_PARM(size, "s");
+MODULE_PARM_DESC(size, "Initial image size. One of sqcif, qsif, qcif, sif, cif, vga");
+MODULE_PARM(fps, "i");
+MODULE_PARM_DESC(fps, "Initial frames per second. Varies with model, useful range 5-30");
+MODULE_PARM(fbufs, "i");
+MODULE_PARM_DESC(fbufs, "Number of internal frame buffers to reserve");
+MODULE_PARM(mbufs, "i");
+MODULE_PARM_DESC(mbufs, "Number of external (mmap()ed) image buffers");
+MODULE_PARM(trace, "i");
+MODULE_PARM_DESC(trace, "For debugging purposes");
+MODULE_PARM(power_save, "i");
+MODULE_PARM_DESC(power_save, "Turn power save feature in camera on or off");
+MODULE_PARM(compression, "i");
+MODULE_PARM_DESC(compression, "Preferred compression quality. Range 0 (uncompressed) to 3 (high compression)");
+MODULE_PARM(leds, "2i");
+MODULE_PARM_DESC(leds, "LED on,off time in milliseconds");
+MODULE_PARM(dev_hint, "0-20s");
+MODULE_PARM_DESC(dev_hint, "Device node hints");
+
+MODULE_DESCRIPTION("Philips & OEM USB webcam driver");
+MODULE_AUTHOR("Luc Saillard <luc@saillard.org>");
+MODULE_LICENSE("GPL");
+
+static int __init usb_pwc_init(void)
+{
+ int i, sz;
+ char *sizenames[PSZ_MAX] = { "sqcif", "qsif", "qcif", "sif", "cif", "vga" };
+
+ Info("Philips webcam module version " PWC_VERSION " loaded.\n");
+ Info("Supports Philips PCA645/646, PCVC675/680/690, PCVC720[40]/730/740/750 & PCVC830/840.\n");
+ Info("Also supports the Askey VC010, various Logitech Quickcams, Samsung MPC-C10 and MPC-C30,\n");
+ Info("the Creative WebCam 5 & Pro Ex, SOTEC Afina Eye and Visionite VCS-UC300 and VCS-UM100.\n");
+
+ if (fps) {
+ if (fps < 4 || fps > 30) {
+ Err("Framerate out of bounds (4-30).\n");
+ return -EINVAL;
+ }
+ default_fps = fps;
+ Info("Default framerate set to %d.\n", default_fps);
+ }
+
+ if (size) {
+ /* string; try matching with array */
+ for (sz = 0; sz < PSZ_MAX; sz++) {
+ if (!strcmp(sizenames[sz], size)) { /* Found! */
+ default_size = sz;
+ break;
+ }
+ }
+ if (sz == PSZ_MAX) {
+ Err("Size not recognized; try size=[sqcif | qsif | qcif | sif | cif | vga].\n");
+ return -EINVAL;
+ }
+ Info("Default image size set to %s [%dx%d].\n", sizenames[default_size], pwc_image_sizes[default_size].x, pwc_image_sizes[default_size].y);
+ }
+ if (mbufs) {
+ if (mbufs < 1 || mbufs > MAX_IMAGES) {
+ Err("Illegal number of mmap() buffers; use a number between 1 and %d.\n", MAX_IMAGES);
+ return -EINVAL;
+ }
+ default_mbufs = mbufs;
+ Info("Number of image buffers set to %d.\n", default_mbufs);
+ }
+ if (fbufs) {
+ if (fbufs < 2 || fbufs > MAX_FRAMES) {
+ Err("Illegal number of frame buffers; use a number between 2 and %d.\n", MAX_FRAMES);
+ return -EINVAL;
+ }
+ default_fbufs = fbufs;
+ Info("Number of frame buffers set to %d.\n", default_fbufs);
+ }
+ if (trace >= 0) {
+ Info("Trace options: 0x%04x\n", trace);
+ pwc_trace = trace;
+ }
+ if (compression >= 0) {
+ if (compression > 3) {
+ Err("Invalid compression setting; use a number between 0 (uncompressed) and 3 (high).\n");
+ return -EINVAL;
+ }
+ pwc_preferred_compression = compression;
+ Info("Preferred compression set to %d.\n", pwc_preferred_compression);
+ }
+ if (power_save)
+ Info("Enabling power save on open/close.\n");
+ if (leds[0] >= 0)
+ led_on = leds[0];
+ if (leds[1] >= 0)
+ led_off = leds[1];
+
+ /* Big device node whoopla. Basically, it allows you to assign a
+ device node (/dev/videoX) to a camera, based on its type
+ & serial number. The format is [type[.serialnumber]:]node.
+
+ Any camera that isn't matched by these rules gets the next
+ available free device node. (A few illustrative examples
+ follow below.)
+ */
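+ /* Hypothetical examples of the hint format (values made up):
+ dev_hint=3 -> first camera found gets /dev/video3
+ dev_hint=645:1 -> a type 645 camera gets /dev/video1
+ dev_hint=740.0123456789abcdef:2 -> the type 740 camera with
+ that serial number gets /dev/video2
+ */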
+ for (i = 0; i < MAX_DEV_HINTS; i++) {
+ char *s, *colon, *dot;
+
+ /* This loop also initializes the array */
+ device_hint[i].pdev = NULL;
+ s = dev_hint[i];
+ if (s != NULL && *s != '\0') {
+ device_hint[i].type = -1; /* wildcard */
+ strcpy(device_hint[i].serial_number, "*");
+
+ /* parse string: chop at ':' & '.' */
+ colon = dot = s;
+ while (*colon != '\0' && *colon != ':')
+ colon++;
+ while (*dot != '\0' && *dot != '.')
+ dot++;
+ /* Few sanity checks */
+ if (*dot != '\0' && dot > colon) {
+ Err("Malformed camera hint: the colon must be after the dot.\n");
+ return -EINVAL;
+ }
+
+ if (*colon == '\0') {
+ /* No colon */
+ if (*dot != '\0') {
+ Err("Malformed camera hint: no colon + device node given.\n");
+ return -EINVAL;
+ }
+ else {
+ /* No type or serial number specified, just a number. */
+ device_hint[i].device_node = pwc_atoi(s);
+ }
+ }
+ else {
+ /* There's a colon, so we have at least a type and a device node */
+ device_hint[i].type = pwc_atoi(s);
+ device_hint[i].device_node = pwc_atoi(colon + 1);
+ if (*dot != '\0') {
+ /* There's a serial number as well */
+ int k;
+
+ dot++;
+ k = 0;
+ while (*dot != ':' && k < 29) {
+ device_hint[i].serial_number[k++] = *dot;
+ dot++;
+ }
+ device_hint[i].serial_number[k] = '\0';
+ }
+ }
+#if PWC_DEBUG
+ Debug("device_hint[%d]:\n", i);
+ Debug(" type : %d\n", device_hint[i].type);
+ Debug(" serial# : %s\n", device_hint[i].serial_number);
+ Debug(" node : %d\n", device_hint[i].device_node);
+#endif
+ }
+ else
+ device_hint[i].type = 0; /* not filled */
+ } /* ..for MAX_DEV_HINTS */
+
+ Trace(TRACE_PROBE, "Registering driver at address 0x%p.\n", &pwc_driver);
+ return usb_register(&pwc_driver);
+}
+
+static void __exit usb_pwc_exit(void)
+{
+ Trace(TRACE_MODULE, "Deregistering driver.\n");
+ usb_deregister(&pwc_driver);
+ Info("Philips webcam module removed.\n");
+}
+
+module_init(usb_pwc_init);
+module_exit(usb_pwc_exit);
+
--- /dev/null
+#ifndef PWC_IOCTL_H
+#define PWC_IOCTL_H
+
+/* (C) 2001-2004 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pcwx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* This is pwc-ioctl.h belonging to PWC 8.12.1
+ It contains structures and defines to communicate from user space
+ directly to the driver.
+ */
+
+/*
+ Changes
+ 2001/08/03 Alvarado Added ioctl constants to access methods for
+ changing white balance and red/blue gains
+ 2002/12/15 G. H. Fernandez-Toribio VIDIOCGREALSIZE
+ 2003/12/13 Nemosoft Unv. Some modifications to make interfacing to
+ PWCX easier
+ */
+
+/* These are private ioctl() commands, specific for the Philips webcams.
+ They contain functions not found in other webcams, and settings not
+ specified in the Video4Linux API.
+
+ The #define names are built up as follows:
+ VIDIOC VIDeo IOCtl prefix
+ PWC Philips WebCam
+ G optional: Get
+ S optional: Set
+ ... the function
+ */
+
+
+ /* Enumeration of image sizes */
+#define PSZ_SQCIF 0x00
+#define PSZ_QSIF 0x01
+#define PSZ_QCIF 0x02
+#define PSZ_SIF 0x03
+#define PSZ_CIF 0x04
+#define PSZ_VGA 0x05
+#define PSZ_MAX 6
+
+
+/* The frame rate is encoded in the video_window.flags parameter using
+ the upper 16 bits, since the lower bits are already taken by other flags.
+ The following defines provide a mask and shift to filter out this value;
+ a short usage sketch follows the defines below.
+
+ In 'Snapshot' mode the camera freezes its automatic exposure and colour
+ balance controls.
+ */
+#define PWC_FPS_SHIFT 16
+#define PWC_FPS_MASK 0x00FF0000
+#define PWC_FPS_FRMASK 0x003F0000
+#define PWC_FPS_SNAPSHOT 0x00400000
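+ /* For example (a sketch; 'vwin' is assumed to be a struct
+ video_window previously obtained with VIDIOCGWIN):
+
+ vwin.flags &= ~PWC_FPS_FRMASK;
+ vwin.flags |= (15 << PWC_FPS_SHIFT) | PWC_FPS_SNAPSHOT;
+ ioctl(fd, VIDIOCSWIN, &vwin); // request 15 fps, snapshot mode
+
+ and to read the current frame rate back:
+
+ fps = (vwin.flags & PWC_FPS_FRMASK) >> PWC_FPS_SHIFT;
+ */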
+
+
+/* structure for transferring x & y coordinates */
+struct pwc_coord
+{
+ int x, y; /* guess what */
+ int size; /* size, or offset */
+};
+
+
+/* Used with VIDIOCPWCPROBE */
+struct pwc_probe
+{
+ char name[32];
+ int type;
+};
+
+struct pwc_serial
+{
+ char serial[30]; /* String with serial number. Contains terminating 0 */
+};
+
+/* pwc_whitebalance.mode values */
+#define PWC_WB_INDOOR 0
+#define PWC_WB_OUTDOOR 1
+#define PWC_WB_FL 2
+#define PWC_WB_MANUAL 3
+#define PWC_WB_AUTO 4
+
+/* Used with VIDIOCPWC[SG]AWB (Auto White Balance).
+ Set mode to one of the PWC_WB_* values above.
+ *red and *blue are the respective gains of these colour components inside
+ the camera; range 0..65535
+ When 'mode' == PWC_WB_MANUAL, 'manual_red' and 'manual_blue' are set or read;
+ otherwise undefined.
+ 'read_red' and 'read_blue' are read-only.
+*/
+struct pwc_whitebalance
+{
+ int mode;
+ int manual_red, manual_blue; /* R/W */
+ int read_red, read_blue; /* R/O */
+};
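+/* Usage sketch (user space; the gain values are made-up examples):
+
+ struct pwc_whitebalance wb;
+
+ memset(&wb, 0, sizeof(wb));
+ wb.mode = PWC_WB_MANUAL;
+ wb.manual_red = 0x8000; // mid-range gains
+ wb.manual_blue = 0x8000;
+ ioctl(fd, VIDIOCPWCSAWB, &wb);
+*/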
+
+/*
+ 'control_speed' and 'control_delay' are used in automatic whitebalance mode,
+ and tell the camera how fast it should react to changes in lighting, and
+ with how much delay. Valid values are 0..65535.
+*/
+struct pwc_wb_speed
+{
+ int control_speed;
+ int control_delay;
+};
+
+/* Used with VIDIOCPWC[SG]LED */
+struct pwc_leds
+{
+ int led_on; /* Led on-time; range = 0..25000 */
+ int led_off; /* Led off-time; range = 0..25000 */
+};
+
+/* Image size (used with GREALSIZE) */
+struct pwc_imagesize
+{
+ int width;
+ int height;
+};
+
+/* Defines and structures for Motorized Pan & Tilt */
+#define PWC_MPT_PAN 0x01
+#define PWC_MPT_TILT 0x02
+#define PWC_MPT_TIMEOUT 0x04 /* for status */
+
+/* Set angles; when absolute != 0, the angle is absolute and the
+ driver calculates the relative offset for you. This can only
+ be used with VIDIOCPWCSANGLE; VIDIOCPWCGANGLE always returns
+ absolute angles.
+ */
+struct pwc_mpt_angles
+{
+ int absolute; /* write-only */
+ int pan; /* degrees * 100 */
+ int tilt; /* degrees * 100 */
+};
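+/* Sketch: pan 10 degrees and tilt 5 degrees in absolute terms
+ (assuming positive means right/up; the exact ranges and directions
+ differ per camera, see pwc_mpt_range below):
+
+ struct pwc_mpt_angles a;
+
+ a.absolute = 1;
+ a.pan = 10 * 100; // degrees * 100
+ a.tilt = 5 * 100;
+ ioctl(fd, VIDIOCPWCMPTSANGLE, &a);
+*/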
+
+/* Range of angles of the camera, both horizontally and vertically.
+ */
+struct pwc_mpt_range
+{
+ int pan_min, pan_max; /* degrees * 100 */
+ int tilt_min, tilt_max;
+};
+
+struct pwc_mpt_status
+{
+ int status;
+ int time_pan;
+ int time_tilt;
+};
+
+
+/* This is used for out-of-kernel decompression. With it, you can get
+ all the necessary information to initialize and use the decompressor
+ routines in standalone applications.
+ */
+struct pwc_video_command
+{
+ int type; /* camera type (645, 675, 730, etc.) */
+ int release; /* release number */
+
+ int size; /* one of PSZ_* */
+ int alternate;
+ int command_len; /* length of USB video command */
+ unsigned char command_buf[13]; /* Actual USB video command */
+ int bandlength; /* >0 = compressed */
+ int frame_size; /* Size of one (un)compressed frame */
+};
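+/* Usage sketch (see VIDIOCPWCGVIDCMD below; error handling omitted):
+
+ struct pwc_video_command vcmd;
+
+ if (ioctl(fd, VIDIOCPWCGVIDCMD, &vcmd) == 0 && vcmd.bandlength > 0)
+ printf("compressed mode, frame size %d\n", vcmd.frame_size);
+*/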
+
+/* Flags for PWCX subroutines. Not all modules honour all flags. */
+#define PWCX_FLAG_PLANAR 0x0001
+#define PWCX_FLAG_BAYER 0x0008
+
+
+/* IOCTL definitions */
+
+ /* Restore user settings */
+#define VIDIOCPWCRUSER _IO('v', 192)
+ /* Save user settings */
+#define VIDIOCPWCSUSER _IO('v', 193)
+ /* Restore factory settings */
+#define VIDIOCPWCFACTORY _IO('v', 194)
+
+ /* You can manipulate the compression factor. A compression preference of 0
+ means use uncompressed modes when available; 1 is low compression, 2 is
+ medium and 3 is high compression preferred. Of course, the higher the
+ compression, the lower the bandwidth used but more chance of artefacts
+ in the image. The driver automatically chooses a higher compression when
+ the preferred mode is not available.
+ */
+ /* Set preferred compression quality (0 = uncompressed, 3 = highest compression) */
+#define VIDIOCPWCSCQUAL _IOW('v', 195, int)
+ /* Get preferred compression quality */
+#define VIDIOCPWCGCQUAL _IOR('v', 195, int)
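+ /* For example (sketch): prefer medium compression:
+
+ int q = 2;
+ ioctl(fd, VIDIOCPWCSCQUAL, &q);
+ */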
+
+
+/* Retrieve serial number of camera */
+#define VIDIOCPWCGSERIAL _IOR('v', 198, struct pwc_serial)
+
+ /* This is a probe function; since so many devices are supported, it
+ becomes difficult to include all the names in programs that want to
+ check for the enhanced Philips stuff. So instead, try this PROBE;
+ it returns a structure with the original name, and the corresponding
+ Philips type.
+ To use, fill the structure with zeroes, call PROBE and if that succeeds,
+ compare the name with that returned from VIDIOCGCAP; they should be the
+ same. If so, you can be assured it is a Philips (OEM) cam and the type
+ is valid.
+ */
+#define VIDIOCPWCPROBE _IOR('v', 199, struct pwc_probe)
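+ /* In code, the probe sequence described above might look like this
+ (a sketch; error handling omitted, 'fd' is assumed):
+
+ struct pwc_probe probe;
+ struct video_capability caps;
+
+ memset(&probe, 0, sizeof(probe));
+ if (ioctl(fd, VIDIOCPWCPROBE, &probe) == 0 &&
+ ioctl(fd, VIDIOCGCAP, &caps) == 0 &&
+ strcmp(probe.name, caps.name) == 0)
+ printf("Philips (OEM) cam, type %d\n", probe.type);
+ */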
+
+ /* Set AGC (Automatic Gain Control); int < 0 = auto, 0..65535 = fixed */
+#define VIDIOCPWCSAGC _IOW('v', 200, int)
+ /* Get AGC; int < 0 = auto; >= 0 = fixed, range 0..65535 */
+#define VIDIOCPWCGAGC _IOR('v', 200, int)
+ /* Set shutter speed; int < 0 = auto; >= 0 = fixed, range 0..65535 */
+#define VIDIOCPWCSSHUTTER _IOW('v', 201, int)
+
+ /* Color compensation (Auto White Balance) */
+#define VIDIOCPWCSAWB _IOW('v', 202, struct pwc_whitebalance)
+#define VIDIOCPWCGAWB _IOR('v', 202, struct pwc_whitebalance)
+
+ /* Auto WB speed */
+#define VIDIOCPWCSAWBSPEED _IOW('v', 203, struct pwc_wb_speed)
+#define VIDIOCPWCGAWBSPEED _IOR('v', 203, struct pwc_wb_speed)
+
+ /* LEDs on/off/blink; int range 0..65535 */
+#define VIDIOCPWCSLED _IOW('v', 205, struct pwc_leds)
+#define VIDIOCPWCGLED _IOR('v', 205, struct pwc_leds)
+
+ /* Contour (sharpness); int < 0 = auto, 0..65535 = fixed */
+#define VIDIOCPWCSCONTOUR _IOW('v', 206, int)
+#define VIDIOCPWCGCONTOUR _IOR('v', 206, int)
+
+ /* Backlight compensation; 0 = off, otherwise on */
+#define VIDIOCPWCSBACKLIGHT _IOW('v', 207, int)
+#define VIDIOCPWCGBACKLIGHT _IOR('v', 207, int)
+
+ /* Flickerless mode; 0 = off, otherwise on */
+#define VIDIOCPWCSFLICKER _IOW('v', 208, int)
+#define VIDIOCPWCGFLICKER _IOR('v', 208, int)
+
+ /* Dynamic noise reduction; 0 = off, 3 = high noise reduction */
+#define VIDIOCPWCSDYNNOISE _IOW('v', 209, int)
+#define VIDIOCPWCGDYNNOISE _IOR('v', 209, int)
+
+ /* Real image size as used by the camera; tells you whether or not there's a gray border around the image */
+#define VIDIOCPWCGREALSIZE _IOR('v', 210, struct pwc_imagesize)
+
+ /* Motorized pan & tilt functions */
+#define VIDIOCPWCMPTRESET _IOW('v', 211, int)
+#define VIDIOCPWCMPTGRANGE _IOR('v', 211, struct pwc_mpt_range)
+#define VIDIOCPWCMPTSANGLE _IOW('v', 212, struct pwc_mpt_angles)
+#define VIDIOCPWCMPTGANGLE _IOR('v', 212, struct pwc_mpt_angles)
+#define VIDIOCPWCMPTSTATUS _IOR('v', 213, struct pwc_mpt_status)
+
+ /* Get the USB set-video command; needed for initializing libpwcx */
+#define VIDIOCPWCGVIDCMD _IOR('v', 215, struct pwc_video_command)
+struct pwc_table_init_buffer {
+ int len;
+ char *buffer;
+};
+#define VIDIOCPWCGVIDTABLE _IOR('v', 216, struct pwc_table_init_buffer)
+
+#endif
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pcwx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+
+/* This table contains entries for the 730/740/750 (Kiara) camera, with
+ 4 different qualities (no compression, low, medium, high).
+ It lists the bandwidth requirements for said mode by its alternate interface
+ number. An alternate of 0 means that the mode is unavailable.
+
+ There are 6 * 6 * 4 entries:
+ 6 different resolutions sqcif, qsif, qcif, sif, cif, vga
+ 6 framerates: 5, 10, 15, 20, 25, 30
+ 4 compression modes: none, low, medium, high
+
+ When an uncompressed mode is not available, the next available compressed mode
+ will be chosen (unless the decompressor is absent). Sometimes there are only
+ 1 or 2 compressed modes available; in that case entries are duplicated.
+*/
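+
+/* Indexing sketch (an assumption, but it matches the layout described
+ above): an entry would be selected roughly as
+ Kiara_table[size][(fps / 5) - 1][compression],
+ so e.g. QSIF at 20 fps with medium compression would be
+ Kiara_table[PSZ_QSIF][3][2].
+*/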
+
+
+#include "pwc-kiara.h"
+#include "pwc-uncompress.h"
+
+const struct Kiara_table_entry Kiara_table[PSZ_MAX][6][4] =
+{
+ /* SQCIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* QSIF */
+ {
+ /* 5 fps */
+ {
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ {1, 146, 0, {0x1D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0x00, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {2, 291, 0, {0x1C, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x23, 0x01, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ {1, 192, 630, {0x14, 0xF4, 0x30, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xC0, 0x00, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {3, 437, 0, {0x1B, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xB5, 0x01, 0x80}},
+ {2, 292, 640, {0x13, 0xF4, 0x30, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x20, 0x24, 0x01, 0x80}},
+ {2, 292, 640, {0x13, 0xF4, 0x30, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x20, 0x24, 0x01, 0x80}},
+ {1, 192, 420, {0x13, 0xF4, 0x30, 0x0D, 0x1B, 0x0C, 0x53, 0x1E, 0x18, 0xC0, 0x00, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {4, 589, 0, {0x1A, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x4D, 0x02, 0x80}},
+ {3, 448, 730, {0x12, 0xF4, 0x30, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x18, 0xC0, 0x01, 0x80}},
+ {2, 292, 476, {0x12, 0xF4, 0x30, 0x0E, 0xD8, 0x0E, 0x10, 0x19, 0x18, 0x24, 0x01, 0x80}},
+ {1, 192, 312, {0x12, 0xF4, 0x50, 0x09, 0xB3, 0x08, 0xEB, 0x1E, 0x18, 0xC0, 0x00, 0x80}},
+ },
+ /* 25 fps */
+ {
+ {5, 703, 0, {0x19, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xBF, 0x02, 0x80}},
+ {3, 447, 610, {0x11, 0xF4, 0x30, 0x13, 0x0B, 0x12, 0x43, 0x14, 0x28, 0xBF, 0x01, 0x80}},
+ {2, 292, 398, {0x11, 0xF4, 0x50, 0x0C, 0x6C, 0x0B, 0xA4, 0x1E, 0x28, 0x24, 0x01, 0x80}},
+ {1, 193, 262, {0x11, 0xF4, 0x50, 0x08, 0x23, 0x07, 0x5B, 0x1E, 0x28, 0xC1, 0x00, 0x80}},
+ },
+ /* 30 fps */
+ {
+ {8, 874, 0, {0x18, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x6A, 0x03, 0x80}},
+ {5, 704, 730, {0x10, 0xF4, 0x30, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x28, 0xC0, 0x02, 0x80}},
+ {3, 448, 492, {0x10, 0xF4, 0x30, 0x0F, 0x5D, 0x0E, 0x95, 0x15, 0x28, 0xC0, 0x01, 0x80}},
+ {2, 292, 320, {0x10, 0xF4, 0x50, 0x09, 0xFB, 0x09, 0x33, 0x1E, 0x28, 0x24, 0x01, 0x80}},
+ },
+ },
+ /* QCIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* SIF */
+ {
+ /* 5 fps */
+ {
+ {4, 582, 0, {0x0D, 0xF4, 0x30, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x46, 0x02, 0x80}},
+ {3, 387, 1276, {0x05, 0xF4, 0x30, 0x27, 0xD8, 0x26, 0x48, 0x03, 0x10, 0x83, 0x01, 0x80}},
+ {2, 291, 960, {0x05, 0xF4, 0x30, 0x1D, 0xF2, 0x1C, 0x62, 0x04, 0x10, 0x23, 0x01, 0x80}},
+ {1, 191, 630, {0x05, 0xF4, 0x50, 0x13, 0xA9, 0x12, 0x19, 0x05, 0x18, 0xBF, 0x00, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {6, 775, 1278, {0x04, 0xF4, 0x30, 0x27, 0xE8, 0x26, 0x58, 0x05, 0x30, 0x07, 0x03, 0x80}},
+ {3, 447, 736, {0x04, 0xF4, 0x30, 0x16, 0xFB, 0x15, 0x6B, 0x05, 0x28, 0xBF, 0x01, 0x80}},
+ {2, 292, 480, {0x04, 0xF4, 0x70, 0x0E, 0xF9, 0x0D, 0x69, 0x09, 0x28, 0x24, 0x01, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 955, 1050, {0x03, 0xF4, 0x30, 0x20, 0xCF, 0x1F, 0x3F, 0x06, 0x48, 0xBB, 0x03, 0x80}},
+ {4, 592, 650, {0x03, 0xF4, 0x30, 0x14, 0x44, 0x12, 0xB4, 0x08, 0x30, 0x50, 0x02, 0x80}},
+ {3, 448, 492, {0x03, 0xF4, 0x50, 0x0F, 0x52, 0x0D, 0xC2, 0x09, 0x38, 0xC0, 0x01, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 958, 782, {0x02, 0xF4, 0x30, 0x18, 0x6A, 0x16, 0xDA, 0x0B, 0x58, 0xBE, 0x03, 0x80}},
+ {5, 703, 574, {0x02, 0xF4, 0x50, 0x11, 0xE7, 0x10, 0x57, 0x0B, 0x40, 0xBF, 0x02, 0x80}},
+ {3, 446, 364, {0x02, 0xF4, 0x90, 0x0B, 0x5C, 0x09, 0xCC, 0x0E, 0x38, 0xBE, 0x01, 0x80}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 958, 654, {0x01, 0xF4, 0x30, 0x14, 0x66, 0x12, 0xD6, 0x0B, 0x50, 0xBE, 0x03, 0x80}},
+ {6, 776, 530, {0x01, 0xF4, 0x50, 0x10, 0x8C, 0x0E, 0xFC, 0x0C, 0x48, 0x08, 0x03, 0x80}},
+ {4, 592, 404, {0x01, 0xF4, 0x70, 0x0C, 0x96, 0x0B, 0x06, 0x0B, 0x48, 0x50, 0x02, 0x80}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x00, 0xF4, 0x50, 0x10, 0x68, 0x0E, 0xD8, 0x0D, 0x58, 0xBD, 0x03, 0x80}},
+ {6, 775, 426, {0x00, 0xF4, 0x70, 0x0D, 0x48, 0x0B, 0xB8, 0x0F, 0x50, 0x07, 0x03, 0x80}},
+ {4, 590, 324, {0x00, 0x7A, 0x88, 0x0A, 0x1C, 0x08, 0xB4, 0x0E, 0x50, 0x4E, 0x02, 0x80}},
+ },
+ },
+ /* CIF */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+ /* VGA */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {6, 773, 1272, {0x25, 0xF4, 0x30, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x03, 0x80}},
+ {4, 592, 976, {0x25, 0xF4, 0x50, 0x1E, 0x78, 0x1B, 0x58, 0x03, 0x30, 0x50, 0x02, 0x80}},
+ {3, 448, 738, {0x25, 0xF4, 0x90, 0x17, 0x0C, 0x13, 0xEC, 0x04, 0x30, 0xC0, 0x01, 0x80}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 956, 788, {0x24, 0xF4, 0x70, 0x18, 0x9C, 0x15, 0x7C, 0x03, 0x48, 0xBC, 0x03, 0x80}},
+ {6, 776, 640, {0x24, 0xF4, 0xB0, 0x13, 0xFC, 0x11, 0x2C, 0x04, 0x48, 0x08, 0x03, 0x80}},
+ {4, 592, 488, {0x24, 0x7A, 0xE8, 0x0F, 0x3C, 0x0C, 0x6C, 0x06, 0x48, 0x50, 0x02, 0x80}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x23, 0x7A, 0xE8, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x03, 0x80}},
+ {9, 957, 526, {0x23, 0x7A, 0xE8, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x03, 0x80}},
+ {8, 895, 492, {0x23, 0x7A, 0xE8, 0x0F, 0x5D, 0x0C, 0x8D, 0x06, 0x58, 0x7F, 0x03, 0x80}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+};
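+
+/* Selection sketch referenced in the comment above (illustrative only, not
+   the driver's actual set-video-mode path): starting from the requested
+   compression level, fall forward to the next available mode. An .alternate
+   of 0 marks an unavailable entry; compressed entries are only usable when
+   the decompressor is present ('have_decompressor' is a hypothetical
+   stand-in for that check). */
+static const struct Kiara_table_entry *
+kiara_find_mode(int size, int fps_index, int compression, int have_decompressor)
+{
+	int c;
+
+	for (c = compression; c < 4; c++) {
+		const struct Kiara_table_entry *e = &Kiara_table[size][fps_index][c];
+
+		if (e->alternate == 0)
+			continue;	/* mode unavailable */
+		if (c > 0 && !have_decompressor)
+			continue;	/* compressed, but no decompressor */
+		return e;
+	}
+	return NULL;	/* nothing usable at or above this compression */
+}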
+
+
+/*
+ * ROM tables for the Kiara chips
+ *
+ * 8 table versions (one per compression version, matching the array below)
+ * 2 tables per version (one per pass: Y, and U&V)
+ * 128 entries per pass (16 rows of 8 words)
+ */
+
+const unsigned int KiaraRomTable[8][2][16][8] =
+{
+ { /* version 0 */
+ { /* version 0, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000001,0x00000001},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x0000124a,0x00009252,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00009252,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009292,0x00009292,0x00009493,0x000124db},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x0000a493,0x000124db,0x000124db,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x000124db,0x000126dc,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 0, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000001,0x00000009,
+ 0x00000009,0x00000009,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00001252},
+ {0x00000000,0x00000000,0x00000049,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009252,0x00009292,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009292,0x00009292,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00009292,
+ 0x00009492,0x00009493,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009252,0x00009493,
+ 0x000126dc,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x000136e4,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 1 */
+ { /* version 1, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000001},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009252,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00009252,
+ 0x00009492,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 1, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000009,
+ 0x00000049,0x00000009,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000000},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000049,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009252,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009292,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009292,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009292,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x0000924a,0x0000924a,
+ 0x00009492,0x00009493,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 2 */
+ { /* version 2, passes 0 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009493,0x00009493,0x0000a49b},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000124db,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x000126dc,0x0001b724,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 2, passes 1 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x0000a49b,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00009252,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 3 */
+ { /* version 3, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000136e4,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001c92d},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000126dc,0x0001b724,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x000136e4,0x0001b925,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 3, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 4 */
+ { /* version 4, passes 0 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00009252,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009252,0x00009493,
+ 0x000124db,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009252,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 4, passes 1 */
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000049,0x00000049,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00000249,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009252,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009252,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009493,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 5 */
+ { /* version 5, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001c924,0x0002496d,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 5, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009252,0x00009252,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000126dc,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 6 */
+ { /* version 6, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x00012492,0x000126db,
+ 0x0001c924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 6, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009252,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009292,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000126dc,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 7 */
+ { /* version 7, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b725,0x000124db},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001c96e,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x000136e4,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b925},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x00012492,0x000136db,
+ 0x00024924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 7, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x00009492,0x00009292,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000136db,
+ 0x0001b724,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000136db,
+ 0x0001b724,0x000126dc,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00009292,0x000136db,
+ 0x0001b724,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x0001c924,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ }
+};
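+
+/* Layout sketch (hypothetical helper, not driver code): one pass of one
+   version is a contiguous 16 * 8 = 128-word block, matching the layout
+   described in the comment above. */
+static inline const unsigned int *kiara_rom_pass(int version, int pass)
+{
+	return &KiaraRomTable[version][pass][0][0];
+}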
+
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* Entries for the Kiara (730/740/750) camera */
+
+#ifndef PWC_KIARA_H
+#define PWC_KIARA_H
+
+#include "pwc-ioctl.h"
+
+struct Kiara_table_entry
+{
+ char alternate; /* USB alternate interface */
+ unsigned short packetsize; /* Normal packet size */
+ unsigned short bandlength; /* Bandlength when decompressing */
+ unsigned char mode[12]; /* precomputed mode settings for cam */
+};
+
+extern const struct Kiara_table_entry Kiara_table[PSZ_MAX][6][4];
+extern const unsigned int KiaraRomTable[8][2][16][8];
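+
+/* Indexing sketch (assumption, based on the table comments in pwc-kiara.c:
+   the supported framerates 5..30 fps map onto the middle index as
+   frames/5 - 1; illustrative only): */
+#define KIARA_ENTRY(size, frames, compression) \
+	(&Kiara_table[(size)][((frames) / 5) - 1][(compression)])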
+
+#endif
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ Various miscellaneous functions and tables.
+ (C) 1999-2003 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#include <linux/slab.h>
+
+#include "pwc.h"
+
+struct pwc_coord pwc_image_sizes[PSZ_MAX] =
+{
+ { 128, 96, 0 }, /* PSZ_SQCIF */
+ { 160, 120, 0 }, /* PSZ_QSIF */
+ { 176, 144, 0 }, /* PSZ_QCIF */
+ { 320, 240, 0 }, /* PSZ_SIF */
+ { 352, 288, 0 }, /* PSZ_CIF */
+ { 640, 480, 0 }, /* PSZ_VGA */
+};
+
+/* x,y -> PSZ_ */
+int pwc_decode_size(struct pwc_device *pdev, int width, int height)
+{
+ int i, find;
+
+ /* Make sure we don't go beyond our max size.
+    NB: the limits differ for RAW and normal modes; without the
+    decompressor loaded, or in RAW mode, the maximum viewable size
+    is smaller.
+  */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ {
+ if (width > pdev->abs_max.x || height > pdev->abs_max.y)
+ {
+ Debug("VIDEO_PALETTE_RAW: going beyond abs_max.\n");
+ return -1;
+ }
+ }
+ else
+ {
+ if (width > pdev->view_max.x || height > pdev->view_max.y)
+ {
+ Debug("VIDEO_PALETTE_ not RAW: going beyond view_max.\n");
+ return -1;
+ }
+ }
+
+ /* Find the largest size supported by the camera that fits into the
+ requested size.
+ */
+ find = -1;
+ for (i = 0; i < PSZ_MAX; i++) {
+ if (pdev->image_mask & (1 << i)) {
+ if (pwc_image_sizes[i].x <= width && pwc_image_sizes[i].y <= height)
+ find = i;
+ }
+ }
+ return find;
+}
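+
+/* Example: with the sizes above, a request for 400x300 on a camera whose
+   image_mask includes SIF resolves to PSZ_SIF (320x240), the largest
+   supported size that still fits; a return of -1 means nothing fits. */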
+
+/* Initialize variables depending on type and decompressor. */
+void pwc_construct(struct pwc_device *pdev)
+{
+ switch(pdev->type) {
+ case 645:
+ case 646:
+ pdev->view_min.x = 128;
+ pdev->view_min.y = 96;
+ pdev->view_max.x = 352;
+ pdev->view_max.y = 288;
+ pdev->abs_max.x = 352;
+ pdev->abs_max.y = 288;
+ pdev->image_mask = (1 << PSZ_SQCIF) | (1 << PSZ_QCIF) | (1 << PSZ_CIF);
+ pdev->vcinterface = 2;
+ pdev->vendpoint = 4;
+ pdev->frame_header_size = 0;
+ pdev->frame_trailer_size = 0;
+ break;
+ case 675:
+ case 680:
+ case 690:
+ pdev->view_min.x = 128;
+ pdev->view_min.y = 96;
+ /* Anthill bug #38: PWC always reports max size, even without PWCX */
+ pdev->view_max.x = 640;
+ pdev->view_max.y = 480;
+ pdev->image_mask = (1 << PSZ_SQCIF) | (1 << PSZ_QSIF) | (1 << PSZ_QCIF) | (1 << PSZ_SIF) | (1 << PSZ_CIF) | (1 << PSZ_VGA);
+ pdev->abs_max.x = 640;
+ pdev->abs_max.y = 480;
+ pdev->vcinterface = 3;
+ pdev->vendpoint = 4;
+ pdev->frame_header_size = 0;
+ pdev->frame_trailer_size = 0;
+ break;
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ pdev->view_min.x = 160;
+ pdev->view_min.y = 120;
+ pdev->view_max.x = 640;
+ pdev->view_max.y = 480;
+ pdev->image_mask = (1 << PSZ_QSIF) | (1 << PSZ_SIF) | (1 << PSZ_VGA);
+ pdev->abs_max.x = 640;
+ pdev->abs_max.y = 480;
+ pdev->vcinterface = 3;
+ pdev->vendpoint = 5;
+ pdev->frame_header_size = TOUCAM_HEADER_SIZE;
+ pdev->frame_trailer_size = TOUCAM_TRAILER_SIZE;
+ break;
+ }
+ Debug("type = %d\n",pdev->type);
+ pdev->vpalette = VIDEO_PALETTE_YUV420P; /* default */
+ pdev->view_min.size = pdev->view_min.x * pdev->view_min.y;
+ pdev->view_max.size = pdev->view_max.x * pdev->view_max.y;
+ /* Length of image in YUV 4:2:0 format (1.5 bytes per pixel); always
+    allocate enough memory for the largest supported size. */
+ pdev->len_per_image = (pdev->abs_max.x * pdev->abs_max.y * 3) / 2;
+}
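+
+/* Usage note (sketch): callers are expected to set pdev->type to the model
+   number (645/646, 675/680/690, 720/730/740/750) before calling
+   pwc_construct(pdev); the switch above then fills in the view limits,
+   image mask and USB interface/endpoint numbers for that family. */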
+
+
--- /dev/null
+ /* SQCIF */
+ {
+ {0, 0, {0x04, 0x01, 0x03}},
+ {8, 0, {0x05, 0x01, 0x03}},
+ {7, 0, {0x08, 0x01, 0x03}},
+ {7, 0, {0x0A, 0x01, 0x03}},
+ {6, 0, {0x0C, 0x01, 0x03}},
+ {5, 0, {0x0F, 0x01, 0x03}},
+ {4, 0, {0x14, 0x01, 0x03}},
+ {3, 0, {0x18, 0x01, 0x03}},
+ },
+ /* QSIF */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
+ /* QCIF */
+ {
+ {0, 0, {0x04, 0x01, 0x02}},
+ {8, 0, {0x05, 0x01, 0x02}},
+ {7, 0, {0x08, 0x01, 0x02}},
+ {6, 0, {0x0A, 0x01, 0x02}},
+ {5, 0, {0x0C, 0x01, 0x02}},
+ {4, 0, {0x0F, 0x01, 0x02}},
+ {1, 0, {0x14, 0x01, 0x02}},
+ {1, 0, {0x18, 0x01, 0x02}},
+ },
+ /* SIF */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
+ /* CIF */
+ {
+ {4, 0, {0x04, 0x01, 0x01}},
+ {7, 1, {0x05, 0x03, 0x01}},
+ {6, 1, {0x08, 0x03, 0x01}},
+ {4, 1, {0x0A, 0x03, 0x01}},
+ {3, 1, {0x0C, 0x03, 0x01}},
+ {2, 1, {0x0F, 0x03, 0x01}},
+ {0},
+ {0},
+ },
+ /* VGA */
+ {
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ {0},
+ },
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+
+/* This table contains entries for the 675/680/690 (Timon) camera, with
+   4 different qualities (no compression, low, medium, high).
+   It lists the bandwidth requirements for each mode by its alternate interface
+   number. An alternate of 0 means that the mode is unavailable.
+
+   There are 6 * 6 * 4 entries:
+     6 resolutions: subqcif, qsif, qcif, sif, cif, vga
+     6 framerates: 5, 10, 15, 20, 25, 30
+     4 compression modes: none, low, medium, high
+
+   When an uncompressed mode is not available, the next available compressed mode
+   will be chosen (unless the decompressor is absent). Sometimes there are only
+   1 or 2 compressed modes available; in that case entries are duplicated
+   (see the interface-selection sketch after the table below).
+*/
+
+#include "pwc-timon.h"
+
+const struct Timon_table_entry Timon_table[PSZ_MAX][6][4] =
+{
+ /* SQCIF */
+ {
+ /* 5 fps */
+ {
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ {1, 140, 0, {0x05, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x8C, 0xFC, 0x80, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ {2, 280, 0, {0x04, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x18, 0xA9, 0x80, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ {3, 410, 0, {0x03, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x9A, 0x71, 0x80, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ {4, 559, 0, {0x02, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x2F, 0x56, 0x80, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ {5, 659, 0, {0x01, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x93, 0x46, 0x80, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ {7, 838, 0, {0x00, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x46, 0x3B, 0x80, 0x02}},
+ },
+ },
+ /* QSIF */
+ {
+ /* 5 fps */
+ {
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ {1, 146, 0, {0x2D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x92, 0xFC, 0xC0, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {2, 291, 0, {0x2C, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ {1, 191, 630, {0x2C, 0xF4, 0x05, 0x13, 0xA9, 0x12, 0xE1, 0x17, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {3, 437, 0, {0x2B, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xB5, 0x6D, 0xC0, 0x02}},
+ {2, 291, 640, {0x2B, 0xF4, 0x05, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {2, 291, 640, {0x2B, 0xF4, 0x05, 0x13, 0xF7, 0x13, 0x2F, 0x13, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 191, 420, {0x2B, 0xF4, 0x0D, 0x0D, 0x1B, 0x0C, 0x53, 0x1E, 0x08, 0xBF, 0xF4, 0xC0, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {4, 588, 0, {0x2A, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x4C, 0x52, 0xC0, 0x02}},
+ {3, 447, 730, {0x2A, 0xF4, 0x05, 0x16, 0xC9, 0x16, 0x01, 0x0E, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 476, {0x2A, 0xF4, 0x0D, 0x0E, 0xD8, 0x0E, 0x10, 0x19, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 192, 312, {0x2A, 0xF4, 0x1D, 0x09, 0xB3, 0x08, 0xEB, 0x1E, 0x18, 0xC0, 0xF4, 0xC0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {5, 703, 0, {0x29, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0xBF, 0x42, 0xC0, 0x02}},
+ {3, 447, 610, {0x29, 0xF4, 0x05, 0x13, 0x0B, 0x12, 0x43, 0x14, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 398, {0x29, 0xF4, 0x0D, 0x0C, 0x6C, 0x0B, 0xA4, 0x1E, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 192, 262, {0x29, 0xF4, 0x25, 0x08, 0x23, 0x07, 0x5B, 0x1E, 0x18, 0xC0, 0xF4, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {8, 873, 0, {0x28, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00, 0x69, 0x37, 0xC0, 0x02}},
+ {5, 704, 774, {0x28, 0xF4, 0x05, 0x18, 0x21, 0x17, 0x59, 0x0F, 0x18, 0xC0, 0x42, 0xC0, 0x02}},
+ {3, 448, 492, {0x28, 0xF4, 0x05, 0x0F, 0x5D, 0x0E, 0x95, 0x15, 0x18, 0xC0, 0x69, 0xC0, 0x02}},
+ {2, 291, 320, {0x28, 0xF4, 0x1D, 0x09, 0xFB, 0x09, 0x33, 0x1E, 0x18, 0x23, 0xA1, 0xC0, 0x02}},
+ },
+ },
+ /* QCIF */
+ {
+ /* 5 fps */
+ {
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ {1, 193, 0, {0x0D, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xC1, 0xF4, 0xC0, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {3, 385, 0, {0x0C, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x81, 0x79, 0xC0, 0x02}},
+ {2, 291, 800, {0x0C, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x11, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {2, 291, 800, {0x0C, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x11, 0x08, 0x23, 0xA1, 0xC0, 0x02}},
+ {1, 194, 532, {0x0C, 0xF4, 0x05, 0x10, 0x9A, 0x0F, 0xBE, 0x1B, 0x08, 0xC2, 0xF0, 0xC0, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {4, 577, 0, {0x0B, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x41, 0x52, 0xC0, 0x02}},
+ {3, 447, 818, {0x0B, 0xF4, 0x05, 0x19, 0x89, 0x18, 0xAD, 0x0F, 0x10, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 534, {0x0B, 0xF4, 0x05, 0x10, 0xA3, 0x0F, 0xC7, 0x19, 0x10, 0x24, 0xA1, 0xC0, 0x02}},
+ {1, 195, 356, {0x0B, 0xF4, 0x15, 0x0B, 0x11, 0x0A, 0x35, 0x1E, 0x10, 0xC3, 0xF0, 0xC0, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {6, 776, 0, {0x0A, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0x08, 0x3F, 0xC0, 0x02}},
+ {4, 591, 804, {0x0A, 0xF4, 0x05, 0x19, 0x1E, 0x18, 0x42, 0x0F, 0x18, 0x4F, 0x4E, 0xC0, 0x02}},
+ {3, 447, 608, {0x0A, 0xF4, 0x05, 0x12, 0xFD, 0x12, 0x21, 0x15, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 291, 396, {0x0A, 0xF4, 0x15, 0x0C, 0x5E, 0x0B, 0x82, 0x1E, 0x18, 0x23, 0xA1, 0xC0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {9, 928, 0, {0x09, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x12, 0x00, 0xA0, 0x33, 0xC0, 0x02}},
+ {5, 703, 800, {0x09, 0xF4, 0x05, 0x18, 0xF4, 0x18, 0x18, 0x10, 0x18, 0xBF, 0x42, 0xC0, 0x02}},
+ {3, 447, 508, {0x09, 0xF4, 0x0D, 0x0F, 0xD2, 0x0E, 0xF6, 0x1B, 0x18, 0xBF, 0x69, 0xC0, 0x02}},
+ {2, 292, 332, {0x09, 0xF4, 0x1D, 0x0A, 0x5A, 0x09, 0x7E, 0x1E, 0x18, 0x24, 0xA1, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 956, 876, {0x08, 0xF4, 0x05, 0x1B, 0x58, 0x1A, 0x7C, 0x0E, 0x20, 0xBC, 0x33, 0x10, 0x02}},
+ {4, 592, 542, {0x08, 0xF4, 0x05, 0x10, 0xE4, 0x10, 0x08, 0x17, 0x20, 0x50, 0x4E, 0x10, 0x02}},
+ {2, 291, 266, {0x08, 0xF4, 0x25, 0x08, 0x48, 0x07, 0x6C, 0x1E, 0x20, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ },
+ /* SIF */
+ {
+ /* 5 fps */
+ {
+ {4, 582, 0, {0x35, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x46, 0x52, 0x60, 0x02}},
+ {3, 387, 1276, {0x35, 0xF4, 0x05, 0x27, 0xD8, 0x26, 0x48, 0x03, 0x10, 0x83, 0x79, 0x60, 0x02}},
+ {2, 291, 960, {0x35, 0xF4, 0x0D, 0x1D, 0xF2, 0x1C, 0x62, 0x04, 0x10, 0x23, 0xA1, 0x60, 0x02}},
+ {1, 191, 630, {0x35, 0xF4, 0x1D, 0x13, 0xA9, 0x12, 0x19, 0x05, 0x08, 0xBF, 0xF4, 0x60, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {6, 775, 1278, {0x34, 0xF4, 0x05, 0x27, 0xE8, 0x26, 0x58, 0x05, 0x30, 0x07, 0x3F, 0x10, 0x02}},
+ {3, 447, 736, {0x34, 0xF4, 0x15, 0x16, 0xFB, 0x15, 0x6B, 0x05, 0x18, 0xBF, 0x69, 0x10, 0x02}},
+ {2, 291, 480, {0x34, 0xF4, 0x2D, 0x0E, 0xF9, 0x0D, 0x69, 0x09, 0x18, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 955, 1050, {0x33, 0xF4, 0x05, 0x20, 0xCF, 0x1F, 0x3F, 0x06, 0x48, 0xBB, 0x33, 0x10, 0x02}},
+ {4, 591, 650, {0x33, 0xF4, 0x15, 0x14, 0x44, 0x12, 0xB4, 0x08, 0x30, 0x4F, 0x4E, 0x10, 0x02}},
+ {3, 448, 492, {0x33, 0xF4, 0x25, 0x0F, 0x52, 0x0D, 0xC2, 0x09, 0x28, 0xC0, 0x69, 0x10, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 958, 782, {0x32, 0xF4, 0x0D, 0x18, 0x6A, 0x16, 0xDA, 0x0B, 0x58, 0xBE, 0x33, 0xD0, 0x02}},
+ {5, 703, 574, {0x32, 0xF4, 0x1D, 0x11, 0xE7, 0x10, 0x57, 0x0B, 0x40, 0xBF, 0x42, 0xD0, 0x02}},
+ {3, 446, 364, {0x32, 0xF4, 0x3D, 0x0B, 0x5C, 0x09, 0xCC, 0x0E, 0x30, 0xBE, 0x69, 0xD0, 0x02}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 958, 654, {0x31, 0xF4, 0x15, 0x14, 0x66, 0x12, 0xD6, 0x0B, 0x50, 0xBE, 0x33, 0x90, 0x02}},
+ {6, 776, 530, {0x31, 0xF4, 0x25, 0x10, 0x8C, 0x0E, 0xFC, 0x0C, 0x48, 0x08, 0x3F, 0x90, 0x02}},
+ {4, 592, 404, {0x31, 0xF4, 0x35, 0x0C, 0x96, 0x0B, 0x06, 0x0B, 0x38, 0x50, 0x4E, 0x90, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x30, 0xF4, 0x25, 0x10, 0x68, 0x0E, 0xD8, 0x0D, 0x58, 0xBD, 0x33, 0x60, 0x02}},
+ {6, 775, 426, {0x30, 0xF4, 0x35, 0x0D, 0x48, 0x0B, 0xB8, 0x0F, 0x50, 0x07, 0x3F, 0x60, 0x02}},
+ {4, 590, 324, {0x30, 0x7A, 0x4B, 0x0A, 0x1C, 0x08, 0xB4, 0x0E, 0x40, 0x4E, 0x52, 0x60, 0x02}},
+ },
+ },
+ /* CIF */
+ {
+ /* 5 fps */
+ {
+ {6, 771, 0, {0x15, 0xF4, 0x04, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x03, 0x3F, 0x80, 0x02}},
+ {4, 465, 1278, {0x15, 0xF4, 0x05, 0x27, 0xEE, 0x26, 0x36, 0x03, 0x18, 0xD1, 0x65, 0x80, 0x02}},
+ {2, 291, 800, {0x15, 0xF4, 0x15, 0x18, 0xF4, 0x17, 0x3C, 0x05, 0x18, 0x23, 0xA1, 0x80, 0x02}},
+ {1, 193, 528, {0x15, 0xF4, 0x2D, 0x10, 0x7E, 0x0E, 0xC6, 0x0A, 0x18, 0xC1, 0xF4, 0x80, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 932, 1278, {0x14, 0xF4, 0x05, 0x27, 0xEE, 0x26, 0x36, 0x04, 0x30, 0xA4, 0x33, 0x10, 0x02}},
+ {4, 591, 812, {0x14, 0xF4, 0x15, 0x19, 0x56, 0x17, 0x9E, 0x06, 0x28, 0x4F, 0x4E, 0x10, 0x02}},
+ {2, 291, 400, {0x14, 0xF4, 0x3D, 0x0C, 0x7A, 0x0A, 0xC2, 0x0E, 0x28, 0x23, 0xA1, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 956, 876, {0x13, 0xF4, 0x0D, 0x1B, 0x58, 0x19, 0xA0, 0x05, 0x38, 0xBC, 0x33, 0x60, 0x02}},
+ {5, 703, 644, {0x13, 0xF4, 0x1D, 0x14, 0x1C, 0x12, 0x64, 0x08, 0x38, 0xBF, 0x42, 0x60, 0x02}},
+ {3, 448, 410, {0x13, 0xF4, 0x3D, 0x0C, 0xC4, 0x0B, 0x0C, 0x0E, 0x38, 0xC0, 0x69, 0x60, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {9, 956, 650, {0x12, 0xF4, 0x1D, 0x14, 0x4A, 0x12, 0x92, 0x09, 0x48, 0xBC, 0x33, 0x10, 0x03}},
+ {6, 776, 528, {0x12, 0xF4, 0x2D, 0x10, 0x7E, 0x0E, 0xC6, 0x0A, 0x40, 0x08, 0x3F, 0x10, 0x03}},
+ {4, 591, 402, {0x12, 0xF4, 0x3D, 0x0C, 0x8F, 0x0A, 0xD7, 0x0E, 0x40, 0x4F, 0x4E, 0x10, 0x03}},
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {9, 956, 544, {0x11, 0xF4, 0x25, 0x10, 0xF4, 0x0F, 0x3C, 0x0A, 0x48, 0xBC, 0x33, 0xC0, 0x02}},
+ {7, 840, 478, {0x11, 0xF4, 0x2D, 0x0E, 0xEB, 0x0D, 0x33, 0x0B, 0x48, 0x48, 0x3B, 0xC0, 0x02}},
+ {5, 703, 400, {0x11, 0xF4, 0x3D, 0x0C, 0x7A, 0x0A, 0xC2, 0x0E, 0x48, 0xBF, 0x42, 0xC0, 0x02}},
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {9, 956, 438, {0x10, 0xF4, 0x35, 0x0D, 0xAC, 0x0B, 0xF4, 0x0D, 0x50, 0xBC, 0x33, 0x10, 0x02}},
+ {7, 838, 384, {0x10, 0xF4, 0x45, 0x0B, 0xFD, 0x0A, 0x45, 0x0F, 0x50, 0x46, 0x3B, 0x10, 0x02}},
+ {6, 773, 354, {0x10, 0x7A, 0x4B, 0x0B, 0x0C, 0x09, 0x80, 0x10, 0x50, 0x05, 0x3F, 0x10, 0x02}},
+ },
+ },
+ /* VGA */
+ {
+ /* 5 fps */
+ {
+ {0, },
+ {6, 773, 1272, {0x1D, 0xF4, 0x15, 0x27, 0xB6, 0x24, 0x96, 0x02, 0x30, 0x05, 0x3F, 0x10, 0x02}},
+ {4, 592, 976, {0x1D, 0xF4, 0x25, 0x1E, 0x78, 0x1B, 0x58, 0x03, 0x30, 0x50, 0x4E, 0x10, 0x02}},
+ {3, 448, 738, {0x1D, 0xF4, 0x3D, 0x17, 0x0C, 0x13, 0xEC, 0x04, 0x30, 0xC0, 0x69, 0x10, 0x02}},
+ },
+ /* 10 fps */
+ {
+ {0, },
+ {9, 956, 788, {0x1C, 0xF4, 0x35, 0x18, 0x9C, 0x15, 0x7C, 0x03, 0x48, 0xBC, 0x33, 0x10, 0x02}},
+ {6, 776, 640, {0x1C, 0x7A, 0x53, 0x13, 0xFC, 0x11, 0x2C, 0x04, 0x48, 0x08, 0x3F, 0x10, 0x02}},
+ {4, 592, 488, {0x1C, 0x7A, 0x6B, 0x0F, 0x3C, 0x0C, 0x6C, 0x06, 0x48, 0x50, 0x4E, 0x10, 0x02}},
+ },
+ /* 15 fps */
+ {
+ {0, },
+ {9, 957, 526, {0x1B, 0x7A, 0x63, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x33, 0x80, 0x02}},
+ {9, 957, 526, {0x1B, 0x7A, 0x63, 0x10, 0x68, 0x0D, 0x98, 0x06, 0x58, 0xBD, 0x33, 0x80, 0x02}},
+ {8, 895, 492, {0x1B, 0x7A, 0x6B, 0x0F, 0x5D, 0x0C, 0x8D, 0x06, 0x58, 0x7F, 0x37, 0x80, 0x02}},
+ },
+ /* 20 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 25 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ /* 30 fps */
+ {
+ {0, },
+ {0, },
+ {0, },
+ {0, },
+ },
+ },
+};
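+
+/* Interface-selection sketch (illustrative only; assumes <linux/usb.h> and
+   that 'vcinterface' and the entry's 'alternate' are used as the tables
+   above suggest; this is not the driver's actual code path): */
+static int timon_set_alt(struct usb_device *udev, int vcinterface,
+			 const struct Timon_table_entry *entry)
+{
+	if (entry->alternate == 0)
+		return -1;	/* mode unavailable at this quality */
+	return usb_set_interface(udev, vcinterface, entry->alternate);
+}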
+
+/*
+ * 16 versions:
+ * 2 tables per version (one for Y, one for U&V)
+ * 16 levels of detail per table
+ * 8 blocks per level
+ */
+
+const unsigned int TimonRomTable[16][2][16][8] =
+{
+ { /* version 0 */
+ { /* version 0, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000001},
+ {0x00000000,0x00000000,0x00000001,0x00000001,
+ 0x00000001,0x00000001,0x00000001,0x00000001},
+ {0x00000000,0x00000000,0x00000001,0x00000001,
+ 0x00000001,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000001,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000009,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x00000249,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x0000124a,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 0, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000001,0x00000001,
+ 0x00000001,0x00000001,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000009,0x00000001,
+ 0x00000001,0x00000009,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000009,
+ 0x00000009,0x00000049,0x00000001,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000009,
+ 0x00000009,0x00000049,0x00000001,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000249,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 1 */
+ { /* version 1, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000001},
+ {0x00000000,0x00000000,0x00000001,0x00000001,
+ 0x00000001,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000009,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00001252},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 1, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000001,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000009,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000001,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000049,0x00000249,0x00000009,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000249,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00000049,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009252,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 2 */
+ { /* version 2, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000001},
+ {0x00000000,0x00000000,0x00000009,0x00000009,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009252,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00009252,
+ 0x00009492,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 2, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000009,
+ 0x00000049,0x00000009,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000000},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000049,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x0000024a,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009252,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009292,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009292,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009292,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x0000924a,0x0000924a,
+ 0x00009492,0x00009493,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 3 */
+ { /* version 3, passes 0 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000001},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000049,0x00000249,
+ 0x00000249,0x00000249,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009292,0x00009292,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009292,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00009252,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009292,0x0000a49b,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x0000a49b,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x0001b725,0x000136e4},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 3, passes 1 */
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000},
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000001,0x00000000},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x00000049,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00000001},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009252,0x00009292,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009252,0x00009292,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009493,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009493,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009493,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009292,
+ 0x0000a493,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 4 */
+ { /* version 4, passes 0 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00009252,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009493,0x00009493,0x0000a49b},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000124db,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x000126dc,0x0001b724,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 4, passes 1 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x0000a49b,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00009252,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 5 */
+ { /* version 5, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x0000124a,0x00001252,0x00009292},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x0000124a,0x00009292,0x00009292,0x00009493},
+ {0x00000000,0x00000000,0x00000249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x000124db,0x000124db,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000126dc,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 5, passes 1 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x00009493,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x000124db,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009493,0x000124db,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x000124db,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x000126dc,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 6 */
+ { /* version 6, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x0000124a,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000136e4,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001c92d},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000126dc,0x0001b724,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x000136e4,0x0001b925,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 6, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x0000a49b,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 7 */
+ { /* version 7, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x0000a49b,0x000124db,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0001249b,0x000126dc,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000126dc,0x0001b724,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001c96e,0x0002496e},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x0002496e},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x0002496d,0x00025bb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 7, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000136e4,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x000136e4,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00012492,0x000126db,
+ 0x0001b724,0x0001b925,0x0001b725,0x000136e4},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 8 */
+ { /* version 8, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009292,0x00009493,0x0000a49b,0x000124db},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x000124db,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000136e4},
+ {0x00000000,0x00000000,0x00001249,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000136e4,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b725,0x0001b925},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001c92d},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001c92d},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000126dc,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x00024b76,0x00024b77},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x0001b925,0x00024b76,0x00025bbf},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x000136e4,0x0001c92d,0x00024b76,0x00025bbf},
+ {0x00000000,0x00000000,0x00012492,0x000136db,
+ 0x0001b724,0x00024b6d,0x0002ddb6,0x0002efff},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 8, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000126dc,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b725,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x0001b925,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x0001b925,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x0002496d,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 9 */
+ { /* version 9, passes 0 */
+ {0x00000000,0x00000000,0x00000049,0x00000049,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000249,0x00000249,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x0000124a,0x00009252,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009252,0x00009493,
+ 0x000124db,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009252,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 9, passes 1 */
+ {0x00000000,0x00000000,0x00000249,0x00000049,
+ 0x00000009,0x00000009,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000049,0x00000049,0x00000009,0x00000009},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00000249,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009252,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009252,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009493,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009252,0x000124db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 10 */
+ { /* version 10, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00000249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x00009493,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x000124db,0x000124db,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0001249b,0x000126dc,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000126dc,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009252,0x0000a49b,
+ 0x000124db,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000126dc,0x0001b925,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x000136e4,0x0002496d,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 10, passes 1 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000049,0x00000049,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00000249,0x00000049,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x00009252,0x0000024a,0x00000049},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009493,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009252,
+ 0x00009492,0x00009493,0x00001252,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009493,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x00009492,0x00009493,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009493,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009252,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 11 */
+ { /* version 11, passes 0 */
+ {0x00000000,0x00000000,0x00000249,0x00000249,
+ 0x00000249,0x00000249,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009492,0x0000a49b,0x0000a49b,0x00009292},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x000136e4},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001c924,0x0002496d,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 11, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00000249,
+ 0x00000249,0x00000249,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009252,0x00009252,0x0000024a,0x0000024a},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x0000a49b,0x00009292,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000126dc,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 12 */
+ { /* version 12, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x0000a493,0x0000a49b,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b92d,0x0001b724},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001b925,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x00012492,0x000126db,
+ 0x0001c924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 12, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x00001249,0x00009292,
+ 0x00009492,0x00009252,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009292,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000124db,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000126dc,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x000136e4,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00009492,0x000126db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 13 */
+ { /* version 13, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x00009252,0x00009292,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000126dc,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x000136e4,0x0001b725,0x000124db},
+ {0x00000000,0x00000000,0x00009292,0x0000a49b,
+ 0x000136e4,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001c96e,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x000136e4,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b925},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x00012492,0x000136db,
+ 0x00024924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 13, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x00009492,0x00009292,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x0000a49b,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000124db,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000136db,
+ 0x0001b724,0x000124db,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000136db,
+ 0x0001b724,0x000126dc,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00009292,0x000136db,
+ 0x0001b724,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000126dc,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x0001c924,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 14 */
+ { /* version 14, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x0000924a,
+ 0x00009292,0x00009493,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00001249,0x0000a49b,
+ 0x0000a493,0x000124db,0x000126dc,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x0000a49b},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x000136e4,0x0001b725,0x000124db},
+ {0x00000000,0x00000000,0x00009292,0x000124db,
+ 0x000126dc,0x0001b724,0x0001b92d,0x000126dc},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b92d,0x000126dc},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001c92d,0x0001c96e,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0001b925},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x0001c92d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b924,0x0002496d,0x00024b76,0x00024b77},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x00024924,0x0002db6d,0x00036db6,0x0002efff},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 14, passes 1 */
+ {0x00000000,0x00000000,0x00001249,0x00001249,
+ 0x0000124a,0x0000124a,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x00009493,
+ 0x0000a493,0x00009292,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x0000a49b,0x00001252,0x00001252},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000136e4,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000136e4,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x000136e4,0x00009493,0x00009292},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000136e4,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000136e4,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000136e4,0x0000a49b,0x00009493},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001b724,0x000136e4,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000124db,0x0000a49b},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b724,0x000136e4,0x000126dc,0x000124db},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x0001c924,0x0001b724,0x000136e4,0x000126dc},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ },
+ { /* version 15 */
+ { /* version 15, passes 0 */
+ {0x00000000,0x00000000,0x00001249,0x00009493,
+ 0x0000a493,0x0000a49b,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0001249b,0x000126dc,0x000136e4,0x000124db},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x000126dc,0x0001b724,0x0001b725,0x000126dc},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x0001b724,0x0001b92d,0x000126dc},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x000136e4,0x0001b925,0x0001c96e,0x000136e4},
+ {0x00000000,0x00000000,0x00009492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000124db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b724},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b724,0x0001c92d,0x0001c96e,0x0001b925},
+ {0x00000000,0x00000000,0x0000a492,0x000126db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b924,0x0001c92d,0x00024b76,0x0001c92d},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001b924,0x0002496d,0x00024b76,0x0002496e},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0002496d,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x00024b6d,0x00025bb6,0x00024b77},
+ {0x00000000,0x00000000,0x00012492,0x000136db,
+ 0x0001c924,0x00024b6d,0x0002ddb6,0x00025bbf},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x00024924,0x0002db6d,0x00036db6,0x0002efff},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ },
+ { /* version 15, passes 1 */
+ {0x00000000,0x00000000,0x0000924a,0x0000924a,
+ 0x00009292,0x00009292,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x0000a49b,
+ 0x0000a493,0x000124db,0x00009292,0x00009292},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000124db,0x0001b724,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000126dc,0x0001b724,0x00009493,0x00009493},
+ {0x00000000,0x00000000,0x0000924a,0x000124db,
+ 0x000136e4,0x0001b724,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00009292,0x000136db,
+ 0x0001b724,0x0001b724,0x0000a49b,0x0000a49b},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001c924,0x0001b724,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x00009492,0x000136db,
+ 0x0001c924,0x0001b724,0x000124db,0x000124db},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b724,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b925,0x000126dc,0x000126dc},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b925,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b925,0x000136e4,0x000136e4},
+ {0x00000000,0x00000000,0x0000a492,0x000136db,
+ 0x0001c924,0x0001b925,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x00012492,0x000136db,
+ 0x0001c924,0x0001b925,0x0001b725,0x0001b724},
+ {0x00000000,0x00000000,0x00012492,0x0001b6db,
+ 0x00024924,0x0002496d,0x0001b92d,0x0001b925},
+ {0x00000000,0x00000000,0x00000000,0x00000000,
+ 0x00000000,0x00000000,0x00000000,0x00000000}
+ }
+ }
+};
--- /dev/null
+/* Linux driver for Philips webcam
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+
+
+/* This table contains entries for the 675/680/690 (Timon) cameras, with
+   4 different qualities (no compression, low, medium, high).
+   It lists the bandwidth requirement of each mode via its alternate interface
+   number. An alternate of 0 means that the mode is unavailable.
+
+   There are 6 * 6 * 4 entries:
+     6 resolutions: subqcif, qsif, qcif, sif, cif, vga
+     6 framerates: 5, 10, 15, 20, 25, 30
+     4 compression modes: none, low, medium, high
+
+   When an uncompressed mode is not available, the next available compressed
+   mode is chosen (unless the decompressor is absent). Sometimes only 1 or 2
+   compressed modes are available; in that case entries are duplicated.
+   (See the illustrative lookup sketch below the declarations.)
+*/
+
+#ifndef PWC_TIMON_H
+#define PWC_TIMON_H
+
+#include "pwc-ioctl.h"
+
+struct Timon_table_entry
+{
+ char alternate; /* USB alternate interface */
+ unsigned short packetsize; /* Normal packet size */
+ unsigned short bandlength; /* Bandlength when decompressing */
+ unsigned char mode[13]; /* precomputed mode settings for cam */
+};
+
+extern const struct Timon_table_entry Timon_table[PSZ_MAX][6][4];
+extern const unsigned int TimonRomTable[16][2][16][8];
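+
+/* Illustrative sketch (not part of the original driver): how a caller
+   might pick a usable USB alternate from Timon_table, falling back to
+   stronger compression when the requested mode is unavailable
+   (alternate == 0). The helper name find_timon_alternate is
+   hypothetical; it is guarded by #if 0 so it is never compiled. */
+#if 0
+static inline int find_timon_alternate(int size, int fps, int compression)
+{
+	/* Try the requested compression first, then stronger ones. */
+	for (; compression < 4; compression++)
+		if (Timon_table[size][fps][compression].alternate != 0)
+			return Timon_table[size][fps][compression].alternate;
+	return 0; /* no usable mode at this size/framerate */
+}
+#endif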
+
+
+#endif
+
+
--- /dev/null
+/* Linux driver for Philips webcam
+ Decompression frontend.
+ (C) 1999-2003 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#include <asm/current.h>
+#include <asm/types.h>
+
+#include "pwc.h"
+#include "pwc-uncompress.h"
+#include "pwc-dec1.h"
+#include "pwc-dec23.h"
+
+int pwc_decompress(struct pwc_device *pdev)
+{
+ struct pwc_frame_buf *fbuf;
+ int n, line, col, stride;
+ void *yuv, *image;
+ u16 *src;
+ u16 *dsty, *dstu, *dstv;
+
+ if (pdev == NULL)
+ return -EFAULT;
+#if defined(__KERNEL__) && defined(PWC_MAGIC)
+ if (pdev->magic != PWC_MAGIC) {
+ Err("pwc_decompress(): magic failed.\n");
+ return -EFAULT;
+ }
+#endif
+
+ fbuf = pdev->read_frame;
+ if (fbuf == NULL)
+ return -EFAULT;
+ image = pdev->image_ptr[pdev->fill_image];
+ if (!image)
+ return -EFAULT;
+
+ yuv = fbuf->data + pdev->frame_header_size; /* Skip header */
+
+ /* Raw format; that's easy... */
+ if (pdev->vpalette == VIDEO_PALETTE_RAW)
+ {
+ memcpy(image, yuv, pdev->frame_size);
+ return 0;
+ }
+
+ if (pdev->vbandlength == 0) {
+ /* Uncompressed mode. Copy the data into the output buffer,
+    using the viewport size (which may be larger than the image
+    size); this requires some byte shuffling to go from the
+    camera's native interleaved format to YUV420P.
+  */
+ src = (u16 *)yuv;
+ n = pdev->view.x * pdev->view.y;
+
+ /* offset in Y plane */
+ stride = pdev->view.x * pdev->offset.y + pdev->offset.x;
+ dsty = (u16 *)(image + stride);
+
+ /* offsets in U/V planes */
+ stride = pdev->view.x * pdev->offset.y / 4 + pdev->offset.x / 2;
+ dstu = (u16 *)(image + n + stride);
+ dstv = (u16 *)(image + n + n / 4 + stride);
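+ /* Worked example (illustrative numbers only): for a 640x480 view
+    holding a 320x240 image at offset (160,120):
+    n = 640*480 = 307200 bytes of Y;
+    the Y data starts at byte 640*120 + 160 = 76960;
+    U starts at n + 640*120/4 + 160/2 = 307200 + 19280;
+    V starts another n/4 = 76800 bytes further on. */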
+
+ /* increment after each line */
+ stride = (pdev->view.x - pdev->image.x) / 2; /* u16 is 2 bytes */
+
+ for (line = 0; line < pdev->image.y; line++) {
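+ /* Each iteration consumes 4 pixels: two u16 of Y plus one u16
+    of chroma (U on even lines, V on odd ones). */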
+ for (col = 0; col < pdev->image.x; col += 4) {
+ *dsty++ = *src++;
+ *dsty++ = *src++;
+ if (line & 1)
+ *dstv++ = *src++;
+ else
+ *dstu++ = *src++;
+ }
+ dsty += stride;
+ if (line & 1)
+ dstv += (stride >> 1);
+ else
+ dstu += (stride >> 1);
+ }
+ }
+ else {
+ /* Compressed; the decompressor routines will write the data
+ in planar format immediately.
+ */
+ int flags;
+
+ flags = PWCX_FLAG_PLANAR;
+ if (pdev->vsize == PSZ_VGA && pdev->vframes == 5 && pdev->vsnapshot)
+ {
+ /* Raw Bayer output would need PWCX_FLAG_BAYER, but no decompressor
+    handles it yet; the flag is set for future use and currently has
+    no effect because we return immediately. */
+ printk(KERN_ERR "pwc: Bayer mode is not supported for now\n");
+ flags |= PWCX_FLAG_BAYER;
+ return -ENXIO; /* No such device or address: missing decompressor */
+ }
+
+ switch (pdev->type)
+ {
+ case 675:
+ case 680:
+ case 690:
+ case 720:
+ case 730:
+ case 740:
+ case 750:
+ pwc_dec23_decompress(&pdev->image, &pdev->view, &pdev->offset,
+ yuv, image,
+ flags,
+ pdev->decompress_data, pdev->vbandlength);
+ break;
+ case 645:
+ case 646:
+ /* TODO & FIXME */
+ return -ENXIO; /* No such device or address: missing decompressor */
+ break;
+ }
+ }
+ return 0;
+}
+
+
--- /dev/null
+/* (C) 1999-2003 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+/* This file is the bridge between the kernel module and the plugin; it
+ describes the structures and datatypes used in both modules. Any
+ significant change should be reflected by increasing the
+ pwc_decompressor_version major number.
+ */
+#ifndef PWC_UNCOMPRESS_H
+#define PWC_UNCOMPRESS_H
+
+#include <linux/config.h>
+
+#include "pwc-ioctl.h"
+
+/* from pwc-dec.h */
+#define PWCX_FLAG_PLANAR 0x0001
+/* PWCX_FLAG_BAYER is referenced by pwc-uncompress.c; the value below is
+   an assumption taken from the pwcx decompressor interface. */
+#define PWCX_FLAG_BAYER 0x0008
+
+#endif
--- /dev/null
+/* (C) 1999-2003 Nemosoft Unv.
+ (C) 2004 Luc Saillard (luc@saillard.org)
+
+ NOTE: this version of pwc is an unofficial (modified) release of pwc & pwcx
+ driver and thus may have bugs that are not present in the original version.
+ Please send bug reports and support requests to <luc@saillard.org>.
+ The decompression routines have been implemented by reverse-engineering the
+ Nemosoft binary pwcx module. Caveat emptor.
+
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 2 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program; if not, write to the Free Software
+ Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+*/
+
+#ifndef PWC_H
+#define PWC_H
+
+#include <linux/version.h>
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+#include <linux/spinlock.h>
+#include <linux/videodev.h>
+#include <linux/wait.h>
+#include <linux/smp_lock.h>
+#include <asm/semaphore.h>
+#include <asm/errno.h>
+
+#include "pwc-uncompress.h"
+#include "pwc-ioctl.h"
+
+/* Defines and structures for the Philips webcam */
+/* Used for checking memory corruption/pointer validation */
+#define PWC_MAGIC 0x89DC10ABUL
+#undef PWC_MAGIC /* the magic check is disabled; remove this #undef to enable it */
+
+/* Turn some debugging options on/off */
+#define PWC_DEBUG 0
+
+/* Trace certain actions in the driver */
+#define TRACE_MODULE 0x0001
+#define TRACE_PROBE 0x0002
+#define TRACE_OPEN 0x0004
+#define TRACE_READ 0x0008
+#define TRACE_MEMORY 0x0010
+#define TRACE_FLOW 0x0020
+#define TRACE_SIZE 0x0040
+#define TRACE_PWCX 0x0080
+#define TRACE_SEQUENCE 0x1000
+
+/* do { ... } while (0) keeps the conditional safe inside if/else bodies */
+#define Trace(R, A...) do { if (pwc_trace & (R)) printk(KERN_DEBUG PWC_NAME " " A); } while (0)
+#define Debug(A...) printk(KERN_DEBUG PWC_NAME " " A)
+#define Info(A...) printk(KERN_INFO PWC_NAME " " A)
+#define Err(A...) printk(KERN_ERR PWC_NAME " " A)
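+/* Usage example: Trace(TRACE_OPEN, "video device opened\n"); this prints
+   only when the TRACE_OPEN bit is set in the pwc_trace mask. */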
+
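+
+/* Illustrative use (an assumed example, not taken from the driver sources):
+   Trace(TRACE_OPEN, "video device opened\n");
+   note that the format string must supply its own newline. */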
+
+/* Defines for ToUCam cameras */
+#define TOUCAM_HEADER_SIZE 8
+#define TOUCAM_TRAILER_SIZE 4
+
+#define FEATURE_MOTOR_PANTILT 0x0001
+
+/* Version block */
+#define PWC_MAJOR 9
+#define PWC_MINOR 0
+#define PWC_VERSION "9.0.2-unofficial"
+#define PWC_NAME "pwc"
+
+/* Turn certain features on/off */
+#define PWC_INT_PIPE 0
+
+/* Ignore errors in the first N frames, to allow for startup delays */
+#define FRAME_LOWMARK 5
+
+/* Size and number of buffers for the ISO pipe. */
+#define MAX_ISO_BUFS 2
+#define ISO_FRAMES_PER_DESC 10
+#define ISO_MAX_FRAME_SIZE 960
+#define ISO_BUFFER_SIZE (ISO_FRAMES_PER_DESC * ISO_MAX_FRAME_SIZE)
+
+/* Frame buffers: contains compressed or uncompressed video data. */
+#define MAX_FRAMES 5
+/* Maximum size after decompression is 640x480 YUV 4:2:0 data: 1.5 * 640 * 480 = 460800 bytes */
+#define PWC_FRAME_SIZE (460800 + TOUCAM_HEADER_SIZE + TOUCAM_TRAILER_SIZE)
+
+/* Absolute maximum number of buffers available for mmap() */
+#define MAX_IMAGES 10
+
+/* The following structures were based on cpia.h. Why reinvent the wheel? :-) */
+struct pwc_iso_buf
+{
+ void *data;
+ int length;
+ int read;
+ struct urb *urb;
+};
+
+/* intermediate buffers with raw data from the USB cam */
+struct pwc_frame_buf
+{
+ void *data;
+ volatile int filled; /* number of bytes filled */
+ struct pwc_frame_buf *next; /* list */
+#if PWC_DEBUG
+ int sequence; /* Sequence number */
+#endif
+};
+
+struct pwc_device
+{
+ struct video_device *vdev;
+#ifdef PWC_MAGIC
+ int magic;
+#endif
+ /* Pointer to our usb_device */
+ struct usb_device *udev;
+
+ int type; /* type of cam (645, 646, 675, 680, 690, 720, 730, 740, 750) */
+ int release; /* release number */
+ int features; /* feature bits */
+ char serial[30]; /* serial number (string) */
+ int error_status; /* set when something goes wrong with the cam (unplugged, USB errors) */
+ int usb_init; /* set when the cam has been initialized over USB */
+
+ /*** Video data ***/
+ int vopen; /* flag */
+ int vendpoint; /* video isoc endpoint */
+ int vcinterface; /* video control interface */
+ int valternate; /* alternate interface needed */
+ int vframes, vsize; /* frames-per-second & size (see PSZ_*) */
+ int vpalette; /* palette: 420P, RAW or RGBBAYER */
+ int vframe_count; /* received frames */
+ int vframes_dumped; /* counter for dumped frames */
+ int vframes_error; /* frames received in error */
+ int vmax_packet_size; /* USB maxpacket size */
+ int vlast_packet_size; /* for frame synchronisation */
+ int visoc_errors; /* number of contiguous ISOC errors */
+ int vcompression; /* desired compression factor */
+ int vbandlength; /* compressed band length; 0 is uncompressed */
+ char vsnapshot; /* snapshot mode */
+ char vsync; /* used by isoc handler */
+ char vmirror; /* for ToUCaM series */
+
+ int cmd_len;
+ unsigned char cmd_buf[13];
+
+ /* The image acquisition requires 3 to 4 steps:
+ 1. data is gathered in short packets from the USB controller
+ 2. data is synchronized and packed into a frame buffer
+ 3a. in case data is compressed, decompress it directly into image buffer
+ 3b. in case data is uncompressed, copy into image buffer with viewport
+ 4. data is transferred to the user process
+
+ Note that MAX_ISO_BUFS != MAX_FRAMES != MAX_IMAGES....
+ We have in effect a back-to-back-double-buffer system.
+ */
+ /* 1: isoc */
+ struct pwc_iso_buf sbuf[MAX_ISO_BUFS];
+ char iso_init;
+
+ /* 2: frame */
+ struct pwc_frame_buf *fbuf; /* all frames */
+ struct pwc_frame_buf *empty_frames, *empty_frames_tail; /* all empty frames */
+ struct pwc_frame_buf *full_frames, *full_frames_tail; /* all filled frames */
+ struct pwc_frame_buf *fill_frame; /* frame currently being filled */
+ struct pwc_frame_buf *read_frame; /* frame currently read by user process */
+ int frame_header_size, frame_trailer_size;
+ int frame_size;
+ int frame_total_size; /* including header & trailer */
+ int drop_frames;
+#if PWC_DEBUG
+ int sequence; /* Debugging aid */
+#endif
+
+ /* 3: decompression */
+ struct pwc_decompressor *decompressor; /* function block with decompression routines */
+ void *decompress_data; /* private data for decompression engine */
+
+ /* 4: image */
+ /* We have an 'image' and a 'view', where 'image' is the fixed-size image
+ as delivered by the camera, and 'view' is the size requested by the
+	   program. The camera image is centered in this viewport, padded
+	   with a gray or black border. view_min <= image <= view <= view_max;
+ */
+ int image_mask; /* bitmask of supported sizes */
+ struct pwc_coord view_min, view_max; /* minimum and maximum viewable sizes */
+ struct pwc_coord abs_max; /* maximum supported size with compression */
+ struct pwc_coord image, view; /* image and viewport size */
+ struct pwc_coord offset; /* offset within the viewport */
+
+ void *image_data; /* total buffer, which is subdivided into ... */
+ void *image_ptr[MAX_IMAGES]; /* ...several images... */
+ int fill_image; /* ...which are rotated. */
+ int len_per_image; /* length per image */
+	int image_read_pos;		/* In case we read data in pieces, keep track of where we are in the image buffer */
+ int image_used[MAX_IMAGES]; /* For MCAPTURE and SYNC */
+
+ struct semaphore modlock; /* to prevent races in video_open(), etc */
+ spinlock_t ptrlock; /* for manipulating the buffer pointers */
+
+ /*** motorized pan/tilt feature */
+ struct pwc_mpt_range angle_range;
+ int pan_angle; /* in degrees * 100 */
+ int tilt_angle; /* absolute angle; 0,0 is home position */
+
+ /*** Misc. data ***/
+ wait_queue_head_t frameq; /* When waiting for a frame to finish... */
+#if PWC_INT_PIPE
+ void *usb_int_handler; /* for the interrupt endpoint */
+#endif
+};
+
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/* Global variables */
+extern int pwc_trace;
+extern int pwc_preferred_compression;
+
+/** functions in pwc-if.c */
+int pwc_try_video_mode(struct pwc_device *pdev, int width, int height, int new_fps, int new_compression, int new_snapshot);
+
+/** Functions in pwc-misc.c */
+/* sizes in pixels */
+extern struct pwc_coord pwc_image_sizes[PSZ_MAX];
+
+int pwc_decode_size(struct pwc_device *pdev, int width, int height);
+void pwc_construct(struct pwc_device *pdev);
+
+/** Functions in pwc-ctrl.c */
+/* Request a certain video mode. Returns < 0 if not possible */
+extern int pwc_set_video_mode(struct pwc_device *pdev, int width, int height, int frames, int compression, int snapshot);
+/* Calculate the number of bytes per image (not frame) */
+extern void pwc_set_image_buffer_size(struct pwc_device *pdev);
+
+/* Various controls; should be obvious. Value 0..65535, or < 0 on error */
+extern int pwc_get_brightness(struct pwc_device *pdev);
+extern int pwc_set_brightness(struct pwc_device *pdev, int value);
+extern int pwc_get_contrast(struct pwc_device *pdev);
+extern int pwc_set_contrast(struct pwc_device *pdev, int value);
+extern int pwc_get_gamma(struct pwc_device *pdev);
+extern int pwc_set_gamma(struct pwc_device *pdev, int value);
+extern int pwc_get_saturation(struct pwc_device *pdev);
+extern int pwc_set_saturation(struct pwc_device *pdev, int value);
+extern int pwc_set_leds(struct pwc_device *pdev, int on_value, int off_value);
+extern int pwc_get_leds(struct pwc_device *pdev, int *on_value, int *off_value);
+extern int pwc_get_cmos_sensor(struct pwc_device *pdev, int *sensor);
+
+/* Power down or up the camera; not supported by all models */
+extern int pwc_camera_power(struct pwc_device *pdev, int power);
+
+/* Private ioctl()s; see pwc-ioctl.h */
+extern int pwc_ioctl(struct pwc_device *pdev, unsigned int cmd, void *arg);
+
+
+/** pwc-uncompress.c */
+/* Expand frame to image, possibly including decompression. Uses read_frame and fill_image */
+extern int pwc_decompress(struct pwc_device *pdev);
+
+#ifdef __cplusplus
+}
+#endif
+
+
+#endif
--- /dev/null
+/***************************************************************************
+ * V4L2 driver for SN9C10x PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#ifndef _SN9C102_H_
+#define _SN9C102_H_
+
+#include <linux/version.h>
+#include <linux/usb.h>
+#include <linux/videodev.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/spinlock.h>
+#include <linux/time.h>
+#include <linux/wait.h>
+#include <linux/types.h>
+#include <linux/param.h>
+#include <linux/rwsem.h>
+#include <asm/semaphore.h>
+
+#include "sn9c102_sensor.h"
+
+/*****************************************************************************/
+
+#define SN9C102_DEBUG
+#define SN9C102_DEBUG_LEVEL 2
+#define SN9C102_MAX_DEVICES 64
+#define SN9C102_PRESERVE_IMGSCALE 0
+#define SN9C102_MAX_FRAMES 32
+#define SN9C102_URBS 2
+#define SN9C102_ISO_PACKETS 7
+#define SN9C102_ALTERNATE_SETTING 8
+#define SN9C102_URB_TIMEOUT msecs_to_jiffies(3)
+#define SN9C102_CTRL_TIMEOUT msecs_to_jiffies(100)
+
+/*****************************************************************************/
+
+#define SN9C102_MODULE_NAME "V4L2 driver for SN9C10x PC Camera Controllers"
+#define SN9C102_MODULE_AUTHOR "(C) 2004 Luca Risolia"
+#define SN9C102_AUTHOR_EMAIL "<luca.risolia@studio.unibo.it>"
+#define SN9C102_MODULE_LICENSE "GPL"
+#define SN9C102_MODULE_VERSION "1:1.19"
+#define SN9C102_MODULE_VERSION_CODE KERNEL_VERSION(1, 0, 19)
+
+enum sn9c102_bridge {
+ BRIDGE_SN9C101 = 0x01,
+ BRIDGE_SN9C102 = 0x02,
+ BRIDGE_SN9C103 = 0x04,
+};
+
+SN9C102_ID_TABLE
+SN9C102_SENSOR_TABLE
+
+enum sn9c102_frame_state {
+ F_UNUSED,
+ F_QUEUED,
+ F_GRABBING,
+ F_DONE,
+ F_ERROR,
+};
+
+struct sn9c102_frame_t {
+ void* bufmem;
+ struct v4l2_buffer buf;
+ enum sn9c102_frame_state state;
+ struct list_head frame;
+ unsigned long vma_use_count;
+};
+
+enum sn9c102_dev_state {
+ DEV_INITIALIZED = 0x01,
+ DEV_DISCONNECTED = 0x02,
+ DEV_MISCONFIGURED = 0x04,
+};
+
+enum sn9c102_io_method {
+ IO_NONE,
+ IO_READ,
+ IO_MMAP,
+};
+
+enum sn9c102_stream_state {
+ STREAM_OFF,
+ STREAM_INTERRUPT,
+ STREAM_ON,
+};
+
+struct sn9c102_sysfs_attr {
+ u8 reg, i2c_reg;
+};
+
+static DECLARE_MUTEX(sn9c102_sysfs_lock);
+static DECLARE_RWSEM(sn9c102_disconnect);
+
+struct sn9c102_device {
+ struct device dev;
+
+ struct video_device* v4ldev;
+
+ enum sn9c102_bridge bridge;
+ struct sn9c102_sensor* sensor;
+
+ struct usb_device* usbdev;
+ struct urb* urb[SN9C102_URBS];
+ void* transfer_buffer[SN9C102_URBS];
+ u8* control_buffer;
+
+ struct sn9c102_frame_t *frame_current, frame[SN9C102_MAX_FRAMES];
+ struct list_head inqueue, outqueue;
+ u32 frame_count, nbuffers, nreadbuffers;
+
+ enum sn9c102_io_method io;
+ enum sn9c102_stream_state stream;
+
+ struct v4l2_jpegcompression compression;
+
+ struct sn9c102_sysfs_attr sysfs;
+ u16 reg[32];
+
+ enum sn9c102_dev_state state;
+ u8 users;
+
+ struct semaphore dev_sem, fileop_sem;
+ spinlock_t queue_lock;
+ wait_queue_head_t open, wait_frame, wait_stream;
+};
+
+/*****************************************************************************/
+
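+/* Bind a detected image sensor to the camera structure so that the I2C
+   and sysfs helpers below can reach it through cam->sensor. */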
+static inline void
+sn9c102_attach_sensor(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ cam->sensor = sensor;
+ cam->sensor->dev = &cam->dev;
+ cam->sensor->usbdev = cam->usbdev;
+}
+
+/*****************************************************************************/
+
+#undef DBG
+#undef KDBG
+#ifdef SN9C102_DEBUG
+# define DBG(level, fmt, args...) \
+{ \
+ if (debug >= (level)) { \
+ if ((level) == 1) \
+ dev_err(&cam->dev, fmt "\n", ## args); \
+ else if ((level) == 2) \
+ dev_info(&cam->dev, fmt "\n", ## args); \
+ else if ((level) >= 3) \
+ dev_info(&cam->dev, "[%s:%d] " fmt "\n", \
+ __FUNCTION__, __LINE__ , ## args); \
+ } \
+}
+# define KDBG(level, fmt, args...) \
+{ \
+ if (debug >= (level)) { \
+ if ((level) == 1 || (level) == 2) \
+ pr_info("sn9c102: " fmt "\n", ## args); \
+ else if ((level) == 3) \
+ pr_debug("sn9c102: [%s:%d] " fmt "\n", __FUNCTION__, \
+ __LINE__ , ## args); \
+ } \
+}
+#else
+# define KDBG(level, fmt, args...) do {;} while(0);
+# define DBG(level, fmt, args...) do {;} while(0);
+#endif
+
+#undef PDBG
+#define PDBG(fmt, args...) \
+dev_info(&cam->dev, "[%s:%d] " fmt "\n", __FUNCTION__, __LINE__ , ## args);
+
+#undef PDBGG
+#define PDBGG(fmt, args...) do {;} while(0); /* placeholder */
+
+#endif /* _SN9C102_H_ */
--- /dev/null
+/***************************************************************************
+ * V4L2 driver for SN9C10x PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/param.h>
+#include <linux/moduleparam.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/device.h>
+#include <linux/fs.h>
+#include <linux/delay.h>
+#include <linux/stddef.h>
+#include <linux/compiler.h>
+#include <linux/ioctl.h>
+#include <linux/poll.h>
+#include <linux/stat.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/page-flags.h>
+#include <asm/page.h>
+#include <asm/uaccess.h>
+
+#include "sn9c102.h"
+
+/*****************************************************************************/
+
+MODULE_DEVICE_TABLE(usb, sn9c102_id_table);
+
+MODULE_AUTHOR(SN9C102_MODULE_AUTHOR " " SN9C102_AUTHOR_EMAIL);
+MODULE_DESCRIPTION(SN9C102_MODULE_NAME);
+MODULE_VERSION(SN9C102_MODULE_VERSION);
+MODULE_LICENSE(SN9C102_MODULE_LICENSE);
+
+static short video_nr[] = {[0 ... SN9C102_MAX_DEVICES-1] = -1};
+module_param_array(video_nr, short, NULL, 0444);
+MODULE_PARM_DESC(video_nr,
+ "\n<-1|n[,...]> Specify V4L2 minor mode number."
+ "\n -1 = use next available (default)"
+ "\n n = use minor number n (integer >= 0)"
+ "\nYou can specify up to "__MODULE_STRING(SN9C102_MAX_DEVICES)
+ " cameras this way."
+ "\nFor example:"
+ "\nvideo_nr=-1,2,-1 would assign minor number 2 to"
+ "\nthe second camera and use auto for the first"
+ "\none and for every other camera."
+ "\n");
+
+#ifdef SN9C102_DEBUG
+static unsigned short debug = SN9C102_DEBUG_LEVEL;
+module_param(debug, ushort, 0644);
+MODULE_PARM_DESC(debug,
+ "\n<n> Debugging information level, from 0 to 3:"
+ "\n0 = none (use carefully)"
+ "\n1 = critical errors"
+ "\n2 = significant informations"
+ "\n3 = more verbose messages"
+ "\nLevel 3 is useful for testing only, when only "
+ "one device is used."
+ "\nDefault value is "__MODULE_STRING(SN9C102_DEBUG_LEVEL)"."
+ "\n");
+#endif
+
+/*****************************************************************************/
+
+typedef char sn9c102_sof_header_t[12];
+typedef char sn9c102_eof_header_t[4];
+
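+/* Start-of-frame and end-of-frame marker sequences emitted by the SN9C10x
+   bridge. Frames are delimited by scanning the isochronous data for these
+   patterns; only the first 7 bytes of a SOF header are significant. */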
+static sn9c102_sof_header_t sn9c102_sof_header[] = {
+ {0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x00},
+ {0xff, 0xff, 0x00, 0xc4, 0xc4, 0x96, 0x01},
+};
+
+
+static sn9c102_eof_header_t sn9c102_eof_header[] = {
+ {0x00, 0x00, 0x00, 0x00},
+ {0x40, 0x00, 0x00, 0x00},
+ {0x80, 0x00, 0x00, 0x00},
+ {0xc0, 0x00, 0x00, 0x00},
+};
+
+/*****************************************************************************/
+
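+/* Allocate a zeroed, page-aligned buffer with vmalloc_32() and mark every
+   page as reserved so that it can later be remapped into user space by
+   sn9c102_mmap(). */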
+static void* rvmalloc(size_t size)
+{
+ void* mem;
+ unsigned long adr;
+
+ size = PAGE_ALIGN(size);
+
+ mem = vmalloc_32((unsigned long)size);
+ if (!mem)
+ return NULL;
+
+ memset(mem, 0, size);
+
+ adr = (unsigned long)mem;
+ while (size > 0) {
+ SetPageReserved(vmalloc_to_page((void *)adr));
+ adr += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ return mem;
+}
+
+
+static void rvfree(void* mem, size_t size)
+{
+ unsigned long adr;
+
+ if (!mem)
+ return;
+
+ size = PAGE_ALIGN(size);
+
+ adr = (unsigned long)mem;
+ while (size > 0) {
+ ClearPageReserved(vmalloc_to_page((void *)adr));
+ adr += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ vfree(mem);
+}
+
+
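+/* Allocate up to 'count' frame buffers (capped at SN9C102_MAX_FRAMES) in a
+   single vmalloc'ed chunk, shrinking the request on allocation failure;
+   returns the number of buffers actually obtained. */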
+static u32 sn9c102_request_buffers(struct sn9c102_device* cam, u32 count)
+{
+ struct v4l2_pix_format* p = &(cam->sensor->pix_format);
+ const size_t imagesize = (p->width * p->height * p->priv)/8;
+ void* buff = NULL;
+ u32 i;
+
+ if (count > SN9C102_MAX_FRAMES)
+ count = SN9C102_MAX_FRAMES;
+
+ cam->nbuffers = count;
+ while (cam->nbuffers > 0) {
+ if ((buff = rvmalloc(cam->nbuffers * PAGE_ALIGN(imagesize))))
+ break;
+ cam->nbuffers--;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++) {
+ cam->frame[i].bufmem = buff + i*PAGE_ALIGN(imagesize);
+ cam->frame[i].buf.index = i;
+ cam->frame[i].buf.m.offset = i*PAGE_ALIGN(imagesize);
+ cam->frame[i].buf.length = imagesize;
+ cam->frame[i].buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ cam->frame[i].buf.sequence = 0;
+ cam->frame[i].buf.field = V4L2_FIELD_NONE;
+ cam->frame[i].buf.memory = V4L2_MEMORY_MMAP;
+ cam->frame[i].buf.flags = 0;
+ }
+
+ return cam->nbuffers;
+}
+
+
+static void sn9c102_release_buffers(struct sn9c102_device* cam)
+{
+ if (cam->nbuffers) {
+ rvfree(cam->frame[0].bufmem,
+ cam->nbuffers * cam->frame[0].buf.length);
+ cam->nbuffers = 0;
+ }
+}
+
+
+static void sn9c102_empty_framequeues(struct sn9c102_device* cam)
+{
+ u32 i;
+
+ INIT_LIST_HEAD(&cam->inqueue);
+ INIT_LIST_HEAD(&cam->outqueue);
+
+ for (i = 0; i < SN9C102_MAX_FRAMES; i++) {
+ cam->frame[i].state = F_UNUSED;
+ cam->frame[i].buf.bytesused = 0;
+ }
+}
+
+
+static void sn9c102_queue_unusedframes(struct sn9c102_device* cam)
+{
+ unsigned long lock_flags;
+ u32 i;
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].state == F_UNUSED) {
+ cam->frame[i].state = F_QUEUED;
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_add_tail(&cam->frame[i].frame, &cam->inqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+ }
+}
+
+/*****************************************************************************/
+
+int sn9c102_write_reg(struct sn9c102_device* cam, u8 value, u16 index)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* buff = cam->control_buffer;
+ int res;
+
+ *buff = value;
+
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ index, 0, buff, 1, SN9C102_CTRL_TIMEOUT);
+ if (res < 0) {
+ DBG(3, "Failed to write a register (value 0x%02X, index "
+ "0x%02X, error %d)", value, index, res)
+ return -1;
+ }
+
+ cam->reg[index] = value;
+
+ return 0;
+}
+
+
+/* NOTE: reading some registers always returns 0 */
+static int sn9c102_read_reg(struct sn9c102_device* cam, u16 index)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* buff = cam->control_buffer;
+ int res;
+
+ res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x00, 0xc1,
+ index, 0, buff, 1, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ DBG(3, "Failed to read a register (index 0x%02X, error %d)",
+ index, res)
+
+ return (res >= 0) ? (int)(*buff) : -1;
+}
+
+
+int sn9c102_pread_reg(struct sn9c102_device* cam, u16 index)
+{
+ if (index > 0x1f)
+ return -EINVAL;
+
+ return cam->reg[index];
+}
+
+
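+/* Poll the bridge status register 0x08 until the pending I2C transaction
+   completes (bit 2 set); the polling delay is scaled to the sensor's bus
+   frequency. Returns -EBUSY after five unsuccessful attempts. */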
+static int
+sn9c102_i2c_wait(struct sn9c102_device* cam, struct sn9c102_sensor* sensor)
+{
+ int i, r;
+
+ for (i = 1; i <= 5; i++) {
+ r = sn9c102_read_reg(cam, 0x08);
+ if (r < 0)
+ return -EIO;
+ if (r & 0x04)
+ return 0;
+ if (sensor->frequency & SN9C102_I2C_400KHZ)
+ udelay(5*8);
+ else
+ udelay(16*8);
+ }
+ return -EBUSY;
+}
+
+
+static int
+sn9c102_i2c_detect_read_error(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ int r;
+ r = sn9c102_read_reg(cam, 0x08);
+	return (r < 0 || !(r & 0x08)) ? -EIO : 0;
+}
+
+
+static int
+sn9c102_i2c_detect_write_error(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor)
+{
+ int r;
+ r = sn9c102_read_reg(cam, 0x08);
+	return (r < 0 || (r & 0x08)) ? -EIO : 0;
+}
+
+
+int
+sn9c102_i2c_try_read(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 address)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* data = cam->control_buffer;
+ int err = 0, res;
+
+ /* Write cycle - address */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) | 0x10;
+ data[1] = sensor->slave_write_id;
+ data[2] = address;
+ data[7] = 0x10;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+
+ /* Read cycle - 1 byte */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0) |
+ 0x10 | 0x02;
+ data[1] = sensor->slave_read_id;
+ data[7] = 0x10;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+
+ /* The read byte will be placed in data[4] */
+ res = usb_control_msg(udev, usb_rcvctrlpipe(udev, 0), 0x00, 0xc1,
+ 0x0a, 0, data, 5, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_detect_read_error(cam, sensor);
+
+ if (err)
+ DBG(3, "I2C read failed for %s image sensor", sensor->name)
+
+ PDBGG("I2C read: address 0x%02X, value: 0x%02X", address, data[4])
+
+ return err ? -1 : (int)data[4];
+}
+
+
+int
+sn9c102_i2c_try_raw_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 n, u8 data0,
+ u8 data1, u8 data2, u8 data3, u8 data4, u8 data5)
+{
+ struct usb_device* udev = cam->usbdev;
+ u8* data = cam->control_buffer;
+ int err = 0, res;
+
+ /* Write cycle. It usually is address + value */
+ data[0] = ((sensor->interface == SN9C102_I2C_2WIRES) ? 0x80 : 0) |
+ ((sensor->frequency & SN9C102_I2C_400KHZ) ? 0x01 : 0)
+ | ((n - 1) << 4);
+ data[1] = data0;
+ data[2] = data1;
+ data[3] = data2;
+ data[4] = data3;
+ data[5] = data4;
+ data[6] = data5;
+ data[7] = 0x14;
+ res = usb_control_msg(udev, usb_sndctrlpipe(udev, 0), 0x08, 0x41,
+ 0x08, 0, data, 8, SN9C102_CTRL_TIMEOUT);
+ if (res < 0)
+ err += res;
+
+ err += sn9c102_i2c_wait(cam, sensor);
+ err += sn9c102_i2c_detect_write_error(cam, sensor);
+
+ if (err)
+ DBG(3, "I2C write failed for %s image sensor", sensor->name)
+
+ PDBGG("I2C raw write: %u bytes, data0 = 0x%02X, data1 = 0x%02X, "
+ "data2 = 0x%02X, data3 = 0x%02X, data4 = 0x%02X, data5 = 0x%02X",
+ n, data0, data1, data2, data3, data4, data5)
+
+ return err ? -1 : 0;
+}
+
+
+int
+sn9c102_i2c_try_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 address, u8 value)
+{
+ return sn9c102_i2c_try_raw_write(cam, sensor, 3,
+ sensor->slave_write_id, address,
+ value, 0, 0, 0);
+}
+
+
+int sn9c102_i2c_read(struct sn9c102_device* cam, u8 address)
+{
+ if (!cam->sensor)
+ return -1;
+
+ return sn9c102_i2c_try_read(cam, cam->sensor, address);
+}
+
+
+int sn9c102_i2c_write(struct sn9c102_device* cam, u8 address, u8 value)
+{
+ if (!cam->sensor)
+ return -1;
+
+ return sn9c102_i2c_try_write(cam, cam->sensor, address, value);
+}
+
+/*****************************************************************************/
+
+static void*
+sn9c102_find_sof_header(struct sn9c102_device* cam, void* mem, size_t len)
+{
+ size_t soflen = sizeof(sn9c102_sof_header_t), i;
+ u8 j, n = sizeof(sn9c102_sof_header) / soflen;
+
+ for (i = 0; (len >= soflen) && (i <= len - soflen); i++)
+ for (j = 0; j < n; j++)
+ /* It's enough to compare 7 bytes */
+ if (!memcmp(mem + i, sn9c102_sof_header[j], 7))
+ /* Skips the header */
+ return mem + i + soflen;
+
+ return NULL;
+}
+
+
+static void*
+sn9c102_find_eof_header(struct sn9c102_device* cam, void* mem, size_t len)
+{
+ size_t eoflen = sizeof(sn9c102_eof_header_t), i;
+ unsigned j, n = sizeof(sn9c102_eof_header) / eoflen;
+
+ if (cam->sensor->pix_format.pixelformat == V4L2_PIX_FMT_SN9C10X)
+ return NULL; /* EOF header does not exist in compressed data */
+
+ for (i = 0; (len >= eoflen) && (i <= len - eoflen); i++)
+ for (j = 0; j < n; j++)
+ if (!memcmp(mem + i, sn9c102_eof_header[j], eoflen))
+ return mem + i;
+
+ return NULL;
+}
+
+
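+/* Isochronous completion handler: scan each packet for SOF/EOF markers,
+   accumulate image data into the current frame buffer, move completed
+   frames from the in-queue to the out-queue and resubmit the URB. */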
+static void sn9c102_urb_complete(struct urb *urb, struct pt_regs* regs)
+{
+ struct sn9c102_device* cam = urb->context;
+ struct sn9c102_frame_t** f;
+ unsigned long lock_flags;
+ u8 i;
+ int err = 0;
+
+ if (urb->status == -ENOENT)
+ return;
+
+ f = &cam->frame_current;
+
+ if (cam->stream == STREAM_INTERRUPT) {
+ cam->stream = STREAM_OFF;
+ if ((*f))
+ (*f)->state = F_QUEUED;
+ DBG(3, "Stream interrupted")
+ wake_up_interruptible(&cam->wait_stream);
+ }
+
+	if ((cam->state & DEV_DISCONNECTED) || (cam->state & DEV_MISCONFIGURED))
+ return;
+
+ if (cam->stream == STREAM_OFF || list_empty(&cam->inqueue))
+ goto resubmit_urb;
+
+ if (!(*f))
+ (*f) = list_entry(cam->inqueue.next, struct sn9c102_frame_t,
+ frame);
+
+ for (i = 0; i < urb->number_of_packets; i++) {
+ unsigned int img, len, status;
+ void *pos, *sof, *eof;
+
+ len = urb->iso_frame_desc[i].actual_length;
+ status = urb->iso_frame_desc[i].status;
+ pos = urb->iso_frame_desc[i].offset + urb->transfer_buffer;
+
+ if (status) {
+ DBG(3, "Error in isochronous frame")
+ (*f)->state = F_ERROR;
+ continue;
+ }
+
+ PDBGG("Isochrnous frame: length %u, #%u i", len, i)
+
+ /*
+ NOTE: It is probably correct to assume that SOF and EOF
+ headers do not occur between two consecutive packets,
+		         but who knows. Whatever the truth is, this assumption
+		         doesn't introduce bugs.
+ */
+
+redo:
+ sof = sn9c102_find_sof_header(cam, pos, len);
+ if (!sof) {
+ eof = sn9c102_find_eof_header(cam, pos, len);
+ if ((*f)->state == F_GRABBING) {
+end_of_frame:
+ img = len;
+
+ if (eof)
+ img = (eof > pos) ? eof - pos - 1 : 0;
+
+ if ((*f)->buf.bytesused+img>(*f)->buf.length) {
+ u32 b = (*f)->buf.bytesused + img -
+ (*f)->buf.length;
+ img = (*f)->buf.length -
+ (*f)->buf.bytesused;
+ DBG(3, "Expected EOF not found: "
+ "video frame cut")
+ if (eof)
+ DBG(3, "Exceeded limit: +%u "
+ "bytes", (unsigned)(b))
+ }
+
+ memcpy((*f)->bufmem + (*f)->buf.bytesused, pos,
+ img);
+
+ if ((*f)->buf.bytesused == 0)
+ do_gettimeofday(&(*f)->buf.timestamp);
+
+ (*f)->buf.bytesused += img;
+
+ if ((*f)->buf.bytesused == (*f)->buf.length ||
+ (cam->sensor->pix_format.pixelformat ==
+ V4L2_PIX_FMT_SN9C10X && eof)) {
+ u32 b = (*f)->buf.bytesused;
+ (*f)->state = F_DONE;
+ (*f)->buf.sequence= ++cam->frame_count;
+ spin_lock_irqsave(&cam->queue_lock,
+ lock_flags);
+ list_move_tail(&(*f)->frame,
+ &cam->outqueue);
+ if (!list_empty(&cam->inqueue))
+ (*f) = list_entry(
+ cam->inqueue.next,
+ struct sn9c102_frame_t,
+ frame );
+ else
+ (*f) = NULL;
+ spin_unlock_irqrestore(&cam->queue_lock
+ , lock_flags);
+ DBG(3, "Video frame captured: "
+ "%lu bytes", (unsigned long)(b))
+
+ if (!(*f))
+ goto resubmit_urb;
+
+ } else if (eof) {
+ (*f)->state = F_ERROR;
+ DBG(3, "Not expected EOF after %lu "
+ "bytes of image data",
+ (unsigned long)((*f)->buf.bytesused))
+ }
+
+ if (sof) /* (1) */
+ goto start_of_frame;
+
+ } else if (eof) {
+ DBG(3, "EOF without SOF")
+ continue;
+
+ } else {
+ PDBGG("Ignoring pointless isochronous frame")
+ continue;
+ }
+
+ } else if ((*f)->state == F_QUEUED || (*f)->state == F_ERROR) {
+start_of_frame:
+ (*f)->state = F_GRABBING;
+ (*f)->buf.bytesused = 0;
+ len -= (sof - pos);
+ pos = sof;
+ DBG(3, "SOF detected: new video frame")
+ if (len)
+ goto redo;
+
+ } else if ((*f)->state == F_GRABBING) {
+ eof = sn9c102_find_eof_header(cam, pos, len);
+ if (eof && eof < sof)
+ goto end_of_frame; /* (1) */
+ else {
+ if (cam->sensor->pix_format.pixelformat ==
+ V4L2_PIX_FMT_SN9C10X) {
+ eof = sof-sizeof(sn9c102_sof_header_t);
+ goto end_of_frame;
+ } else {
+ DBG(3, "SOF before expected EOF after "
+ "%lu bytes of image data",
+ (unsigned long)((*f)->buf.bytesused))
+ goto start_of_frame;
+ }
+ }
+ }
+ }
+
+resubmit_urb:
+ urb->dev = cam->usbdev;
+ err = usb_submit_urb(urb, GFP_ATOMIC);
+ if (err < 0 && err != -EPERM) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "usb_submit_urb() failed")
+ }
+
+ wake_up_interruptible(&cam->wait_frame);
+}
+
+
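+/* Allocate the transfer buffers and isochronous URBs, make sure the video
+   enable bit of register 0x01 is set, select the streaming alternate
+   setting and submit the URBs. */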
+static int sn9c102_start_transfer(struct sn9c102_device* cam)
+{
+ struct usb_device *udev = cam->usbdev;
+ struct urb* urb;
+ const unsigned int wMaxPacketSize[] = {0, 128, 256, 384, 512,
+ 680, 800, 900, 1023};
+ const unsigned int psz = wMaxPacketSize[SN9C102_ALTERNATE_SETTING];
+ s8 i, j;
+ int err = 0;
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ cam->transfer_buffer[i] = kmalloc(SN9C102_ISO_PACKETS * psz,
+ GFP_KERNEL);
+ if (!cam->transfer_buffer[i]) {
+ err = -ENOMEM;
+ DBG(1, "Not enough memory")
+ goto free_buffers;
+ }
+ }
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ urb = usb_alloc_urb(SN9C102_ISO_PACKETS, GFP_KERNEL);
+ cam->urb[i] = urb;
+ if (!urb) {
+ err = -ENOMEM;
+ DBG(1, "usb_alloc_urb() failed")
+ goto free_urbs;
+ }
+ urb->dev = udev;
+ urb->context = cam;
+ urb->pipe = usb_rcvisocpipe(udev, 1);
+ urb->transfer_flags = URB_ISO_ASAP;
+ urb->number_of_packets = SN9C102_ISO_PACKETS;
+ urb->complete = sn9c102_urb_complete;
+ urb->transfer_buffer = cam->transfer_buffer[i];
+ urb->transfer_buffer_length = psz * SN9C102_ISO_PACKETS;
+ urb->interval = 1;
+ for (j = 0; j < SN9C102_ISO_PACKETS; j++) {
+ urb->iso_frame_desc[j].offset = psz * j;
+ urb->iso_frame_desc[j].length = psz;
+ }
+ }
+
+ /* Enable video */
+ if (!(cam->reg[0x01] & 0x04)) {
+ err = sn9c102_write_reg(cam, cam->reg[0x01] | 0x04, 0x01);
+ if (err) {
+ err = -EIO;
+ DBG(1, "I/O hardware error")
+ goto free_urbs;
+ }
+ }
+
+ err = usb_set_interface(udev, 0, SN9C102_ALTERNATE_SETTING);
+ if (err) {
+ DBG(1, "usb_set_interface() failed")
+ goto free_urbs;
+ }
+
+ cam->frame_current = NULL;
+
+ for (i = 0; i < SN9C102_URBS; i++) {
+ err = usb_submit_urb(cam->urb[i], GFP_KERNEL);
+ if (err) {
+ for (j = i-1; j >= 0; j--)
+ usb_kill_urb(cam->urb[j]);
+ DBG(1, "usb_submit_urb() failed, error %d", err)
+ goto free_urbs;
+ }
+ }
+
+ return 0;
+
+free_urbs:
+ for (i = 0; (i < SN9C102_URBS) && cam->urb[i]; i++)
+ usb_free_urb(cam->urb[i]);
+
+free_buffers:
+ for (i = 0; (i < SN9C102_URBS) && cam->transfer_buffer[i]; i++)
+ kfree(cam->transfer_buffer[i]);
+
+ return err;
+}
+
+
+static int sn9c102_stop_transfer(struct sn9c102_device* cam)
+{
+ struct usb_device *udev = cam->usbdev;
+ s8 i;
+ int err = 0;
+
+ if (cam->state & DEV_DISCONNECTED)
+ return 0;
+
+ for (i = SN9C102_URBS-1; i >= 0; i--) {
+ usb_kill_urb(cam->urb[i]);
+ usb_free_urb(cam->urb[i]);
+ kfree(cam->transfer_buffer[i]);
+ }
+
+ err = usb_set_interface(udev, 0, 0); /* 0 Mb/s */
+ if (err)
+ DBG(3, "usb_set_interface() failed")
+
+ return err;
+}
+
+
+int sn9c102_stream_interrupt(struct sn9c102_device* cam)
+{
+	long timeout;
+
+	cam->stream = STREAM_INTERRUPT;
+	timeout = wait_event_timeout(cam->wait_stream,
+	                             (cam->stream == STREAM_OFF) ||
+	                             (cam->state & DEV_DISCONNECTED),
+	                             SN9C102_URB_TIMEOUT);
+
+	if (cam->state & DEV_DISCONNECTED)
+		return -ENODEV;
+	else if (cam->stream != STREAM_OFF) {
+		/* wait_event_timeout() returns 0 on timeout, so getting here
+		   means the URB handler never acknowledged the interrupt
+		   request within SN9C102_URB_TIMEOUT */
+		cam->state |= DEV_MISCONFIGURED;
+		DBG(1, "URB timeout reached. The camera is misconfigured. To "
+		       "use it, close and open /dev/video%d again.",
+		    cam->v4ldev->minor)
+		return -EIO;
+	}
+
+	return 0;
+}
+
+/*****************************************************************************/
+
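+/* Parse at most the first four characters of a sysfs buffer as an unsigned
+   8-bit value; *count receives the number of characters consumed, or 0 on
+   malformed or out-of-range input. */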
+static u8 sn9c102_strtou8(const char* buff, size_t len, ssize_t* count)
+{
+ char str[5];
+ char* endp;
+ unsigned long val;
+
+	if (len < 4) {
+		strncpy(str, buff, len);
+		str[len] = '\0';
+	} else {
+ strncpy(str, buff, 4);
+ str[4] = '\0';
+ }
+
+ val = simple_strtoul(str, &endp, 0);
+
+ *count = 0;
+ if (val <= 0xff)
+ *count = (ssize_t)(endp - str);
+ if ((*count) && (len == *count+1) && (buff[*count] == '\n'))
+ *count += 1;
+
+ return (u8)val;
+}
+
+/*
+ NOTE 1: being inside one of the following methods implies that the v4l
+ device exists for sure (see kobjects and reference counters)
+ NOTE 2: buffers are PAGE_SIZE long
+*/
+
+static ssize_t sn9c102_show_reg(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ count = sprintf(buf, "%u\n", cam->sysfs.reg);
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_reg(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 index;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ index = sn9c102_strtou8(buf, len, &count);
+ if (index > 0x1f || !count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ cam->sysfs.reg = index;
+
+ DBG(2, "Moved SN9C10X register index to 0x%02X", cam->sysfs.reg)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_val(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+ int val;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ if ((val = sn9c102_read_reg(cam, cam->sysfs.reg)) < 0) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ count = sprintf(buf, "%d\n", val);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_val(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 value;
+ ssize_t count;
+ int err;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ err = sn9c102_write_reg(cam, value, cam->sysfs.reg);
+ if (err) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ DBG(2, "Written SN9C10X reg. 0x%02X, val. 0x%02X",
+ cam->sysfs.reg, value)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_i2c_reg(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ count = sprintf(buf, "%u\n", cam->sysfs.i2c_reg);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_i2c_reg(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 index;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ index = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ cam->sysfs.i2c_reg = index;
+
+ DBG(2, "Moved sensor register index to 0x%02X", cam->sysfs.i2c_reg)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t sn9c102_show_i2c_val(struct class_device* cd, char* buf)
+{
+ struct sn9c102_device* cam;
+ ssize_t count;
+ int val;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ if (cam->sensor->slave_read_id == SN9C102_I2C_SLAVEID_UNAVAILABLE) {
+ up(&sn9c102_sysfs_lock);
+ return -ENOSYS;
+ }
+
+ if ((val = sn9c102_i2c_read(cam, cam->sysfs.i2c_reg)) < 0) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ count = sprintf(buf, "%d\n", val);
+
+ DBG(3, "Read bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_i2c_val(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ u8 value;
+ ssize_t count;
+ int err;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count) {
+ up(&sn9c102_sysfs_lock);
+ return -EINVAL;
+ }
+
+ err = sn9c102_i2c_write(cam, cam->sysfs.i2c_reg, value);
+ if (err) {
+ up(&sn9c102_sysfs_lock);
+ return -EIO;
+ }
+
+ DBG(2, "Written sensor reg. 0x%02X, val. 0x%02X",
+ cam->sysfs.i2c_reg, value)
+ DBG(3, "Written bytes: %zd", count)
+
+ up(&sn9c102_sysfs_lock);
+
+ return count;
+}
+
+
+static ssize_t
+sn9c102_store_green(struct class_device* cd, const char* buf, size_t len)
+{
+ struct sn9c102_device* cam;
+ enum sn9c102_bridge bridge;
+ ssize_t res = 0;
+ u8 value;
+ ssize_t count;
+
+ if (down_interruptible(&sn9c102_sysfs_lock))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(to_video_device(cd));
+ if (!cam) {
+ up(&sn9c102_sysfs_lock);
+ return -ENODEV;
+ }
+
+ bridge = cam->bridge;
+
+ up(&sn9c102_sysfs_lock);
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count)
+ return -EINVAL;
+
+ switch (bridge) {
+ case BRIDGE_SN9C101:
+ case BRIDGE_SN9C102:
+ if (value > 0x0f)
+ return -EINVAL;
+ if ((res = sn9c102_store_reg(cd, "0x11", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+ break;
+ case BRIDGE_SN9C103:
+ if (value > 0x7f)
+ return -EINVAL;
+ if ((res = sn9c102_store_reg(cd, "0x04", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+ break;
+ }
+
+ return res;
+}
+
+
+static ssize_t
+sn9c102_store_blue(struct class_device* cd, const char* buf, size_t len)
+{
+ ssize_t res = 0;
+ u8 value;
+ ssize_t count;
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count || value > 0x7f)
+ return -EINVAL;
+
+ if ((res = sn9c102_store_reg(cd, "0x06", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+
+ return res;
+}
+
+
+static ssize_t
+sn9c102_store_red(struct class_device* cd, const char* buf, size_t len)
+{
+ ssize_t res = 0;
+ u8 value;
+ ssize_t count;
+
+ value = sn9c102_strtou8(buf, len, &count);
+ if (!count || value > 0x7f)
+ return -EINVAL;
+
+ if ((res = sn9c102_store_reg(cd, "0x05", 4)) >= 0)
+ res = sn9c102_store_val(cd, buf, len);
+
+ return res;
+}
+
+
+static CLASS_DEVICE_ATTR(reg, S_IRUGO | S_IWUSR,
+ sn9c102_show_reg, sn9c102_store_reg);
+static CLASS_DEVICE_ATTR(val, S_IRUGO | S_IWUSR,
+ sn9c102_show_val, sn9c102_store_val);
+static CLASS_DEVICE_ATTR(i2c_reg, S_IRUGO | S_IWUSR,
+ sn9c102_show_i2c_reg, sn9c102_store_i2c_reg);
+static CLASS_DEVICE_ATTR(i2c_val, S_IRUGO | S_IWUSR,
+ sn9c102_show_i2c_val, sn9c102_store_i2c_val);
+static CLASS_DEVICE_ATTR(green, S_IWUGO, NULL, sn9c102_store_green);
+static CLASS_DEVICE_ATTR(blue, S_IWUGO, NULL, sn9c102_store_blue);
+static CLASS_DEVICE_ATTR(red, S_IWUGO, NULL, sn9c102_store_red);
+
+
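+/* Export the register-access attributes in the V4L device's sysfs
+   directory; the colour-gain files depend on the bridge type, while the
+   i2c files are created only if the sensor exposes valid slave ids. */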
+static void sn9c102_create_sysfs(struct sn9c102_device* cam)
+{
+ struct video_device *v4ldev = cam->v4ldev;
+
+ video_device_create_file(v4ldev, &class_device_attr_reg);
+ video_device_create_file(v4ldev, &class_device_attr_val);
+ if (cam->bridge == BRIDGE_SN9C101 || cam->bridge == BRIDGE_SN9C102)
+ video_device_create_file(v4ldev, &class_device_attr_green);
+ else if (cam->bridge == BRIDGE_SN9C103) {
+ video_device_create_file(v4ldev, &class_device_attr_blue);
+ video_device_create_file(v4ldev, &class_device_attr_red);
+ }
+ if (cam->sensor->slave_write_id != SN9C102_I2C_SLAVEID_UNAVAILABLE ||
+ cam->sensor->slave_read_id != SN9C102_I2C_SLAVEID_UNAVAILABLE) {
+ video_device_create_file(v4ldev, &class_device_attr_i2c_reg);
+ video_device_create_file(v4ldev, &class_device_attr_i2c_val);
+ }
+}
+
+/*****************************************************************************/
+
+static int
+sn9c102_set_format(struct sn9c102_device* cam, struct v4l2_pix_format* fmt)
+{
+ int err = 0;
+
+ if (fmt->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ err += sn9c102_write_reg(cam, cam->reg[0x18] | 0x80, 0x18);
+ else
+ err += sn9c102_write_reg(cam, cam->reg[0x18] & 0x7f, 0x18);
+
+ return err ? -EIO : 0;
+}
+
+
+static int
+sn9c102_set_compression(struct sn9c102_device* cam,
+ struct v4l2_jpegcompression* compression)
+{
+ int err = 0;
+
+ if (compression->quality == 0)
+ err += sn9c102_write_reg(cam, cam->reg[0x17] | 0x01, 0x17);
+ else if (compression->quality == 1)
+ err += sn9c102_write_reg(cam, cam->reg[0x17] & 0xfe, 0x17);
+
+ return err ? -EIO : 0;
+}
+
+
+static int sn9c102_set_scale(struct sn9c102_device* cam, u8 scale)
+{
+ u8 r = 0;
+ int err = 0;
+
+ if (scale == 1)
+ r = cam->reg[0x18] & 0xcf;
+ else if (scale == 2) {
+ r = cam->reg[0x18] & 0xcf;
+ r |= 0x10;
+ } else if (scale == 4)
+ r = cam->reg[0x18] | 0x20;
+
+ err += sn9c102_write_reg(cam, r, 0x18);
+ if (err)
+ return -EIO;
+
+ PDBGG("Scaling factor: %u", scale)
+
+ return 0;
+}
+
+
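+/* Program the bridge cropping window: the start offsets are written in
+   pixels relative to the sensor bounds, the width and height in units of
+   16 pixels. */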
+static int sn9c102_set_crop(struct sn9c102_device* cam, struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = cam->sensor;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left),
+ v_start = (u8)(rect->top - s->cropcap.bounds.top),
+ h_size = (u8)(rect->width / 16),
+ v_size = (u8)(rect->height / 16);
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+ err += sn9c102_write_reg(cam, h_size, 0x15);
+ err += sn9c102_write_reg(cam, v_size, 0x16);
+ if (err)
+ return -EIO;
+
+ PDBGG("h_start, v_start, h_size, v_size, ho_size, vo_size "
+ "%u %u %u %u", h_start, v_start, h_size, v_size)
+
+ return 0;
+}
+
+
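+/* (Re)initialize the bridge and the image sensor. On the first run the
+   default cropping rectangle and control values are applied; on later
+   runs (e.g. after DEV_MISCONFIGURED) the last user-set values are
+   restored. */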
+static int sn9c102_init(struct sn9c102_device* cam)
+{
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ struct v4l2_queryctrl *qctrl;
+ struct v4l2_rect* rect;
+ u8 i = 0, n = 0;
+ int err = 0;
+
+ if (!(cam->state & DEV_INITIALIZED)) {
+ init_waitqueue_head(&cam->open);
+ qctrl = s->qctrl;
+ rect = &(s->cropcap.defrect);
+ } else { /* use current values */
+ qctrl = s->_qctrl;
+ rect = &(s->_rect);
+ }
+
+ err += sn9c102_set_scale(cam, rect->width / s->pix_format.width);
+ err += sn9c102_set_crop(cam, rect);
+ if (err)
+ return err;
+
+ if (s->init) {
+ err = s->init(cam);
+ if (err) {
+ DBG(3, "Sensor initialization failed")
+ return err;
+ }
+ }
+
+ if (!(cam->state & DEV_INITIALIZED))
+ cam->compression.quality = cam->reg[0x17] & 0x01 ? 0 : 1;
+ else
+ err += sn9c102_set_compression(cam, &cam->compression);
+ err += sn9c102_set_format(cam, &s->pix_format);
+ if (err)
+ return err;
+
+ if (s->pix_format.pixelformat == V4L2_PIX_FMT_SN9C10X)
+ DBG(3, "Compressed video format is active, quality %d",
+ cam->compression.quality)
+ else
+ DBG(3, "Uncompressed video format is active")
+
+ if (s->set_crop)
+ if ((err = s->set_crop(cam, rect))) {
+ DBG(3, "set_crop() failed")
+ return err;
+ }
+
+ if (s->set_ctrl) {
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (s->qctrl[i].id != 0 &&
+ !(s->qctrl[i].flags & V4L2_CTRL_FLAG_DISABLED)) {
+ ctrl.id = s->qctrl[i].id;
+ ctrl.value = qctrl[i].default_value;
+ err = s->set_ctrl(cam, &ctrl);
+ if (err) {
+ DBG(3, "Set %s control failed",
+ s->qctrl[i].name)
+ return err;
+ }
+ DBG(3, "Image sensor supports '%s' control",
+ s->qctrl[i].name)
+ }
+ }
+
+ if (!(cam->state & DEV_INITIALIZED)) {
+ init_MUTEX(&cam->fileop_sem);
+ spin_lock_init(&cam->queue_lock);
+ init_waitqueue_head(&cam->wait_frame);
+ init_waitqueue_head(&cam->wait_stream);
+ cam->nreadbuffers = 2;
+ memcpy(s->_qctrl, s->qctrl, sizeof(s->qctrl));
+ memcpy(&(s->_rect), &(s->cropcap.defrect),
+ sizeof(struct v4l2_rect));
+ cam->state |= DEV_INITIALIZED;
+ }
+
+ DBG(2, "Initialization succeeded")
+ return 0;
+}
+
+
+static void sn9c102_release_resources(struct sn9c102_device* cam)
+{
+ down(&sn9c102_sysfs_lock);
+
+ DBG(2, "V4L2 device /dev/video%d deregistered", cam->v4ldev->minor)
+ video_set_drvdata(cam->v4ldev, NULL);
+ video_unregister_device(cam->v4ldev);
+
+ up(&sn9c102_sysfs_lock);
+
+ kfree(cam->control_buffer);
+}
+
+/*****************************************************************************/
+
+static int sn9c102_open(struct inode* inode, struct file* filp)
+{
+ struct sn9c102_device* cam;
+ int err = 0;
+
+ /*
+ This is the only safe way to prevent race conditions with
+ disconnect
+ */
+ if (!down_read_trylock(&sn9c102_disconnect))
+ return -ERESTARTSYS;
+
+ cam = video_get_drvdata(video_devdata(filp));
+
+ if (down_interruptible(&cam->dev_sem)) {
+ up_read(&sn9c102_disconnect);
+ return -ERESTARTSYS;
+ }
+
+ if (cam->users) {
+ DBG(2, "Device /dev/video%d is busy...", cam->v4ldev->minor)
+ if ((filp->f_flags & O_NONBLOCK) ||
+ (filp->f_flags & O_NDELAY)) {
+ err = -EWOULDBLOCK;
+ goto out;
+ }
+ up(&cam->dev_sem);
+ err = wait_event_interruptible_exclusive(cam->open,
+ cam->state & DEV_DISCONNECTED
+ || !cam->users);
+ if (err) {
+ up_read(&sn9c102_disconnect);
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED) {
+ up_read(&sn9c102_disconnect);
+ return -ENODEV;
+ }
+ down(&cam->dev_sem);
+ }
+
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ err = sn9c102_init(cam);
+ if (err) {
+ DBG(1, "Initialization failed again. "
+ "I will retry on next open().")
+ goto out;
+ }
+ cam->state &= ~DEV_MISCONFIGURED;
+ }
+
+ if ((err = sn9c102_start_transfer(cam)))
+ goto out;
+
+ filp->private_data = cam;
+ cam->users++;
+ cam->io = IO_NONE;
+ cam->stream = STREAM_OFF;
+ cam->nbuffers = 0;
+ cam->frame_count = 0;
+ sn9c102_empty_framequeues(cam);
+
+ DBG(3, "Video device /dev/video%d is open", cam->v4ldev->minor)
+
+out:
+ up(&cam->dev_sem);
+ up_read(&sn9c102_disconnect);
+ return err;
+}
+
+
+static int sn9c102_release(struct inode* inode, struct file* filp)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+
+	down(&cam->dev_sem); /* prevent disconnect() from being called */
+
+ sn9c102_stop_transfer(cam);
+
+ sn9c102_release_buffers(cam);
+
+ if (cam->state & DEV_DISCONNECTED) {
+ sn9c102_release_resources(cam);
+ up(&cam->dev_sem);
+ kfree(cam);
+ return 0;
+ }
+
+ cam->users--;
+ wake_up_interruptible_nr(&cam->open, 1);
+
+ DBG(3, "Video device /dev/video%d closed", cam->v4ldev->minor)
+
+ up(&cam->dev_sem);
+
+ return 0;
+}
+
+
+static ssize_t
+sn9c102_read(struct file* filp, char __user * buf, size_t count, loff_t* f_pos)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ struct sn9c102_frame_t* f, * i;
+ unsigned long lock_flags;
+ int err = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ if (cam->io == IO_MMAP) {
+ DBG(3, "Close and open the device again to choose "
+ "the read method")
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
+ if (cam->io == IO_NONE) {
+ if (!sn9c102_request_buffers(cam, cam->nreadbuffers)) {
+ DBG(1, "read() failed, not enough memory")
+ up(&cam->fileop_sem);
+ return -ENOMEM;
+ }
+ cam->io = IO_READ;
+ cam->stream = STREAM_ON;
+ sn9c102_queue_unusedframes(cam);
+ }
+
+ if (!count) {
+ up(&cam->fileop_sem);
+ return 0;
+ }
+
+ if (list_empty(&cam->outqueue)) {
+ if (filp->f_flags & O_NONBLOCK) {
+ up(&cam->fileop_sem);
+ return -EAGAIN;
+ }
+ err = wait_event_interruptible
+ ( cam->wait_frame,
+ (!list_empty(&cam->outqueue)) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err) {
+ up(&cam->fileop_sem);
+ return err;
+ }
+ if (cam->state & DEV_DISCONNECTED) {
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+ }
+
+ f = list_entry(cam->outqueue.prev, struct sn9c102_frame_t, frame);
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_for_each_entry(i, &cam->outqueue, frame)
+ i->state = F_UNUSED;
+ INIT_LIST_HEAD(&cam->outqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ sn9c102_queue_unusedframes(cam);
+
+ if (count > f->buf.bytesused)
+ count = f->buf.bytesused;
+
+ if (copy_to_user(buf, f->bufmem, count)) {
+ up(&cam->fileop_sem);
+ return -EFAULT;
+ }
+ *f_pos += count;
+
+ PDBGG("Frame #%lu, bytes read: %zu", (unsigned long)f->buf.index,count)
+
+ up(&cam->fileop_sem);
+
+ return count;
+}
+
+
+static unsigned int sn9c102_poll(struct file *filp, poll_table *wait)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ unsigned int mask = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return POLLERR;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ goto error;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ goto error;
+ }
+
+ if (cam->io == IO_NONE) {
+ if (!sn9c102_request_buffers(cam, 2)) {
+ DBG(1, "poll() failed, not enough memory")
+ goto error;
+ }
+ cam->io = IO_READ;
+ cam->stream = STREAM_ON;
+ }
+
+ if (cam->io == IO_READ)
+ sn9c102_queue_unusedframes(cam);
+
+ poll_wait(filp, &cam->wait_frame, wait);
+
+ if (!list_empty(&cam->outqueue))
+ mask |= POLLIN | POLLRDNORM;
+
+ up(&cam->fileop_sem);
+
+ return mask;
+
+error:
+ up(&cam->fileop_sem);
+ return POLLERR;
+}
+
+
+static void sn9c102_vm_open(struct vm_area_struct* vma)
+{
+ struct sn9c102_frame_t* f = vma->vm_private_data;
+ f->vma_use_count++;
+}
+
+
+static void sn9c102_vm_close(struct vm_area_struct* vma)
+{
+ /* NOTE: buffers are not freed here */
+ struct sn9c102_frame_t* f = vma->vm_private_data;
+ f->vma_use_count--;
+}
+
+
+static struct vm_operations_struct sn9c102_vm_ops = {
+ .open = sn9c102_vm_open,
+ .close = sn9c102_vm_close,
+};
+
+
+static int sn9c102_mmap(struct file* filp, struct vm_area_struct *vma)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ unsigned long size = vma->vm_end - vma->vm_start,
+ start = vma->vm_start,
+ pos,
+ page;
+ u32 i;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ if (cam->io != IO_MMAP || !(vma->vm_flags & VM_WRITE) ||
+ size != PAGE_ALIGN(cam->frame[0].buf.length)) {
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++) {
+ if ((cam->frame[i].buf.m.offset>>PAGE_SHIFT) == vma->vm_pgoff)
+ break;
+ }
+ if (i == cam->nbuffers) {
+ up(&cam->fileop_sem);
+ return -EINVAL;
+ }
+
+ /* VM_IO is eventually going to replace PageReserved altogether */
+ vma->vm_flags |= VM_IO;
+	vma->vm_flags |= VM_RESERVED; /* prevent the VMA from being swapped out */
+
+ pos = (unsigned long)cam->frame[i].bufmem;
+ while (size > 0) { /* size is page-aligned */
+ page = vmalloc_to_pfn((void *)pos);
+ if (remap_pfn_range(vma, start, page, PAGE_SIZE,
+ vma->vm_page_prot)) {
+ up(&cam->fileop_sem);
+ return -EAGAIN;
+ }
+ start += PAGE_SIZE;
+ pos += PAGE_SIZE;
+ size -= PAGE_SIZE;
+ }
+
+ vma->vm_ops = &sn9c102_vm_ops;
+ vma->vm_private_data = &cam->frame[i];
+
+ sn9c102_vm_open(vma);
+
+ up(&cam->fileop_sem);
+
+ return 0;
+}
+
+
+static int sn9c102_v4l2_ioctl(struct inode* inode, struct file* filp,
+ unsigned int cmd, void __user * arg)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+
+ switch (cmd) {
+
+ case VIDIOC_QUERYCAP:
+ {
+ struct v4l2_capability cap = {
+ .driver = "sn9c102",
+ .version = SN9C102_MODULE_VERSION_CODE,
+ .capabilities = V4L2_CAP_VIDEO_CAPTURE |
+ V4L2_CAP_READWRITE |
+ V4L2_CAP_STREAMING,
+ };
+
+ strlcpy(cap.card, cam->v4ldev->name, sizeof(cap.card));
+ if (usb_make_path(cam->usbdev, cap.bus_info,
+ sizeof(cap.bus_info)) < 0)
+ strlcpy(cap.bus_info, cam->dev.bus_id,
+ sizeof(cap.bus_info));
+
+ if (copy_to_user(arg, &cap, sizeof(cap)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_ENUMINPUT:
+ {
+ struct v4l2_input i;
+
+ if (copy_from_user(&i, arg, sizeof(i)))
+ return -EFAULT;
+
+ if (i.index)
+ return -EINVAL;
+
+ memset(&i, 0, sizeof(i));
+ strcpy(i.name, "USB");
+
+ if (copy_to_user(arg, &i, sizeof(i)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_INPUT:
+ case VIDIOC_S_INPUT:
+ {
+ int index;
+
+ if (copy_from_user(&index, arg, sizeof(index)))
+ return -EFAULT;
+
+ if (index != 0)
+ return -EINVAL;
+
+ return 0;
+ }
+
+ case VIDIOC_QUERYCTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_queryctrl qc;
+ u8 i, n;
+
+ if (copy_from_user(&qc, arg, sizeof(qc)))
+ return -EFAULT;
+
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (qc.id && qc.id == s->qctrl[i].id) {
+ memcpy(&qc, &(s->qctrl[i]), sizeof(qc));
+ if (copy_to_user(arg, &qc, sizeof(qc)))
+ return -EFAULT;
+ return 0;
+ }
+
+ return -EINVAL;
+ }
+
+ case VIDIOC_G_CTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ int err = 0;
+
+ if (!s->get_ctrl)
+ return -EINVAL;
+
+ if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
+ return -EFAULT;
+
+ err = s->get_ctrl(cam, &ctrl);
+
+ if (copy_to_user(arg, &ctrl, sizeof(ctrl)))
+ return -EFAULT;
+
+ return err;
+ }
+
+ case VIDIOC_S_CTRL:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_control ctrl;
+ u8 i, n;
+ int err = 0;
+
+ if (!s->set_ctrl)
+ return -EINVAL;
+
+ if (copy_from_user(&ctrl, arg, sizeof(ctrl)))
+ return -EFAULT;
+
+ n = sizeof(s->qctrl) / sizeof(s->qctrl[0]);
+ for (i = 0; i < n; i++)
+ if (ctrl.id == s->qctrl[i].id) {
+ if (ctrl.value < s->qctrl[i].minimum ||
+ ctrl.value > s->qctrl[i].maximum)
+ return -ERANGE;
+ ctrl.value -= ctrl.value % s->qctrl[i].step;
+ break;
+ }
+
+		if ((err = s->set_ctrl(cam, &ctrl)))
+			return err;
+
+		if (i < n) /* don't write past _qctrl[] if the id matched no control */
+			s->_qctrl[i].default_value = ctrl.value;
+
+ PDBGG("VIDIOC_S_CTRL: id %lu, value %lu",
+ (unsigned long)ctrl.id, (unsigned long)ctrl.value)
+
+ return 0;
+ }
+
+ case VIDIOC_CROPCAP:
+ {
+ struct v4l2_cropcap* cc = &(cam->sensor->cropcap);
+
+ cc->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ cc->pixelaspect.numerator = 1;
+ cc->pixelaspect.denominator = 1;
+
+ if (copy_to_user(arg, cc, sizeof(*cc)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_CROP:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_crop crop = {
+ .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
+ };
+
+ memcpy(&(crop.c), &(s->_rect), sizeof(struct v4l2_rect));
+
+ if (copy_to_user(arg, &crop, sizeof(crop)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_S_CROP:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_crop crop;
+ struct v4l2_rect* rect;
+ struct v4l2_rect* bounds = &(s->cropcap.bounds);
+ struct v4l2_pix_format* pix_format = &(s->pix_format);
+ u8 scale;
+ const enum sn9c102_stream_state stream = cam->stream;
+ const u32 nbuffers = cam->nbuffers;
+ u32 i;
+ int err = 0;
+
+ if (copy_from_user(&crop, arg, sizeof(crop)))
+ return -EFAULT;
+
+ rect = &(crop.c);
+
+ if (crop.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_CROP failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
+
+ /* Preserve R,G or B origin */
+ rect->left = (s->_rect.left & 1L) ?
+ rect->left | 1L : rect->left & ~1L;
+ rect->top = (s->_rect.top & 1L) ?
+ rect->top | 1L : rect->top & ~1L;
+
+ if (rect->width < 16)
+ rect->width = 16;
+ if (rect->height < 16)
+ rect->height = 16;
+ if (rect->width > bounds->width)
+ rect->width = bounds->width;
+ if (rect->height > bounds->height)
+ rect->height = bounds->height;
+ if (rect->left < bounds->left)
+ rect->left = bounds->left;
+ if (rect->top < bounds->top)
+ rect->top = bounds->top;
+ if (rect->left + rect->width > bounds->left + bounds->width)
+ rect->left = bounds->left+bounds->width - rect->width;
+ if (rect->top + rect->height > bounds->top + bounds->height)
+ rect->top = bounds->top+bounds->height - rect->height;
+
+ rect->width &= ~15L;
+ rect->height &= ~15L;
+
+ if (SN9C102_PRESERVE_IMGSCALE) {
+ /* Calculate the actual scaling factor */
+ u32 a, b;
+ a = rect->width * rect->height;
+ b = pix_format->width * pix_format->height;
+ scale = b ? (u8)((a / b) < 4 ? 1 :
+ ((a / b) < 16 ? 2 : 4)) : 1;
+ } else
+ scale = 1;
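+ /* Example: a 352x288 crop with a 176x144 target pix format gives
+ a/b = 4, hence scale = 2, i.e. each dimension is halved. */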
+
+ if (cam->stream == STREAM_ON)
+ if ((err = sn9c102_stream_interrupt(cam)))
+ return err;
+
+ if (copy_to_user(arg, &crop, sizeof(crop))) {
+ cam->stream = stream;
+ return -EFAULT;
+ }
+
+ sn9c102_release_buffers(cam);
+
+ err = sn9c102_set_crop(cam, rect);
+ if (s->set_crop)
+ err += s->set_crop(cam, rect);
+ err += sn9c102_set_scale(cam, scale);
+
+ if (err) { /* atomic, no rollback in ioctl() */
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_CROP failed because of hardware "
+ "problems. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -EIO;
+ }
+
+ s->pix_format.width = rect->width/scale;
+ s->pix_format.height = rect->height/scale;
+ memcpy(&(s->_rect), rect, sizeof(*rect));
+
+ if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_CROP failed because of not enough "
+ "memory. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -ENOMEM;
+ }
+
+ cam->stream = stream;
+
+ return 0;
+ }
+
+ case VIDIOC_ENUM_FMT:
+ {
+ struct v4l2_fmtdesc fmtd;
+
+ if (copy_from_user(&fmtd, arg, sizeof(fmtd)))
+ return -EFAULT;
+
+ if (fmtd.index == 0) {
+ strcpy(fmtd.description, "bayer rgb");
+ fmtd.pixelformat = V4L2_PIX_FMT_SBGGR8;
+ } else if (fmtd.index == 1) {
+ strcpy(fmtd.description, "compressed");
+ fmtd.pixelformat = V4L2_PIX_FMT_SN9C10X;
+ fmtd.flags = V4L2_FMT_FLAG_COMPRESSED;
+ } else
+ return -EINVAL;
+
+ fmtd.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ memset(&fmtd.reserved, 0, sizeof(fmtd.reserved));
+
+ if (copy_to_user(arg, &fmtd, sizeof(fmtd)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_G_FMT:
+ {
+ struct v4l2_format format;
+ struct v4l2_pix_format* pfmt = &(cam->sensor->pix_format);
+
+ if (copy_from_user(&format, arg, sizeof(format)))
+ return -EFAULT;
+
+ if (format.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ pfmt->bytesperline = (pfmt->pixelformat==V4L2_PIX_FMT_SN9C10X)
+ ? 0 : (pfmt->width * pfmt->priv) / 8;
+ pfmt->sizeimage = pfmt->height * ((pfmt->width*pfmt->priv)/8);
+ pfmt->field = V4L2_FIELD_NONE;
+ memcpy(&(format.fmt.pix), pfmt, sizeof(*pfmt));
+
+ if (copy_to_user(arg, &format, sizeof(format)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_TRY_FMT:
+ case VIDIOC_S_FMT:
+ {
+ struct sn9c102_sensor* s = cam->sensor;
+ struct v4l2_format format;
+ struct v4l2_pix_format* pix;
+ struct v4l2_pix_format* pfmt = &(s->pix_format);
+ struct v4l2_rect* bounds = &(s->cropcap.bounds);
+ struct v4l2_rect rect;
+ u8 scale;
+ const enum sn9c102_stream_state stream = cam->stream;
+ const u32 nbuffers = cam->nbuffers;
+ u32 i;
+ int err = 0;
+
+ if (copy_from_user(&format, arg, sizeof(format)))
+ return -EFAULT;
+
+ pix = &(format.fmt.pix);
+
+ if (format.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ memcpy(&rect, &(s->_rect), sizeof(rect));
+
+ { /* calculate the actual scaling factor */
+ u32 a, b;
+ a = rect.width * rect.height;
+ b = pix->width * pix->height;
+ scale = b ? (u8)((a / b) < 4 ? 1 :
+ ((a / b) < 16 ? 2 : 4)) : 1;
+ }
+
+ rect.width = scale * pix->width;
+ rect.height = scale * pix->height;
+
+ if (rect.width < 16)
+ rect.width = 16;
+ if (rect.height < 16)
+ rect.height = 16;
+ if (rect.width > bounds->left + bounds->width - rect.left)
+ rect.width = bounds->left + bounds->width - rect.left;
+ if (rect.height > bounds->top + bounds->height - rect.top)
+ rect.height = bounds->top + bounds->height - rect.top;
+
+ rect.width &= ~15L;
+ rect.height &= ~15L;
+
+ { /* adjust the scaling factor */
+ u32 a, b;
+ a = rect.width * rect.height;
+ b = pix->width * pix->height;
+ scale = b ? (u8)((a / b) < 4 ? 1 :
+ ((a / b) < 16 ? 2 : 4)) : 1;
+ }
+
+ pix->width = rect.width / scale;
+ pix->height = rect.height / scale;
+
+ if (pix->pixelformat != V4L2_PIX_FMT_SN9C10X &&
+ pix->pixelformat != V4L2_PIX_FMT_SBGGR8)
+ pix->pixelformat = pfmt->pixelformat;
+ pix->priv = pfmt->priv; /* bpp */
+ pix->colorspace = pfmt->colorspace;
+ pix->bytesperline = (pix->pixelformat == V4L2_PIX_FMT_SN9C10X)
+ ? 0 : (pix->width * pix->priv) / 8;
+ pix->sizeimage = pix->height * ((pix->width * pix->priv) / 8);
+ pix->field = V4L2_FIELD_NONE;
+
+ if (cmd == VIDIOC_TRY_FMT) {
+ if (copy_to_user(arg, &format, sizeof(format)))
+ return -EFAULT;
+ return 0;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_S_FMT failed. "
+ "Unmap the buffers first.")
+ return -EINVAL;
+ }
+
+ if (cam->stream == STREAM_ON)
+ if ((err = sn9c102_stream_interrupt(cam)))
+ return err;
+
+ if (copy_to_user(arg, &format, sizeof(format))) {
+ cam->stream = stream;
+ return -EFAULT;
+ }
+
+ sn9c102_release_buffers(cam);
+
+ err += sn9c102_set_format(cam, pix);
+ err += sn9c102_set_crop(cam, &rect);
+ if (s->set_crop)
+ err += s->set_crop(cam, &rect);
+ err += sn9c102_set_scale(cam, scale);
+
+ if (err) { /* atomic, no rollback in ioctl() */
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_FMT failed because of hardware "
+ "problems. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -EIO;
+ }
+
+ memcpy(pfmt, pix, sizeof(*pix));
+ memcpy(&(s->_rect), &rect, sizeof(rect));
+
+ if (nbuffers != sn9c102_request_buffers(cam, nbuffers)) {
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_FMT failed because of not enough "
+ "memory. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -ENOMEM;
+ }
+
+ cam->stream = stream;
+
+ return 0;
+ }
+
+ case VIDIOC_G_JPEGCOMP:
+ {
+ if (copy_to_user(arg, &cam->compression,
+ sizeof(cam->compression)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_S_JPEGCOMP:
+ {
+ struct v4l2_jpegcompression jc;
+ const enum sn9c102_stream_state stream = cam->stream;
+ int err = 0;
+
+ if (copy_from_user(&jc, arg, sizeof(jc)))
+ return -EFAULT;
+
+ if (jc.quality != 0 && jc.quality != 1)
+ return -EINVAL;
+
+ if (cam->stream == STREAM_ON)
+ if ((err = sn9c102_stream_interrupt(cam)))
+ return err;
+
+ err += sn9c102_set_compression(cam, &jc);
+ if (err) { /* atomic, no rollback in ioctl() */
+ cam->state |= DEV_MISCONFIGURED;
+ DBG(1, "VIDIOC_S_JPEGCOMP failed because of hardware "
+ "problems. To use the camera, close and open "
+ "/dev/video%d again.", cam->v4ldev->minor)
+ return -EIO;
+ }
+
+ cam->compression.quality = jc.quality;
+
+ cam->stream = stream;
+
+ return 0;
+ }
+
+ case VIDIOC_REQBUFS:
+ {
+ struct v4l2_requestbuffers rb;
+ u32 i;
+ int err;
+
+ if (copy_from_user(&rb, arg, sizeof(rb)))
+ return -EFAULT;
+
+ if (rb.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ rb.memory != V4L2_MEMORY_MMAP)
+ return -EINVAL;
+
+ if (cam->io == IO_READ) {
+ DBG(3, "Close and open the device again to choose "
+ "the mmap I/O method")
+ return -EINVAL;
+ }
+
+ for (i = 0; i < cam->nbuffers; i++)
+ if (cam->frame[i].vma_use_count) {
+ DBG(3, "VIDIOC_REQBUFS failed. "
+ "Previous buffers are still mapped.")
+ return -EINVAL;
+ }
+
+ if (cam->stream == STREAM_ON)
+ if ((err = sn9c102_stream_interrupt(cam)))
+ return err;
+
+ sn9c102_empty_framequeues(cam);
+
+ sn9c102_release_buffers(cam);
+ if (rb.count)
+ rb.count = sn9c102_request_buffers(cam, rb.count);
+
+ if (copy_to_user(arg, &rb, sizeof(rb))) {
+ sn9c102_release_buffers(cam);
+ cam->io = IO_NONE;
+ return -EFAULT;
+ }
+
+ cam->io = rb.count ? IO_MMAP : IO_NONE;
+
+ return 0;
+ }
+
+ case VIDIOC_QUERYBUF:
+ {
+ struct v4l2_buffer b;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ b.index >= cam->nbuffers || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ memcpy(&b, &cam->frame[b.index].buf, sizeof(b));
+
+ if (cam->frame[b.index].vma_use_count)
+ b.flags |= V4L2_BUF_FLAG_MAPPED;
+
+ if (cam->frame[b.index].state == F_DONE)
+ b.flags |= V4L2_BUF_FLAG_DONE;
+ else if (cam->frame[b.index].state != F_UNUSED)
+ b.flags |= V4L2_BUF_FLAG_QUEUED;
+
+ if (copy_to_user(arg, &b, sizeof(b)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_QBUF:
+ {
+ struct v4l2_buffer b;
+ unsigned long lock_flags;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE ||
+ b.index >= cam->nbuffers || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (cam->frame[b.index].state != F_UNUSED)
+ return -EINVAL;
+
+ cam->frame[b.index].state = F_QUEUED;
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ list_add_tail(&cam->frame[b.index].frame, &cam->inqueue);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ PDBGG("Frame #%lu queued", (unsigned long)b.index)
+
+ return 0;
+ }
+
+ case VIDIOC_DQBUF:
+ {
+ struct v4l2_buffer b;
+ struct sn9c102_frame_t *f;
+ unsigned long lock_flags;
+ int err = 0;
+
+ if (copy_from_user(&b, arg, sizeof(b)))
+ return -EFAULT;
+
+ if (b.type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (list_empty(&cam->outqueue)) {
+ if (cam->stream == STREAM_OFF)
+ return -EINVAL;
+ if (filp->f_flags & O_NONBLOCK)
+ return -EAGAIN;
+ err = wait_event_interruptible
+ ( cam->wait_frame,
+ (!list_empty(&cam->outqueue)) ||
+ (cam->state & DEV_DISCONNECTED) );
+ if (err)
+ return err;
+ if (cam->state & DEV_DISCONNECTED)
+ return -ENODEV;
+ }
+
+ spin_lock_irqsave(&cam->queue_lock, lock_flags);
+ f = list_entry(cam->outqueue.next, struct sn9c102_frame_t,
+ frame);
+ list_del(cam->outqueue.next);
+ spin_unlock_irqrestore(&cam->queue_lock, lock_flags);
+
+ f->state = F_UNUSED;
+
+ memcpy(&b, &f->buf, sizeof(b));
+ if (f->vma_use_count)
+ b.flags |= V4L2_BUF_FLAG_MAPPED;
+
+ if (copy_to_user(arg, &b, sizeof(b)))
+ return -EFAULT;
+
+ PDBGG("Frame #%lu dequeued", (unsigned long)f->buf.index)
+
+ return 0;
+ }
+
+ case VIDIOC_STREAMON:
+ {
+ int type;
+
+ if (copy_from_user(&type, arg, sizeof(type)))
+ return -EFAULT;
+
+ if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (list_empty(&cam->inqueue))
+ return -EINVAL;
+
+ cam->stream = STREAM_ON;
+
+ DBG(3, "Stream on")
+
+ return 0;
+ }
+
+ case VIDIOC_STREAMOFF:
+ {
+ int type, err;
+
+ if (copy_from_user(&type, arg, sizeof(type)))
+ return -EFAULT;
+
+ if (type != V4L2_BUF_TYPE_VIDEO_CAPTURE || cam->io != IO_MMAP)
+ return -EINVAL;
+
+ if (cam->stream == STREAM_ON)
+ if ((err = sn9c102_stream_interrupt(cam)))
+ return err;
+
+ sn9c102_empty_framequeues(cam);
+
+ DBG(3, "Stream off")
+
+ return 0;
+ }
+
+ case VIDIOC_G_PARM:
+ {
+ struct v4l2_streamparm sp;
+
+ if (copy_from_user(&sp, arg, sizeof(sp)))
+ return -EFAULT;
+
+ if (sp.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ sp.parm.capture.extendedmode = 0;
+ sp.parm.capture.readbuffers = cam->nreadbuffers;
+
+ if (copy_to_user(arg, &sp, sizeof(sp)))
+ return -EFAULT;
+
+ return 0;
+ }
+
+ case VIDIOC_S_PARM:
+ {
+ struct v4l2_streamparm sp;
+
+ if (copy_from_user(&sp, arg, sizeof(sp)))
+ return -EFAULT;
+
+ if (sp.type != V4L2_BUF_TYPE_VIDEO_CAPTURE)
+ return -EINVAL;
+
+ sp.parm.capture.extendedmode = 0;
+
+ if (sp.parm.capture.readbuffers == 0)
+ sp.parm.capture.readbuffers = cam->nreadbuffers;
+
+ if (sp.parm.capture.readbuffers > SN9C102_MAX_FRAMES)
+ sp.parm.capture.readbuffers = SN9C102_MAX_FRAMES;
+
+ if (copy_to_user(arg, &sp, sizeof(sp)))
+ return -EFAULT;
+
+ cam->nreadbuffers = sp.parm.capture.readbuffers;
+
+ return 0;
+ }
+
+ case VIDIOC_G_STD:
+ case VIDIOC_S_STD:
+ case VIDIOC_QUERYSTD:
+ case VIDIOC_ENUMSTD:
+ case VIDIOC_QUERYMENU:
+ return -EINVAL;
+
+ default:
+ return -EINVAL;
+
+ }
+}
+
+
+static int sn9c102_ioctl(struct inode* inode, struct file* filp,
+ unsigned int cmd, unsigned long arg)
+{
+ struct sn9c102_device* cam = video_get_drvdata(video_devdata(filp));
+ int err = 0;
+
+ if (down_interruptible(&cam->fileop_sem))
+ return -ERESTARTSYS;
+
+ if (cam->state & DEV_DISCONNECTED) {
+ DBG(1, "Device not present")
+ up(&cam->fileop_sem);
+ return -ENODEV;
+ }
+
+ if (cam->state & DEV_MISCONFIGURED) {
+ DBG(1, "The camera is misconfigured. Close and open it again.")
+ up(&cam->fileop_sem);
+ return -EIO;
+ }
+
+ err = sn9c102_v4l2_ioctl(inode, filp, cmd, (void __user *)arg);
+
+ up(&cam->fileop_sem);
+
+ return err;
+}
+
+
+static struct file_operations sn9c102_fops = {
+ .owner = THIS_MODULE,
+ .open = sn9c102_open,
+ .release = sn9c102_release,
+ .ioctl = sn9c102_ioctl,
+ .read = sn9c102_read,
+ .poll = sn9c102_poll,
+ .mmap = sn9c102_mmap,
+ .llseek = no_llseek,
+};
+
+/*****************************************************************************/
+
+/* There is only a single interface, so we do not need to validate anything. */
+static int
+sn9c102_usb_probe(struct usb_interface* intf, const struct usb_device_id* id)
+{
+ struct usb_device *udev = interface_to_usbdev(intf);
+ struct sn9c102_device* cam;
+ static unsigned int dev_nr = 0;
+ unsigned int i, n;
+ int err = 0, r;
+
+ n = sizeof(sn9c102_id_table)/sizeof(sn9c102_id_table[0]);
+ for (i = 0; i < n-1; i++)
+ if (udev->descriptor.idVendor==sn9c102_id_table[i].idVendor &&
+ udev->descriptor.idProduct==sn9c102_id_table[i].idProduct)
+ break;
+ if (i == n-1)
+ return -ENODEV;
+
+ if (!(cam = kmalloc(sizeof(struct sn9c102_device), GFP_KERNEL)))
+ return -ENOMEM;
+ memset(cam, 0, sizeof(*cam));
+
+ cam->usbdev = udev;
+
+ memcpy(&cam->dev, &udev->dev, sizeof(struct device));
+
+ if (!(cam->control_buffer = kmalloc(8, GFP_KERNEL))) {
+ DBG(1, "kmalloc() failed")
+ err = -ENOMEM;
+ goto fail;
+ }
+ memset(cam->control_buffer, 0, 8);
+
+ if (!(cam->v4ldev = video_device_alloc())) {
+ DBG(1, "video_device_alloc() failed")
+ err = -ENOMEM;
+ goto fail;
+ }
+
+ init_MUTEX(&cam->dev_sem);
+
+ r = sn9c102_read_reg(cam, 0x00);
+ if (r < 0 || r != 0x10) {
+ DBG(1, "Sorry, this is not a SN9C10x based camera "
+ "(vid/pid 0x%04X/0x%04X)",
+ sn9c102_id_table[i].idVendor,sn9c102_id_table[i].idProduct)
+ err = -ENODEV;
+ goto fail;
+ }
+
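+ /* USB product ids in the range 0x6080-0x60bf identify the SN9C103 */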
+ cam->bridge = (sn9c102_id_table[i].idProduct & 0xffc0) == 0x6080 ?
+ BRIDGE_SN9C103 : BRIDGE_SN9C102;
+ switch (cam->bridge) {
+ case BRIDGE_SN9C101:
+ case BRIDGE_SN9C102:
+ DBG(2, "SN9C10[12] PC Camera Controller detected "
+ "(vid/pid 0x%04X/0x%04X)", sn9c102_id_table[i].idVendor,
+ sn9c102_id_table[i].idProduct)
+ break;
+ case BRIDGE_SN9C103:
+ DBG(2, "SN9C103 PC Camera Controller detected "
+ "(vid/pid 0x%04X/0x%04X)", sn9c102_id_table[i].idVendor,
+ sn9c102_id_table[i].idProduct)
+ break;
+ }
+
+ for (i = 0; sn9c102_sensor_table[i]; i++) {
+ err = sn9c102_sensor_table[i](cam);
+ if (!err)
+ break;
+ }
+
+ if (!err && cam->sensor) {
+ DBG(2, "%s image sensor detected", cam->sensor->name)
+ DBG(3, "Support for %s maintained by %s",
+ cam->sensor->name, cam->sensor->maintainer)
+ } else {
+ DBG(1, "No supported image sensor detected")
+ err = -ENODEV;
+ goto fail;
+ }
+
+ if (sn9c102_init(cam)) {
+ DBG(1, "Initialization failed. I will retry on open().")
+ cam->state |= DEV_MISCONFIGURED;
+ }
+
+ strcpy(cam->v4ldev->name, "SN9C10x PC Camera");
+ cam->v4ldev->owner = THIS_MODULE;
+ cam->v4ldev->type = VID_TYPE_CAPTURE | VID_TYPE_SCALES;
+ cam->v4ldev->hardware = VID_HARDWARE_SN9C102;
+ cam->v4ldev->fops = &sn9c102_fops;
+ cam->v4ldev->minor = video_nr[dev_nr];
+ cam->v4ldev->release = video_device_release;
+ video_set_drvdata(cam->v4ldev, cam);
+
+ down(&cam->dev_sem);
+
+ err = video_register_device(cam->v4ldev, VFL_TYPE_GRABBER,
+ video_nr[dev_nr]);
+ if (err) {
+ DBG(1, "V4L2 device registration failed")
+ if (err == -ENFILE && video_nr[dev_nr] == -1)
+ DBG(1, "Free /dev/videoX node not found")
+ video_nr[dev_nr] = -1;
+ dev_nr = (dev_nr < SN9C102_MAX_DEVICES-1) ? dev_nr+1 : 0;
+ up(&cam->dev_sem);
+ goto fail;
+ }
+
+ DBG(2, "V4L2 device registered as /dev/video%d", cam->v4ldev->minor)
+
+ sn9c102_create_sysfs(cam);
+
+ usb_set_intfdata(intf, cam);
+
+ up(&cam->dev_sem);
+
+ return 0;
+
+fail:
+ if (cam) {
+ kfree(cam->control_buffer);
+ if (cam->v4ldev)
+ video_device_release(cam->v4ldev);
+ kfree(cam);
+ }
+ return err;
+}
+
+
+static void sn9c102_usb_disconnect(struct usb_interface* intf)
+{
+ struct sn9c102_device* cam = usb_get_intfdata(intf);
+
+ if (!cam)
+ return;
+
+ down_write(&sn9c102_disconnect);
+
+ down(&cam->dev_sem);
+
+ DBG(2, "Disconnecting %s...", cam->v4ldev->name)
+
+ wake_up_interruptible_all(&cam->open);
+
+ if (cam->users) {
+ DBG(2, "Device /dev/video%d is open! Deregistration and "
+ "memory deallocation are deferred on close.",
+ cam->v4ldev->minor)
+ cam->state |= DEV_MISCONFIGURED;
+ sn9c102_stop_transfer(cam);
+ cam->state |= DEV_DISCONNECTED;
+ wake_up_interruptible(&cam->wait_frame);
+ wake_up_interruptible(&cam->wait_stream);
+ } else {
+ cam->state |= DEV_DISCONNECTED;
+ sn9c102_release_resources(cam);
+ }
+
+ up(&cam->dev_sem);
+
+ if (!cam->users)
+ kfree(cam);
+
+ up_write(&sn9c102_disconnect);
+}
+
+
+static struct usb_driver sn9c102_usb_driver = {
+ .owner = THIS_MODULE,
+ .name = "sn9c102",
+ .id_table = sn9c102_id_table,
+ .probe = sn9c102_usb_probe,
+ .disconnect = sn9c102_usb_disconnect,
+};
+
+/*****************************************************************************/
+
+static int __init sn9c102_module_init(void)
+{
+ int err = 0;
+
+ KDBG(2, SN9C102_MODULE_NAME " v" SN9C102_MODULE_VERSION)
+ KDBG(3, SN9C102_MODULE_AUTHOR)
+
+ if ((err = usb_register(&sn9c102_usb_driver)))
+ KDBG(1, "usb_register() failed")
+
+ return err;
+}
+
+
+static void __exit sn9c102_module_exit(void)
+{
+ usb_deregister(&sn9c102_usb_driver);
+}
+
+
+module_init(sn9c102_module_init);
+module_exit(sn9c102_module_exit);
--- /dev/null
+/***************************************************************************
+ * Plug-in for PAS106B image sensor connected to the SN9C10x PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include <linux/delay.h>
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor pas106b;
+
+
+static int pas106b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+ err += sn9c102_write_reg(cam, 0x20, 0x19);
+ err += sn9c102_write_reg(cam, 0x09, 0x18);
+
+ err += sn9c102_i2c_write(cam, 0x02, 0x0c);
+ err += sn9c102_i2c_write(cam, 0x05, 0x5a);
+ err += sn9c102_i2c_write(cam, 0x06, 0x88);
+ err += sn9c102_i2c_write(cam, 0x07, 0x80);
+ err += sn9c102_i2c_write(cam, 0x10, 0x06);
+ err += sn9c102_i2c_write(cam, 0x11, 0x06);
+ err += sn9c102_i2c_write(cam, 0x12, 0x00);
+ err += sn9c102_i2c_write(cam, 0x14, 0x02);
+ err += sn9c102_i2c_write(cam, 0x13, 0x01);
+
+ msleep(400);
+
+ return err;
+}
+
+
+static int pas106b_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ {
+ int r1 = sn9c102_i2c_read(cam, 0x03),
+ r2 = sn9c102_i2c_read(cam, 0x04);
+ if (r1 < 0 || r2 < 0)
+ return -EIO;
+ ctrl->value = (r1 << 4) | (r2 & 0x0f);
+ }
+ return 0;
+ case V4L2_CID_RED_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x0c)) < 0)
+ return -EIO;
+ ctrl->value &= 0x1f;
+ return 0;
+ case V4L2_CID_BLUE_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x09)) < 0)
+ return -EIO;
+ ctrl->value &= 0x1f;
+ return 0;
+ case V4L2_CID_GAIN:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x0e)) < 0)
+ return -EIO;
+ ctrl->value &= 0x1f;
+ return 0;
+ case V4L2_CID_CONTRAST:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x0f)) < 0)
+ return -EIO;
+ ctrl->value &= 0x07;
+ return 0;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x0a)) < 0)
+ return -EIO;
+ ctrl->value = (ctrl->value & 0x1f) << 1;
+ return 0;
+ case SN9C102_V4L2_CID_DAC_MAGNITUDE:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x08)) < 0)
+ return -EIO;
+ ctrl->value &= 0xf8;
+ return 0;
+ case SN9C102_V4L2_CID_DAC_SIGN:
+ if ((ctrl->value = sn9c102_i2c_read(cam, 0x07)) < 0)
+ return -EIO;
+ ctrl->value &= 0x01;
+ return 0;
+ default:
+ return -EINVAL;
+ }
+}
+
+
+static int pas106b_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_EXPOSURE:
+ err += sn9c102_i2c_write(cam, 0x03, ctrl->value >> 4);
+ err += sn9c102_i2c_write(cam, 0x04, ctrl->value & 0x0f);
+ break;
+ case V4L2_CID_RED_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x0c, ctrl->value);
+ break;
+ case V4L2_CID_BLUE_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x09, ctrl->value);
+ break;
+ case V4L2_CID_GAIN:
+ err += sn9c102_i2c_write(cam, 0x0e, ctrl->value);
+ break;
+ case V4L2_CID_CONTRAST:
+ err += sn9c102_i2c_write(cam, 0x0f, ctrl->value);
+ break;
+ case SN9C102_V4L2_CID_GREEN_BALANCE:
+ err += sn9c102_i2c_write(cam, 0x0a, ctrl->value >> 1);
+ err += sn9c102_i2c_write(cam, 0x0b, ctrl->value >> 1);
+ break;
+ case SN9C102_V4L2_CID_DAC_MAGNITUDE:
+ err += sn9c102_i2c_write(cam, 0x08, ctrl->value << 3);
+ break;
+ case SN9C102_V4L2_CID_DAC_SIGN:
+ {
+ int r;
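+ /* read-modify-write: preserve the other bits of register 0x07 */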
+ err += (r = sn9c102_i2c_read(cam, 0x07)) < 0 ? r : 0;
+ err += sn9c102_i2c_write(cam, 0x07, r | ctrl->value);
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+ err += sn9c102_i2c_write(cam, 0x13, 0x01);
+
+ return err ? -EIO : 0;
+}
+
+
+static int pas106b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &pas106b;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 4,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 3;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor pas106b = {
+ .name = "PAS106B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_400KHZ | SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_2WIRES,
+ .slave_read_id = 0x40,
+ .slave_write_id = 0x40,
+ .init = &pas106b_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_EXPOSURE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "exposure",
+ .minimum = 0x125,
+ .maximum = 0xfff,
+ .step = 0x01,
+ .default_value = 0x140,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_GAIN,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "global gain",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x0d,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_CONTRAST,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "contrast",
+ .minimum = 0x00,
+ .maximum = 0x07,
+ .step = 0x01,
+ .default_value = 0x00, /* 0x00~0x03 have same effect */
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_RED_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "red balance",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x04,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_BLUE_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "blue balance",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x06,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_GREEN_BALANCE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "green balance",
+ .minimum = 0x00,
+ .maximum = 0x3e,
+ .step = 0x02,
+ .default_value = 0x02,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_DAC_MAGNITUDE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "DAC magnitude",
+ .minimum = 0x00,
+ .maximum = 0x1f,
+ .step = 0x01,
+ .default_value = 0x01,
+ .flags = 0,
+ },
+ {
+ .id = SN9C102_V4L2_CID_DAC_SIGN,
+ .type = V4L2_CTRL_TYPE_BOOLEAN,
+ .name = "DAC sign",
+ .minimum = 0x00,
+ .maximum = 0x01,
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ },
+ .get_ctrl = &pas106b_get_ctrl,
+ .set_ctrl = &pas106b_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ },
+ .set_crop = &pas106b_set_crop,
+ .pix_format = {
+ .width = 352,
+ .height = 288,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8, /* we use this field as 'bits per pixel' */
+ }
+};
+
+
+int sn9c102_probe_pas106b(struct sn9c102_device* cam)
+{
+ int r0 = 0, r1 = 0, err = 0;
+ unsigned int pid = 0;
+
+ /*
+ Minimal initialization to enable the I2C communication
+ NOTE: do NOT change the values!
+ */
+ err += sn9c102_write_reg(cam, 0x01, 0x01); /* sensor power down */
+ err += sn9c102_write_reg(cam, 0x00, 0x01); /* sensor power on */
+ err += sn9c102_write_reg(cam, 0x28, 0x17); /* sensor clock at 24 MHz */
+ if (err)
+ return -EIO;
+
+ r0 = sn9c102_i2c_try_read(cam, &pas106b, 0x00);
+ r1 = sn9c102_i2c_try_read(cam, &pas106b, 0x01);
+
+ if (r0 < 0 || r1 < 0)
+ return -EIO;
+
+ pid = (r0 << 11) | ((r1 & 0xf0) >> 4);
+ if (pid != 0x007)
+ return -ENODEV;
+
+ sn9c102_attach_sensor(cam, &pas106b);
+
+ return 0;
+}
--- /dev/null
+/***************************************************************************
+ * API for image sensors connected to the SN9C10x PC Camera Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#ifndef _SN9C102_SENSOR_H_
+#define _SN9C102_SENSOR_H_
+
+#include <linux/usb.h>
+#include <linux/videodev.h>
+#include <linux/device.h>
+#include <linux/stddef.h>
+#include <linux/errno.h>
+#include <asm/types.h>
+
+struct sn9c102_device;
+struct sn9c102_sensor;
+
+/*****************************************************************************/
+
+/*
+ OVERVIEW.
+ This is a small interface that allows you to add support for any CCD/CMOS
+ image sensors connected to the SN9C10X bridges. The entire API is documented
+ below. In the most general case, to support a sensor there are three steps
+ you have to follow:
+ 1) define the main "sn9c102_sensor" structure by setting the basic fields;
+ 2) write a probing function to be called by the core module when the USB
+ camera is recognized, then add both the USB ids and the name of that
+ function to the two corresponding tables SENSOR_TABLE and ID_TABLE (see
+ below);
+ 3) implement the methods that you want/need (and fill the rest of the main
+ structure accordingly).
+ "sn9c102_pas106b.c" is an example of all this stuff. Remember that you do
+ NOT need to touch the source code of the core module for the things to work
+ properly, unless you find bugs or flaws in it. Finally, do not forget to
+ read the V4L2 API for completeness.
+*/
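+
+/*
+ As a purely illustrative sketch of the three steps above (the "FOOBAR"
+ name, its settings and the USB product id are invented, not a real
+ sensor), a minimal plug-in could look like this:
+
+ static struct sn9c102_sensor foobar = {
+ .name = "FOOBAR",
+ .maintainer = "John Doe <jdoe@example.com>",
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_2WIRES,
+ .slave_read_id = 0x40,
+ .slave_write_id = 0x40,
+ .cropcap = {
+ .bounds = { .left = 0, .top = 0, .width = 352, .height = 288 },
+ .defrect = { .left = 0, .top = 0, .width = 352, .height = 288 },
+ },
+ .pix_format = {
+ .width = 352,
+ .height = 288,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ },
+ };
+
+ int sn9c102_probe_foobar(struct sn9c102_device* cam)
+ {
+ sn9c102_attach_sensor(cam, &foobar);
+ if (foobar.usbdev->descriptor.idProduct != 0x60ff)
+ return -ENODEV;
+ return 0;
+ }
+
+ The probing function then goes into SN9C102_SENSOR_TABLE and the USB ids
+ into SN9C102_ID_TABLE, both defined below.
+*/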
+
+/*****************************************************************************/
+
+/*
+ Probing functions: on success, you must attach the sensor to the camera
+ by calling sn9c102_attach_sensor() provided below.
+ To enable the I2C communication, you might need to perform a really basic
+ initialization of the SN9C10X chip by using the write function declared
+ ahead.
+ Functions must return 0 on success, the appropriate error otherwise.
+*/
+extern int sn9c102_probe_pas106b(struct sn9c102_device* cam);
+extern int sn9c102_probe_pas202bcb(struct sn9c102_device* cam);
+extern int sn9c102_probe_tas5110c1b(struct sn9c102_device* cam);
+extern int sn9c102_probe_tas5130d1b(struct sn9c102_device* cam);
+
+/*
+ Add the above entries to this table. Be sure to add the entry in the right
+ place, since, on failure, the next probing routine is called according to
+ the order of the list below, from top to bottom.
+*/
+#define SN9C102_SENSOR_TABLE \
+static int (*sn9c102_sensor_table[])(struct sn9c102_device*) = { \
+ &sn9c102_probe_pas106b, /* strong detection based on SENSOR ids */ \
+ &sn9c102_probe_pas202bcb, /* strong detection based on SENSOR ids */ \
+ &sn9c102_probe_tas5110c1b, /* detection based on USB pid/vid */ \
+ &sn9c102_probe_tas5130d1b, /* detection based on USB pid/vid */ \
+ NULL, \
+};
+
+/* Attach a probed sensor to the camera. */
+extern void
+sn9c102_attach_sensor(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor);
+
+/* Each SN9C10X camera has its own PID/VID identifiers. Add them here if needed. */
+#define SN9C102_ID_TABLE \
+static const struct usb_device_id sn9c102_id_table[] = { \
+ { USB_DEVICE(0x0c45, 0x6001), }, /* TAS5110C1B */ \
+ { USB_DEVICE(0x0c45, 0x6005), }, /* TAS5110C1B */ \
+ { USB_DEVICE(0x0c45, 0x6009), }, /* PAS106B */ \
+ { USB_DEVICE(0x0c45, 0x600d), }, /* PAS106B */ \
+ { USB_DEVICE(0x0c45, 0x6024), }, \
+ { USB_DEVICE(0x0c45, 0x6025), }, /* TAS5130D1B and TAS5110C1B */ \
+ { USB_DEVICE(0x0c45, 0x6028), }, /* PAS202BCB */ \
+ { USB_DEVICE(0x0c45, 0x6029), }, /* PAS106B */ \
+ { USB_DEVICE(0x0c45, 0x602a), }, /* HV7131[D|E1] */ \
+ { USB_DEVICE(0x0c45, 0x602b), }, /* MI-0343 */ \
+ { USB_DEVICE(0x0c45, 0x602c), }, /* OV7620 */ \
+ { USB_DEVICE(0x0c45, 0x6030), }, /* MI03x */ \
+ { USB_DEVICE(0x0c45, 0x6080), }, \
+ { USB_DEVICE(0x0c45, 0x6082), }, /* MI0343 and MI0360 */ \
+ { USB_DEVICE(0x0c45, 0x6083), }, /* HV7131[D|E1] */ \
+ { USB_DEVICE(0x0c45, 0x6088), }, \
+ { USB_DEVICE(0x0c45, 0x608a), }, \
+ { USB_DEVICE(0x0c45, 0x608b), }, \
+ { USB_DEVICE(0x0c45, 0x608c), }, /* HV7131x */ \
+ { USB_DEVICE(0x0c45, 0x608e), }, /* CIS-VF10 */ \
+ { USB_DEVICE(0x0c45, 0x608f), }, /* OV7630 */ \
+ { USB_DEVICE(0x0c45, 0x60a0), }, \
+ { USB_DEVICE(0x0c45, 0x60a2), }, \
+ { USB_DEVICE(0x0c45, 0x60a3), }, \
+ { USB_DEVICE(0x0c45, 0x60a8), }, /* PAS106B */ \
+ { USB_DEVICE(0x0c45, 0x60aa), }, /* TAS5130D1B */ \
+ { USB_DEVICE(0x0c45, 0x60ab), }, /* TAS5110C1B */ \
+ { USB_DEVICE(0x0c45, 0x60ac), }, \
+ { USB_DEVICE(0x0c45, 0x60ae), }, \
+ { USB_DEVICE(0x0c45, 0x60af), }, /* PAS202BCB */ \
+ { USB_DEVICE(0x0c45, 0x60b0), }, \
+ { USB_DEVICE(0x0c45, 0x60b2), }, \
+ { USB_DEVICE(0x0c45, 0x60b3), }, \
+ { USB_DEVICE(0x0c45, 0x60b8), }, \
+ { USB_DEVICE(0x0c45, 0x60ba), }, \
+ { USB_DEVICE(0x0c45, 0x60bb), }, \
+ { USB_DEVICE(0x0c45, 0x60bc), }, \
+ { USB_DEVICE(0x0c45, 0x60be), }, \
+ { } \
+};
+
+/*****************************************************************************/
+
+/*
+ Read/write routines: they always return -1 on error, 0 or the read value
+ otherwise. NOTE that a real read operation is not supported by the SN9C10X
+ chip for some of its registers. To work around this problem, a pseudo-read
+ call is provided instead: it returns the last value successfully written
+ to the register (0 if it has never been written), or the usual -1 on error.
+*/
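+
+/*
+ For instance (illustrative only): after a successful
+ sn9c102_write_reg(cam, 0x20, 0x17), a call to
+ sn9c102_pread_reg(cam, 0x17) returns 0x20, the last value successfully
+ written to that register.
+*/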
+
+/* The "try" I2C I/O versions are used when probing the sensor */
+extern int sn9c102_i2c_try_write(struct sn9c102_device*,struct sn9c102_sensor*,
+ u8 address, u8 value);
+extern int sn9c102_i2c_try_read(struct sn9c102_device*,struct sn9c102_sensor*,
+ u8 address);
+
+/*
+ This must be used if and only if the sensor doesn't implement the standard
+ I2C protocol. There are a number of good reasons why you should use the
+ single-byte versions of this function instead: do not abuse it. It writes
+ n bytes, from data0 to datan, to registers 0x09 - 0x09+n of the SN9C10X
+ chip.
+*/
+extern int sn9c102_i2c_try_raw_write(struct sn9c102_device* cam,
+ struct sn9c102_sensor* sensor, u8 n,
+ u8 data0, u8 data1, u8 data2, u8 data3,
+ u8 data4, u8 data5);
+
+/* To be used after the sensor struct has been attached to the camera struct */
+extern int sn9c102_i2c_write(struct sn9c102_device*, u8 address, u8 value);
+extern int sn9c102_i2c_read(struct sn9c102_device*, u8 address);
+
+/* I/O on registers in the bridge. Could be used by the sensor methods too */
+extern int sn9c102_write_reg(struct sn9c102_device*, u8 value, u16 index);
+extern int sn9c102_pread_reg(struct sn9c102_device*, u16 index);
+
+/*
+ NOTE: there are no debugging functions here. To keep the output uniform
+ you must use the dev_info()/dev_warn()/dev_err() macros defined in
+ device.h, already included here, the argument being the struct device
+ 'dev' of the sensor structure. Do NOT use these macros before the sensor
+ is attached or the kernel will crash! However, you should not need to
+ notify the user about common errors or other messages, since this is
+ done by the master module.
+*/
+
+/*****************************************************************************/
+
+enum sn9c102_i2c_frequency { /* sensors may support both frequencies */
+ SN9C102_I2C_100KHZ = 0x01,
+ SN9C102_I2C_400KHZ = 0x02,
+};
+
+enum sn9c102_i2c_interface {
+ SN9C102_I2C_2WIRES,
+ SN9C102_I2C_3WIRES,
+};
+
+#define SN9C102_I2C_SLAVEID_FICTITIOUS 0xff
+#define SN9C102_I2C_SLAVEID_UNAVAILABLE 0x00
+
+struct sn9c102_sensor {
+ char name[32], /* sensor name */
+ maintainer[64]; /* name of the maintainer <email> */
+
+ /*
+ These sensor capabilities must be provided if the SN9C10X controller
+ needs to communicate through the sensor serial interface by using
+ at least one of the i2c functions available.
+ */
+ enum sn9c102_i2c_frequency frequency;
+ enum sn9c102_i2c_interface interface;
+
+ /*
+ These identifiers must be provided if the image sensor implements
+ the standard I2C protocol.
+ */
+ u8 slave_read_id, slave_write_id; /* reg. 0x09 */
+
+ /*
+ NOTE: Unless otherwise noted, most of the functions below are not
+ mandatory. Set them to NULL if you do not implement them. If implemented,
+ they must return 0 on success, the proper error otherwise.
+ */
+
+ int (*init)(struct sn9c102_device* cam);
+ /*
+ This function is called after the sensor has been attached.
+ It should be used to initialize the sensor only, but may also
+ configure part of the SN9C10X chip if necessary. You don't need to
+ set up picture settings like brightness, contrast, etc. here, if
+ the corresponding controls are implemented (see below), since
+ they are adjusted in the core driver by calling the set_ctrl()
+ method after init(), where the arguments are the default values
+ specified in the v4l2_queryctrl list of supported controls.
+ The same suggestions apply for other settings, _if_ the corresponding
+ methods are present; if not, the initialization must configure the
+ sensor according to the default configuration structures below.
+ */
+
+ struct v4l2_queryctrl qctrl[V4L2_CID_LASTP1-V4L2_CID_BASE];
+ /*
+ Optional list of default controls, defined as indicated in the
+ V4L2 API. Menu type controls are not handled by this interface.
+ */
+
+ int (*get_ctrl)(struct sn9c102_device* cam, struct v4l2_control* ctrl);
+ int (*set_ctrl)(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl);
+ /*
+ You must implement at least the set_ctrl method if you have defined
+ the list above. The returned value must follow the V4L2
+ specifications for the VIDIOC_G|S_CTRL ioctls. V4L2_CID_H|VCENTER
+ are not supported by this driver, so do not implement them. Also,
+ you don't have to check whether the passed values are out of bounds,
+ given that this is done by the core module.
+ */
+
+ struct v4l2_cropcap cropcap;
+ /*
+ Think of the image sensor as a grid of R,G,B monochromatic pixels
+ arranged according to a particular Bayer pattern, which describes
+ the complete array of pixels, from (0,0) to (xmax, ymax). We will
+ use this coordinate system from now on. It is assumed the sensor
+ chip can be programmed to capture/transmit a subsection of that
+ array of pixels: we will call this subsection "active window".
+ It is not always true that the largest achievable active window can
+ cover the whole array of pixels. The V4L2 API defines another
+ area called "source rectangle", which, in turn, is a subrectangle of
+ the active window. The SN9C10X chip is always programmed to read the
+ source rectangle.
+ The bounds of both the active window and the source rectangle are
+ specified in the cropcap substructures 'bounds' and 'defrect'.
+ By default, the source rectangle should cover the largest possible
+ area. Again, it is not always true that the largest source rectangle
+ can cover the entire active window, although it is a rare case for
+ the hardware we have. The bounds of the source rectangle _must_ be
+ multiples of 16 and must use the same coordinate system as indicated
+ before; their centers shall align initially.
+ If necessary, the sensor chip must be initialized during init() to
+ set the bounds of the active sensor window; however, by default, it
+ usually covers the largest achievable area (maxwidth x maxheight)
+ of pixels, so no particular initialization is needed, if you have
+ defined the correct default bounds in the structures.
+ See the V4L2 API for further details.
+ NOTE: once you have defined the bounds of the active window
+ (struct cropcap.bounds) you must not change them anymore.
+ Only 'bounds' and 'defrect' fields are mandatory, other fields
+ will be ignored.
+ */
+
+ int (*set_crop)(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect);
+ /*
+ To be called on VIDIOC_S_CROP. The core module always calls a
+ default routine which configures the appropriate SN9C10X regs (also
+ scaling), but you may need to override/adjust specific stuff.
+ 'rect' contains width and height values that are multiple of 16: in
+ case you override the default function, you always have to program
+ the chip to match those values; on error return the corresponding
+ error code without rolling back.
+ NOTE: if necessary, you must program the SN9C10X chip to get rid of
+ blank pixels or blank lines at the _start_ of each line or
+ frame after each HSYNC or VSYNC, so that the image starts with
+ real RGB data (see regs 0x12, 0x13); having set H_SIZE and
+ V_SIZE, you don't have to care about blank pixels or blank
+ lines at the end of each line or frame.
+ */
+
+ struct v4l2_pix_format pix_format;
+ /*
+ What you have to define here are: 1) initial 'width' and 'height' of
+ the target rectangle 2) the initial 'pixelformat', which can be
+ either V4L2_PIX_FMT_SN9C10X (for compressed video) or
+ V4L2_PIX_FMT_SBGGR8 3) 'priv', which will be used to indicate the
+ number of bits per pixel for uncompressed video, 8 or 9 (despite the
+ current value of 'pixelformat').
+ NOTE 1: both 'width' and 'height' _must_ be either 1/1 or 1/2 or 1/4
+ of cropcap.defrect.width and cropcap.defrect.height. I
+ suggest 1/1.
+ NOTE 2: The initial compression quality is defined by the first bit
+ of reg 0x17 during the initialization of the image sensor.
+ NOTE 3: as said above, you have to program the SN9C10X chip to get
+ rid of any blank pixels, so that the output of the sensor
+ matches the RGB bayer sequence (i.e. BGBGBG...GRGRGR).
+ */
+
+ const struct device* dev;
+ /*
+ This is the argument for dev_err(), dev_info() and dev_warn(). It
+ is used for debugging purposes. You must not access the struct
+ before the sensor is attached.
+ */
+
+ const struct usb_device* usbdev;
+ /*
+ Points to the usb_device struct after the sensor is attached.
+ Do not touch unless you know what you are doing.
+ */
+
+ /*
+ Do NOT write to the data below, it's READ ONLY. It is used by the
+ core module to store successfully updated values of the above
+ settings, for rollbacks, etc., in case of errors during atomic I/O.
+ */
+ struct v4l2_queryctrl _qctrl[V4L2_CID_LASTP1-V4L2_CID_BASE];
+ struct v4l2_rect _rect;
+};
+
+/*****************************************************************************/
+
+/* Private ioctl's for control settings supported by some image sensors */
+#define SN9C102_V4L2_CID_DAC_MAGNITUDE V4L2_CID_PRIVATE_BASE
+#define SN9C102_V4L2_CID_DAC_SIGN V4L2_CID_PRIVATE_BASE + 1
+#define SN9C102_V4L2_CID_GREEN_BALANCE V4L2_CID_PRIVATE_BASE + 2
+
+#endif /* _SN9C102_SENSOR_H_ */
--- /dev/null
+/***************************************************************************
+ * Plug-in for TAS5110C1B image sensor connected to the SN9C10x PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor tas5110c1b;
+
+static struct v4l2_control tas5110c1b_gain;
+
+
+static int tas5110c1b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x44, 0x01);
+ err += sn9c102_write_reg(cam, 0x00, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x0a, 0x14);
+ err += sn9c102_write_reg(cam, 0x60, 0x17);
+ err += sn9c102_write_reg(cam, 0x06, 0x18);
+ err += sn9c102_write_reg(cam, 0xfb, 0x19);
+
+ err += sn9c102_i2c_write(cam, 0xc0, 0x80);
+
+ return err;
+}
+
+
+static int tas5110c1b_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ ctrl->value = tas5110c1b_gain.value;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+
+static int tas5110c1b_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ if (!(err += sn9c102_i2c_write(cam, 0x20, 0xf6 - ctrl->value)))
+ tas5110c1b_gain.value = ctrl->value;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return err ? -EIO : 0;
+}
+
+
+static int tas5110c1b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &tas5110c1b;
+ int err = 0;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 69,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 9;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ /* Don't change ! */
+ err += sn9c102_write_reg(cam, 0x14, 0x1a);
+ err += sn9c102_write_reg(cam, 0x0a, 0x1b);
+ err += sn9c102_write_reg(cam, 0xfb, 0x19);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor tas5110c1b = {
+ .name = "TAS5110C1B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_3WIRES,
+ .slave_read_id = SN9C102_I2C_SLAVEID_UNAVAILABLE,
+ .slave_write_id = SN9C102_I2C_SLAVEID_FICTITIOUS,
+ .init = &tas5110c1b_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_GAIN,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "global gain",
+ .minimum = 0x00,
+ .maximum = 0xf6,
+ .step = 0x01,
+ .default_value = 0x40,
+ .flags = 0,
+ },
+ },
+ .set_ctrl = &tas5110c1b_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 352,
+ .height = 288,
+ },
+ },
+ .get_ctrl = &tas5110c1b_get_ctrl,
+ .set_crop = &tas5110c1b_set_crop,
+ .pix_format = {
+ .width = 352,
+ .height = 288,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ }
+};
+
+
+int sn9c102_probe_tas5110c1b(struct sn9c102_device* cam)
+{
+ /* This sensor has no identifiers, so attach it first: attaching sets
+ the usbdev field needed by the product-id check below */
+ sn9c102_attach_sensor(cam, &tas5110c1b);
+
+ /* Sensor detection is based on USB pid/vid */
+ if (tas5110c1b.usbdev->descriptor.idProduct != 0x6001 &&
+ tas5110c1b.usbdev->descriptor.idProduct != 0x6005 &&
+ tas5110c1b.usbdev->descriptor.idProduct != 0x60ab)
+ return -ENODEV;
+
+ return 0;
+}
--- /dev/null
+/***************************************************************************
+ * Plug-in for TAS5130D1B image sensor connected to the SN9C10x PC Camera *
+ * Controllers *
+ * *
+ * Copyright (C) 2004 by Luca Risolia <luca.risolia@studio.unibo.it> *
+ * *
+ * This program is free software; you can redistribute it and/or modify *
+ * it under the terms of the GNU General Public License as published by *
+ * the Free Software Foundation; either version 2 of the License, or *
+ * (at your option) any later version. *
+ * *
+ * This program is distributed in the hope that it will be useful, *
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of *
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the *
+ * GNU General Public License for more details. *
+ * *
+ * You should have received a copy of the GNU General Public License *
+ * along with this program; if not, write to the Free Software *
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. *
+ ***************************************************************************/
+
+#include "sn9c102_sensor.h"
+
+
+static struct sn9c102_sensor tas5130d1b;
+
+static struct v4l2_control tas5130d1b_gain, tas5130d1b_exposure;
+
+
+static int tas5130d1b_init(struct sn9c102_device* cam)
+{
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, 0x01, 0x01);
+ err += sn9c102_write_reg(cam, 0x20, 0x17);
+ err += sn9c102_write_reg(cam, 0x04, 0x01);
+ err += sn9c102_write_reg(cam, 0x01, 0x10);
+ err += sn9c102_write_reg(cam, 0x00, 0x11);
+ err += sn9c102_write_reg(cam, 0x00, 0x14);
+ err += sn9c102_write_reg(cam, 0x60, 0x17);
+ err += sn9c102_write_reg(cam, 0x07, 0x18);
+
+ return err;
+}
+
+
+static int tas5130d1b_get_ctrl(struct sn9c102_device* cam,
+ struct v4l2_control* ctrl)
+{
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ ctrl->value = tas5130d1b_gain.value;
+ break;
+ case V4L2_CID_EXPOSURE:
+ ctrl->value = tas5130d1b_exposure.value;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+
+static int tas5130d1b_set_ctrl(struct sn9c102_device* cam,
+ const struct v4l2_control* ctrl)
+{
+ int err = 0;
+
+ switch (ctrl->id) {
+ case V4L2_CID_GAIN:
+ if (!(err += sn9c102_i2c_write(cam, 0x20, 0xf6 - ctrl->value)))
+ tas5130d1b_gain.value = ctrl->value;
+ break;
+ case V4L2_CID_EXPOSURE:
+ if (!(err += sn9c102_i2c_write(cam, 0x40, 0x47 - ctrl->value)))
+ tas5130d1b_exposure.value = ctrl->value;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return err ? -EIO : 0;
+}
+
+
+static int tas5130d1b_set_crop(struct sn9c102_device* cam,
+ const struct v4l2_rect* rect)
+{
+ struct sn9c102_sensor* s = &tas5130d1b;
+ u8 h_start = (u8)(rect->left - s->cropcap.bounds.left) + 104,
+ v_start = (u8)(rect->top - s->cropcap.bounds.top) + 12;
+ int err = 0;
+
+ err += sn9c102_write_reg(cam, h_start, 0x12);
+ err += sn9c102_write_reg(cam, v_start, 0x13);
+
+ /* Do NOT change! */
+ err += sn9c102_write_reg(cam, 0x1f, 0x1a);
+ err += sn9c102_write_reg(cam, 0x1a, 0x1b);
+ err += sn9c102_write_reg(cam, 0xf3, 0x19);
+
+ return err;
+}
+
+
+static struct sn9c102_sensor tas5130d1b = {
+ .name = "TAS5130D1B",
+ .maintainer = "Luca Risolia <luca.risolia@studio.unibo.it>",
+ .frequency = SN9C102_I2C_100KHZ,
+ .interface = SN9C102_I2C_3WIRES,
+ .slave_read_id = SN9C102_I2C_SLAVEID_UNAVAILABLE,
+ .slave_write_id = SN9C102_I2C_SLAVEID_FICTITIOUS,
+ .init = &tas5130d1b_init,
+ .qctrl = {
+ {
+ .id = V4L2_CID_GAIN,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "global gain",
+ .minimum = 0x00,
+ .maximum = 0xf6,
+ .step = 0x02,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ {
+ .id = V4L2_CID_EXPOSURE,
+ .type = V4L2_CTRL_TYPE_INTEGER,
+ .name = "exposure",
+ .minimum = 0x00,
+ .maximum = 0x47,
+ .step = 0x01,
+ .default_value = 0x00,
+ .flags = 0,
+ },
+ },
+ .get_ctrl = &tas5130d1b_get_ctrl,
+ .set_ctrl = &tas5130d1b_set_ctrl,
+ .cropcap = {
+ .bounds = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ .defrect = {
+ .left = 0,
+ .top = 0,
+ .width = 640,
+ .height = 480,
+ },
+ },
+ .set_crop = &tas5130d1b_set_crop,
+ .pix_format = {
+ .width = 640,
+ .height = 480,
+ .pixelformat = V4L2_PIX_FMT_SBGGR8,
+ .priv = 8,
+ }
+};
+
+
+int sn9c102_probe_tas5130d1b(struct sn9c102_device* cam)
+{
+ /* This sensor has no identifiers, so attach it first: attaching sets
+ the usbdev field needed by the product-id check below */
+ sn9c102_attach_sensor(cam, &tas5130d1b);
+
+ /* Sensor detection is based on USB pid/vid */
+ if (tas5130d1b.usbdev->descriptor.idProduct != 0x6025 &&
+ tas5130d1b.usbdev->descriptor.idProduct != 0x60aa)
+ return -ENODEV;
+
+ return 0;
+}
--- /dev/null
+/*
+ * USB PhidgetServo driver 1.0
+ *
+ * Copyright (C) 2004 Sean Young <sean@mess.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This is a driver for the USB PhidgetServo version 2.0 and 3.0 servo
+ * controllers available at: http://www.phidgets.com/
+ *
+ * Note that the driver takes input as: degrees.minutes
+ *
+ * CAUTION: Generally you should use 0 < degrees < 180 as anything else
+ * is probably beyond the range of your servo and may damage it.
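+ *
+ * For example, writing "45.30" to the sysfs attribute servo0 created by
+ * this driver requests an angle of 45 degrees and 30 minutes on servo 0.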
+ *
+ * Jun 16, 2004: Sean Young <sean@mess.org>
+ * - cleanups
+ * - was using memory after kfree()
+ * Aug 8, 2004: Sean Young <sean@mess.org>
+ * - set the highest angle as high as the hardware allows, there are
+ * some odd servos out there
+ *
+ */
+
+#include <linux/config.h>
+#ifdef CONFIG_USB_DEBUG
+#define DEBUG 1
+#endif
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/usb.h>
+
+#define DRIVER_AUTHOR "Sean Young <sean@mess.org>"
+#define DRIVER_DESC "USB PhidgetServo Driver"
+
+#define VENDOR_ID_GLAB 0x06c2
+#define DEVICE_ID_GLAB_PHIDGETSERVO_QUAD 0x0038
+#define DEVICE_ID_GLAB_PHIDGETSERVO_UNI 0x0039
+
+#define VENDOR_ID_WISEGROUP 0x0925
+#define VENDOR_ID_WISEGROUP_PHIDGETSERVO_QUAD 0x8101
+#define VENDOR_ID_WISEGROUP_PHIDGETSERVO_UNI 0x8104
+
+#define SERVO_VERSION_30 0x01
+#define SERVO_COUNT_QUAD 0x02
+
+static struct usb_device_id id_table[] = {
+ {
+ USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_GLAB_PHIDGETSERVO_QUAD),
+ .driver_info = SERVO_VERSION_30 | SERVO_COUNT_QUAD
+ },
+ {
+ USB_DEVICE(VENDOR_ID_GLAB, DEVICE_ID_GLAB_PHIDGETSERVO_UNI),
+ .driver_info = SERVO_VERSION_30
+ },
+ {
+ USB_DEVICE(VENDOR_ID_WISEGROUP,
+ VENDOR_ID_WISEGROUP_PHIDGETSERVO_QUAD),
+ .driver_info = SERVO_COUNT_QUAD
+ },
+ {
+ USB_DEVICE(VENDOR_ID_WISEGROUP,
+ VENDOR_ID_WISEGROUP_PHIDGETSERVO_UNI),
+ .driver_info = 0
+ },
+ {}
+};
+
+MODULE_DEVICE_TABLE(usb, id_table);
+
+struct phidget_servo {
+ struct usb_device *udev;
+ ulong type;
+ int pulse[4];
+ int degrees[4];
+ int minutes[4];
+};
+
+static int
+change_position_v30(struct phidget_servo *servo, int servo_no, int degrees,
+ int minutes)
+{
+ int retval;
+ unsigned char *buffer;
+
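+ /* these limits keep the 12-bit pulse value computed below within 0-4095 */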
+ if (degrees < -23 || degrees > 362)
+ return -EINVAL;
+
+ buffer = kmalloc(6, GFP_KERNEL);
+ if (!buffer) {
+ dev_err(&servo->udev->dev, "%s - out of memory\n",
+ __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ /*
+ * pulse = 0 - 4095
+ * angle = 0 - 180 degrees
+ *
+ * pulse = angle * 10.6 + 243.8
+ */
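+ /*
+ * e.g. 90 degrees, 0 minutes: (90*60*106 + 2438*60)/600 = 1197,
+ * which matches 90 * 10.6 + 243.8 = 1197.8 truncated to an integer.
+ */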
+ servo->pulse[servo_no] = ((degrees*60 + minutes)*106 + 2438*60)/600;
+ servo->degrees[servo_no] = degrees;
+ servo->minutes[servo_no] = minutes;
+
+ /*
+ * The PhidgetServo v3.0 is controlled by sending 6 bytes,
+ * 4 * 12 bits for each servo.
+ *
+ * low = lower 8 bits pulse
+ * high = higher 4 bits pulse
+ *
+ * offset bits
+ * +---+-----------------+
+ * | 0 | low 0 |
+ * +---+--------+--------+
+ * | 1 | high 1 | high 0 |
+ * +---+--------+--------+
+ * | 2 | low 1 |
+ * +---+-----------------+
+ * | 3 | low 2 |
+ * +---+--------+--------+
+ * | 4 | high 3 | high 2 |
+ * +---+--------+--------+
+ * | 5 | low 3 |
+ * +---+-----------------+
+ */
+
+ buffer[0] = servo->pulse[0] & 0xff;
+ buffer[1] = (servo->pulse[0] >> 8 & 0x0f)
+ | (servo->pulse[1] >> 4 & 0xf0);
+ buffer[2] = servo->pulse[1] & 0xff;
+ buffer[3] = servo->pulse[2] & 0xff;
+ buffer[4] = (servo->pulse[2] >> 8 & 0x0f)
+ | (servo->pulse[3] >> 4 & 0xf0);
+ buffer[5] = servo->pulse[3] & 0xff;
+
+ dev_dbg(&servo->udev->dev,
+ "data: %02x %02x %02x %02x %02x %02x\n",
+ buffer[0], buffer[1], buffer[2],
+ buffer[3], buffer[4], buffer[5]);
+
+ retval = usb_control_msg(servo->udev,
+ usb_sndctrlpipe(servo->udev, 0),
+ 0x09, 0x21, 0x0200, 0x0000, buffer, 6, 2 * HZ);
+
+ kfree(buffer);
+
+ return retval;
+}
+
+static int
+change_position_v20(struct phidget_servo *servo, int servo_no, int degrees,
+ int minutes)
+{
+ int retval;
+ unsigned char *buffer;
+
+ if (degrees < -23 || degrees > 278)
+ return -EINVAL;
+
+ buffer = kmalloc(2, GFP_KERNEL);
+ if (!buffer) {
+ dev_err(&servo->udev->dev, "%s - out of memory\n",
+ __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ /*
+ * angle = 0 - 180 degrees
+ * pulse = angle + 23
+ *
+ * The v2.0 hardware has whole-degree resolution only, so the
+ * minutes argument is silently discarded.
+ */
+ servo->pulse[servo_no] = degrees + 23;
+ servo->degrees[servo_no] = degrees;
+ servo->minutes[servo_no] = 0;
+
+ /*
+ * The PhidgetServo v2.0 is controlled by sending two bytes. The
+ * first byte is the servo number xor'ed with 2:
+ *
+ * servo 0 = 2
+ * servo 1 = 3
+ * servo 2 = 0
+ * servo 3 = 1
+ *
+ * The second byte is the position.
+ */
+
+ buffer[0] = servo_no ^ 2;
+ buffer[1] = servo->pulse[servo_no];
+
+ dev_dbg(&servo->udev->dev, "data: %02x %02x\n", buffer[0], buffer[1]);
+
+ retval = usb_control_msg(servo->udev,
+ usb_sndctrlpipe(servo->udev, 0),
+ 0x09, 0x21, 0x0200, 0x0000, buffer, 2, 2 * HZ);
+
+ kfree(buffer);
+
+ return retval;
+}
+
+#define show_set(value) \
+static ssize_t set_servo##value (struct device *dev, \
+ const char *buf, size_t count) \
+{ \
+ int degrees, minutes, retval; \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ struct phidget_servo *servo = usb_get_intfdata (intf); \
+ \
+ minutes = 0; \
+ /* must at least convert degrees */ \
+ if (sscanf (buf, "%d.%d", &degrees, &minutes) < 1) { \
+ return -EINVAL; \
+ } \
+ \
+ if (minutes < 0 || minutes > 59) \
+ return -EINVAL; \
+ \
+ if (servo->type & SERVO_VERSION_30) \
+ retval = change_position_v30 (servo, value, degrees, \
+ minutes); \
+ else \
+ retval = change_position_v20 (servo, value, degrees, \
+ minutes); \
+ \
+ return retval < 0 ? retval : count; \
+} \
+ \
+static ssize_t show_servo##value (struct device *dev, char *buf) \
+{ \
+ struct usb_interface *intf = to_usb_interface (dev); \
+ struct phidget_servo *servo = usb_get_intfdata (intf); \
+ \
+ return sprintf (buf, "%d.%02d\n", servo->degrees[value], \
+ servo->minutes[value]); \
+} \
+static DEVICE_ATTR(servo##value, S_IWUGO | S_IRUGO, \
+ show_servo##value, set_servo##value);
+
+show_set(0);
+show_set(1);
+show_set(2);
+show_set(3);
+
+static int
+servo_probe(struct usb_interface *interface, const struct usb_device_id *id)
+{
+ struct usb_device *udev = interface_to_usbdev(interface);
+ struct phidget_servo *dev;
+
+ dev = kmalloc(sizeof (struct phidget_servo), GFP_KERNEL);
+ if (dev == NULL) {
+ dev_err(&interface->dev, "%s - out of memory\n", __FUNCTION__);
+ return -ENOMEM;
+ }
+ memset(dev, 0x00, sizeof (*dev));
+
+ dev->udev = usb_get_dev(udev);
+ dev->type = id->driver_info;
+ usb_set_intfdata(interface, dev);
+
+ device_create_file(&interface->dev, &dev_attr_servo0);
+ if (dev->type & SERVO_COUNT_QUAD) {
+ device_create_file(&interface->dev, &dev_attr_servo1);
+ device_create_file(&interface->dev, &dev_attr_servo2);
+ device_create_file(&interface->dev, &dev_attr_servo3);
+ }
+
+ dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 attached\n",
+ dev->type & SERVO_COUNT_QUAD ? 4 : 1,
+ dev->type & SERVO_VERSION_30 ? 3 : 2);
+
+ if(!(dev->type & SERVO_VERSION_30))
+ dev_info(&interface->dev,
+ "WARNING: v2.0 not tested! Please report if it works.\n");
+
+ return 0;
+}
+
+static void
+servo_disconnect(struct usb_interface *interface)
+{
+ struct phidget_servo *dev;
+
+ dev = usb_get_intfdata(interface);
+ usb_set_intfdata(interface, NULL);
+
+ device_remove_file(&interface->dev, &dev_attr_servo0);
+ if (dev->type & SERVO_COUNT_QUAD) {
+ device_remove_file(&interface->dev, &dev_attr_servo1);
+ device_remove_file(&interface->dev, &dev_attr_servo2);
+ device_remove_file(&interface->dev, &dev_attr_servo3);
+ }
+
+ usb_put_dev(dev->udev);
+
+ dev_info(&interface->dev, "USB %d-Motor PhidgetServo v%d.0 detached\n",
+ dev->type & SERVO_COUNT_QUAD ? 4 : 1,
+ dev->type & SERVO_VERSION_30 ? 3 : 2);
+
+ kfree(dev);
+}
+
+static struct usb_driver servo_driver = {
+ .owner = THIS_MODULE,
+ .name = "phidgetservo",
+ .probe = servo_probe,
+ .disconnect = servo_disconnect,
+ .id_table = id_table
+};
+
+static int __init
+phidget_servo_init(void)
+{
+ int retval;
+
+ retval = usb_register(&servo_driver);
+ if (retval)
+ err("usb_register failed. Error number %d", retval);
+
+ return retval;
+}
+
+static void __exit
+phidget_servo_exit(void)
+{
+ usb_deregister(&servo_driver);
+}
+
+module_init(phidget_servo_init);
+module_exit(phidget_servo_exit);
+
+MODULE_AUTHOR(DRIVER_AUTHOR);
+MODULE_DESCRIPTION(DRIVER_DESC);
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * drivers/video/asiliantfb.c
+ * frame buffer driver for Asiliant 69000 chip
+ * Copyright (C) 2001-2003 Saito.K & Jeanne
+ *
+ * from driver/video/chipsfb.c and,
+ *
+ * drivers/video/asiliantfb.c -- frame buffer device for
+ * Asiliant 69030 chip (formerly Intel, formerly Chips & Technologies)
+ * Author: apc@agelectronics.co.uk
+ * Copyright (C) 2000 AG Electronics
+ * Note: the data sheets don't seem to be available from Asiliant.
+ * They are available by searching developer.intel.com, but are not otherwise
+ * linked to.
+ *
+ * This driver should be portable with minimal effort to the 69000 display
+ * chip, and to the twin-display mode of the 69030.
+ *  Contains code from Thomas Höhenleitner <th@visuelle-maschinen.de> (thanks)
+ *
+ * Derived from the CT65550 driver chipsfb.c:
+ * Copyright (C) 1998 Paul Mackerras
+ * ...which was derived from the Powermac "chips" driver:
+ * Copyright (C) 1997 Fabio Riccardi.
+ * And from the frame buffer device for Open Firmware-initialized devices:
+ * Copyright (C) 1997 Geert Uytterhoeven.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/tty.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/delay.h>
+#include <linux/interrupt.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <asm/io.h>
+
+/* Built in clock of the 69030 */
+const unsigned Fref = 14318180;
+
+#define mmio_base (p->screen_base + 0x400000)
+
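+/*
+ * The chip uses indexed registers: the register number is written to an
+ * address port (ap) and the value to the corresponding data port (dp).
+ */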
+#define mm_write_ind(num, val, ap, dp) do { \
+ writeb((num), mmio_base + (ap)); writeb((val), mmio_base + (dp)); \
+} while (0)
+
+static void mm_write_xr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7ac, 0x7ad);
+}
+#define write_xr(num, val) mm_write_xr(p, num, val)
+
+static void mm_write_fr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7a0, 0x7a1);
+}
+#define write_fr(num, val) mm_write_fr(p, num, val)
+
+static void mm_write_cr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x7a8, 0x7a9);
+}
+#define write_cr(num, val) mm_write_cr(p, num, val)
+
+static void mm_write_gr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x79c, 0x79d);
+}
+#define write_gr(num, val) mm_write_gr(p, num, val)
+
+static void mm_write_sr(struct fb_info *p, u8 reg, u8 data)
+{
+ mm_write_ind(reg, data, 0x788, 0x789);
+}
+#define write_sr(num, val) mm_write_sr(p, num, val)
+
+static void mm_write_ar(struct fb_info *p, u8 reg, u8 data)
+{
+ readb(mmio_base + 0x7b4);
+ mm_write_ind(reg, data, 0x780, 0x780);
+}
+#define write_ar(num, val) mm_write_ar(p, num, val)
+
+/*
+ * Exported functions
+ */
+int asiliantfb_init(void);
+
+static int asiliantfb_pci_init(struct pci_dev *dp, const struct pci_device_id *);
+static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ struct fb_info *info);
+static int asiliantfb_set_par(struct fb_info *info);
+static int asiliantfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int transp, struct fb_info *info);
+
+static struct fb_ops asiliantfb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = asiliantfb_check_var,
+ .fb_set_par = asiliantfb_set_par,
+ .fb_setcolreg = asiliantfb_setcolreg,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_cursor = soft_cursor,
+};
+
+/* Calculate the ratios for the dot clocks without using a single long long
+ * value */
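+/*
+ * pixclock is a period in picoseconds, so the required frequency is
+ * 10^12 / pixclock Hz; it is computed below as quotient plus scaled
+ * remainder to avoid a 64-bit division.  E.g. pixclock = 40000 gives
+ * ratio = 25, remainder = 0 and Ftarget = 25000000 (25MHz).
+ */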
+static void asiliant_calc_dclk2(u32 *ppixclock, u8 *dclk2_m, u8 *dclk2_n, u8 *dclk2_div)
+{
+ unsigned pixclock = *ppixclock;
+	unsigned Ftarget;	/* computed below from quotient and remainder */
+ unsigned n;
+ unsigned best_error = 0xffffffff;
+ unsigned best_m = 0xffffffff,
+ best_n = 0xffffffff;
+ unsigned ratio;
+ unsigned remainder;
+ unsigned char divisor = 0;
+
+ /* Calculate the frequency required. This is hard enough. */
+ ratio = 1000000 / pixclock;
+ remainder = 1000000 % pixclock;
+ Ftarget = 1000000 * ratio + (1000000 * remainder) / pixclock;
+
+ while (Ftarget < 100000000) {
+ divisor += 0x10;
+ Ftarget <<= 1;
+ }
+
+ ratio = Ftarget / Fref;
+ remainder = Ftarget % Fref;
+
+	/* This expresses the constraint that 150kHz <= Fref/n <= 5MHz,
+ * together with 3 <= n <= 257. */
+ for (n = 3; n <= 257; n++) {
+ unsigned m = n * ratio + (n * remainder) / Fref;
+
+ /* 3 <= m <= 257 */
+ if (m >= 3 && m <= 257) {
+			/* unsigned arithmetic: compare before subtracting */
+			unsigned new_error = (Ftarget * n) >= (Fref * m) ?
+			    ((Ftarget * n) - (Fref * m)) : ((Fref * m) - (Ftarget * n));
+ if (new_error < best_error) {
+ best_n = n;
+ best_m = m;
+ best_error = new_error;
+ }
+ }
+ /* But if VLD = 4, then 4m <= 1028 */
+ else if (m <= 1028) {
+ /* remember there are still only 8-bits of precision in m, so
+ * avoid over-optimistic error calculations */
+			unsigned new_error = (Ftarget * n) >= (Fref * (m & ~3)) ?
+			    ((Ftarget * n) - (Fref * (m & ~3))) : ((Fref * (m & ~3)) - (Ftarget * n));
+ if (new_error < best_error) {
+ best_n = n;
+ best_m = m;
+ best_error = new_error;
+ }
+ }
+ }
+ if (best_m > 257)
+ best_m >>= 2; /* divide m by 4, and leave VCO loop divide at 4 */
+ else
+ divisor |= 4; /* or set VCO loop divide to 1 */
+ *dclk2_m = best_m - 2;
+ *dclk2_n = best_n - 2;
+ *dclk2_div = divisor;
+ *ppixclock = pixclock;
+ return;
+}
+
+static void asiliant_set_timing(struct fb_info *p)
+{
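+	/* The horizontal CRT parameters below are programmed in character
+	 * clocks (1 character clock = 8 pixels), hence the divisions by 8. */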
+ unsigned hd = p->var.xres / 8;
+ unsigned hs = (p->var.xres + p->var.right_margin) / 8;
+ unsigned he = (p->var.xres + p->var.right_margin + p->var.hsync_len) / 8;
+ unsigned ht = (p->var.left_margin + p->var.xres + p->var.right_margin + p->var.hsync_len) / 8;
+ unsigned vd = p->var.yres;
+ unsigned vs = p->var.yres + p->var.lower_margin;
+ unsigned ve = p->var.yres + p->var.lower_margin + p->var.vsync_len;
+ unsigned vt = p->var.upper_margin + p->var.yres + p->var.lower_margin + p->var.vsync_len;
+ unsigned wd = (p->var.xres_virtual * ((p->var.bits_per_pixel+7)/8)) / 8;
+
+ if ((p->var.xres == 640) && (p->var.yres == 480) && (p->var.pixclock == 39722)) {
+ write_fr(0x01, 0x02); /* LCD */
+ } else {
+ write_fr(0x01, 0x01); /* CRT */
+ }
+
+ write_cr(0x11, (ve - 1) & 0x0f);
+ write_cr(0x00, (ht - 5) & 0xff);
+ write_cr(0x01, hd - 1);
+ write_cr(0x02, hd);
+ write_cr(0x03, ((ht - 1) & 0x1f) | 0x80);
+ write_cr(0x04, hs);
+ write_cr(0x05, (((ht - 1) & 0x20) <<2) | (he & 0x1f));
+ write_cr(0x3c, (ht - 1) & 0xc0);
+ write_cr(0x06, (vt - 2) & 0xff);
+ write_cr(0x30, (vt - 2) >> 8);
+ write_cr(0x07, 0x00);
+ write_cr(0x08, 0x00);
+ write_cr(0x09, 0x00);
+ write_cr(0x10, (vs - 1) & 0xff);
+ write_cr(0x32, ((vs - 1) >> 8) & 0xf);
+ write_cr(0x11, ((ve - 1) & 0x0f) | 0x80);
+ write_cr(0x12, (vd - 1) & 0xff);
+ write_cr(0x31, ((vd - 1) & 0xf00) >> 8);
+ write_cr(0x13, wd & 0xff);
+ write_cr(0x41, (wd & 0xf00) >> 8);
+ write_cr(0x15, (vs - 1) & 0xff);
+ write_cr(0x33, ((vs - 1) >> 8) & 0xf);
+ write_cr(0x38, ((ht - 5) & 0x100) >> 8);
+ write_cr(0x16, (vt - 1) & 0xff);
+ write_cr(0x18, 0x00);
+
+ if (p->var.xres == 640) {
+ writeb(0xc7, mmio_base + 0x784); /* set misc output reg */
+ } else {
+ writeb(0x07, mmio_base + 0x784); /* set misc output reg */
+ }
+}
+
+static int asiliantfb_check_var(struct fb_var_screeninfo *var,
+ struct fb_info *p)
+{
+ unsigned long Ftarget, ratio, remainder;
+
+ ratio = 1000000 / var->pixclock;
+ remainder = 1000000 % var->pixclock;
+ Ftarget = 1000000 * ratio + (1000000 * remainder) / var->pixclock;
+
+ /* First check the constraint that the maximum post-VCO divisor is 32,
+ * and the maximum Fvco is 220MHz */
+ if (Ftarget > 220000000 || Ftarget < 3125000) {
+ printk(KERN_ERR "asiliantfb dotclock must be between 3.125 and 220MHz\n");
+ return -ENXIO;
+ }
+ var->xres_virtual = var->xres;
+ var->yres_virtual = var->yres;
+
+ if (var->bits_per_pixel == 24) {
+ var->red.offset = 16;
+ var->green.offset = 8;
+ var->blue.offset = 0;
+ var->red.length = var->blue.length = var->green.length = 8;
+ } else if (var->bits_per_pixel == 16) {
+ switch (var->red.offset) {
+ case 11:
+ var->green.length = 6;
+ break;
+ case 10:
+ var->green.length = 5;
+ break;
+ default:
+ return -EINVAL;
+ }
+ var->green.offset = 5;
+ var->blue.offset = 0;
+ var->red.length = var->blue.length = 5;
+ } else if (var->bits_per_pixel == 8) {
+ var->red.offset = var->green.offset = var->blue.offset = 0;
+ var->red.length = var->green.length = var->blue.length = 8;
+ }
+ return 0;
+}
+
+static int asiliantfb_set_par(struct fb_info *p)
+{
+ u8 dclk2_m; /* Holds m-2 value for register */
+ u8 dclk2_n; /* Holds n-2 value for register */
+ u8 dclk2_div; /* Holds divisor bitmask */
+
+ /* Set pixclock */
+ asiliant_calc_dclk2(&p->var.pixclock, &dclk2_m, &dclk2_n, &dclk2_div);
+
+ /* Set color depth */
+ if (p->var.bits_per_pixel == 24) {
+ write_xr(0x81, 0x16); /* 24 bit packed color mode */
+ write_xr(0x82, 0x00); /* Disable palettes */
+ write_xr(0x20, 0x20); /* 24 bit blitter mode */
+ } else if (p->var.bits_per_pixel == 16) {
+ if (p->var.red.offset == 11)
+ write_xr(0x81, 0x15); /* 16 bit color mode */
+ else
+ write_xr(0x81, 0x14); /* 15 bit color mode */
+ write_xr(0x82, 0x00); /* Disable palettes */
+ write_xr(0x20, 0x10); /* 16 bit blitter mode */
+ } else if (p->var.bits_per_pixel == 8) {
+ write_xr(0x0a, 0x02); /* Linear */
+ write_xr(0x81, 0x12); /* 8 bit color mode */
+ write_xr(0x82, 0x00); /* Graphics gamma enable */
+ write_xr(0x20, 0x00); /* 8 bit blitter mode */
+ }
+ p->fix.line_length = p->var.xres * (p->var.bits_per_pixel >> 3);
+ p->fix.visual = (p->var.bits_per_pixel == 8) ? FB_VISUAL_PSEUDOCOLOR : FB_VISUAL_TRUECOLOR;
+ write_xr(0xc4, dclk2_m);
+ write_xr(0xc5, dclk2_n);
+ write_xr(0xc7, dclk2_div);
+ /* Set up the CR registers */
+ asiliant_set_timing(p);
+ return 0;
+}
+
+static int asiliantfb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int transp, struct fb_info *p)
+{
+ if (regno > 255)
+ return 1;
+ red >>= 8;
+ green >>= 8;
+ blue >>= 8;
+
+	/* Set hardware palette */
+ writeb(regno, mmio_base + 0x790);
+ udelay(1);
+ writeb(red, mmio_base + 0x791);
+ writeb(green, mmio_base + 0x791);
+ writeb(blue, mmio_base + 0x791);
+
+ switch(p->var.bits_per_pixel) {
+ case 15:
+ if (regno < 16) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ ((red & 0xf8) << 7) |
+ ((green & 0xf8) << 2) |
+ ((blue & 0xf8) >> 3);
+ }
+ break;
+ case 16:
+ if (regno < 16) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ ((red & 0xf8) << 8) |
+ ((green & 0xfc) << 3) |
+ ((blue & 0xf8) >> 3);
+ }
+ break;
+ case 24:
+ if (regno < 24) {
+ ((u32 *)(p->pseudo_palette))[regno] =
+ (red << 16) |
+ (green << 8) |
+ (blue);
+ }
+ break;
+ }
+ return 0;
+}
+
+struct chips_init_reg {
+ unsigned char addr;
+ unsigned char data;
+};
+
+#define N_ELTS(x) (sizeof(x) / sizeof(x[0]))
+
+static struct chips_init_reg chips_init_sr[] =
+{
+ {0x00, 0x03}, /* Reset register */
+ {0x01, 0x01}, /* Clocking mode */
+ {0x02, 0x0f}, /* Plane mask */
+ {0x04, 0x0e} /* Memory mode */
+};
+
+static struct chips_init_reg chips_init_gr[] =
+{
+ {0x03, 0x00}, /* Data rotate */
+ {0x05, 0x00}, /* Graphics mode */
+ {0x06, 0x01}, /* Miscellaneous */
+ {0x08, 0x00} /* Bit mask */
+};
+
+static struct chips_init_reg chips_init_ar[] =
+{
+ {0x10, 0x01}, /* Mode control */
+ {0x11, 0x00}, /* Overscan */
+ {0x12, 0x0f}, /* Memory plane enable */
+ {0x13, 0x00} /* Horizontal pixel panning */
+};
+
+static struct chips_init_reg chips_init_cr[] =
+{
+ {0x0c, 0x00}, /* Start address high */
+ {0x0d, 0x00}, /* Start address low */
+ {0x40, 0x00}, /* Extended Start Address */
+ {0x41, 0x00}, /* Extended Start Address */
+ {0x14, 0x00}, /* Underline location */
+ {0x17, 0xe3}, /* CRT mode control */
+ {0x70, 0x00} /* Interlace control */
+};
+
+
+static struct chips_init_reg chips_init_fr[] =
+{
+ {0x01, 0x02},
+ {0x03, 0x08},
+ {0x08, 0xcc},
+ {0x0a, 0x08},
+ {0x18, 0x00},
+ {0x1e, 0x80},
+ {0x40, 0x83},
+ {0x41, 0x00},
+ {0x48, 0x13},
+ {0x4d, 0x60},
+ {0x4e, 0x0f},
+
+ {0x0b, 0x01},
+
+ {0x21, 0x51},
+ {0x22, 0x1d},
+ {0x23, 0x5f},
+ {0x20, 0x4f},
+ {0x34, 0x00},
+ {0x24, 0x51},
+ {0x25, 0x00},
+ {0x27, 0x0b},
+ {0x26, 0x00},
+ {0x37, 0x80},
+ {0x33, 0x0b},
+ {0x35, 0x11},
+ {0x36, 0x02},
+ {0x31, 0xea},
+ {0x32, 0x0c},
+ {0x30, 0xdf},
+ {0x10, 0x0c},
+ {0x11, 0xe0},
+ {0x12, 0x50},
+ {0x13, 0x00},
+ {0x16, 0x03},
+ {0x17, 0xbd},
+ {0x1a, 0x00},
+};
+
+
+static struct chips_init_reg chips_init_xr[] =
+{
+ {0xce, 0x00}, /* set default memory clock */
+ {0xcc, 200 }, /* MCLK ratio M */
+ {0xcd, 18 }, /* MCLK ratio N */
+ {0xce, 0x90}, /* MCLK divisor = 2 */
+
+ {0xc4, 209 },
+ {0xc5, 118 },
+ {0xc7, 32 },
+ {0xcf, 0x06},
+ {0x09, 0x01}, /* IO Control - CRT controller extensions */
+ {0x0a, 0x02}, /* Frame buffer mapping */
+ {0x0b, 0x01}, /* PCI burst write */
+ {0x40, 0x03}, /* Memory access control */
+ {0x80, 0x82}, /* Pixel pipeline configuration 0 */
+ {0x81, 0x12}, /* Pixel pipeline configuration 1 */
+ {0x82, 0x08}, /* Pixel pipeline configuration 2 */
+
+ {0xd0, 0x0f},
+ {0xd1, 0x01},
+};
+
+static void __init chips_hw_init(struct fb_info *p)
+{
+ int i;
+
+ for (i = 0; i < N_ELTS(chips_init_xr); ++i)
+ write_xr(chips_init_xr[i].addr, chips_init_xr[i].data);
+ write_xr(0x81, 0x12);
+ write_xr(0x82, 0x08);
+ write_xr(0x20, 0x00);
+ for (i = 0; i < N_ELTS(chips_init_sr); ++i)
+ write_sr(chips_init_sr[i].addr, chips_init_sr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_gr); ++i)
+ write_gr(chips_init_gr[i].addr, chips_init_gr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_ar); ++i)
+ write_ar(chips_init_ar[i].addr, chips_init_ar[i].data);
+ /* Enable video output in attribute index register */
+ writeb(0x20, mmio_base + 0x780);
+ for (i = 0; i < N_ELTS(chips_init_cr); ++i)
+ write_cr(chips_init_cr[i].addr, chips_init_cr[i].data);
+ for (i = 0; i < N_ELTS(chips_init_fr); ++i)
+ write_fr(chips_init_fr[i].addr, chips_init_fr[i].data);
+}
+
+static struct fb_fix_screeninfo asiliantfb_fix __initdata = {
+ .id = "Asiliant 69000",
+ .type = FB_TYPE_PACKED_PIXELS,
+ .visual = FB_VISUAL_PSEUDOCOLOR,
+ .accel = FB_ACCEL_NONE,
+ .line_length = 640,
+ .smem_len = 0x200000, /* 2MB */
+};
+
+static struct fb_var_screeninfo asiliantfb_var __initdata = {
+ .xres = 640,
+ .yres = 480,
+ .xres_virtual = 640,
+ .yres_virtual = 480,
+ .bits_per_pixel = 8,
+ .red = { .length = 8 },
+ .green = { .length = 8 },
+ .blue = { .length = 8 },
+ .height = -1,
+ .width = -1,
+ .vmode = FB_VMODE_NONINTERLACED,
+ .pixclock = 39722,
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+};
+
+static void __init init_asiliant(struct fb_info *p, unsigned long addr)
+{
+ p->fix = asiliantfb_fix;
+ p->fix.smem_start = addr;
+ p->var = asiliantfb_var;
+ p->fbops = &asiliantfb_ops;
+ p->flags = FBINFO_DEFAULT;
+
+ fb_alloc_cmap(&p->cmap, 256, 0);
+
+ if (register_framebuffer(p) < 0) {
+ printk(KERN_ERR "C&T 69000 framebuffer failed to register\n");
+ return;
+ }
+
+ printk(KERN_INFO "fb%d: Asiliant 69000 frame buffer (%dK RAM detected)\n",
+ p->node, p->fix.smem_len / 1024);
+
+ writeb(0xff, mmio_base + 0x78c);
+ chips_hw_init(p);
+}
+
+static int __devinit
+asiliantfb_pci_init(struct pci_dev *dp, const struct pci_device_id *ent)
+{
+ unsigned long addr, size;
+ struct fb_info *p;
+
+ if ((dp->resource[0].flags & IORESOURCE_MEM) == 0)
+ return -ENODEV;
+ addr = pci_resource_start(dp, 0);
+ size = pci_resource_len(dp, 0);
+ if (addr == 0)
+ return -ENODEV;
+ if (!request_mem_region(addr, size, "asiliantfb"))
+ return -EBUSY;
+
+ p = framebuffer_alloc(sizeof(u32) * 256, &dp->dev);
+ if (!p) {
+ release_mem_region(addr, size);
+ return -ENOMEM;
+ }
+ p->pseudo_palette = p->par;
+ p->par = NULL;
+
+ p->screen_base = ioremap(addr, 0x800000);
+ if (p->screen_base == NULL) {
+ release_mem_region(addr, size);
+ framebuffer_release(p);
+ return -ENOMEM;
+ }
+
+ pci_write_config_dword(dp, 4, 0x02800083);
+ writeb(3, p->screen_base + 0x400784);
+
+ init_asiliant(p, addr);
+
+ pci_set_drvdata(dp, p);
+ return 0;
+}
+
+static void __devexit asiliantfb_remove(struct pci_dev *dp)
+{
+ struct fb_info *p = pci_get_drvdata(dp);
+
+ unregister_framebuffer(p);
+ iounmap(p->screen_base);
+ release_mem_region(pci_resource_start(dp, 0), pci_resource_len(dp, 0));
+ pci_set_drvdata(dp, NULL);
+ framebuffer_release(p);
+}
+
+static struct pci_device_id asiliantfb_pci_tbl[] __devinitdata = {
+ { PCI_VENDOR_ID_CT, PCI_DEVICE_ID_CT_69000, PCI_ANY_ID, PCI_ANY_ID },
+ { 0 }
+};
+
+MODULE_DEVICE_TABLE(pci, asiliantfb_pci_tbl);
+
+static struct pci_driver asiliantfb_driver = {
+ .name = "asiliantfb",
+ .id_table = asiliantfb_pci_tbl,
+ .probe = asiliantfb_pci_init,
+ .remove = __devexit_p(asiliantfb_remove),
+};
+
+int __init asiliantfb_init(void)
+{
+ if (fb_get_options("asiliantfb", NULL))
+ return -ENODEV;
+
+ return pci_module_init(&asiliantfb_driver);
+}
+
+module_init(asiliantfb_init);
+
+static void __exit asiliantfb_exit(void)
+{
+ pci_unregister_driver(&asiliantfb_driver);
+}
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * SGI GBE frame buffer driver
+ *
+ * Copyright (C) 1999 Silicon Graphics, Inc. - Jeffrey Newquist
+ * Copyright (C) 2002 Vivien Chappelier <vivien.chappelier@linux-mips.org>
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ */
+
+#include <linux/config.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+#include <linux/errno.h>
+#include <linux/fb.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#ifdef CONFIG_X86
+#include <asm/mtrr.h>
+#endif
+#ifdef CONFIG_MIPS
+#include <asm/addrspace.h>
+#endif
+#include <asm/byteorder.h>
+#include <asm/io.h>
+#include <asm/tlbflush.h>
+
+#include <video/gbe.h>
+
+static struct sgi_gbe *gbe;
+
+struct gbefb_par {
+ struct fb_var_screeninfo var;
+ struct gbe_timing_info timing;
+ int valid;
+};
+
+#ifdef CONFIG_SGI_IP32
+#define GBE_BASE 0x16000000 /* SGI O2 */
+#endif
+
+#ifdef CONFIG_X86_VISWS
+#define GBE_BASE 0xd0000000 /* SGI Visual Workstation */
+#endif
+
+/* macro for fastest write-through access to the framebuffer */
+#ifdef CONFIG_MIPS
+#ifdef CONFIG_CPU_R10000
+#define pgprot_fb(_prot) (((_prot) & (~_CACHE_MASK)) | _CACHE_UNCACHED_ACCELERATED)
+#else
+#define pgprot_fb(_prot) (((_prot) & (~_CACHE_MASK)) | _CACHE_CACHABLE_NO_WA)
+#endif
+#endif
+#ifdef CONFIG_X86
+#define pgprot_fb(_prot) ((_prot) | _PAGE_PCD)
+#endif
+
+/*
+ * RAM we reserve for the frame buffer. This defines the maximum screen
+ * size.
+ */
+#if CONFIG_FB_GBE_MEM > 8
+#error GBE Framebuffer cannot use more than 8MB of memory
+#endif
+
+#define TILE_SHIFT 16
+#define TILE_SIZE (1 << TILE_SHIFT)
+#define TILE_MASK (TILE_SIZE - 1)
+
+static unsigned int gbe_mem_size = CONFIG_FB_GBE_MEM * 1024*1024;
+static void *gbe_mem;
+static dma_addr_t gbe_dma_addr;
+unsigned long gbe_mem_phys;
+
+static struct {
+ uint16_t *cpu;
+ dma_addr_t dma;
+} gbe_tiles;
+
+static int gbe_revision;
+
+static struct fb_info fb_info;
+static int ypan, ywrap;
+
+static uint32_t pseudo_palette[256];
+
+static char *mode_option __initdata = NULL;
+
+/* default CRT mode */
+static struct fb_var_screeninfo default_var_CRT __initdata = {
+ /* 640x480, 60 Hz, Non-Interlaced (25.175 MHz dotclock) */
+ .xres = 640,
+ .yres = 480,
+ .xres_virtual = 640,
+ .yres_virtual = 480,
+ .xoffset = 0,
+ .yoffset = 0,
+ .bits_per_pixel = 8,
+ .grayscale = 0,
+ .red = { 0, 8, 0 },
+ .green = { 0, 8, 0 },
+ .blue = { 0, 8, 0 },
+ .transp = { 0, 0, 0 },
+ .nonstd = 0,
+ .activate = 0,
+ .height = -1,
+ .width = -1,
+ .accel_flags = 0,
+ .pixclock = 39722, /* picoseconds */
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+
+/* default LCD mode */
+static struct fb_var_screeninfo default_var_LCD __initdata = {
+ /* 1600x1024, 8 bpp */
+ .xres = 1600,
+ .yres = 1024,
+ .xres_virtual = 1600,
+ .yres_virtual = 1024,
+ .xoffset = 0,
+ .yoffset = 0,
+ .bits_per_pixel = 8,
+ .grayscale = 0,
+ .red = { 0, 8, 0 },
+ .green = { 0, 8, 0 },
+ .blue = { 0, 8, 0 },
+ .transp = { 0, 0, 0 },
+ .nonstd = 0,
+ .activate = 0,
+ .height = -1,
+ .width = -1,
+ .accel_flags = 0,
+ .pixclock = 9353,
+ .left_margin = 20,
+ .right_margin = 30,
+ .upper_margin = 37,
+ .lower_margin = 3,
+ .hsync_len = 20,
+ .vsync_len = 3,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED
+};
+
+/* default modedb mode */
+/* 640x480, 60 Hz, Non-Interlaced (25.175 MHz dotclock) */
+static struct fb_videomode default_mode_CRT __initdata = {
+ .refresh = 60,
+ .xres = 640,
+ .yres = 480,
+ .pixclock = 39722,
+ .left_margin = 48,
+ .right_margin = 16,
+ .upper_margin = 33,
+ .lower_margin = 10,
+ .hsync_len = 96,
+ .vsync_len = 2,
+ .sync = 0,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+/* 1600x1024 SGI flatpanel 1600sw */
+static struct fb_videomode default_mode_LCD __initdata = {
+ /* 1600x1024, 8 bpp */
+ .xres = 1600,
+ .yres = 1024,
+ .pixclock = 9353,
+ .left_margin = 20,
+ .right_margin = 30,
+ .upper_margin = 37,
+ .lower_margin = 3,
+ .hsync_len = 20,
+ .vsync_len = 3,
+ .vmode = FB_VMODE_NONINTERLACED,
+};
+
+struct fb_videomode *default_mode = &default_mode_CRT;
+struct fb_var_screeninfo *default_var = &default_var_CRT;
+
+static int flat_panel_enabled = 0;
+
+static struct gbefb_par par_current;
+
+static void gbe_reset(void)
+{
+ /* Turn on dotclock PLL */
+ gbe->ctrlstat = 0x300aa000;
+}
+
+
+/*
+ * Function: gbe_turn_off
+ * Parameters: (None)
+ * Description: This should turn off the monitor and gbe. This is used
+ * when switching between the serial console and the graphics
+ * console.
+ */
+
+void gbe_turn_off(void)
+{
+ int i;
+ unsigned int val, x, y, vpixen_off;
+
+ /* check if pixel counter is on */
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) == 1)
+ return;
+
+ /* turn off DMA */
+ val = gbe->ovr_control;
+ SET_GBE_FIELD(OVR_CONTROL, OVR_DMA_ENABLE, val, 0);
+ gbe->ovr_control = val;
+ udelay(1000);
+ val = gbe->frm_control;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 0);
+ gbe->frm_control = val;
+ udelay(1000);
+ val = gbe->did_control;
+ SET_GBE_FIELD(DID_CONTROL, DID_DMA_ENABLE, val, 0);
+ gbe->did_control = val;
+ udelay(1000);
+
+ /* We have to wait through two vertical retrace periods before
+ * the pixel DMA is turned off for sure. */
+ for (i = 0; i < 10000; i++) {
+ val = gbe->frm_inhwctrl;
+ if (GET_GBE_FIELD(FRM_INHWCTRL, FRM_DMA_ENABLE, val)) {
+ udelay(10);
+ } else {
+ val = gbe->ovr_inhwctrl;
+ if (GET_GBE_FIELD(OVR_INHWCTRL, OVR_DMA_ENABLE, val)) {
+ udelay(10);
+ } else {
+ val = gbe->did_inhwctrl;
+ if (GET_GBE_FIELD(DID_INHWCTRL, DID_DMA_ENABLE, val)) {
+ udelay(10);
+ } else
+ break;
+ }
+ }
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off DMA timed out\n");
+
+ /* wait for vpixen_off */
+ val = gbe->vt_vpixen;
+ vpixen_off = GET_GBE_FIELD(VT_VPIXEN, VPIXEN_OFF, val);
+
+ for (i = 0; i < 100000; i++) {
+ val = gbe->vt_xy;
+ x = GET_GBE_FIELD(VT_XY, X, val);
+ y = GET_GBE_FIELD(VT_XY, Y, val);
+ if (y < vpixen_off)
+ break;
+ udelay(1);
+ }
+ if (i == 100000)
+ printk(KERN_ERR
+ "gbefb: wait for vpixen_off timed out\n");
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ x = GET_GBE_FIELD(VT_XY, X, val);
+ y = GET_GBE_FIELD(VT_XY, Y, val);
+ if (y > vpixen_off)
+ break;
+ udelay(1);
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: wait for vpixen_off timed out\n");
+
+ /* turn off pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XY, FREEZE, val, 1);
+ gbe->vt_xy = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off pixel clock timed out\n");
+
+ /* turn off dot clock */
+ val = gbe->dotclock;
+ SET_GBE_FIELD(DOTCLK, RUN, val, 0);
+ gbe->dotclock = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->dotclock;
+ if (GET_GBE_FIELD(DOTCLK, RUN, val))
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn off dotclock timed out\n");
+
+ /* reset the frame DMA FIFO */
+ val = gbe->frm_size_tile;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_FIFO_RESET, val, 1);
+ gbe->frm_size_tile = val;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_FIFO_RESET, val, 0);
+ gbe->frm_size_tile = val;
+}
+
+static void gbe_turn_on(void)
+{
+ unsigned int val, i;
+
+ /*
+	 * Check if the pixel counter is off; for an unknown reason this
+	 * code hangs Visual Workstations
+ */
+ if (gbe_revision < 2) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val) == 0)
+ return;
+ }
+
+ /* turn on dot clock */
+ val = gbe->dotclock;
+ SET_GBE_FIELD(DOTCLK, RUN, val, 1);
+ gbe->dotclock = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->dotclock;
+ if (GET_GBE_FIELD(DOTCLK, RUN, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on dotclock timed out\n");
+
+ /* turn on pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XY, FREEZE, val, 0);
+ gbe->vt_xy = val;
+ udelay(10000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->vt_xy;
+ if (GET_GBE_FIELD(VT_XY, FREEZE, val))
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on pixel clock timed out\n");
+
+ /* turn on DMA */
+ val = gbe->frm_control;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 1);
+ gbe->frm_control = val;
+ udelay(1000);
+ for (i = 0; i < 10000; i++) {
+ val = gbe->frm_inhwctrl;
+ if (GET_GBE_FIELD(FRM_INHWCTRL, FRM_DMA_ENABLE, val) != 1)
+ udelay(10);
+ else
+ break;
+ }
+ if (i == 10000)
+ printk(KERN_ERR "gbefb: turn on DMA timed out\n");
+}
+
+/*
+ * Blank the display.
+ */
+static int gbefb_blank(int blank, struct fb_info *info)
+{
+ /* 0 unblank, 1 blank, 2 no vsync, 3 no hsync, 4 off */
+ switch (blank) {
+ case FB_BLANK_UNBLANK: /* unblank */
+ gbe_turn_on();
+ break;
+
+ case FB_BLANK_NORMAL: /* blank */
+ gbe_turn_off();
+ break;
+
+ default:
+ /* Nothing */
+ break;
+ }
+ return 0;
+}
+
+/*
+ * Set up the flat panel related registers.
+ */
+static void gbefb_setup_flatpanel(struct gbe_timing_info *timing)
+{
+ int fp_wid, fp_hgt, fp_vbs, fp_vbe;
+ u32 outputVal = 0;
+
+ SET_GBE_FIELD(VT_FLAGS, HDRV_INVERT, outputVal,
+ (timing->flags & FB_SYNC_HOR_HIGH_ACT) ? 0 : 1);
+ SET_GBE_FIELD(VT_FLAGS, VDRV_INVERT, outputVal,
+ (timing->flags & FB_SYNC_VERT_HIGH_ACT) ? 0 : 1);
+ gbe->vt_flags = outputVal;
+
+ /* Turn on the flat panel */
+ fp_wid = 1600;
+ fp_hgt = 1024;
+ fp_vbs = 0;
+ fp_vbe = 1600;
+ timing->pll_m = 4;
+ timing->pll_n = 1;
+ timing->pll_p = 0;
+
+ outputVal = 0;
+ SET_GBE_FIELD(FP_DE, ON, outputVal, fp_vbs);
+ SET_GBE_FIELD(FP_DE, OFF, outputVal, fp_vbe);
+ gbe->fp_de = outputVal;
+ outputVal = 0;
+ SET_GBE_FIELD(FP_HDRV, OFF, outputVal, fp_wid);
+ gbe->fp_hdrv = outputVal;
+ outputVal = 0;
+ SET_GBE_FIELD(FP_VDRV, ON, outputVal, 1);
+ SET_GBE_FIELD(FP_VDRV, OFF, outputVal, fp_hgt + 1);
+ gbe->fp_vdrv = outputVal;
+}
+
+struct gbe_pll_info {
+ int clock_rate;
+ int fvco_min;
+ int fvco_max;
+};
+
+static struct gbe_pll_info gbe_pll_table[2] = {
+ { 20, 80, 220 },
+ { 27, 80, 220 },
+};
+
+static int compute_gbe_timing(struct fb_var_screeninfo *var,
+ struct gbe_timing_info *timing)
+{
+ int pll_m, pll_n, pll_p, error, best_m, best_n, best_p, best_error;
+ int pixclock;
+ struct gbe_pll_info *gbe_pll;
+
+ if (gbe_revision < 2)
+ gbe_pll = &gbe_pll_table[0];
+ else
+ gbe_pll = &gbe_pll_table[1];
+
+	/* Determine valid resolution and timing
+	 * GBE crystal runs at 20MHz or 27MHz
+	 * pll_m, pll_n, pll_p define the following frequencies
+	 * fvco = pll_m * crystal / pll_n
+ * fout = fvco / (2**pll_p) */
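+	/*
+	 * pixclock below is a period in picoseconds: the crystal period
+	 * (10^6 / clock_rate) scaled by (pll_n << pll_p) / pll_m.  E.g.
+	 * with the 27MHz crystal, m=4, n=1, p=0 gives 37037 * 1 / 4 =
+	 * 9259 ps, i.e. a ~108MHz dot clock.
+	 */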
+ best_error = 1000000000;
+ best_n = best_m = best_p = 0;
+ for (pll_p = 0; pll_p < 4; pll_p++)
+ for (pll_m = 1; pll_m < 256; pll_m++)
+ for (pll_n = 1; pll_n < 64; pll_n++) {
+ pixclock = (1000000 / gbe_pll->clock_rate) *
+ (pll_n << pll_p) / pll_m;
+
+ error = var->pixclock - pixclock;
+
+ if (error < 0)
+ error = -error;
+
+ if (error < best_error &&
+ pll_m / pll_n >
+ gbe_pll->fvco_min / gbe_pll->clock_rate &&
+ pll_m / pll_n <
+ gbe_pll->fvco_max / gbe_pll->clock_rate) {
+ best_error = error;
+ best_m = pll_m;
+ best_n = pll_n;
+ best_p = pll_p;
+ }
+ }
+
+ if (!best_n || !best_m)
+		return -EINVAL;	/* Resolution too high */
+
+ pixclock = (1000000 / gbe_pll->clock_rate) *
+ (best_n << best_p) / best_m;
+
+ /* set video timing information */
+ if (timing) {
+ timing->width = var->xres;
+ timing->height = var->yres;
+ timing->pll_m = best_m;
+ timing->pll_n = best_n;
+ timing->pll_p = best_p;
+ timing->cfreq = gbe_pll->clock_rate * 1000 * timing->pll_m /
+ (timing->pll_n << timing->pll_p);
+ timing->htotal = var->left_margin + var->xres +
+ var->right_margin + var->hsync_len;
+ timing->vtotal = var->upper_margin + var->yres +
+ var->lower_margin + var->vsync_len;
+ timing->fields_sec = 1000 * timing->cfreq / timing->htotal *
+ 1000 / timing->vtotal;
+ timing->hblank_start = var->xres;
+ timing->vblank_start = var->yres;
+ timing->hblank_end = timing->htotal;
+ timing->hsync_start = var->xres + var->right_margin + 1;
+ timing->hsync_end = timing->hsync_start + var->hsync_len;
+ timing->vblank_end = timing->vtotal;
+ timing->vsync_start = var->yres + var->lower_margin + 1;
+ timing->vsync_end = timing->vsync_start + var->vsync_len;
+ }
+
+ return pixclock;
+}
+
+static void gbe_set_timing_info(struct gbe_timing_info *timing)
+{
+ int temp;
+ unsigned int val;
+
+ /* setup dot clock PLL */
+ val = 0;
+ SET_GBE_FIELD(DOTCLK, M, val, timing->pll_m - 1);
+ SET_GBE_FIELD(DOTCLK, N, val, timing->pll_n - 1);
+ SET_GBE_FIELD(DOTCLK, P, val, timing->pll_p);
+ SET_GBE_FIELD(DOTCLK, RUN, val, 0); /* do not start yet */
+ gbe->dotclock = val;
+ udelay(10000);
+
+ /* setup pixel counter */
+ val = 0;
+ SET_GBE_FIELD(VT_XYMAX, MAXX, val, timing->htotal);
+ SET_GBE_FIELD(VT_XYMAX, MAXY, val, timing->vtotal);
+ gbe->vt_xymax = val;
+
+ /* setup video timing signals */
+ val = 0;
+ SET_GBE_FIELD(VT_VSYNC, VSYNC_ON, val, timing->vsync_start);
+ SET_GBE_FIELD(VT_VSYNC, VSYNC_OFF, val, timing->vsync_end);
+ gbe->vt_vsync = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HSYNC, HSYNC_ON, val, timing->hsync_start);
+ SET_GBE_FIELD(VT_HSYNC, HSYNC_OFF, val, timing->hsync_end);
+ gbe->vt_hsync = val;
+ val = 0;
+ SET_GBE_FIELD(VT_VBLANK, VBLANK_ON, val, timing->vblank_start);
+ SET_GBE_FIELD(VT_VBLANK, VBLANK_OFF, val, timing->vblank_end);
+ gbe->vt_vblank = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HBLANK, HBLANK_ON, val,
+ timing->hblank_start - 5);
+ SET_GBE_FIELD(VT_HBLANK, HBLANK_OFF, val,
+ timing->hblank_end - 3);
+ gbe->vt_hblank = val;
+
+ /* setup internal timing signals */
+ val = 0;
+ SET_GBE_FIELD(VT_VCMAP, VCMAP_ON, val, timing->vblank_start);
+ SET_GBE_FIELD(VT_VCMAP, VCMAP_OFF, val, timing->vblank_end);
+ gbe->vt_vcmap = val;
+ val = 0;
+ SET_GBE_FIELD(VT_HCMAP, HCMAP_ON, val, timing->hblank_start);
+ SET_GBE_FIELD(VT_HCMAP, HCMAP_OFF, val, timing->hblank_end);
+ gbe->vt_hcmap = val;
+
+ val = 0;
+ temp = timing->vblank_start - timing->vblank_end - 1;
+ if (temp > 0)
+ temp = -temp;
+
+ if (flat_panel_enabled)
+ gbefb_setup_flatpanel(timing);
+
+ SET_GBE_FIELD(DID_START_XY, DID_STARTY, val, (u32) temp);
+ if (timing->hblank_end >= 20)
+ SET_GBE_FIELD(DID_START_XY, DID_STARTX, val,
+ timing->hblank_end - 20);
+ else
+ SET_GBE_FIELD(DID_START_XY, DID_STARTX, val,
+ timing->htotal - (20 - timing->hblank_end));
+ gbe->did_start_xy = val;
+
+ val = 0;
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTY, val, (u32) (temp + 1));
+ if (timing->hblank_end >= GBE_CRS_MAGIC)
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTX, val,
+ timing->hblank_end - GBE_CRS_MAGIC);
+ else
+ SET_GBE_FIELD(CRS_START_XY, CRS_STARTX, val,
+ timing->htotal - (GBE_CRS_MAGIC -
+ timing->hblank_end));
+ gbe->crs_start_xy = val;
+
+ val = 0;
+ SET_GBE_FIELD(VC_START_XY, VC_STARTY, val, (u32) temp);
+ SET_GBE_FIELD(VC_START_XY, VC_STARTX, val, timing->hblank_end - 4);
+ gbe->vc_start_xy = val;
+
+ val = 0;
+ temp = timing->hblank_end - GBE_PIXEN_MAGIC_ON;
+ if (temp < 0)
+ temp += timing->htotal; /* allow blank to wrap around */
+
+ SET_GBE_FIELD(VT_HPIXEN, HPIXEN_ON, val, temp);
+ SET_GBE_FIELD(VT_HPIXEN, HPIXEN_OFF, val,
+ ((temp + timing->width -
+ GBE_PIXEN_MAGIC_OFF) % timing->htotal));
+ gbe->vt_hpixen = val;
+
+ val = 0;
+ SET_GBE_FIELD(VT_VPIXEN, VPIXEN_ON, val, timing->vblank_end);
+ SET_GBE_FIELD(VT_VPIXEN, VPIXEN_OFF, val, timing->vblank_start);
+ gbe->vt_vpixen = val;
+
+ /* turn off sync on green */
+ val = 0;
+ SET_GBE_FIELD(VT_FLAGS, SYNC_LOW, val, 1);
+ gbe->vt_flags = val;
+}
+
+/*
+ * Set the hardware according to 'par'.
+ */
+
+static int gbefb_set_par(struct fb_info *info)
+{
+ int i;
+ unsigned int val;
+ int wholeTilesX, partTilesX, maxPixelsPerTileX;
+ int height_pix;
+ int xpmax, ypmax; /* Monitor resolution */
+ int bytesPerPixel; /* Bytes per pixel */
+ struct gbefb_par *par = (struct gbefb_par *) info->par;
+
+ compute_gbe_timing(&info->var, &par->timing);
+
+ bytesPerPixel = info->var.bits_per_pixel / 8;
+ info->fix.line_length = info->var.xres_virtual * bytesPerPixel;
+ xpmax = par->timing.width;
+ ypmax = par->timing.height;
+
+ /* turn off GBE */
+ gbe_turn_off();
+
+ /* set timing info */
+ gbe_set_timing_info(&par->timing);
+
+ /* initialize DIDs */
+ val = 0;
+ switch (bytesPerPixel) {
+ case 1:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_I8);
+ break;
+ case 2:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_ARGB5);
+ break;
+ case 4:
+ SET_GBE_FIELD(WID, TYP, val, GBE_CMODE_RGB8);
+ break;
+ }
+ SET_GBE_FIELD(WID, BUF, val, GBE_BMODE_BOTH);
+
+ for (i = 0; i < 32; i++)
+ gbe->mode_regs[i] = val;
+
+ /* Initialize interrupts */
+ gbe->vt_intr01 = 0xffffffff;
+ gbe->vt_intr23 = 0xffffffff;
+
+ /* HACK:
+ The GBE hardware uses a tiled memory to screen mapping. Tiles are
+ blocks of 512x128, 256x128 or 128x128 pixels, respectively for 8bit,
+ 16bit and 32 bit modes (64 kB). They cover the screen with partial
+ tiles on the right and/or bottom of the screen if needed.
+	   For example, in 640x480 8 bit mode the mapping is:
+
+ <-------- 640 ----->
+ <---- 512 ----><128|384 offscreen>
+ ^ ^
+ | 128 [tile 0] [tile 1]
+ | v
+ ^
+ 4 128 [tile 2] [tile 3]
+ 8 v
+ 0 ^
+ 128 [tile 4] [tile 5]
+ | v
+ | ^
+ v 96 [tile 6] [tile 7]
+ 32 offscreen
+
+ Tiles have the advantage that they can be allocated individually in
+ memory. However, this mapping is not linear at all, which is not
+	   really convenient. In order to support linear addressing, the GBE
+	   DMA hardware is fooled into thinking the screen is only one tile
+	   wide but has a greater height, so that the DMA transfer covers
+ the same region.
+	   Tiles are still allocated as independent chunks of 64KB of
+	   contiguous physical memory and remapped so that the kernel sees the
+	   framebuffer as contiguous virtual memory. The GBE tile table is
+ set up so that each tile references one of these 64k blocks:
+
+ GBE -> tile list framebuffer TLB <------------ CPU
+ [ tile 0 ] -> [ 64KB ] <- [ 16x 4KB page entries ] ^
+ ... ... ... linear virtual FB
+ [ tile n ] -> [ 64KB ] <- [ 16x 4KB page entries ] v
+
+
+ The GBE hardware is then told that the buffer is 512*tweaked_height,
+ with tweaked_height = real_width*real_height/pixels_per_tile.
+	   Thus the GBE hardware will scan the first tile, filling the first 64k
+ covered region of the screen, and then will proceed to the next
+ tile, until the whole screen is covered.
+
+ Here is what would happen at 640x480 8bit:
+
+ normal tiling linear
+ ^ 11111111111111112222 11111111111111111111 ^
+ 128 11111111111111112222 11111111111111111111 102 lines
+ 11111111111111112222 11111111111111111111 v
+ V 11111111111111112222 11111111222222222222
+ 33333333333333334444 22222222222222222222
+ 33333333333333334444 22222222222222222222
+ < 512 > < 256 > 102*640+256 = 64k
+
+ NOTE: The only mode for which this is not working is 800x600 8bit,
+	   as 800*600/512 = 937.5 which is not an integer and thus causes
+ flickering.
+ I guess this is not so important as one can use 640x480 8bit or
+ 800x600 16bit anyway.
+ */
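+	/*
+	 * Worked example: 640x480 at 8 bpp has 512 pixels per tile line,
+	 * so tweaked_height = 640 * 480 / 512 = 600.  The GBE then scans a
+	 * 512 x 600 one-tile-wide buffer, which is exactly 640 * 480 bytes.
+	 */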
+
+ /* Tell gbe about the tiles table location */
+ /* tile_ptr -> [ tile 1 ] -> FB mem */
+ /* [ tile 2 ] -> FB mem */
+ /* ... */
+ val = 0;
+ SET_GBE_FIELD(FRM_CONTROL, FRM_TILE_PTR, val, gbe_tiles.dma >> 9);
+ SET_GBE_FIELD(FRM_CONTROL, FRM_DMA_ENABLE, val, 0); /* do not start */
+ SET_GBE_FIELD(FRM_CONTROL, FRM_LINEAR, val, 0);
+ gbe->frm_control = val;
+
+ maxPixelsPerTileX = 512 / bytesPerPixel;
+ wholeTilesX = 1;
+ partTilesX = 0;
+
+ /* Initialize the framebuffer */
+ val = 0;
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_WIDTH_TILE, val, wholeTilesX);
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_RHS, val, partTilesX);
+
+ switch (bytesPerPixel) {
+ case 1:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_8);
+ break;
+ case 2:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_16);
+ break;
+ case 4:
+ SET_GBE_FIELD(FRM_SIZE_TILE, FRM_DEPTH, val,
+ GBE_FRM_DEPTH_32);
+ break;
+ }
+ gbe->frm_size_tile = val;
+
+ /* compute tweaked height */
+ height_pix = xpmax * ypmax / maxPixelsPerTileX;
+
+ val = 0;
+ SET_GBE_FIELD(FRM_SIZE_PIXEL, FB_HEIGHT_PIX, val, height_pix);
+ gbe->frm_size_pixel = val;
+
+ /* turn off DID and overlay DMA */
+ gbe->did_control = 0;
+ gbe->ovr_width_tile = 0;
+
+ /* Turn off mouse cursor */
+ gbe->crs_ctl = 0;
+
+ /* Turn on GBE */
+ gbe_turn_on();
+
+ /* Initialize the gamma map */
+ udelay(10);
+ for (i = 0; i < 256; i++)
+ gbe->gmap[i] = (i << 24) | (i << 16) | (i << 8);
+
+ /* Initialize the color map */
+ for (i = 0; i < 256; i++) {
+ int j;
+
+ for (j = 0; j < 1000 && gbe->cm_fifo >= 63; j++)
+ udelay(10);
+ if (j == 1000)
+ printk(KERN_ERR "gbefb: cmap FIFO timeout\n");
+
+ gbe->cmap[i] = (i << 8) | (i << 16) | (i << 24);
+ }
+
+ return 0;
+}
+
+static void gbefb_encode_fix(struct fb_fix_screeninfo *fix,
+ struct fb_var_screeninfo *var)
+{
+ memset(fix, 0, sizeof(struct fb_fix_screeninfo));
+ strcpy(fix->id, "SGI GBE");
+ fix->smem_start = (unsigned long) gbe_mem;
+ fix->smem_len = gbe_mem_size;
+ fix->type = FB_TYPE_PACKED_PIXELS;
+ fix->type_aux = 0;
+ fix->accel = FB_ACCEL_NONE;
+ switch (var->bits_per_pixel) {
+ case 8:
+ fix->visual = FB_VISUAL_PSEUDOCOLOR;
+ break;
+ default:
+ fix->visual = FB_VISUAL_TRUECOLOR;
+ break;
+ }
+ fix->ywrapstep = 0;
+ fix->xpanstep = 0;
+ fix->ypanstep = 0;
+ fix->line_length = var->xres_virtual * var->bits_per_pixel / 8;
+ fix->mmio_start = GBE_BASE;
+ fix->mmio_len = sizeof(struct sgi_gbe);
+}
+
+/*
+ * Set a single color register. The values supplied are already
+ * rounded down to the hardware's capabilities (according to the
+ * entries in the var structure). Return != 0 for invalid regno.
+ */
+
+static int gbefb_setcolreg(unsigned regno, unsigned red, unsigned green,
+ unsigned blue, unsigned transp,
+ struct fb_info *info)
+{
+ int i;
+
+ if (regno > 255)
+ return 1;
+ red >>= 8;
+ green >>= 8;
+ blue >>= 8;
+
+ switch (info->var.bits_per_pixel) {
+ case 8:
+ /* wait for the color map FIFO to have a free entry */
+ for (i = 0; i < 1000 && gbe->cm_fifo >= 63; i++)
+ udelay(10);
+ if (i == 1000) {
+ printk(KERN_ERR "gbefb: cmap FIFO timeout\n");
+ return 1;
+ }
+ gbe->cmap[regno] = (red << 24) | (green << 16) | (blue << 8);
+ break;
+ case 15:
+ case 16:
+ red >>= 3;
+ green >>= 3;
+ blue >>= 3;
+ pseudo_palette[regno] =
+ (red << info->var.red.offset) |
+ (green << info->var.green.offset) |
+ (blue << info->var.blue.offset);
+ break;
+ case 32:
+ pseudo_palette[regno] =
+ (red << info->var.red.offset) |
+ (green << info->var.green.offset) |
+ (blue << info->var.blue.offset);
+ break;
+ }
+
+ return 0;
+}
+
+/*
+ * Check video mode validity; if necessary, modify var to the best match.
+ */
+static int gbefb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+	unsigned int line_length;
+	struct gbe_timing_info timing;
+	int pixclock;
+
+ /* Limit bpp to 8, 16, and 32 */
+ if (var->bits_per_pixel <= 8)
+ var->bits_per_pixel = 8;
+ else if (var->bits_per_pixel <= 16)
+ var->bits_per_pixel = 16;
+ else if (var->bits_per_pixel <= 32)
+ var->bits_per_pixel = 32;
+ else
+ return -EINVAL;
+
+ /* Check the mode can be mapped linearly with the tile table trick. */
+ /* This requires width x height x bytes/pixel be a multiple of 512 */
+ if ((var->xres * var->yres * var->bits_per_pixel) & 4095)
+ return -EINVAL;
+
+ var->grayscale = 0; /* No grayscale for now */
+
+	/* var->pixclock is unsigned, so catch compute_gbe_timing() errors
+	 * in a signed variable before storing the result */
+	pixclock = compute_gbe_timing(var, &timing);
+	if (pixclock < 0)
+		return -EINVAL;
+	var->pixclock = pixclock;
+
+ /* Adjust virtual resolution, if necessary */
+ if (var->xres > var->xres_virtual || (!ywrap && !ypan))
+ var->xres_virtual = var->xres;
+ if (var->yres > var->yres_virtual || (!ywrap && !ypan))
+ var->yres_virtual = var->yres;
+
+ if (var->vmode & FB_VMODE_CONUPDATE) {
+ var->vmode |= FB_VMODE_YWRAP;
+ var->xoffset = info->var.xoffset;
+ var->yoffset = info->var.yoffset;
+ }
+
+ /* Memory limit */
+ line_length = var->xres_virtual * var->bits_per_pixel / 8;
+ if (line_length * var->yres_virtual > gbe_mem_size)
+ return -ENOMEM; /* Virtual resolution too high */
+
+ switch (var->bits_per_pixel) {
+ case 8:
+ var->red.offset = 0;
+ var->red.length = 8;
+ var->green.offset = 0;
+ var->green.length = 8;
+ var->blue.offset = 0;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case 16: /* RGB 1555 */
+ var->red.offset = 10;
+ var->red.length = 5;
+ var->green.offset = 5;
+ var->green.length = 5;
+ var->blue.offset = 0;
+ var->blue.length = 5;
+ var->transp.offset = 0;
+ var->transp.length = 0;
+ break;
+ case 32: /* RGB 8888 */
+ var->red.offset = 24;
+ var->red.length = 8;
+ var->green.offset = 16;
+ var->green.length = 8;
+ var->blue.offset = 8;
+ var->blue.length = 8;
+ var->transp.offset = 0;
+ var->transp.length = 8;
+ break;
+ }
+ var->red.msb_right = 0;
+ var->green.msb_right = 0;
+ var->blue.msb_right = 0;
+ var->transp.msb_right = 0;
+
+ var->left_margin = timing.htotal - timing.hsync_end;
+ var->right_margin = timing.hsync_start - timing.width;
+ var->upper_margin = timing.vtotal - timing.vsync_end;
+ var->lower_margin = timing.vsync_start - timing.height;
+ var->hsync_len = timing.hsync_end - timing.hsync_start;
+ var->vsync_len = timing.vsync_end - timing.vsync_start;
+
+ return 0;
+}
+
+static int gbefb_mmap(struct fb_info *info, struct file *file,
+ struct vm_area_struct *vma)
+{
+ unsigned long size = vma->vm_end - vma->vm_start;
+ unsigned long offset = vma->vm_pgoff << PAGE_SHIFT;
+ unsigned long addr;
+ unsigned long phys_addr, phys_size;
+ u16 *tile;
+
+ /* check range */
+ if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT))
+ return -EINVAL;
+ if (offset + size > gbe_mem_size)
+ return -EINVAL;
+
+	/* remap using the fastest write-through mode for this architecture,
+	 * trying to avoid polluting the cache when possible */
+ pgprot_val(vma->vm_page_prot) =
+ pgprot_fb(pgprot_val(vma->vm_page_prot));
+
+ vma->vm_flags |= VM_IO | VM_RESERVED;
+ vma->vm_file = file;
+
+ /* look for the starting tile */
+ tile = &gbe_tiles.cpu[offset >> TILE_SHIFT];
+ addr = vma->vm_start;
+ offset &= TILE_MASK;
+
+ /* remap each tile separately */
+ do {
+ phys_addr = (((unsigned long) (*tile)) << TILE_SHIFT) + offset;
+ if ((offset + size) < TILE_SIZE)
+ phys_size = size;
+ else
+ phys_size = TILE_SIZE - offset;
+
+ if (remap_pfn_range(vma, addr, phys_addr >> PAGE_SHIFT,
+ phys_size, vma->vm_page_prot))
+ return -EAGAIN;
+
+ offset = 0;
+ size -= phys_size;
+ addr += phys_size;
+ tile++;
+ } while (size);
+
+ return 0;
+}
+
+static struct fb_ops gbefb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = gbefb_check_var,
+ .fb_set_par = gbefb_set_par,
+ .fb_setcolreg = gbefb_setcolreg,
+ .fb_mmap = gbefb_mmap,
+ .fb_blank = gbefb_blank,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_cursor = soft_cursor,
+};
+
+/*
+ * Initialization
+ */
+
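+/*
+ * Kernel command line options (gbefb:<opt>,...):
+ *   monitor:crt | monitor:lcd | monitor:1600sw  - select CRT or flat panel
+ *   mem:<size>                                  - framebuffer memory size,
+ *                                                clamped to CONFIG_FB_GBE_MEM
+ * Any other option is passed on as a video mode string.
+ */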
+int __init gbefb_setup(char *options)
+{
+ char *this_opt;
+
+ if (!options || !*options)
+ return 0;
+
+ while ((this_opt = strsep(&options, ",")) != NULL) {
+ if (!strncmp(this_opt, "monitor:", 8)) {
+ if (!strncmp(this_opt + 8, "crt", 3)) {
+ flat_panel_enabled = 0;
+ default_var = &default_var_CRT;
+ default_mode = &default_mode_CRT;
+ } else if (!strncmp(this_opt + 8, "1600sw", 6) ||
+ !strncmp(this_opt + 8, "lcd", 3)) {
+ flat_panel_enabled = 1;
+ default_var = &default_var_LCD;
+ default_mode = &default_mode_LCD;
+ }
+ } else if (!strncmp(this_opt, "mem:", 4)) {
+ gbe_mem_size = memparse(this_opt + 4, &this_opt);
+ if (gbe_mem_size > CONFIG_FB_GBE_MEM * 1024 * 1024)
+ gbe_mem_size = CONFIG_FB_GBE_MEM * 1024 * 1024;
+ if (gbe_mem_size < TILE_SIZE)
+ gbe_mem_size = TILE_SIZE;
+ } else
+ mode_option = this_opt;
+ }
+ return 0;
+}
+
+int __init gbefb_init(void)
+{
+ int i, ret = 0;
+
+#ifndef MODULE
+ char *options = NULL;
+
+ if (fb_get_options("gbefb", &options))
+ return -ENODEV;
+ gbefb_setup(options);
+#endif
+
+ if (!request_mem_region(GBE_BASE, sizeof(struct sgi_gbe), "GBE")) {
+ printk(KERN_ERR "gbefb: couldn't reserve mmio region\n");
+ return -EBUSY;
+ }
+
+ gbe = (struct sgi_gbe *) ioremap(GBE_BASE, sizeof(struct sgi_gbe));
+ if (!gbe) {
+ printk(KERN_ERR "gbefb: couldn't map mmio region\n");
+ ret = -ENXIO;
+ goto out_release_mem_region;
+ }
+ gbe_revision = gbe->ctrlstat & 15;
+
+ gbe_tiles.cpu =
+ dma_alloc_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ &gbe_tiles.dma, GFP_KERNEL);
+ if (!gbe_tiles.cpu) {
+ printk(KERN_ERR "gbefb: couldn't allocate tiles table\n");
+ ret = -ENOMEM;
+ goto out_unmap;
+ }
+
+
+ if (gbe_mem_phys) {
+ /* memory was allocated at boot time */
+ gbe_mem = ioremap_nocache(gbe_mem_phys, gbe_mem_size);
+ gbe_dma_addr = 0;
+ } else {
+ /* try to allocate memory with the classical allocator
+ * this has high chance to fail on low memory machines */
+ gbe_mem = dma_alloc_coherent(NULL, gbe_mem_size, &gbe_dma_addr,
+ GFP_KERNEL);
+ gbe_mem_phys = (unsigned long) gbe_dma_addr;
+ }
+
+#ifdef CONFIG_X86
+ mtrr_add(gbe_mem_phys, gbe_mem_size, MTRR_TYPE_WRCOMB, 1);
+#endif
+
+ if (!gbe_mem) {
+ printk(KERN_ERR "gbefb: couldn't map framebuffer\n");
+ ret = -ENXIO;
+ goto out_tiles_free;
+ }
+
+ /* map framebuffer memory into tiles table */
+ for (i = 0; i < (gbe_mem_size >> TILE_SHIFT); i++)
+ gbe_tiles.cpu[i] = (gbe_mem_phys >> TILE_SHIFT) + i;
+
+ fb_info.fbops = &gbefb_ops;
+ fb_info.pseudo_palette = pseudo_palette;
+ fb_info.flags = FBINFO_DEFAULT;
+ fb_info.screen_base = gbe_mem;
+ fb_alloc_cmap(&fb_info.cmap, 256, 0);
+
+ /* reset GBE */
+ gbe_reset();
+
+ /* turn on default video mode */
+ if (fb_find_mode(&par_current.var, &fb_info, mode_option, NULL, 0,
+ default_mode, 8) == 0)
+ par_current.var = *default_var;
+ fb_info.var = par_current.var;
+ gbefb_check_var(&par_current.var, &fb_info);
+ gbefb_encode_fix(&fb_info.fix, &fb_info.var);
+ fb_info.par = &par_current;
+
+ if (register_framebuffer(&fb_info) < 0) {
+ ret = -ENXIO;
+ printk(KERN_ERR "gbefb: couldn't register framebuffer\n");
+ goto out_gbe_unmap;
+ }
+
+ printk(KERN_INFO "fb%d: %s rev %d @ 0x%08x using %dkB memory\n",
+ fb_info.node, fb_info.fix.id, gbe_revision, (unsigned) GBE_BASE,
+ gbe_mem_size >> 10);
+
+ return 0;
+
+out_gbe_unmap:
+ if (gbe_dma_addr)
+ dma_free_coherent(NULL, gbe_mem_size, gbe_mem, gbe_mem_phys);
+ else
+ iounmap(gbe_mem);
+out_tiles_free:
+ dma_free_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ (void *)gbe_tiles.cpu, gbe_tiles.dma);
+out_unmap:
+ iounmap(gbe);
+out_release_mem_region:
+ release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
+ return ret;
+}
+
+void __exit gbefb_exit(void)
+{
+ unregister_framebuffer(&fb_info);
+ gbe_turn_off();
+ if (gbe_dma_addr)
+ dma_free_coherent(NULL, gbe_mem_size, gbe_mem, gbe_mem_phys);
+ else
+ iounmap(gbe_mem);
+ dma_free_coherent(NULL, GBE_TLB_SIZE * sizeof(uint16_t),
+ (void *)gbe_tiles.cpu, gbe_tiles.dma);
+ release_mem_region(GBE_BASE, sizeof(struct sgi_gbe));
+ iounmap(gbe);
+}
+
+module_init(gbefb_init);
+
+#ifdef MODULE
+module_exit(gbefb_exit);
+#endif
+
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/video/pxafb.c
+ *
+ * Copyright (C) 1999 Eric A. Thomas.
+ * Copyright (C) 2004 Jean-Frederic Clere.
+ * Copyright (C) 2004 Ian Campbell.
+ * Copyright (C) 2004 Jeff Lackey.
+ * Based on sa1100fb.c Copyright (C) 1999 Eric A. Thomas
+ * which in turn is
+ * Based on acornfb.c Copyright (C) Russell King.
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive for
+ * more details.
+ *
+ * Intel PXA250/210 LCD Controller Frame Buffer Driver
+ *
+ * Please direct your questions and comments on this driver to the following
+ * email address:
+ *
+ * linux-arm-kernel@lists.arm.linux.org.uk
+ *
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/errno.h>
+#include <linux/string.h>
+#include <linux/interrupt.h>
+#include <linux/slab.h>
+#include <linux/fb.h>
+#include <linux/delay.h>
+#include <linux/init.h>
+#include <linux/ioport.h>
+#include <linux/cpufreq.h>
+#include <linux/device.h>
+#include <linux/dma-mapping.h>
+
+#include <asm/hardware.h>
+#include <asm/io.h>
+#include <asm/irq.h>
+#include <asm/uaccess.h>
+#include <asm/arch/pxa-regs.h>
+#include <asm/arch/bitfield.h>
+#include <asm/arch/pxafb.h>
+
+/*
+ * Complain if VAR is out of range.
+ */
+#define DEBUG_VAR 1
+
+#include "pxafb.h"
+
+/* Bits which should not be set in machine configuration structures */
+#define LCCR0_INVALID_CONFIG_MASK (LCCR0_OUM|LCCR0_BM|LCCR0_QDM|LCCR0_DIS|LCCR0_EFM|LCCR0_IUM|LCCR0_SFM|LCCR0_LDM|LCCR0_ENB)
+#define LCCR3_INVALID_CONFIG_MASK (LCCR3_HSP|LCCR3_VSP|LCCR3_PCD|LCCR3_BPP)
+
+static void (*pxafb_backlight_power)(int);
+static void (*pxafb_lcd_power)(int);
+
+static int pxafb_activate_var(struct fb_var_screeninfo *var, struct pxafb_info *);
+static void set_ctrlr_state(struct pxafb_info *fbi, u_int state);
+
+#ifdef CONFIG_FB_PXA_PARAMETERS
+#define PXAFB_OPTIONS_SIZE 256
+static char g_options[PXAFB_OPTIONS_SIZE] __initdata = "";
+#endif
+
+static inline void pxafb_schedule_work(struct pxafb_info *fbi, u_int state)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ /*
+ * We need to handle two requests being made at the same time.
+ * There are two important cases:
+ * 1. When we are changing VT (C_REENABLE) while unblanking (C_ENABLE)
+ * We must perform the unblanking, which will do our REENABLE for us.
+ * 2. When we are blanking, but immediately unblank before we have
+ * blanked. We do the "REENABLE" thing here as well, just to be sure.
+ */
+ if (fbi->task_state == C_ENABLE && state == C_REENABLE)
+ state = (u_int) -1;
+ if (fbi->task_state == C_DISABLE && state == C_ENABLE)
+ state = C_REENABLE;
+
+ if (state != (u_int)-1) {
+ fbi->task_state = state;
+ schedule_work(&fbi->task);
+ }
+ local_irq_restore(flags);
+}
+
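+/*
+ * Scale a 16-bit colour channel into an fb_bitfield slot, e.g. with
+ * red = { .offset = 11, .length = 5 }, chan 0xffff maps to 0xf800.
+ */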
+static inline u_int chan_to_field(u_int chan, struct fb_bitfield *bf)
+{
+ chan &= 0xffff;
+ chan >>= 16 - bf->length;
+ return chan << bf->offset;
+}
+
+static int
+pxafb_setpalettereg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int trans, struct fb_info *info)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+ u_int val, ret = 1;
+
+ if (regno < fbi->palette_size) {
+ if (fbi->fb.var.grayscale) {
+ val = ((blue >> 8) & 0x00ff);
+ } else {
+ val = ((red >> 0) & 0xf800);
+ val |= ((green >> 5) & 0x07e0);
+ val |= ((blue >> 11) & 0x001f);
+ }
+ fbi->palette_cpu[regno] = val;
+ ret = 0;
+ }
+ return ret;
+}
+
+static int
+pxafb_setcolreg(u_int regno, u_int red, u_int green, u_int blue,
+ u_int trans, struct fb_info *info)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+ unsigned int val;
+ int ret = 1;
+
+ /*
+ * If inverse mode was selected, invert all the colours
+ * rather than the register number. The register number
+ * is what you poke into the framebuffer to produce the
+ * colour you requested.
+ */
+ if (fbi->cmap_inverse) {
+ red = 0xffff - red;
+ green = 0xffff - green;
+ blue = 0xffff - blue;
+ }
+
+ /*
+ * If greyscale is true, then we convert the RGB value
+ * to greyscale no matter what visual we are using.
+ */
+ if (fbi->fb.var.grayscale)
+ red = green = blue = (19595 * red + 38470 * green +
+ 7471 * blue) >> 16;
+
+ switch (fbi->fb.fix.visual) {
+ case FB_VISUAL_TRUECOLOR:
+ /*
+ * 16-bit True Colour. We encode the RGB value
+ * according to the RGB bitfield information.
+ */
+ if (regno < 16) {
+ u32 *pal = fbi->fb.pseudo_palette;
+
+ val = chan_to_field(red, &fbi->fb.var.red);
+ val |= chan_to_field(green, &fbi->fb.var.green);
+ val |= chan_to_field(blue, &fbi->fb.var.blue);
+
+ pal[regno] = val;
+ ret = 0;
+ }
+ break;
+
+ case FB_VISUAL_STATIC_PSEUDOCOLOR:
+ case FB_VISUAL_PSEUDOCOLOR:
+ ret = pxafb_setpalettereg(regno, red, green, blue, trans, info);
+ break;
+ }
+
+ return ret;
+}
+
+/*
+ * pxafb_bpp_to_lccr3():
+ * Convert a bits per pixel value to the correct bit pattern for LCCR3
+ */
+static int pxafb_bpp_to_lccr3(struct fb_var_screeninfo *var)
+{
+ int ret = 0;
+ switch (var->bits_per_pixel) {
+ case 1: ret = LCCR3_1BPP; break;
+ case 2: ret = LCCR3_2BPP; break;
+ case 4: ret = LCCR3_4BPP; break;
+ case 8: ret = LCCR3_8BPP; break;
+ case 16: ret = LCCR3_16BPP; break;
+ }
+ return ret;
+}
+
+#ifdef CONFIG_CPU_FREQ
+/*
+ * pxafb_display_dma_period()
+ * Calculate the minimum period (in picoseconds) between two DMA
+ * requests for the LCD controller. If we hit this, it means we're
+ * doing nothing but LCD DMA.
+ */
+static unsigned int pxafb_display_dma_period(struct fb_var_screeninfo *var)
+{
+ /*
+ * Period = pixclock * bits_per_byte * bytes_per_transfer
+ * / memory_bits_per_pixel;
+ */
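+	/* e.g. at 16 bpp this reduces to pixclock * 8, at 8 bpp to pixclock * 16 */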
+ return var->pixclock * 8 * 16 / var->bits_per_pixel;
+}
+
+extern unsigned int get_clk_frequency_khz(int info);
+#endif
+
+/*
+ * pxafb_check_var():
+ *	Get the video params out of 'var'. If a value doesn't fit, round it up;
+ *	if it's too big, return -EINVAL.
+ *
+ * Round up in the following order: bits_per_pixel, xres,
+ * yres, xres_virtual, yres_virtual, xoffset, yoffset, grayscale,
+ * bitfields, horizontal timing, vertical timing.
+ */
+static int pxafb_check_var(struct fb_var_screeninfo *var, struct fb_info *info)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+
+ if (var->xres < MIN_XRES)
+ var->xres = MIN_XRES;
+ if (var->yres < MIN_YRES)
+ var->yres = MIN_YRES;
+ if (var->xres > fbi->max_xres)
+ var->xres = fbi->max_xres;
+ if (var->yres > fbi->max_yres)
+ var->yres = fbi->max_yres;
+ var->xres_virtual =
+ max(var->xres_virtual, var->xres);
+ var->yres_virtual =
+ max(var->yres_virtual, var->yres);
+
+ /*
+ * Setup the RGB parameters for this display.
+ *
+ * The pixel packing format is described on page 7-11 of the
+ * PXA2XX Developer's Manual.
+ */
+ if (var->bits_per_pixel == 16) {
+ var->red.offset = 11; var->red.length = 5;
+ var->green.offset = 5; var->green.length = 6;
+ var->blue.offset = 0; var->blue.length = 5;
+ var->transp.offset = var->transp.length = 0;
+ } else {
+ var->red.offset = var->green.offset = var->blue.offset = var->transp.offset = 0;
+ var->red.length = 8;
+ var->green.length = 8;
+ var->blue.length = 8;
+ var->transp.length = 0;
+ }
+
+#ifdef CONFIG_CPU_FREQ
+ DPRINTK("dma period = %d ps, clock = %d kHz\n",
+ pxafb_display_dma_period(var),
+ get_clk_frequency_khz(0));
+#endif
+
+ return 0;
+}
+
+static inline void pxafb_set_truecolor(u_int is_true_color)
+{
+ DPRINTK("true_color = %d\n", is_true_color);
+ // do your machine-specific setup if needed
+}
+
+/*
+ * pxafb_set_par():
+ * Set the user defined part of the display for the specified console
+ */
+static int pxafb_set_par(struct fb_info *info)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+ struct fb_var_screeninfo *var = &info->var;
+ unsigned long palette_mem_size;
+
+ DPRINTK("set_par\n");
+
+ if (var->bits_per_pixel == 16)
+ fbi->fb.fix.visual = FB_VISUAL_TRUECOLOR;
+ else if (!fbi->cmap_static)
+ fbi->fb.fix.visual = FB_VISUAL_PSEUDOCOLOR;
+ else {
+ /*
+ * Some people have weird ideas about wanting static
+ * pseudocolor maps. I suspect their user space
+ * applications are broken.
+ */
+ fbi->fb.fix.visual = FB_VISUAL_STATIC_PSEUDOCOLOR;
+ }
+
+ fbi->fb.fix.line_length = var->xres_virtual *
+ var->bits_per_pixel / 8;
+ if (var->bits_per_pixel == 16)
+ fbi->palette_size = 0;
+ else
+ fbi->palette_size = var->bits_per_pixel == 1 ? 4 : 1 << var->bits_per_pixel;
+
+ palette_mem_size = fbi->palette_size * sizeof(u16);
+
+ DPRINTK("palette_mem_size = 0x%08lx\n", (u_long) palette_mem_size);
+
+ fbi->palette_cpu = (u16 *)(fbi->map_cpu + PAGE_SIZE - palette_mem_size);
+ fbi->palette_dma = fbi->map_dma + PAGE_SIZE - palette_mem_size;
+
+ /*
+ * Set (any) board control register to handle new color depth
+ */
+ pxafb_set_truecolor(fbi->fb.fix.visual == FB_VISUAL_TRUECOLOR);
+
+ if (fbi->fb.var.bits_per_pixel == 16)
+ fb_dealloc_cmap(&fbi->fb.cmap);
+ else
+ fb_alloc_cmap(&fbi->fb.cmap, 1<<fbi->fb.var.bits_per_pixel, 0);
+
+ pxafb_activate_var(var, fbi);
+
+ return 0;
+}
+
+/*
+ * Formal definition of the VESA spec:
+ * On
+ * This refers to the state of the display when it is in full operation
+ * Stand-By
+ * This defines an optional operating state of minimal power reduction with
+ * the shortest recovery time
+ * Suspend
+ * This refers to a level of power management in which substantial power
+ * reduction is achieved by the display. The display can have a longer
+ * recovery time from this state than from the Stand-by state
+ * Off
+ * This indicates that the display is consuming the lowest level of power
+ * and is non-operational. Recovery from this state may optionally require
+ * the user to manually power on the monitor
+ *
+ * Now, the fbdev driver adds an additional state, blank, where it
+ * turns off the video (maybe by colormap tricks), but doesn't mess with
+ * the video itself: think of it as sitting semantically between On and
+ * Stand-By.
+ *
+ * So here's what we should do in our fbdev blank routine:
+ *
+ * VESA_NO_BLANKING (mode 0) Video on, front/back light on
+ * VESA_VSYNC_SUSPEND (mode 1) Video on, front/back light off
+ * VESA_HSYNC_SUSPEND (mode 2) Video on, front/back light off
+ * VESA_POWERDOWN (mode 3) Video off, front/back light off
+ *
+ * This will match the matrox implementation.
+ */
+
+/*
+ * pxafb_blank():
+ * Blank the display by setting all palette values to zero. Note, the
+ * 16 bpp mode does not really use the palette, so this will not
+ * blank the display in all modes.
+ */
+static int pxafb_blank(int blank, struct fb_info *info)
+{
+ struct pxafb_info *fbi = (struct pxafb_info *)info;
+ int i;
+
+ DPRINTK("pxafb_blank: blank=%d\n", blank);
+
+ switch (blank) {
+ case FB_BLANK_POWERDOWN:
+ case FB_BLANK_VSYNC_SUSPEND:
+ case FB_BLANK_HSYNC_SUSPEND:
+ case FB_BLANK_NORMAL:
+ if (fbi->fb.fix.visual == FB_VISUAL_PSEUDOCOLOR ||
+ fbi->fb.fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+ for (i = 0; i < fbi->palette_size; i++)
+ pxafb_setpalettereg(i, 0, 0, 0, 0, info);
+
+ pxafb_schedule_work(fbi, C_DISABLE);
+ //TODO if (pxafb_blank_helper) pxafb_blank_helper(blank);
+ break;
+
+ case FB_BLANK_UNBLANK:
+ //TODO if (pxafb_blank_helper) pxafb_blank_helper(blank);
+ if (fbi->fb.fix.visual == FB_VISUAL_PSEUDOCOLOR ||
+ fbi->fb.fix.visual == FB_VISUAL_STATIC_PSEUDOCOLOR)
+ fb_set_cmap(&fbi->fb.cmap, info);
+ pxafb_schedule_work(fbi, C_ENABLE);
+ }
+ return 0;
+}
+
+static struct fb_ops pxafb_ops = {
+ .owner = THIS_MODULE,
+ .fb_check_var = pxafb_check_var,
+ .fb_set_par = pxafb_set_par,
+ .fb_setcolreg = pxafb_setcolreg,
+ .fb_fillrect = cfb_fillrect,
+ .fb_copyarea = cfb_copyarea,
+ .fb_imageblit = cfb_imageblit,
+ .fb_blank = pxafb_blank,
+ .fb_cursor = soft_cursor,
+};
+
+/*
+ * Calculate the PCD value from the clock rate (in picoseconds).
+ * We take account of the PPCR clock setting.
+ * From PXA Developer's Manual:
+ *
+ * PixelClock = LCLK
+ * -------------
+ * 2 ( PCD + 1 )
+ *
+ * PCD = LCLK
+ * ------------- - 1
+ * 2(PixelClock)
+ *
+ * Where:
+ * LCLK = LCD/Memory Clock
+ * PCD = LCCR3[7:0]
+ *
+ * PixelClock here is in Hz while the pixclock argument given is the
+ * period in picoseconds. Hence PixelClock = 1 / ( pixclock * 10^-12 )
+ *
+ * The function get_lclk_frequency_10khz returns LCLK in units of
+ * 10khz. Calling the result of this function lclk gives us the
+ * following
+ *
+ * PCD = (lclk * 10^4 ) * ( pixclock * 10^-12 )
+ * -------------------------------------- - 1
+ * 2
+ *
+ * Factoring the 10^4 and 10^-12 out gives 10^-8 == 1 / 100000000 as used below.
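+ *
+ * Worked example (illustrative numbers, assuming LCLK = 52 MHz, i.e.
+ * lclk = 5200): pixclock = 150000 ps gives
+ * PCD = 5200 * 150000 / (100000000 * 2) = 3 (integer truncation),
+ * i.e. a pixel clock of 52 MHz / (2 * (3 + 1)) = 6.5 MHz.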
+ */
+static inline unsigned int get_pcd(unsigned int pixclock)
+{
+ unsigned long long pcd;
+
+ /* FIXME: Need to take into account Double Pixel Clock mode
+ * (DPC) bit? or perhaps set it based on the various clock
+ * speeds */
+
+ pcd = (unsigned long long)get_lcdclk_frequency_10khz() * pixclock;
+ pcd /= 100000000 * 2;
+ /* no need for this, since we should subtract 1 anyway. they cancel */
+ /* pcd += 1; */ /* make up for integer math truncations */
+ return (unsigned int)pcd;
+}
+
+/*
+ * pxafb_activate_var():
+ * Configures LCD Controller based on entries in var parameter. Settings are
+ * only written to the controller if changes were made.
+ */
+static int pxafb_activate_var(struct fb_var_screeninfo *var, struct pxafb_info *fbi)
+{
+ struct pxafb_lcd_reg new_regs;
+ u_long flags;
+ u_int lines_per_panel, pcd = get_pcd(var->pixclock);
+
+ DPRINTK("Configuring PXA LCD\n");
+
+ DPRINTK("var: xres=%d hslen=%d lm=%d rm=%d\n",
+ var->xres, var->hsync_len,
+ var->left_margin, var->right_margin);
+ DPRINTK("var: yres=%d vslen=%d um=%d bm=%d\n",
+ var->yres, var->vsync_len,
+ var->upper_margin, var->lower_margin);
+ DPRINTK("var: pixclock=%d pcd=%d\n", var->pixclock, pcd);
+
+#if DEBUG_VAR
+ if (var->xres < 16 || var->xres > 1024)
+ printk(KERN_ERR "%s: invalid xres %d\n",
+ fbi->fb.fix.id, var->xres);
+ switch(var->bits_per_pixel) {
+ case 1:
+ case 2:
+ case 4:
+ case 8:
+ case 16:
+ break;
+ default:
+ printk(KERN_ERR "%s: invalid bit depth %d\n",
+ fbi->fb.fix.id, var->bits_per_pixel);
+ break;
+ }
+ if (var->hsync_len < 1 || var->hsync_len > 64)
+ printk(KERN_ERR "%s: invalid hsync_len %d\n",
+ fbi->fb.fix.id, var->hsync_len);
+ if (var->left_margin < 1 || var->left_margin > 255)
+ printk(KERN_ERR "%s: invalid left_margin %d\n",
+ fbi->fb.fix.id, var->left_margin);
+ if (var->right_margin < 1 || var->right_margin > 255)
+ printk(KERN_ERR "%s: invalid right_margin %d\n",
+ fbi->fb.fix.id, var->right_margin);
+ if (var->yres < 1 || var->yres > 1024)
+ printk(KERN_ERR "%s: invalid yres %d\n",
+ fbi->fb.fix.id, var->yres);
+ if (var->vsync_len < 1 || var->vsync_len > 64)
+ printk(KERN_ERR "%s: invalid vsync_len %d\n",
+ fbi->fb.fix.id, var->vsync_len);
+ if (var->upper_margin < 0 || var->upper_margin > 255)
+ printk(KERN_ERR "%s: invalid upper_margin %d\n",
+ fbi->fb.fix.id, var->upper_margin);
+ if (var->lower_margin < 0 || var->lower_margin > 255)
+ printk(KERN_ERR "%s: invalid lower_margin %d\n",
+ fbi->fb.fix.id, var->lower_margin);
+#endif
+
+ new_regs.lccr0 = fbi->lccr0 |
+ (LCCR0_LDM | LCCR0_SFM | LCCR0_IUM | LCCR0_EFM |
+ LCCR0_QDM | LCCR0_BM | LCCR0_OUM);
+
+ new_regs.lccr1 =
+ LCCR1_DisWdth(var->xres) +
+ LCCR1_HorSnchWdth(var->hsync_len) +
+ LCCR1_BegLnDel(var->left_margin) +
+ LCCR1_EndLnDel(var->right_margin);
+
+ /*
+ * If we have a dual scan LCD, we need to halve
+ * the YRES parameter.
+ */
+ lines_per_panel = var->yres;
+ if ((fbi->lccr0 & LCCR0_SDS) == LCCR0_Dual)
+ lines_per_panel /= 2;
+
+ new_regs.lccr2 =
+ LCCR2_DisHght(lines_per_panel) +
+ LCCR2_VrtSnchWdth(var->vsync_len) +
+ LCCR2_BegFrmDel(var->upper_margin) +
+ LCCR2_EndFrmDel(var->lower_margin);
+
+ new_regs.lccr3 = fbi->lccr3 |
+ pxafb_bpp_to_lccr3(var) |
+ (var->sync & FB_SYNC_HOR_HIGH_ACT ? LCCR3_HorSnchH : LCCR3_HorSnchL) |
+ (var->sync & FB_SYNC_VERT_HIGH_ACT ? LCCR3_VrtSnchH : LCCR3_VrtSnchL);
+
+ if (pcd)
+ new_regs.lccr3 |= LCCR3_PixClkDiv(pcd);
+
+ DPRINTK("nlccr0 = 0x%08x\n", new_regs.lccr0);
+ DPRINTK("nlccr1 = 0x%08x\n", new_regs.lccr1);
+ DPRINTK("nlccr2 = 0x%08x\n", new_regs.lccr2);
+ DPRINTK("nlccr3 = 0x%08x\n", new_regs.lccr3);
+
+ /* Update shadow copy atomically */
+ local_irq_save(flags);
+
+ /* setup dma descriptors */
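+ /*
+ * Each descriptor is 16 bytes (fdadr/fsadr/fidr/ldcmd), carved out
+ * of the space immediately below the palette at the end of the
+ * first page of the DMA buffer.
+ */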
+ fbi->dmadesc_fblow_cpu = (struct pxafb_dma_descriptor *)((unsigned int)fbi->palette_cpu - 3*16);
+ fbi->dmadesc_fbhigh_cpu = (struct pxafb_dma_descriptor *)((unsigned int)fbi->palette_cpu - 2*16);
+ fbi->dmadesc_palette_cpu = (struct pxafb_dma_descriptor *)((unsigned int)fbi->palette_cpu - 1*16);
+
+ fbi->dmadesc_fblow_dma = fbi->palette_dma - 3*16;
+ fbi->dmadesc_fbhigh_dma = fbi->palette_dma - 2*16;
+ fbi->dmadesc_palette_dma = fbi->palette_dma - 1*16;
+
+#define BYTES_PER_PANEL (lines_per_panel * fbi->fb.fix.line_length)
+
+ /* populate descriptors */
+ fbi->dmadesc_fblow_cpu->fdadr = fbi->dmadesc_fblow_dma;
+ fbi->dmadesc_fblow_cpu->fsadr = fbi->screen_dma + BYTES_PER_PANEL;
+ fbi->dmadesc_fblow_cpu->fidr = 0;
+ fbi->dmadesc_fblow_cpu->ldcmd = BYTES_PER_PANEL;
+
+ fbi->fdadr1 = fbi->dmadesc_fblow_dma; /* only used in dual-panel mode */
+
+ fbi->dmadesc_fbhigh_cpu->fsadr = fbi->screen_dma;
+ fbi->dmadesc_fbhigh_cpu->fidr = 0;
+ fbi->dmadesc_fbhigh_cpu->ldcmd = BYTES_PER_PANEL;
+
+ fbi->dmadesc_palette_cpu->fsadr = fbi->palette_dma;
+ fbi->dmadesc_palette_cpu->fidr = 0;
+ fbi->dmadesc_palette_cpu->ldcmd = (fbi->palette_size * 2) | LDCMD_PAL;
+
+ if (var->bits_per_pixel == 16) {
+ /* palette shouldn't be loaded in true-color mode */
+ fbi->dmadesc_fbhigh_cpu->fdadr = fbi->dmadesc_fbhigh_dma;
+ fbi->fdadr0 = fbi->dmadesc_fbhigh_dma; /* no pal just fbhigh */
+ /* init it to something, even though we won't be using it */
+ fbi->dmadesc_palette_cpu->fdadr = fbi->dmadesc_palette_dma;
+ } else {
+ fbi->dmadesc_palette_cpu->fdadr = fbi->dmadesc_fbhigh_dma;
+ fbi->dmadesc_fbhigh_cpu->fdadr = fbi->dmadesc_palette_dma;
+ fbi->fdadr0 = fbi->dmadesc_palette_dma; /* flips back and forth between pal and fbhigh */
+ }
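+ /*
+ * In the paletted case the two descriptors above form a two-entry
+ * ring, so the controller alternately reloads the palette and
+ * fetches frame data.
+ */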
+
+#if 0
+ DPRINTK("fbi->dmadesc_fblow_cpu = 0x%p\n", fbi->dmadesc_fblow_cpu);
+ DPRINTK("fbi->dmadesc_fbhigh_cpu = 0x%p\n", fbi->dmadesc_fbhigh_cpu);
+ DPRINTK("fbi->dmadesc_palette_cpu = 0x%p\n", fbi->dmadesc_palette_cpu);
+ DPRINTK("fbi->dmadesc_fblow_dma = 0x%x\n", fbi->dmadesc_fblow_dma);
+ DPRINTK("fbi->dmadesc_fbhigh_dma = 0x%x\n", fbi->dmadesc_fbhigh_dma);
+ DPRINTK("fbi->dmadesc_palette_dma = 0x%x\n", fbi->dmadesc_palette_dma);
+
+ DPRINTK("fbi->dmadesc_fblow_cpu->fdadr = 0x%x\n", fbi->dmadesc_fblow_cpu->fdadr);
+ DPRINTK("fbi->dmadesc_fbhigh_cpu->fdadr = 0x%x\n", fbi->dmadesc_fbhigh_cpu->fdadr);
+ DPRINTK("fbi->dmadesc_palette_cpu->fdadr = 0x%x\n", fbi->dmadesc_palette_cpu->fdadr);
+
+ DPRINTK("fbi->dmadesc_fblow_cpu->fsadr = 0x%x\n", fbi->dmadesc_fblow_cpu->fsadr);
+ DPRINTK("fbi->dmadesc_fbhigh_cpu->fsadr = 0x%x\n", fbi->dmadesc_fbhigh_cpu->fsadr);
+ DPRINTK("fbi->dmadesc_palette_cpu->fsadr = 0x%x\n", fbi->dmadesc_palette_cpu->fsadr);
+
+ DPRINTK("fbi->dmadesc_fblow_cpu->ldcmd = 0x%x\n", fbi->dmadesc_fblow_cpu->ldcmd);
+ DPRINTK("fbi->dmadesc_fbhigh_cpu->ldcmd = 0x%x\n", fbi->dmadesc_fbhigh_cpu->ldcmd);
+ DPRINTK("fbi->dmadesc_palette_cpu->ldcmd = 0x%x\n", fbi->dmadesc_palette_cpu->ldcmd);
+#endif
+
+ fbi->reg_lccr0 = new_regs.lccr0;
+ fbi->reg_lccr1 = new_regs.lccr1;
+ fbi->reg_lccr2 = new_regs.lccr2;
+ fbi->reg_lccr3 = new_regs.lccr3;
+ local_irq_restore(flags);
+
+ /*
+ * Only update the registers if the controller is enabled
+ * and something has changed.
+ */
+ if ((LCCR0 != fbi->reg_lccr0) || (LCCR1 != fbi->reg_lccr1) ||
+ (LCCR2 != fbi->reg_lccr2) || (LCCR3 != fbi->reg_lccr3) ||
+ (FDADR0 != fbi->fdadr0) || (FDADR1 != fbi->fdadr1))
+ pxafb_schedule_work(fbi, C_REENABLE);
+
+ return 0;
+}
+
+/*
+ * NOTE! The following functions are purely helpers for set_ctrlr_state.
+ * Do not call them directly; set_ctrlr_state does the correct serialisation
+ * to ensure that things happen in the right way 100% of the time.
+ * -- rmk
+ */
+static inline void __pxafb_backlight_power(struct pxafb_info *fbi, int on)
+{
+ DPRINTK("backlight o%s\n", on ? "n" : "ff");
+
+ if (pxafb_backlight_power)
+ pxafb_backlight_power(on);
+}
+
+static inline void __pxafb_lcd_power(struct pxafb_info *fbi, int on)
+{
+ DPRINTK("LCD power o%s\n", on ? "n" : "ff");
+
+ if (pxafb_lcd_power)
+ pxafb_lcd_power(on);
+}
+
+static void pxafb_setup_gpio(struct pxafb_info *fbi)
+{
+ int gpio, ldd_bits;
+ unsigned int lccr0 = fbi->lccr0;
+
+ /*
+ * setup is based on type of panel supported
+ */
+
+ /* 4 bit interface */
+ if ((lccr0 & LCCR0_CMS) == LCCR0_Mono &&
+ (lccr0 & LCCR0_SDS) == LCCR0_Sngl &&
+ (lccr0 & LCCR0_DPD) == LCCR0_4PixMono)
+ ldd_bits = 4;
+
+ /* 8 bit interface */
+ else if (((lccr0 & LCCR0_CMS) == LCCR0_Mono &&
+ ((lccr0 & LCCR0_SDS) == LCCR0_Dual || (lccr0 & LCCR0_DPD) == LCCR0_8PixMono)) ||
+ ((lccr0 & LCCR0_CMS) == LCCR0_Color &&
+ (lccr0 & LCCR0_PAS) == LCCR0_Pas && (lccr0 & LCCR0_SDS) == LCCR0_Sngl))
+ ldd_bits = 8;
+
+ /* 16 bit interface */
+ else if ((lccr0 & LCCR0_CMS) == LCCR0_Color &&
+ ((lccr0 & LCCR0_SDS) == LCCR0_Dual || (lccr0 & LCCR0_PAS) == LCCR0_Act))
+ ldd_bits = 16;
+
+ else {
+ printk(KERN_ERR "pxafb_setup_gpio: unable to determine bits per pixel\n");
+ return;
+ }
+
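+ /* The LCD data lines (LDD) occupy consecutive GPIOs starting at 58 */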
+ for (gpio = 58; ldd_bits; gpio++, ldd_bits--)
+ pxa_gpio_mode(gpio | GPIO_ALT_FN_2_OUT);
+ pxa_gpio_mode(GPIO74_LCD_FCLK_MD);
+ pxa_gpio_mode(GPIO75_LCD_LCLK_MD);
+ pxa_gpio_mode(GPIO76_LCD_PCLK_MD);
+ pxa_gpio_mode(GPIO77_LCD_ACBIAS_MD);
+}
+
+static void pxafb_enable_controller(struct pxafb_info *fbi)
+{
+ DPRINTK("Enabling LCD controller\n");
+ DPRINTK("fdadr0 0x%08x\n", (unsigned int) fbi->fdadr0);
+ DPRINTK("fdadr1 0x%08x\n", (unsigned int) fbi->fdadr1);
+ DPRINTK("reg_lccr0 0x%08x\n", (unsigned int) fbi->reg_lccr0);
+ DPRINTK("reg_lccr1 0x%08x\n", (unsigned int) fbi->reg_lccr1);
+ DPRINTK("reg_lccr2 0x%08x\n", (unsigned int) fbi->reg_lccr2);
+ DPRINTK("reg_lccr3 0x%08x\n", (unsigned int) fbi->reg_lccr3);
+
+ /* Sequence from 11.7.10 */
+ LCCR3 = fbi->reg_lccr3;
+ LCCR2 = fbi->reg_lccr2;
+ LCCR1 = fbi->reg_lccr1;
+ LCCR0 = fbi->reg_lccr0 & ~LCCR0_ENB;
+
+ FDADR0 = fbi->fdadr0;
+ FDADR1 = fbi->fdadr1;
+ LCCR0 |= LCCR0_ENB;
+
+ DPRINTK("FDADR0 0x%08x\n", (unsigned int) FDADR0);
+ DPRINTK("FDADR1 0x%08x\n", (unsigned int) FDADR1);
+ DPRINTK("LCCR0 0x%08x\n", (unsigned int) LCCR0);
+ DPRINTK("LCCR1 0x%08x\n", (unsigned int) LCCR1);
+ DPRINTK("LCCR2 0x%08x\n", (unsigned int) LCCR2);
+ DPRINTK("LCCR3 0x%08x\n", (unsigned int) LCCR3);
+}
+
+static void pxafb_disable_controller(struct pxafb_info *fbi)
+{
+ DECLARE_WAITQUEUE(wait, current);
+
+ DPRINTK("Disabling LCD controller\n");
+
+ add_wait_queue(&fbi->ctrlr_wait, &wait);
+ set_current_state(TASK_UNINTERRUPTIBLE);
+
+ LCSR = 0xffffffff; /* Clear LCD Status Register */
+ LCCR0 &= ~LCCR0_LDM; /* Enable LCD Disable Done Interrupt */
+ LCCR0 |= LCCR0_DIS; /* Disable LCD Controller */
+
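+ /* Wait (up to 20ms) for the LCD-disable-done IRQ to wake us */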
+ schedule_timeout(20 * HZ / 1000);
+ remove_wait_queue(&fbi->ctrlr_wait, &wait);
+}
+
+/*
+ * pxafb_handle_irq: Handle 'LCD DONE' interrupts.
+ */
+static irqreturn_t pxafb_handle_irq(int irq, void *dev_id, struct pt_regs *regs)
+{
+ struct pxafb_info *fbi = dev_id;
+ unsigned int lcsr = LCSR;
+
+ if (lcsr & LCSR_LDD) {
+ LCCR0 |= LCCR0_LDM;
+ wake_up(&fbi->ctrlr_wait);
+ }
+
+ LCSR = lcsr;
+ return IRQ_HANDLED;
+}
+
+/*
+ * This function must be called from task context only, since it will
+ * sleep when disabling the LCD controller, or if we get two contending
+ * processes trying to alter state.
+ */
+static void set_ctrlr_state(struct pxafb_info *fbi, u_int state)
+{
+ u_int old_state;
+
+ down(&fbi->ctrlr_sem);
+
+ old_state = fbi->state;
+
+ /*
+ * Hack around fbcon initialisation.
+ */
+ if (old_state == C_STARTUP && state == C_REENABLE)
+ state = C_ENABLE;
+
+ switch (state) {
+ case C_DISABLE_CLKCHANGE:
+ /*
+ * Disable controller for clock change. If the
+ * controller is already disabled, then do nothing.
+ */
+ if (old_state != C_DISABLE && old_state != C_DISABLE_PM) {
+ fbi->state = state;
+ //TODO __pxafb_lcd_power(fbi, 0);
+ pxafb_disable_controller(fbi);
+ }
+ break;
+
+ case C_DISABLE_PM:
+ case C_DISABLE:
+ /*
+ * Disable controller
+ */
+ if (old_state != C_DISABLE) {
+ fbi->state = state;
+ __pxafb_backlight_power(fbi, 0);
+ __pxafb_lcd_power(fbi, 0);
+ if (old_state != C_DISABLE_CLKCHANGE)
+ pxafb_disable_controller(fbi);
+ }
+ break;
+
+ case C_ENABLE_CLKCHANGE:
+ /*
+ * Enable the controller after clock change. Only
+ * do this if we were disabled for the clock change.
+ */
+ if (old_state == C_DISABLE_CLKCHANGE) {
+ fbi->state = C_ENABLE;
+ pxafb_enable_controller(fbi);
+ //TODO __pxafb_lcd_power(fbi, 1);
+ }
+ break;
+
+ case C_REENABLE:
+ /*
+ * Re-enable the controller only if it was already
+ * enabled. This is so we reprogram the control
+ * registers.
+ */
+ if (old_state == C_ENABLE) {
+ pxafb_disable_controller(fbi);
+ pxafb_setup_gpio(fbi);
+ pxafb_enable_controller(fbi);
+ }
+ break;
+
+ case C_ENABLE_PM:
+ /*
+ * Re-enable the controller after PM. This is not
+ * perfect - think about the case where we were doing
+ * a clock change, and we suspended half-way through.
+ */
+ if (old_state != C_DISABLE_PM)
+ break;
+ /* fall through */
+
+ case C_ENABLE:
+ /*
+ * Power up the LCD screen, enable controller, and
+ * turn on the backlight.
+ */
+ if (old_state != C_ENABLE) {
+ fbi->state = C_ENABLE;
+ pxafb_setup_gpio(fbi);
+ pxafb_enable_controller(fbi);
+ __pxafb_lcd_power(fbi, 1);
+ __pxafb_backlight_power(fbi, 1);
+ }
+ break;
+ }
+ up(&fbi->ctrlr_sem);
+}
+
+/*
+ * Our LCD controller task, called via keventd when we blank or
+ * unblank the display.
+ */
+static void pxafb_task(void *dummy)
+{
+ struct pxafb_info *fbi = dummy;
+ u_int state = xchg(&fbi->task_state, -1);
+
+ set_ctrlr_state(fbi, state);
+}
+
+#ifdef CONFIG_CPU_FREQ
+/*
+ * CPU clock speed change handler. We need to adjust the LCD timing
+ * parameters when the CPU clock is adjusted by the power management
+ * subsystem.
+ *
+ * TODO: Determine why f->new != 10*get_lclk_frequency_10khz()
+ */
+static int
+pxafb_freq_transition(struct notifier_block *nb, unsigned long val, void *data)
+{
+ struct pxafb_info *fbi = TO_INF(nb, freq_transition);
+ //TODO struct cpufreq_freqs *f = data;
+ u_int pcd;
+
+ switch (val) {
+ case CPUFREQ_PRECHANGE:
+ set_ctrlr_state(fbi, C_DISABLE_CLKCHANGE);
+ break;
+
+ case CPUFREQ_POSTCHANGE:
+ pcd = get_pcd(fbi->fb.var.pixclock);
+ fbi->reg_lccr3 = (fbi->reg_lccr3 & ~0xff) | LCCR3_PixClkDiv(pcd);
+ set_ctrlr_state(fbi, C_ENABLE_CLKCHANGE);
+ break;
+ }
+ return 0;
+}
+
+static int
+pxafb_freq_policy(struct notifier_block *nb, unsigned long val, void *data)
+{
+ struct pxafb_info *fbi = TO_INF(nb, freq_policy);
+ struct fb_var_screeninfo *var = &fbi->fb.var;
+ struct cpufreq_policy *policy = data;
+
+ switch (val) {
+ case CPUFREQ_ADJUST:
+ case CPUFREQ_INCOMPATIBLE:
+ printk(KERN_DEBUG "min dma period: %d ps, "
+ "new clock %d kHz\n", pxafb_display_dma_period(var),
+ policy->max);
+ // TODO: fill in min/max values
+ break;
+#if 0
+ case CPUFREQ_NOTIFY:
+ printk(KERN_ERR "%s: got CPUFREQ_NOTIFY\n", __FUNCTION__);
+ do {} while(0);
+ /* todo: panic if min/max values aren't fulfilled
+ * [can't really happen unless there's a bug in the
+ * CPU policy verification process.]
+ */
+ break;
+#endif
+ }
+ return 0;
+}
+#endif
+
+#ifdef CONFIG_PM
+/*
+ * Power management hooks. Note that we won't be called from IRQ context,
+ * unlike the blank functions above, so we may sleep.
+ */
+static int pxafb_suspend(struct device *dev, u32 state, u32 level)
+{
+ struct pxafb_info *fbi = dev_get_drvdata(dev);
+
+ if (level == SUSPEND_DISABLE || level == SUSPEND_POWER_DOWN)
+ set_ctrlr_state(fbi, C_DISABLE_PM);
+ return 0;
+}
+
+static int pxafb_resume(struct device *dev, u32 level)
+{
+ struct pxafb_info *fbi = dev_get_drvdata(dev);
+
+ if (level == RESUME_ENABLE)
+ set_ctrlr_state(fbi, C_ENABLE_PM);
+ return 0;
+}
+#else
+#define pxafb_suspend NULL
+#define pxafb_resume NULL
+#endif
+
+/*
+ * pxafb_map_video_memory():
+ * Allocates the DRAM memory for the frame buffer. This buffer is
+ * remapped into a non-cached, non-buffered, memory region to
+ * allow palette and pixel writes to occur without flushing the
+ * cache. Once this area is remapped, all virtual memory
+ * access to the video memory should occur at the new region.
+ */
+static int __init pxafb_map_video_memory(struct pxafb_info *fbi)
+{
+ u_long palette_mem_size;
+
+ /*
+ * We reserve one page for the palette, plus the size
+ * of the framebuffer.
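+ *
+ * Layout: page 0 holds the DMA descriptors and, at its very end,
+ * the palette; the pixel data starts at PAGE_SIZE.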
+ */
+ fbi->map_size = PAGE_ALIGN(fbi->fb.fix.smem_len + PAGE_SIZE);
+ fbi->map_cpu = dma_alloc_writecombine(fbi->dev, fbi->map_size,
+ &fbi->map_dma, GFP_KERNEL);
+
+ if (fbi->map_cpu) {
+ /* prevent initial garbage on screen */
+ memset(fbi->map_cpu, 0, fbi->map_size);
+ fbi->fb.screen_base = fbi->map_cpu + PAGE_SIZE;
+ fbi->screen_dma = fbi->map_dma + PAGE_SIZE;
+ /*
+ * FIXME: this is actually the wrong thing to place in
+ * smem_start. But fbdev suffers from the problem that
+ * it needs an API which doesn't exist (in this case,
+ * dma_writecombine_mmap)
+ */
+ fbi->fb.fix.smem_start = fbi->screen_dma;
+
+ fbi->palette_size = fbi->fb.var.bits_per_pixel == 8 ? 256 : 16;
+
+ palette_mem_size = fbi->palette_size * sizeof(u16);
+ DPRINTK("palette_mem_size = 0x%08lx\n", (u_long) palette_mem_size);
+
+ fbi->palette_cpu = (u16 *)(fbi->map_cpu + PAGE_SIZE - palette_mem_size);
+ fbi->palette_dma = fbi->map_dma + PAGE_SIZE - palette_mem_size;
+ }
+
+ return fbi->map_cpu ? 0 : -ENOMEM;
+}
+
+static struct pxafb_info * __init pxafb_init_fbinfo(struct device *dev)
+{
+ struct pxafb_info *fbi;
+ void *addr;
+ struct pxafb_mach_info *inf = dev->platform_data;
+
+ /* Alloc the pxafb_info and pseudo_palette in one step */
+ fbi = kmalloc(sizeof(struct pxafb_info) + sizeof(u32) * 16, GFP_KERNEL);
+ if (!fbi)
+ return NULL;
+
+ memset(fbi, 0, sizeof(struct pxafb_info));
+ fbi->dev = dev;
+
+ strcpy(fbi->fb.fix.id, PXA_NAME);
+
+ fbi->fb.fix.type = FB_TYPE_PACKED_PIXELS;
+ fbi->fb.fix.type_aux = 0;
+ fbi->fb.fix.xpanstep = 0;
+ fbi->fb.fix.ypanstep = 0;
+ fbi->fb.fix.ywrapstep = 0;
+ fbi->fb.fix.accel = FB_ACCEL_NONE;
+
+ fbi->fb.var.nonstd = 0;
+ fbi->fb.var.activate = FB_ACTIVATE_NOW;
+ fbi->fb.var.height = -1;
+ fbi->fb.var.width = -1;
+ fbi->fb.var.accel_flags = 0;
+ fbi->fb.var.vmode = FB_VMODE_NONINTERLACED;
+
+ fbi->fb.fbops = &pxafb_ops;
+ fbi->fb.flags = FBINFO_DEFAULT;
+ fbi->fb.node = -1;
+
+ addr = fbi;
+ addr = addr + sizeof(struct pxafb_info);
+ fbi->fb.pseudo_palette = addr;
+
+ fbi->max_xres = inf->xres;
+ fbi->fb.var.xres = inf->xres;
+ fbi->fb.var.xres_virtual = inf->xres;
+ fbi->max_yres = inf->yres;
+ fbi->fb.var.yres = inf->yres;
+ fbi->fb.var.yres_virtual = inf->yres;
+ fbi->max_bpp = inf->bpp;
+ fbi->fb.var.bits_per_pixel = inf->bpp;
+ fbi->fb.var.pixclock = inf->pixclock;
+ fbi->fb.var.hsync_len = inf->hsync_len;
+ fbi->fb.var.left_margin = inf->left_margin;
+ fbi->fb.var.right_margin = inf->right_margin;
+ fbi->fb.var.vsync_len = inf->vsync_len;
+ fbi->fb.var.upper_margin = inf->upper_margin;
+ fbi->fb.var.lower_margin = inf->lower_margin;
+ fbi->fb.var.sync = inf->sync;
+ fbi->fb.var.grayscale = inf->cmap_greyscale;
+ fbi->cmap_inverse = inf->cmap_inverse;
+ fbi->cmap_static = inf->cmap_static;
+ fbi->lccr0 = inf->lccr0;
+ fbi->lccr3 = inf->lccr3;
+ fbi->state = C_STARTUP;
+ fbi->task_state = (u_char)-1;
+ fbi->fb.fix.smem_len = fbi->max_xres * fbi->max_yres *
+ fbi->max_bpp / 8;
+
+ init_waitqueue_head(&fbi->ctrlr_wait);
+ INIT_WORK(&fbi->task, pxafb_task, fbi);
+ init_MUTEX(&fbi->ctrlr_sem);
+
+ return fbi;
+}
+
+#ifdef CONFIG_FB_PXA_PARAMETERS
+static int __init pxafb_parse_options(struct device *dev, char *options)
+{
+ struct pxafb_mach_info *inf = dev->platform_data;
+ char *this_opt;
+
+ if (!options || !*options)
+ return 0;
+
+ dev_dbg(dev, "options are \"%s\"\n", options ? options : "null");
+
+ /* could be made table driven or similar?... */
+ while ((this_opt = strsep(&options, ",")) != NULL) {
+ if (!strncmp(this_opt, "mode:", 5)) {
+ const char *name = this_opt+5;
+ unsigned int namelen = strlen(name);
+ int res_specified = 0, bpp_specified = 0;
+ unsigned int xres = 0, yres = 0, bpp = 0;
+ int yres_specified = 0;
+ int i;
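+ /* Parse "<xres>x<yres>[-<bpp>]" (e.g. "640x480-16") by scanning
+ * the string backwards from the end. */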
+ for (i = namelen-1; i >= 0; i--) {
+ switch (name[i]) {
+ case '-':
+ namelen = i;
+ if (!bpp_specified && !yres_specified) {
+ bpp = simple_strtoul(&name[i+1], NULL, 0);
+ bpp_specified = 1;
+ } else
+ goto done;
+ break;
+ case 'x':
+ if (!yres_specified) {
+ yres = simple_strtoul(&name[i+1], NULL, 0);
+ yres_specified = 1;
+ } else
+ goto done;
+ break;
+ case '0'...'9':
+ break;
+ default:
+ goto done;
+ }
+ }
+ if (i < 0 && yres_specified) {
+ xres = simple_strtoul(name, NULL, 0);
+ res_specified = 1;
+ }
+ done:
+ if (res_specified) {
+ dev_info(dev, "overriding resolution: %dx%d\n", xres, yres);
+ inf->xres = xres; inf->yres = yres;
+ }
+ if (bpp_specified)
+ switch (bpp) {
+ case 1:
+ case 2:
+ case 4:
+ case 8:
+ case 16:
+ inf->bpp = bpp;
+ dev_info(dev, "overriding bit depth: %d\n", bpp);
+ break;
+ default:
+ dev_err(dev, "Depth %d is not valid\n", bpp);
+ }
+ } else if (!strncmp(this_opt, "pixclock:", 9)) {
+ inf->pixclock = simple_strtoul(this_opt+9, NULL, 0);
+ dev_info(dev, "override pixclock: %ld\n", inf->pixclock);
+ } else if (!strncmp(this_opt, "left:", 5)) {
+ inf->left_margin = simple_strtoul(this_opt+5, NULL, 0);
+ dev_info(dev, "override left: %u\n", inf->left_margin);
+ } else if (!strncmp(this_opt, "right:", 6)) {
+ inf->right_margin = simple_strtoul(this_opt+6, NULL, 0);
+ dev_info(dev, "override right: %u\n", inf->right_margin);
+ } else if (!strncmp(this_opt, "upper:", 6)) {
+ inf->upper_margin = simple_strtoul(this_opt+6, NULL, 0);
+ dev_info(dev, "override upper: %u\n", inf->upper_margin);
+ } else if (!strncmp(this_opt, "lower:", 6)) {
+ inf->lower_margin = simple_strtoul(this_opt+6, NULL, 0);
+ dev_info(dev, "override lower: %u\n", inf->lower_margin);
+ } else if (!strncmp(this_opt, "hsynclen:", 9)) {
+ inf->hsync_len = simple_strtoul(this_opt+9, NULL, 0);
+ dev_info(dev, "override hsynclen: %u\n", inf->hsync_len);
+ } else if (!strncmp(this_opt, "vsynclen:", 9)) {
+ inf->vsync_len = simple_strtoul(this_opt+9, NULL, 0);
+ dev_info(dev, "override vsynclen: %u\n", inf->vsync_len);
+ } else if (!strncmp(this_opt, "hsync:", 6)) {
+ if (simple_strtoul(this_opt+6, NULL, 0) == 0) {
+ dev_info(dev, "override hsync: Active Low\n");
+ inf->sync &= ~FB_SYNC_HOR_HIGH_ACT;
+ } else {
+ dev_info(dev, "override hsync: Active High\n");
+ inf->sync |= FB_SYNC_HOR_HIGH_ACT;
+ }
+ } else if (!strncmp(this_opt, "vsync:", 6)) {
+ if (simple_strtoul(this_opt+6, NULL, 0) == 0) {
+ dev_info(dev, "override vsync: Active Low\n");
+ inf->sync &= ~FB_SYNC_VERT_HIGH_ACT;
+ } else {
+ dev_info(dev, "override vsync: Active High\n");
+ inf->sync |= FB_SYNC_VERT_HIGH_ACT;
+ }
+ } else if (!strncmp(this_opt, "dpc:", 4)) {
+ if (simple_strtoul(this_opt+4, NULL, 0) == 0) {
+ dev_info(dev, "override double pixel clock: false\n");
+ inf->lccr3 &= ~LCCR3_DPC;
+ } else {
+ dev_info(dev, "override double pixel clock: true\n");
+ inf->lccr3 |= LCCR3_DPC;
+ }
+ } else if (!strncmp(this_opt, "outputen:", 9)) {
+ if (simple_strtoul(this_opt+9, NULL, 0) == 0) {
+ dev_info(dev, "override output enable: active low\n");
+ inf->lccr3 = (inf->lccr3 & ~LCCR3_OEP) | LCCR3_OutEnL;
+ } else {
+ dev_info(dev, "override output enable: active high\n");
+ inf->lccr3 = (inf->lccr3 & ~LCCR3_OEP) | LCCR3_OutEnH;
+ }
+ } else if (!strncmp(this_opt, "pixclockpol:", 12)) {
+ if (simple_strtoul(this_opt+12, NULL, 0) == 0) {
+ dev_info(dev, "override pixel clock polarity: falling edge\n");
+ inf->lccr3 = (inf->lccr3 & ~LCCR3_PCP) | LCCR3_PixFlEdg;
+ } else {
+ dev_info(dev, "override pixel clock polarity: rising edge\n");
+ inf->lccr3 = (inf->lccr3 & ~LCCR3_PCP) | LCCR3_PixRsEdg;
+ }
+ } else if (!strncmp(this_opt, "color", 5)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_CMS) | LCCR0_Color;
+ } else if (!strncmp(this_opt, "mono", 4)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_CMS) | LCCR0_Mono;
+ } else if (!strncmp(this_opt, "active", 6)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_PAS) | LCCR0_Act;
+ } else if (!strncmp(this_opt, "passive", 7)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_PAS) | LCCR0_Pas;
+ } else if (!strncmp(this_opt, "single", 6)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_SDS) | LCCR0_Sngl;
+ } else if (!strncmp(this_opt, "dual", 4)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_SDS) | LCCR0_Dual;
+ } else if (!strncmp(this_opt, "4pix", 4)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_DPD) | LCCR0_4PixMono;
+ } else if (!strncmp(this_opt, "8pix", 4)) {
+ inf->lccr0 = (inf->lccr0 & ~LCCR0_DPD) | LCCR0_8PixMono;
+ } else {
+ dev_err(dev, "unknown option: %s\n", this_opt);
+ return -EINVAL;
+ }
+ }
+ return 0;
+}
+#endif
+
+int __init pxafb_probe(struct device *dev)
+{
+ struct pxafb_info *fbi;
+ struct pxafb_mach_info *inf;
+ int ret;
+
+ dev_dbg(dev, "pxafb_probe\n");
+
+ inf = dev->platform_data;
+ ret = -ENOMEM;
+ fbi = NULL;
+ if (!inf)
+ goto failed;
+
+#ifdef CONFIG_FB_PXA_PARAMETERS
+ ret = pxafb_parse_options(dev, g_options);
+ if (ret < 0)
+ goto failed;
+#endif
+
+#ifdef DEBUG_VAR
+ /* Check for various illegal bit-combinations. Currently only
+ * a warning is given. */
+
+ if (inf->lccr0 & LCCR0_INVALID_CONFIG_MASK)
+ dev_warn(dev, "machine LCCR0 setting contains illegal bits: %08x\n",
+ inf->lccr0 & LCCR0_INVALID_CONFIG_MASK);
+ if (inf->lccr3 & LCCR3_INVALID_CONFIG_MASK)
+ dev_warn(dev, "machine LCCR3 setting contains illegal bits: %08x\n",
+ inf->lccr3 & LCCR3_INVALID_CONFIG_MASK);
+ if (inf->lccr0 & LCCR0_DPD &&
+ ((inf->lccr0 & LCCR0_PAS) != LCCR0_Pas ||
+ (inf->lccr0 & LCCR0_SDS) != LCCR0_Sngl ||
+ (inf->lccr0 & LCCR0_CMS) != LCCR0_Mono))
+ dev_warn(dev, "Double Pixel Data (DPD) mode is only valid in passive mono"
+ " single panel mode\n");
+ if ((inf->lccr0 & LCCR0_PAS) == LCCR0_Act &&
+ (inf->lccr0 & LCCR0_SDS) == LCCR0_Dual)
+ dev_warn(dev, "Dual panel only valid in passive mode\n");
+ if ((inf->lccr0 & LCCR0_PAS) == LCCR0_Pas &&
+ (inf->upper_margin || inf->lower_margin))
+ dev_warn(dev, "Upper and lower margins must be 0 in passive mode\n");
+#endif
+
+ dev_dbg(dev, "got a %dx%dx%d LCD\n",inf->xres, inf->yres, inf->bpp);
+ if (inf->xres == 0 || inf->yres == 0 || inf->bpp == 0) {
+ dev_err(dev, "Invalid resolution or bit depth\n");
+ ret = -EINVAL;
+ goto failed;
+ }
+ pxafb_backlight_power = inf->pxafb_backlight_power;
+ pxafb_lcd_power = inf->pxafb_lcd_power;
+ fbi = pxafb_init_fbinfo(dev);
+ if (!fbi) {
+ dev_err(dev, "Failed to initialize framebuffer device\n");
+ ret = -ENOMEM; // only reason for pxafb_init_fbinfo to fail is kmalloc
+ goto failed;
+ }
+
+ /* Initialize video memory */
+ ret = pxafb_map_video_memory(fbi);
+ if (ret) {
+ dev_err(dev, "Failed to allocate video RAM: %d\n", ret);
+ ret = -ENOMEM;
+ goto failed;
+ }
+ /* enable LCD controller clock */
+ pxa_set_cken(CKEN16_LCD, 1);
+
+ ret = request_irq(IRQ_LCD, pxafb_handle_irq, SA_INTERRUPT, "LCD", fbi);
+ if (ret) {
+ dev_err(dev, "request_irq failed: %d\n", ret);
+ ret = -EBUSY;
+ goto failed;
+ }
+
+ /*
+ * This makes sure that our colour bitfield
+ * descriptors are correctly initialised.
+ */
+ pxafb_check_var(&fbi->fb.var, &fbi->fb);
+ pxafb_set_par(&fbi->fb);
+
+ dev_set_drvdata(dev, fbi);
+
+ ret = register_framebuffer(&fbi->fb);
+ if (ret < 0) {
+ dev_err(dev, "Failed to register framebuffer device: %d\n", ret);
+ goto failed;
+ }
+
+#ifdef CONFIG_PM
+ // TODO
+#endif
+
+#ifdef CONFIG_CPU_FREQ
+ fbi->freq_transition.notifier_call = pxafb_freq_transition;
+ fbi->freq_policy.notifier_call = pxafb_freq_policy;
+ cpufreq_register_notifier(&fbi->freq_transition, CPUFREQ_TRANSITION_NOTIFIER);
+ cpufreq_register_notifier(&fbi->freq_policy, CPUFREQ_POLICY_NOTIFIER);
+#endif
+
+ /*
+ * Ok, now enable the LCD controller
+ */
+ set_ctrlr_state(fbi, C_ENABLE);
+
+ return 0;
+
+failed:
+ dev_set_drvdata(dev, NULL);
+ if (fbi)
+ kfree(fbi);
+ return ret;
+}
+
+static struct device_driver pxafb_driver = {
+ .name = "pxa2xx-fb",
+ .bus = &platform_bus_type,
+ .probe = pxafb_probe,
+#ifdef CONFIG_PM
+ .suspend = pxafb_suspend,
+ .resume = pxafb_resume,
+#endif
+};
+
+#ifndef MODULE
+int __devinit pxafb_setup(char *options)
+{
+# ifdef CONFIG_FB_PXA_PARAMETERS
+ strlcpy(g_options, options, sizeof(g_options));
+# endif
+ return 0;
+}
+#else
+# ifdef CONFIG_FB_PXA_PARAMETERS
+module_param_string(options, g_options, sizeof(g_options), 0);
+MODULE_PARM_DESC(options, "LCD parameters (see Documentation/fb/pxafb.txt)");
+# endif
+#endif
+
+int __devinit pxafb_init(void)
+{
+#ifndef MODULE
+ char *option = NULL;
+
+ if (fb_get_options("pxafb", &option))
+ return -ENODEV;
+ pxafb_setup(option);
+#endif
+ return driver_register(&pxafb_driver);
+}
+
+module_init(pxafb_init);
+
+MODULE_DESCRIPTION("loadable framebuffer driver for PXA");
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * linux/drivers/video/riva/fbdev-i2c.c - nVidia i2c
+ *
+ * Maintained by Ani Joshi <ajoshi@shell.unixbox.com>
+ *
+ * Copyright 2004 Antonino A. Daplas <adaplas @pol.net>
+ *
+ * Based on radeonfb-i2c.c
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file COPYING in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/delay.h>
+#include <linux/pci.h>
+#include <linux/fb.h>
+
+#include <asm/io.h>
+
+#include "rivafb.h"
+#include "../edid.h"
+
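+/* 0x50 is the standard I2C address of a monitor's DDC/EDID EEPROM */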
+#define RIVA_DDC 0x50
+
+static void riva_gpio_setscl(void* data, int state)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ val = VGA_RD08(par->riva.PCIO, 0x3d5) & 0xf0;
+
+ if (state)
+ val |= 0x20;
+ else
+ val &= ~0x20;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ VGA_WR08(par->riva.PCIO, 0x3d5, val | 0x1);
+}
+
+static void riva_gpio_setsda(void* data, int state)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ val = VGA_RD08(par->riva.PCIO, 0x3d5) & 0xf0;
+
+ if (state)
+ val |= 0x10;
+ else
+ val &= ~0x10;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base + 1);
+ VGA_WR08(par->riva.PCIO, 0x3d5, val | 0x1);
+}
+
+static int riva_gpio_getscl(void* data)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val = 0;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base);
+ if (VGA_RD08(par->riva.PCIO, 0x3d5) & 0x04)
+ val = 1;
+
+ return val;
+}
+
+static int riva_gpio_getsda(void* data)
+{
+ struct riva_i2c_chan *chan = (struct riva_i2c_chan *)data;
+ struct riva_par *par = chan->par;
+ u32 val = 0;
+
+ VGA_WR08(par->riva.PCIO, 0x3d4, chan->ddc_base);
+ if (VGA_RD08(par->riva.PCIO, 0x3d5) & 0x08)
+ val = 1;
+
+ return val;
+}
+
+#define I2C_ALGO_RIVA 0x0e0000
+static int riva_setup_i2c_bus(struct riva_i2c_chan *chan, const char *name)
+{
+ int rc;
+
+ strcpy(chan->adapter.name, name);
+ chan->adapter.owner = THIS_MODULE;
+ chan->adapter.id = I2C_ALGO_RIVA;
+ chan->adapter.algo_data = &chan->algo;
+ chan->adapter.dev.parent = &chan->par->pdev->dev;
+ chan->algo.setsda = riva_gpio_setsda;
+ chan->algo.setscl = riva_gpio_setscl;
+ chan->algo.getsda = riva_gpio_getsda;
+ chan->algo.getscl = riva_gpio_getscl;
+ chan->algo.udelay = 40;
+ chan->algo.mdelay = 5;
+ chan->algo.timeout = 20;
+ chan->algo.data = chan;
+
+ i2c_set_adapdata(&chan->adapter, chan);
+
+ /* Raise SCL and SDA */
+ riva_gpio_setsda(chan, 1);
+ riva_gpio_setscl(chan, 1);
+ udelay(20);
+
+ rc = i2c_bit_add_bus(&chan->adapter);
+ if (rc == 0)
+ dev_dbg(&chan->par->pdev->dev, "I2C bus %s registered.\n", name);
+ else
+ dev_warn(&chan->par->pdev->dev, "Failed to register I2C bus %s.\n", name);
+ return rc;
+}
+
+void riva_create_i2c_busses(struct riva_par *par)
+{
+ par->chan[0].par = par;
+ par->chan[1].par = par;
+ par->chan[2].par = par;
+
+ par->bus = 0;
+
+ switch ((par->pdev->device >> 4) & 0xff) {
+ case 0x17:
+ case 0x18:
+ case 0x25:
+ case 0x28:
+ case 0x30:
+ case 0x31:
+ case 0x32:
+ case 0x33:
+ case 0x34:
+ par->chan[2].ddc_base = 0x50;
+ par->bus++;
+ riva_setup_i2c_bus(&par->chan[2], "BUS3");
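+ /* fall through: these chips have BUS2 and BUS1 as well */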
+ case 0x04:
+ case 0x05:
+ case 0x10:
+ case 0x11:
+ case 0x15:
+ case 0x20:
+ par->chan[1].ddc_base = 0x36;
+ par->bus++;
+ riva_setup_i2c_bus(&par->chan[1], "BUS2");
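+ /* fall through: BUS1 is present on all supported chips */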
+ case 0x03:
+ par->chan[0].ddc_base = 0x3e;
+ par->bus++;
+ riva_setup_i2c_bus(&par->chan[0], "BUS1");
+ }
+}
+
+void riva_delete_i2c_busses(struct riva_par *par)
+{
+ if (par->chan[0].par)
+ i2c_bit_del_bus(&par->chan[0].adapter);
+ par->chan[0].par = NULL;
+
+ if (par->chan[1].par)
+ i2c_bit_del_bus(&par->chan[1].adapter);
+ par->chan[1].par = NULL;
+
+ /* chan[2] (BUS3) is registered on newer chips; delete it too */
+ if (par->chan[2].par)
+ i2c_bit_del_bus(&par->chan[2].adapter);
+ par->chan[2].par = NULL;
+}
+
+static u8 *riva_do_probe_i2c_edid(struct riva_i2c_chan *chan)
+{
+ u8 start = 0x0;
+ struct i2c_msg msgs[] = {
+ {
+ .addr = RIVA_DDC,
+ .len = 1,
+ .buf = &start,
+ }, {
+ .addr = RIVA_DDC,
+ .flags = I2C_M_RD,
+ .len = EDID_LENGTH,
+ },
+ };
+ u8 *buf;
+
+ buf = kmalloc(EDID_LENGTH, GFP_KERNEL);
+ if (!buf) {
+ dev_warn(&chan->par->pdev->dev, "Out of memory!\n");
+ return NULL;
+ }
+ msgs[1].buf = buf;
+
+ if (i2c_transfer(&chan->adapter, msgs, 2) == 2)
+ return buf;
+ dev_dbg(&chan->par->pdev->dev, "Unable to read EDID block.\n");
+ kfree(buf);
+ return NULL;
+}
+
+int riva_probe_i2c_connector(struct riva_par *par, int conn, u8 **out_edid)
+{
+ u8 *edid = NULL;
+ int i;
+
+ for (i = 0; i < 3; i++) {
+ /* Do the real work */
+ edid = riva_do_probe_i2c_edid(&par->chan[conn-1]);
+ if (edid)
+ break;
+ }
+ if (out_edid)
+ *out_edid = edid;
+ if (!edid)
+ return 1;
+
+ return 0;
+}
+
--- /dev/null
+menu "Dallas's 1-wire bus"
+
+config W1
+ tristate "Dallas's 1-wire support"
+ ---help---
+ Dallas's 1-wire bus is useful for connecting slow 1-pin devices
+ such as iButtons and thermal sensors.
+
+ If you want W1 support, you should say Y here.
+
+ This W1 support can also be built as a module. If so, the module
+ will be called wire.ko.
+
+config W1_MATROX
+ tristate "Matrox G400 transport layer for 1-wire"
+ depends on W1 && PCI
+ help
+ Say Y here if you want to communicate with your 1-wire devices
+ using Matrox's G400 GPIO pins.
+
+ This support is also available as a module. If so, the module
+ will be called matrox_w1.ko.
+
+config W1_DS9490
+ tristate "DS9490R transport layer driver"
+ depends on W1 && USB
+ help
+ Say Y here if you want to have a driver for the DS9490R USB <-> W1 bridge.
+
+ This support is also available as a module. If so, the module
+ will be called ds9490r.ko.
+
+config W1_DS9490_BRIDGE
+ tristate "DS9490R USB <-> W1 transport layer for 1-wire"
+ depends on W1_DS9490
+ help
+ Say Y here if you want to communicate with your 1-wire devices
+ using DS9490R USB bridge.
+
+ This support is also available as a module. If so, the module
+ will be called ds_w1_bridge.ko.
+
+config W1_THERM
+ tristate "Thermal family implementation"
+ depends on W1
+ help
+ Say Y here if you want to connect 1-wire thermal sensors to your
+ wire.
+
+config W1_SMEM
+ tristate "Simple 64bit memory family implementation"
+ depends on W1
+ help
+ Say Y here if you want to connect 1-wire
+ simple 64bit memory ROMs (ds2401/ds2411/ds1990*) to your wire.
+
+endmenu
--- /dev/null
+#
+# Makefile for the Dallas 1-wire bus.
+#
+
+ifneq ($(CONFIG_NET), y)
+EXTRA_CFLAGS += -DNETLINK_DISABLED
+endif
+
+obj-$(CONFIG_W1) += wire.o
+wire-objs := w1.o w1_int.o w1_family.o w1_netlink.o w1_io.o
+
+obj-$(CONFIG_W1_MATROX) += matrox_w1.o
+obj-$(CONFIG_W1_THERM) += w1_therm.o
+obj-$(CONFIG_W1_SMEM) += w1_smem.o
+
+obj-$(CONFIG_W1_DS9490) += ds9490r.o
+ds9490r-objs := dscore.o
+
+obj-$(CONFIG_W1_DS9490_BRIDGE) += ds_w1_bridge.o
+
--- /dev/null
+/*
+ * matrox_w1.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/atomic.h>
+#include <asm/types.h>
+#include <asm/io.h>
+
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <linux/slab.h>
+#include <linux/pci_ids.h>
+#include <linux/pci.h>
+
+#include "w1.h"
+#include "w1_int.h"
+#include "w1_log.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for transport(Dallas 1-wire prtocol) over VGA DDC(matrox gpio).");
+
+static struct pci_device_id matrox_w1_tbl[] = {
+ { PCI_DEVICE(PCI_VENDOR_ID_MATROX, PCI_DEVICE_ID_MATROX_G400) },
+ { },
+};
+MODULE_DEVICE_TABLE(pci, matrox_w1_tbl);
+
+static int __devinit matrox_w1_probe(struct pci_dev *, const struct pci_device_id *);
+static void __devexit matrox_w1_remove(struct pci_dev *);
+
+static struct pci_driver matrox_w1_pci_driver = {
+ .name = "matrox_w1",
+ .id_table = matrox_w1_tbl,
+ .probe = matrox_w1_probe,
+ .remove = __devexit_p(matrox_w1_remove),
+};
+
+/*
+ * Matrox G400 DDC registers.
+ */
+
+#define MATROX_G400_DDC_CLK (1<<4)
+#define MATROX_G400_DDC_DATA (1<<1)
+
+#define MATROX_BASE 0x3C00
+#define MATROX_STATUS 0x1e14
+
+#define MATROX_PORT_INDEX_OFFSET 0x00
+#define MATROX_PORT_DATA_OFFSET 0x0A
+
+#define MATROX_GET_CONTROL 0x2A
+#define MATROX_GET_DATA 0x2B
+#define MATROX_CURSOR_CTL 0x06
+
+struct matrox_device
+{
+ void __iomem *base_addr;
+ void __iomem *port_index;
+ void __iomem *port_data;
+ u8 data_mask;
+
+ unsigned long phys_addr;
+ void __iomem *virt_addr;
+ unsigned long found;
+
+ struct w1_bus_master *bus_master;
+};
+
+static u8 matrox_w1_read_ddc_bit(unsigned long);
+static void matrox_w1_write_ddc_bit(unsigned long, u8);
+
+/*
+ * These functions read and write the DDC data bit.
+ *
+ * We use tristate pins, since I can't find any open-drain pin on the
+ * whole motherboard. Unfortunately we can't connect to Intel's 82801xx
+ * IO controller, which has plenty of unused (or maybe not) GPIOs, since
+ * we don't know the motherboard schema.
+ *
+ * I've heard that the PIIX also has an open-drain pin.
+ *
+ * Port mapping.
+ */
+static __inline__ u8 matrox_w1_read_reg(struct matrox_device *dev, u8 reg)
+{
+ u8 ret;
+
+ writeb(reg, dev->port_index);
+ ret = readb(dev->port_data);
+ barrier();
+
+ return ret;
+}
+
+static __inline__ void matrox_w1_write_reg(struct matrox_device *dev, u8 reg, u8 val)
+{
+ writeb(reg, dev->port_index);
+ writeb(val, dev->port_data);
+ wmb();
+}
+
+static void matrox_w1_write_ddc_bit(unsigned long data, u8 bit)
+{
+ u8 ret;
+ struct matrox_device *dev = (struct matrox_device *) data;
+
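+ /*
+ * Invert: a logical 1 is written by clearing the mask bit (which
+ * appears to release the tri-stated line), a logical 0 by setting
+ * it (driving the line low) - emulating open-drain behaviour.
+ */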
+ if (bit)
+ bit = 0;
+ else
+ bit = dev->data_mask;
+
+ ret = matrox_w1_read_reg(dev, MATROX_GET_CONTROL);
+ matrox_w1_write_reg(dev, MATROX_GET_CONTROL, ((ret & ~dev->data_mask) | bit));
+ matrox_w1_write_reg(dev, MATROX_GET_DATA, 0x00);
+}
+
+static u8 matrox_w1_read_ddc_bit(unsigned long data)
+{
+ u8 ret;
+ struct matrox_device *dev = (struct matrox_device *) data;
+
+ ret = matrox_w1_read_reg(dev, MATROX_GET_DATA);
+
+ return ret;
+}
+
+static void matrox_w1_hw_init(struct matrox_device *dev)
+{
+ matrox_w1_write_reg(dev, MATROX_GET_DATA, 0xFF);
+ matrox_w1_write_reg(dev, MATROX_GET_CONTROL, 0x00);
+}
+
+static int __devinit matrox_w1_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
+{
+ struct matrox_device *dev;
+ int err;
+
+ assert(pdev != NULL);
+ assert(ent != NULL);
+
+ if (pdev->vendor != PCI_VENDOR_ID_MATROX || pdev->device != PCI_DEVICE_ID_MATROX_G400)
+ return -ENODEV;
+
+ dev = kmalloc(sizeof(struct matrox_device) +
+ sizeof(struct w1_bus_master), GFP_KERNEL);
+ if (!dev) {
+ dev_err(&pdev->dev,
+ "%s: Failed to create new matrox_device object.\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ memset(dev, 0, sizeof(struct matrox_device) + sizeof(struct w1_bus_master));
+
+ dev->bus_master = (struct w1_bus_master *)(dev + 1);
+
+ /*
+ * True for G400, for some other we need resource 0, see drivers/video/matrox/matroxfb_base.c
+ */
+
+ dev->phys_addr = pci_resource_start(pdev, 1);
+
+ dev->virt_addr = ioremap_nocache(dev->phys_addr, 16384);
+ if (!dev->virt_addr) {
+ dev_err(&pdev->dev, "%s: failed to ioremap(0x%lx, %d).\n",
+ __func__, dev->phys_addr, 16384);
+ err = -EIO;
+ goto err_out_free_device;
+ }
+
+ dev->base_addr = dev->virt_addr + MATROX_BASE;
+ dev->port_index = dev->base_addr + MATROX_PORT_INDEX_OFFSET;
+ dev->port_data = dev->base_addr + MATROX_PORT_DATA_OFFSET;
+ dev->data_mask = (MATROX_G400_DDC_DATA);
+
+ matrox_w1_hw_init(dev);
+
+ dev->bus_master->data = (unsigned long) dev;
+ dev->bus_master->read_bit = &matrox_w1_read_ddc_bit;
+ dev->bus_master->write_bit = &matrox_w1_write_ddc_bit;
+
+ err = w1_add_master_device(dev->bus_master);
+ if (err)
+ goto err_out_free_device;
+
+ pci_set_drvdata(pdev, dev);
+
+ dev->found = 1;
+
+ dev_info(&pdev->dev, "Matrox G400 GPIO transport layer for 1-wire.\n");
+
+ return 0;
+
+err_out_free_device:
+ kfree(dev);
+
+ return err;
+}
+
+static void __devexit matrox_w1_remove(struct pci_dev *pdev)
+{
+ struct matrox_device *dev = pci_get_drvdata(pdev);
+
+ assert(dev != NULL);
+
+ if (dev->found) {
+ w1_remove_master_device(dev->bus_master);
+ iounmap(dev->virt_addr);
+ }
+ kfree(dev);
+}
+
+static int __init matrox_w1_init(void)
+{
+ return pci_module_init(&matrox_w1_pci_driver);
+}
+
+static void __exit matrox_w1_fini(void)
+{
+ pci_unregister_driver(&matrox_w1_pci_driver);
+}
+
+module_init(matrox_w1_init);
+module_exit(matrox_w1_fini);
--- /dev/null
+/*
+ * w1.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/atomic.h>
+
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/list.h>
+#include <linux/interrupt.h>
+#include <linux/spinlock.h>
+#include <linux/timer.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/suspend.h>
+
+#include "w1.h"
+#include "w1_io.h"
+#include "w1_log.h"
+#include "w1_int.h"
+#include "w1_family.h"
+#include "w1_netlink.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for 1-wire Dallas network protocol.");
+
+static int w1_timeout = 10;
+int w1_max_slave_count = 10;
+int w1_max_slave_ttl = 10;
+
+module_param_named(timeout, w1_timeout, int, 0);
+module_param_named(max_slave_count, w1_max_slave_count, int, 0);
+module_param_named(slave_ttl, w1_max_slave_ttl, int, 0);
+
+spinlock_t w1_mlock = SPIN_LOCK_UNLOCKED;
+LIST_HEAD(w1_masters);
+
+static pid_t control_thread;
+static int control_needs_exit;
+static DECLARE_COMPLETION(w1_control_complete);
+static DECLARE_WAIT_QUEUE_HEAD(w1_control_wait);
+
+static int w1_master_match(struct device *dev, struct device_driver *drv)
+{
+ return 1;
+}
+
+static int w1_master_probe(struct device *dev)
+{
+ return -ENODEV;
+}
+
+static int w1_master_remove(struct device *dev)
+{
+ return 0;
+}
+
+static void w1_master_release(struct device *dev)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+
+ complete(&md->dev_released);
+}
+
+static void w1_slave_release(struct device *dev)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+
+ complete(&sl->dev_released);
+}
+
+static ssize_t w1_default_read_name(struct device *dev, char *buf)
+{
+ return sprintf(buf, "No family registered.\n");
+}
+
+static ssize_t w1_default_read_bin(struct kobject *kobj, char *buf, loff_t off,
+ size_t count)
+{
+ return sprintf(buf, "No family registered.\n");
+}
+
+struct bus_type w1_bus_type = {
+ .name = "w1",
+ .match = w1_master_match,
+};
+
+struct device_driver w1_driver = {
+ .name = "w1_driver",
+ .bus = &w1_bus_type,
+ .probe = w1_master_probe,
+ .remove = w1_master_remove,
+};
+
+struct device w1_device = {
+ .parent = NULL,
+ .bus = &w1_bus_type,
+ .bus_id = "w1 bus master",
+ .driver = &w1_driver,
+ .release = &w1_master_release
+};
+
+static struct device_attribute w1_slave_attribute = {
+ .attr = {
+ .name = "name",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_default_read_name,
+};
+
+static struct device_attribute w1_slave_attribute_val = {
+ .attr = {
+ .name = "value",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_default_read_name,
+};
+
+ssize_t w1_master_attribute_show_name(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of (dev, struct w1_master, dev);
+ ssize_t count;
+
+ if (down_interruptible (&md->mutex))
+ return -EBUSY;
+
+ count = sprintf(buf, "%s\n", md->name);
+
+ up(&md->mutex);
+
+ return count;
+}
+
+ssize_t w1_master_attribute_show_pointer(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ ssize_t count;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ count = sprintf(buf, "0x%p\n", md->bus_master);
+
+ up(&md->mutex);
+ return count;
+}
+
+ssize_t w1_master_attribute_show_timeout(struct device *dev, char *buf)
+{
+ ssize_t count;
+ count = sprintf(buf, "%d\n", w1_timeout);
+ return count;
+}
+
+ssize_t w1_master_attribute_show_max_slave_count(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ ssize_t count;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ count = sprintf(buf, "%d\n", md->max_slave_count);
+
+ up(&md->mutex);
+ return count;
+}
+
+ssize_t w1_master_attribute_show_attempts(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ ssize_t count;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ count = sprintf(buf, "%lu\n", md->attempts);
+
+ up(&md->mutex);
+ return count;
+}
+
+ssize_t w1_master_attribute_show_slave_count(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ ssize_t count;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ count = sprintf(buf, "%d\n", md->slave_count);
+
+ up(&md->mutex);
+ return count;
+}
+
+ssize_t w1_master_attribute_show_slaves(struct device *dev, char *buf)
+{
+ struct w1_master *md = container_of(dev, struct w1_master, dev);
+ int c = PAGE_SIZE;
+
+ if (down_interruptible(&md->mutex))
+ return -EBUSY;
+
+ if (md->slave_count == 0)
+ c -= snprintf(buf + PAGE_SIZE - c, c, "not found.\n");
+ else {
+ struct list_head *ent, *n;
+ struct w1_slave *sl;
+
+ list_for_each_safe(ent, n, &md->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ c -= snprintf(buf + PAGE_SIZE - c, c, "%s\n", sl->name);
+ }
+ }
+
+ up(&md->mutex);
+
+ return PAGE_SIZE - c;
+}
+
+static struct device_attribute w1_master_attribute_slaves = {
+ .attr = {
+ .name = "w1_master_slaves",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE,
+ },
+ .show = &w1_master_attribute_show_slaves,
+};
+static struct device_attribute w1_master_attribute_slave_count = {
+ .attr = {
+ .name = "w1_master_slave_count",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_slave_count,
+};
+static struct device_attribute w1_master_attribute_attempts = {
+ .attr = {
+ .name = "w1_master_attempts",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_attempts,
+};
+static struct device_attribute w1_master_attribute_max_slave_count = {
+ .attr = {
+ .name = "w1_master_max_slave_count",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_max_slave_count,
+};
+static struct device_attribute w1_master_attribute_timeout = {
+ .attr = {
+ .name = "w1_master_timeout",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_timeout,
+};
+static struct device_attribute w1_master_attribute_pointer = {
+ .attr = {
+ .name = "w1_master_pointer",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_pointer,
+};
+static struct device_attribute w1_master_attribute_name = {
+ .attr = {
+ .name = "w1_master_name",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE
+ },
+ .show = &w1_master_attribute_show_name,
+};
+
+static struct bin_attribute w1_slave_bin_attribute = {
+ .attr = {
+ .name = "w1_slave",
+ .mode = S_IRUGO,
+ .owner = THIS_MODULE,
+ },
+ .size = W1_SLAVE_DATA_SIZE,
+ .read = &w1_default_read_bin,
+};
+
+static int __w1_attach_slave_device(struct w1_slave *sl)
+{
+ int err;
+
+ sl->dev.parent = &sl->master->dev;
+ sl->dev.driver = sl->master->driver;
+ sl->dev.bus = &w1_bus_type;
+ sl->dev.release = &w1_slave_release;
+
+ snprintf(&sl->dev.bus_id[0], sizeof(sl->dev.bus_id),
+ "%02x-%012llx",
+ (unsigned int) sl->reg_num.family,
+ (unsigned long long) sl->reg_num.id);
+ snprintf (&sl->name[0], sizeof(sl->name),
+ "%02x-%012llx",
+ (unsigned int) sl->reg_num.family,
+ (unsigned long long) sl->reg_num.id);
+
+ dev_dbg(&sl->dev, "%s: registering %s.\n", __func__,
+ &sl->dev.bus_id[0]);
+
+ err = device_register(&sl->dev);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "Device registration [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ return err;
+ }
+
+ memcpy(&sl->attr_bin, &w1_slave_bin_attribute, sizeof(sl->attr_bin));
+ memcpy(&sl->attr_name, &w1_slave_attribute, sizeof(sl->attr_name));
+ memcpy(&sl->attr_val, &w1_slave_attribute_val, sizeof(sl->attr_val));
+
+ sl->attr_bin.read = sl->family->fops->rbin;
+ sl->attr_name.show = sl->family->fops->rname;
+ sl->attr_val.show = sl->family->fops->rval;
+ sl->attr_val.attr.name = sl->family->fops->rvalname;
+
+ err = device_create_file(&sl->dev, &sl->attr_name);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ err = device_create_file(&sl->dev, &sl->attr_val);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_remove_file(&sl->dev, &sl->attr_name);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ err = sysfs_create_bin_file(&sl->dev.kobj, &sl->attr_bin);
+ if (err < 0) {
+ dev_err(&sl->dev,
+ "sysfs file creation for [%s] failed. err=%d\n",
+ sl->dev.bus_id, err);
+ device_remove_file(&sl->dev, &sl->attr_name);
+ device_remove_file(&sl->dev, &sl->attr_val);
+ device_unregister(&sl->dev);
+ return err;
+ }
+
+ list_add_tail(&sl->w1_slave_entry, &sl->master->slist);
+
+ return 0;
+}
+
+static int w1_attach_slave_device(struct w1_master *dev, struct w1_reg_num *rn)
+{
+ struct w1_slave *sl;
+ struct w1_family *f;
+ int err;
+ struct w1_netlink_msg msg;
+
+ sl = kmalloc(sizeof(struct w1_slave), GFP_KERNEL);
+ if (!sl) {
+ dev_err(&dev->dev,
+ "%s: failed to allocate new slave device.\n",
+ __func__);
+ return -ENOMEM;
+ }
+
+ memset(sl, 0, sizeof(*sl));
+
+ sl->owner = THIS_MODULE;
+ sl->master = dev;
+ set_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
+
+ memcpy(&sl->reg_num, rn, sizeof(sl->reg_num));
+ atomic_set(&sl->refcnt, 0);
+ init_completion(&sl->dev_released);
+
+ spin_lock(&w1_flock);
+ f = w1_family_registered(rn->family);
+ if (!f) {
+ spin_unlock(&w1_flock);
+ dev_info(&dev->dev, "Family %x for %02x.%012llx.%02x is not registered.\n",
+ rn->family, rn->family, rn->id, rn->crc);
+ kfree(sl);
+ return -ENODEV;
+ }
+ __w1_family_get(f);
+ spin_unlock(&w1_flock);
+
+ sl->family = f;
+
+ err = __w1_attach_slave_device(sl);
+ if (err < 0) {
+ dev_err(&dev->dev, "%s: Attaching %s failed.\n", __func__,
+ sl->name);
+ w1_family_put(sl->family);
+ kfree(sl);
+ return err;
+ }
+
+ sl->ttl = dev->slave_ttl;
+ dev->slave_count++;
+
+ memcpy(&msg.id.id, rn, sizeof(msg.id.id));
+ msg.type = W1_SLAVE_ADD;
+ w1_netlink_send(dev, &msg);
+
+ return 0;
+}
+
+static void w1_slave_detach(struct w1_slave *sl)
+{
+ struct w1_netlink_msg msg;
+
+ dev_info(&sl->dev, "%s: detaching %s.\n", __func__, sl->name);
+
+ while (atomic_read(&sl->refcnt)) {
+ printk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n",
+ sl->name, atomic_read(&sl->refcnt));
+
+ if (msleep_interruptible(1000))
+ flush_signals(current);
+ }
+
+	sysfs_remove_bin_file(&sl->dev.kobj, &sl->attr_bin);
+ device_remove_file(&sl->dev, &sl->attr_name);
+ device_remove_file(&sl->dev, &sl->attr_val);
+ device_unregister(&sl->dev);
+ w1_family_put(sl->family);
+
+ memcpy(&msg.id.id, &sl->reg_num, sizeof(msg.id.id));
+ msg.type = W1_SLAVE_REMOVE;
+ w1_netlink_send(sl->master, &msg);
+}
+
+static void w1_search(struct w1_master *dev)
+{
+ u64 last, rn, tmp;
+ int i, count = 0, slave_count;
+ int last_family_desc, last_zero, last_device;
+ int search_bit, id_bit, comp_bit, desc_bit;
+ struct list_head *ent;
+ struct w1_slave *sl;
+ int family_found = 0;
+
+ dev->attempts++;
+
+ search_bit = id_bit = comp_bit = 0;
+ rn = tmp = last = 0;
+ last_device = last_zero = last_family_desc = 0;
+
+ desc_bit = 64;
+
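+	/*
+	 * Standard 1-wire ROM search: each pass reads the 64 id bits of
+	 * all responding slaves, bit by bit.  Whenever both a 0 and a 1
+	 * are seen (a discrepancy), the position is remembered in
+	 * last_zero so the next pass can take the other branch of the
+	 * id tree, until no untried branches remain (last_device).
+	 */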
+ while (!(id_bit && comp_bit) && !last_device
+ && count++ < dev->max_slave_count) {
+ last = rn;
+ rn = 0;
+
+ last_family_desc = 0;
+
+ /*
+ * Reset bus and all 1-wire device state machines
+ * so they can respond to our requests.
+ *
+ * Return 0 - device(s) present, 1 - no devices present.
+ */
+ if (w1_reset_bus(dev)) {
+ dev_info(&dev->dev, "No devices present on the wire.\n");
+ break;
+ }
+
+ w1_write_8(dev, W1_SEARCH);
+ for (i = 0; i < 64; ++i) {
+ /*
+ * Read 2 bits from bus.
+ * All who don't sleep must send ID bit and COMPLEMENT ID bit.
+ * They actually are ANDed between all senders.
+ */
+ id_bit = w1_touch_bit(dev, 1);
+ comp_bit = w1_touch_bit(dev, 1);
+
+ if (id_bit && comp_bit)
+ break;
+
+ if (id_bit == 0 && comp_bit == 0) {
+ if (i == desc_bit)
+ search_bit = 1;
+ else if (i > desc_bit)
+ search_bit = 0;
+ else
+ search_bit = ((last >> i) & 0x1);
+
+ if (search_bit == 0) {
+ last_zero = i;
+ if (last_zero < 9)
+ last_family_desc = last_zero;
+ }
+
+ }
+ else
+ search_bit = id_bit;
+
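+			/* Widen to u64 before shifting so bits above 31 are not lost. */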
+ tmp = search_bit;
+ rn |= (tmp << i);
+
+ /*
+ * Write 1 bit to bus
+ * and make all who don't have "search_bit" in "i"'th position
+ * in it's registration number sleep.
+ */
+ if (dev->bus_master->touch_bit)
+ w1_touch_bit(dev, search_bit);
+ else
+ w1_write_bit(dev, search_bit);
+
+ }
+
+ if (desc_bit == last_zero)
+ last_device = 1;
+
+ desc_bit = last_zero;
+
+ slave_count = 0;
+		list_for_each(ent, &dev->slist) {
+			/* Renamed from "tmp" to avoid shadowing the u64 tmp above. */
+			struct w1_reg_num *found_rn = (struct w1_reg_num *) &rn;
+
+			sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+			if (sl->reg_num.family == found_rn->family &&
+			    sl->reg_num.id == found_rn->id &&
+			    sl->reg_num.crc == found_rn->crc) {
+				set_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
+				break;
+			}
+			else if (sl->reg_num.family == found_rn->family) {
+				family_found = 1;
+				break;
+			}
+
+			slave_count++;
+		}
+
+ if (slave_count == dev->slave_count &&
+ rn && ((rn >> 56) & 0xff) == w1_calc_crc8((u8 *)&rn, 7)) {
+ w1_attach_slave_device(dev, (struct w1_reg_num *) &rn);
+ }
+ }
+}
+
+int w1_create_master_attributes(struct w1_master *dev)
+{
+	if (device_create_file(&dev->dev, &w1_master_attribute_slaves) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_slave_count) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_attempts) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_max_slave_count) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_timeout) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_pointer) < 0 ||
+	    device_create_file(&dev->dev, &w1_master_attribute_name) < 0)
+ return -EINVAL;
+
+ return 0;
+}
+
+void w1_destroy_master_attributes(struct w1_master *dev)
+{
+ device_remove_file(&dev->dev, &w1_master_attribute_slaves);
+ device_remove_file(&dev->dev, &w1_master_attribute_slave_count);
+ device_remove_file(&dev->dev, &w1_master_attribute_attempts);
+ device_remove_file(&dev->dev, &w1_master_attribute_max_slave_count);
+ device_remove_file(&dev->dev, &w1_master_attribute_timeout);
+ device_remove_file(&dev->dev, &w1_master_attribute_pointer);
+ device_remove_file(&dev->dev, &w1_master_attribute_name);
+}
+
+
+int w1_control(void *data)
+{
+ struct w1_slave *sl;
+ struct w1_master *dev;
+ struct list_head *ent, *ment, *n, *mn;
+ int err, have_to_wait = 0, timeout;
+
+ daemonize("w1_control");
+ allow_signal(SIGTERM);
+
+ while (!control_needs_exit || have_to_wait) {
+ have_to_wait = 0;
+
+ timeout = w1_timeout*HZ;
+ do {
+ timeout = interruptible_sleep_on_timeout(&w1_control_wait, timeout);
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+ } while (!signal_pending(current) && (timeout > 0));
+
+ if (signal_pending(current))
+ flush_signals(current);
+
+ list_for_each_safe(ment, mn, &w1_masters) {
+ dev = list_entry(ment, struct w1_master, w1_master_entry);
+
+ if (!control_needs_exit && !dev->need_exit)
+ continue;
+ /*
+ * Little race: we can create thread but not set the flag.
+ * Get a chance for external process to set flag up.
+ */
+ if (!dev->initialized) {
+ have_to_wait = 1;
+ continue;
+ }
+
+ spin_lock(&w1_mlock);
+ list_del(&dev->w1_master_entry);
+ spin_unlock(&w1_mlock);
+
+ if (control_needs_exit) {
+ dev->need_exit = 1;
+
+ err = kill_proc(dev->kpid, SIGTERM, 1);
+ if (err)
+ dev_err(&dev->dev,
+ "Failed to send signal to w1 kernel thread %d.\n",
+ dev->kpid);
+ }
+
+ wait_for_completion(&dev->dev_exited);
+
+ list_for_each_safe(ent, n, &dev->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ if (!sl)
+ dev_warn(&dev->dev,
+ "%s: slave entry is NULL.\n",
+ __func__);
+ else {
+ list_del(&sl->w1_slave_entry);
+
+ w1_slave_detach(sl);
+ kfree(sl);
+ }
+ }
+ w1_destroy_master_attributes(dev);
+ atomic_dec(&dev->refcnt);
+ }
+ }
+
+ complete_and_exit(&w1_control_complete, 0);
+}
+
+int w1_process(void *data)
+{
+ struct w1_master *dev = (struct w1_master *) data;
+ unsigned long timeout;
+ struct list_head *ent, *n;
+ struct w1_slave *sl;
+
+ daemonize("%s", dev->name);
+ allow_signal(SIGTERM);
+
+ while (!dev->need_exit) {
+ timeout = w1_timeout*HZ;
+ do {
+ timeout = interruptible_sleep_on_timeout(&dev->kwait, timeout);
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+ } while (!signal_pending(current) && (timeout > 0));
+
+ if (signal_pending(current))
+ flush_signals(current);
+
+ if (dev->need_exit)
+ break;
+
+ if (!dev->initialized)
+ continue;
+
+ if (down_interruptible(&dev->mutex))
+ continue;
+
+ list_for_each_safe(ent, n, &dev->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+ if (sl)
+ clear_bit(W1_SLAVE_ACTIVE, (long *)&sl->flags);
+ }
+
+ w1_search(dev);
+
+ list_for_each_safe(ent, n, &dev->slist) {
+ sl = list_entry(ent, struct w1_slave, w1_slave_entry);
+
+			if (sl && !test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags) && !--sl->ttl) {
+				list_del(&sl->w1_slave_entry);
+
+				w1_slave_detach(sl);
+				kfree(sl);
+
+				dev->slave_count--;
+			}
+			else if (sl && test_bit(W1_SLAVE_ACTIVE, (unsigned long *)&sl->flags))
+				sl->ttl = dev->slave_ttl;
+ }
+ up(&dev->mutex);
+ }
+
+ atomic_dec(&dev->refcnt);
+ complete_and_exit(&dev->dev_exited, 0);
+
+ return 0;
+}
+
+int w1_init(void)
+{
+ int retval;
+
+ printk(KERN_INFO "Driver for 1-wire Dallas network protocol.\n");
+
+ retval = bus_register(&w1_bus_type);
+ if (retval) {
+ printk(KERN_ERR "Failed to register bus. err=%d.\n", retval);
+ goto err_out_exit_init;
+ }
+
+ retval = driver_register(&w1_driver);
+ if (retval) {
+ printk(KERN_ERR
+ "Failed to register master driver. err=%d.\n",
+ retval);
+ goto err_out_bus_unregister;
+ }
+
+ control_thread = kernel_thread(&w1_control, NULL, 0);
+ if (control_thread < 0) {
+ printk(KERN_ERR "Failed to create control thread. err=%d\n",
+ control_thread);
+ retval = control_thread;
+ goto err_out_driver_unregister;
+ }
+
+ return 0;
+
+err_out_driver_unregister:
+ driver_unregister(&w1_driver);
+
+err_out_bus_unregister:
+ bus_unregister(&w1_bus_type);
+
+err_out_exit_init:
+ return retval;
+}
+
+void w1_fini(void)
+{
+ struct w1_master *dev;
+ struct list_head *ent, *n;
+
+ list_for_each_safe(ent, n, &w1_masters) {
+ dev = list_entry(ent, struct w1_master, w1_master_entry);
+ __w1_remove_master_device(dev);
+ }
+
+ control_needs_exit = 1;
+
+ wait_for_completion(&w1_control_complete);
+
+ driver_unregister(&w1_driver);
+ bus_unregister(&w1_bus_type);
+}
+
+module_init(w1_init);
+module_exit(w1_fini);
+
+EXPORT_SYMBOL(w1_create_master_attributes);
+EXPORT_SYMBOL(w1_destroy_master_attributes);
--- /dev/null
+/*
+ * w1.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_H
+#define __W1_H
+
+struct w1_reg_num
+{
+ __u64 family:8,
+ id:48,
+ crc:8;
+};
+
+#ifdef __KERNEL__
+
+#include <linux/completion.h>
+#include <linux/device.h>
+
+#include <net/sock.h>
+
+#include <asm/semaphore.h>
+
+#include "w1_family.h"
+
+#define W1_MAXNAMELEN 32
+#define W1_SLAVE_DATA_SIZE 128
+
+#define W1_SEARCH 0xF0
+#define W1_CONDITIONAL_SEARCH 0xEC
+#define W1_CONVERT_TEMP 0x44
+#define W1_SKIP_ROM 0xCC
+#define W1_READ_SCRATCHPAD 0xBE
+#define W1_READ_ROM 0x33
+#define W1_READ_PSUPPLY 0xB4
+#define W1_MATCH_ROM 0x55
+
+#define W1_SLAVE_ACTIVE (1<<0)
+
+struct w1_slave
+{
+ struct module *owner;
+ unsigned char name[W1_MAXNAMELEN];
+ struct list_head w1_slave_entry;
+ struct w1_reg_num reg_num;
+ atomic_t refcnt;
+ u8 rom[9];
+ u32 flags;
+ int ttl;
+
+ struct w1_master *master;
+ struct w1_family *family;
+ struct device dev;
+ struct completion dev_released;
+
+ struct bin_attribute attr_bin;
+ struct device_attribute attr_name, attr_val;
+};
+
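+/*
+ * Hardware abstraction for a bus master: only data, read_bit and
+ * write_bit are required.  The remaining callbacks may be left NULL,
+ * in which case the helpers in w1_io.c emulate them bit by bit.
+ */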
+struct w1_bus_master
+{
+ unsigned long data;
+
+ u8 (*read_bit)(unsigned long);
+ void (*write_bit)(unsigned long, u8);
+
+ u8 (*read_byte)(unsigned long);
+ void (*write_byte)(unsigned long, u8);
+
+ u8 (*read_block)(unsigned long, u8 *, int);
+ void (*write_block)(unsigned long, u8 *, int);
+
+ u8 (*touch_bit)(unsigned long, u8);
+
+ u8 (*reset_bus)(unsigned long);
+};
+
+struct w1_master
+{
+ struct list_head w1_master_entry;
+ struct module *owner;
+ unsigned char name[W1_MAXNAMELEN];
+ struct list_head slist;
+ int max_slave_count, slave_count;
+ unsigned long attempts;
+ int slave_ttl;
+ int initialized;
+ u32 id;
+
+ atomic_t refcnt;
+
+ void *priv;
+ int priv_size;
+
+ int need_exit;
+ pid_t kpid;
+ wait_queue_head_t kwait;
+ struct semaphore mutex;
+
+ struct device_driver *driver;
+ struct device dev;
+ struct completion dev_released;
+ struct completion dev_exited;
+
+ struct w1_bus_master *bus_master;
+
+ u32 seq, groups;
+ struct sock *nls;
+};
+
+int w1_create_master_attributes(struct w1_master *);
+void w1_destroy_master_attributes(struct w1_master *);
+
+#endif /* __KERNEL__ */
+
+#endif /* __W1_H */
--- /dev/null
+/*
+ * w1_family.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/spinlock.h>
+#include <linux/list.h>
+#include <linux/delay.h>
+
+#include "w1_family.h"
+
+spinlock_t w1_flock = SPIN_LOCK_UNLOCKED;
+static LIST_HEAD(w1_families);
+
+static int w1_check_family(struct w1_family *f)
+{
+ if (!f->fops->rname || !f->fops->rbin || !f->fops->rval || !f->fops->rvalname)
+ return -EINVAL;
+
+ return 0;
+}
+
+int w1_register_family(struct w1_family *newf)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f;
+ int ret = 0;
+
+ if (w1_check_family(newf))
+ return -EINVAL;
+
+ spin_lock(&w1_flock);
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == newf->fid) {
+ ret = -EEXIST;
+ break;
+ }
+ }
+
+ if (!ret) {
+ atomic_set(&newf->refcnt, 0);
+ newf->need_exit = 0;
+ list_add_tail(&newf->family_entry, &w1_families);
+ }
+
+ spin_unlock(&w1_flock);
+
+ return ret;
+}
+
+void w1_unregister_family(struct w1_family *fent)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f;
+
+ spin_lock(&w1_flock);
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == fent->fid) {
+ list_del(&fent->family_entry);
+ break;
+ }
+ }
+
+ fent->need_exit = 1;
+
+ spin_unlock(&w1_flock);
+
+ while (atomic_read(&fent->refcnt)) {
+ printk(KERN_INFO "Waiting for family %u to become free: refcnt=%d.\n",
+ fent->fid, atomic_read(&fent->refcnt));
+
+ if (msleep_interruptible(1000))
+ flush_signals(current);
+ }
+}
+
+/*
+ * Must be called with w1_flock held.
+ */
+struct w1_family * w1_family_registered(u8 fid)
+{
+ struct list_head *ent, *n;
+ struct w1_family *f = NULL;
+ int ret = 0;
+
+ list_for_each_safe(ent, n, &w1_families) {
+ f = list_entry(ent, struct w1_family, family_entry);
+
+ if (f->fid == fid) {
+ ret = 1;
+ break;
+ }
+ }
+
+ return (ret) ? f : NULL;
+}
+
+void w1_family_put(struct w1_family *f)
+{
+ spin_lock(&w1_flock);
+ __w1_family_put(f);
+ spin_unlock(&w1_flock);
+}
+
+void __w1_family_put(struct w1_family *f)
+{
+ if (atomic_dec_and_test(&f->refcnt))
+ f->need_exit = 1;
+}
+
+void w1_family_get(struct w1_family *f)
+{
+ spin_lock(&w1_flock);
+ __w1_family_get(f);
+ spin_unlock(&w1_flock);
+}
+
+void __w1_family_get(struct w1_family *f)
+{
+ atomic_inc(&f->refcnt);
+}
+
+EXPORT_SYMBOL(w1_family_get);
+EXPORT_SYMBOL(w1_family_put);
+EXPORT_SYMBOL(__w1_family_get);
+EXPORT_SYMBOL(__w1_family_put);
+EXPORT_SYMBOL(w1_family_registered);
+EXPORT_SYMBOL(w1_unregister_family);
+EXPORT_SYMBOL(w1_register_family);
--- /dev/null
+/*
+ * w1_family.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_FAMILY_H
+#define __W1_FAMILY_H
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <asm/atomic.h>
+
+#define W1_FAMILY_DEFAULT 0
+#define W1_FAMILY_THERM 0x10
+#define W1_FAMILY_SMEM 0x01
+
+#define MAXNAMELEN 32
+
+struct w1_family_ops
+{
+ ssize_t (* rname)(struct device *, char *);
+ ssize_t (* rbin)(struct kobject *, char *, loff_t, size_t);
+
+ ssize_t (* rval)(struct device *, char *);
+ unsigned char rvalname[MAXNAMELEN];
+};
+
+struct w1_family
+{
+ struct list_head family_entry;
+ u8 fid;
+
+ struct w1_family_ops *fops;
+
+ atomic_t refcnt;
+ u8 need_exit;
+};
+
+extern spinlock_t w1_flock;
+
+void w1_family_get(struct w1_family *);
+void w1_family_put(struct w1_family *);
+void __w1_family_get(struct w1_family *);
+void __w1_family_put(struct w1_family *);
+struct w1_family * w1_family_registered(u8);
+void w1_unregister_family(struct w1_family *);
+int w1_register_family(struct w1_family *);
+
+#endif /* __W1_FAMILY_H */
--- /dev/null
+/*
+ * w1_int.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/delay.h>
+
+#include "w1.h"
+#include "w1_log.h"
+#include "w1_netlink.h"
+
+static u32 w1_ids = 1;
+
+extern struct device_driver w1_driver;
+extern struct bus_type w1_bus_type;
+extern struct device w1_device;
+extern int w1_max_slave_count;
+extern int w1_max_slave_ttl;
+extern struct list_head w1_masters;
+extern spinlock_t w1_mlock;
+
+extern int w1_process(void *);
+
+struct w1_master * w1_alloc_dev(u32 id, int slave_count, int slave_ttl,
+ struct device_driver *driver, struct device *device)
+{
+ struct w1_master *dev;
+ int err;
+
+ /*
+	 * We are in process context (a kernel thread), so we can sleep.
+ */
+ dev = kmalloc(sizeof(struct w1_master) + sizeof(struct w1_bus_master), GFP_KERNEL);
+ if (!dev) {
+		printk(KERN_ERR
+			"Failed to allocate %zd bytes for new w1 device.\n",
+			sizeof(struct w1_master) + sizeof(struct w1_bus_master));
+ return NULL;
+ }
+
+ memset(dev, 0, sizeof(struct w1_master) + sizeof(struct w1_bus_master));
+
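+	/* The bus master copy lives directly behind the master structure. */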
+ dev->bus_master = (struct w1_bus_master *)(dev + 1);
+
+ dev->owner = THIS_MODULE;
+ dev->max_slave_count = slave_count;
+ dev->slave_count = 0;
+ dev->attempts = 0;
+ dev->kpid = -1;
+ dev->initialized = 0;
+ dev->id = id;
+ dev->slave_ttl = slave_ttl;
+
+ atomic_set(&dev->refcnt, 2);
+
+ INIT_LIST_HEAD(&dev->slist);
+ init_MUTEX(&dev->mutex);
+
+ init_waitqueue_head(&dev->kwait);
+ init_completion(&dev->dev_released);
+ init_completion(&dev->dev_exited);
+
+ memcpy(&dev->dev, device, sizeof(struct device));
+ snprintf(dev->dev.bus_id, sizeof(dev->dev.bus_id),
+ "w1_bus_master%u", dev->id);
+ snprintf(dev->name, sizeof(dev->name), "w1_bus_master%u", dev->id);
+
+ dev->driver = driver;
+
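+	/* Multicast group bitmask used by w1_netlink_send() for event broadcasts. */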
+ dev->groups = 23;
+ dev->seq = 1;
+ dev->nls = netlink_kernel_create(NETLINK_NFLOG, NULL);
+ if (!dev->nls) {
+ printk(KERN_ERR "Failed to create new netlink socket(%u) for w1 master %s.\n",
+ NETLINK_NFLOG, dev->dev.bus_id);
+ }
+
+ err = device_register(&dev->dev);
+ if (err) {
+ printk(KERN_ERR "Failed to register master device. err=%d\n", err);
+ if (dev->nls && dev->nls->sk_socket)
+ sock_release(dev->nls->sk_socket);
+ memset(dev, 0, sizeof(struct w1_master));
+ kfree(dev);
+ dev = NULL;
+ }
+
+ return dev;
+}
+
+void w1_free_dev(struct w1_master *dev)
+{
+ device_unregister(&dev->dev);
+ if (dev->nls && dev->nls->sk_socket)
+ sock_release(dev->nls->sk_socket);
+ memset(dev, 0, sizeof(struct w1_master) + sizeof(struct w1_bus_master));
+ kfree(dev);
+}
+
+int w1_add_master_device(struct w1_bus_master *master)
+{
+ struct w1_master *dev;
+ int retval = 0;
+ struct w1_netlink_msg msg;
+
+ dev = w1_alloc_dev(w1_ids++, w1_max_slave_count, w1_max_slave_ttl, &w1_driver, &w1_device);
+ if (!dev)
+ return -ENOMEM;
+
+ dev->kpid = kernel_thread(&w1_process, dev, 0);
+ if (dev->kpid < 0) {
+ dev_err(&dev->dev,
+ "Failed to create new kernel thread. err=%d\n",
+ dev->kpid);
+ retval = dev->kpid;
+ goto err_out_free_dev;
+ }
+
+ retval = w1_create_master_attributes(dev);
+ if (retval)
+ goto err_out_kill_thread;
+
+ memcpy(dev->bus_master, master, sizeof(struct w1_bus_master));
+
+ dev->initialized = 1;
+
+ spin_lock(&w1_mlock);
+ list_add(&dev->w1_master_entry, &w1_masters);
+ spin_unlock(&w1_mlock);
+
+ msg.id.mst.id = dev->id;
+ msg.id.mst.pid = dev->kpid;
+ msg.type = W1_MASTER_ADD;
+ w1_netlink_send(dev, &msg);
+
+ return 0;
+
+err_out_kill_thread:
+ dev->need_exit = 1;
+ if (kill_proc(dev->kpid, SIGTERM, 1))
+ dev_err(&dev->dev,
+ "Failed to send signal to w1 kernel thread %d.\n",
+ dev->kpid);
+ wait_for_completion(&dev->dev_exited);
+
+err_out_free_dev:
+ w1_free_dev(dev);
+
+ return retval;
+}
+
+void __w1_remove_master_device(struct w1_master *dev)
+{
+ int err;
+ struct w1_netlink_msg msg;
+
+ dev->need_exit = 1;
+ err = kill_proc(dev->kpid, SIGTERM, 1);
+ if (err)
+ dev_err(&dev->dev,
+ "%s: Failed to send signal to w1 kernel thread %d.\n",
+ __func__, dev->kpid);
+
+ while (atomic_read(&dev->refcnt)) {
+ printk(KERN_INFO "Waiting for %s to become free: refcnt=%d.\n",
+ dev->name, atomic_read(&dev->refcnt));
+
+ if (msleep_interruptible(1000))
+ flush_signals(current);
+ }
+
+ msg.id.mst.id = dev->id;
+ msg.id.mst.pid = dev->kpid;
+ msg.type = W1_MASTER_REMOVE;
+ w1_netlink_send(dev, &msg);
+
+ w1_free_dev(dev);
+}
+
+void w1_remove_master_device(struct w1_bus_master *bm)
+{
+	struct w1_master *dev, *found = NULL;
+	struct list_head *ent, *n;
+
+	list_for_each_safe(ent, n, &w1_masters) {
+		dev = list_entry(ent, struct w1_master, w1_master_entry);
+		if (!dev->initialized)
+			continue;
+
+		if (dev->bus_master->data == bm->data) {
+			found = dev;
+			break;
+		}
+	}
+
+	if (!found) {
+		printk(KERN_ERR "Device doesn't exist.\n");
+		return;
+	}
+
+	__w1_remove_master_device(found);
+}
+
+EXPORT_SYMBOL(w1_alloc_dev);
+EXPORT_SYMBOL(w1_free_dev);
+EXPORT_SYMBOL(w1_add_master_device);
+EXPORT_SYMBOL(w1_remove_master_device);
+EXPORT_SYMBOL(__w1_remove_master_device);
--- /dev/null
+/*
+ * w1_int.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_INT_H
+#define __W1_INT_H
+
+#include <linux/kernel.h>
+#include <linux/device.h>
+
+#include "w1.h"
+
+struct w1_master * w1_alloc_dev(u32, int, int, struct device_driver *, struct device *);
+void w1_free_dev(struct w1_master *dev);
+int w1_add_master_device(struct w1_bus_master *);
+void w1_remove_master_device(struct w1_bus_master *);
+void __w1_remove_master_device(struct w1_master *);
+
+#endif /* __W1_INT_H */
--- /dev/null
+/*
+ * w1_io.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/io.h>
+
+#include <linux/delay.h>
+#include <linux/moduleparam.h>
+
+#include "w1.h"
+#include "w1_log.h"
+#include "w1_io.h"
+
+int w1_delay_parm = 1;
+module_param_named(delay_coef, w1_delay_parm, int, 0);
+
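+/*
+ * Lookup table for the Dallas 1-wire CRC8, polynomial
+ * X^8 + X^5 + X^4 + 1 (bits processed least significant first).
+ */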
+static u8 w1_crc8_table[] = {
+ 0, 94, 188, 226, 97, 63, 221, 131, 194, 156, 126, 32, 163, 253, 31, 65,
+ 157, 195, 33, 127, 252, 162, 64, 30, 95, 1, 227, 189, 62, 96, 130, 220,
+ 35, 125, 159, 193, 66, 28, 254, 160, 225, 191, 93, 3, 128, 222, 60, 98,
+ 190, 224, 2, 92, 223, 129, 99, 61, 124, 34, 192, 158, 29, 67, 161, 255,
+ 70, 24, 250, 164, 39, 121, 155, 197, 132, 218, 56, 102, 229, 187, 89, 7,
+ 219, 133, 103, 57, 186, 228, 6, 88, 25, 71, 165, 251, 120, 38, 196, 154,
+ 101, 59, 217, 135, 4, 90, 184, 230, 167, 249, 27, 69, 198, 152, 122, 36,
+ 248, 166, 68, 26, 153, 199, 37, 123, 58, 100, 134, 216, 91, 5, 231, 185,
+ 140, 210, 48, 110, 237, 179, 81, 15, 78, 16, 242, 172, 47, 113, 147, 205,
+ 17, 79, 173, 243, 112, 46, 204, 146, 211, 141, 111, 49, 178, 236, 14, 80,
+ 175, 241, 19, 77, 206, 144, 114, 44, 109, 51, 209, 143, 12, 82, 176, 238,
+ 50, 108, 142, 208, 83, 13, 239, 177, 240, 174, 76, 18, 145, 207, 45, 115,
+ 202, 148, 118, 40, 171, 245, 23, 73, 8, 86, 180, 234, 105, 55, 213, 139,
+ 87, 9, 235, 181, 54, 104, 138, 212, 149, 203, 41, 119, 244, 170, 72, 22,
+ 233, 183, 85, 11, 136, 214, 52, 106, 43, 117, 151, 201, 74, 20, 246, 168,
+ 116, 42, 200, 150, 21, 75, 169, 247, 182, 232, 10, 84, 215, 137, 107, 53
+};
+
+void w1_delay(unsigned long tm)
+{
+ udelay(tm * w1_delay_parm);
+}
+
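+/*
+ * Generate a slot and sample the bus.  Without hardware touch_bit
+ * support this falls back to a plain read slot; callers that must
+ * write a 0 (see w1_search()) use w1_write_bit() directly instead.
+ */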
+u8 w1_touch_bit(struct w1_master *dev, int bit)
+{
+ if (dev->bus_master->touch_bit)
+ return dev->bus_master->touch_bit(dev->bus_master->data, bit);
+ else
+ return w1_read_bit(dev);
+}
+
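+/*
+ * Generate a standard-speed write slot: a short (~6us) low pulse
+ * followed by a long recovery time encodes a 1, a long (~60us) low
+ * pulse followed by a short recovery encodes a 0.
+ */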
+void w1_write_bit(struct w1_master *dev, int bit)
+{
+ if (bit) {
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(6);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(64);
+ } else {
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(60);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(10);
+ }
+}
+
+void w1_write_8(struct w1_master *dev, u8 byte)
+{
+ int i;
+
+ if (dev->bus_master->write_byte)
+ dev->bus_master->write_byte(dev->bus_master->data, byte);
+ else
+ for (i = 0; i < 8; ++i)
+ w1_write_bit(dev, (byte >> i) & 0x1);
+}
+
+u8 w1_read_bit(struct w1_master *dev)
+{
+ int result;
+
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(6);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(9);
+
+ result = dev->bus_master->read_bit(dev->bus_master->data);
+ w1_delay(55);
+
+ return result & 0x1;
+}
+
+u8 w1_read_8(struct w1_master *dev)
+{
+ int i;
+ u8 res = 0;
+
+ if (dev->bus_master->read_byte)
+ res = dev->bus_master->read_byte(dev->bus_master->data);
+ else
+ for (i = 0; i < 8; ++i)
+ res |= (w1_read_bit(dev) << i);
+
+ return res;
+}
+
+void w1_write_block(struct w1_master *dev, u8 *buf, int len)
+{
+ int i;
+
+ if (dev->bus_master->write_block)
+ dev->bus_master->write_block(dev->bus_master->data, buf, len);
+ else
+ for (i = 0; i < len; ++i)
+ w1_write_8(dev, buf[i]);
+}
+
+u8 w1_read_block(struct w1_master *dev, u8 *buf, int len)
+{
+ int i;
+ u8 ret;
+
+ if (dev->bus_master->read_block)
+ ret = dev->bus_master->read_block(dev->bus_master->data, buf, len);
+ else {
+ for (i = 0; i < len; ++i)
+ buf[i] = w1_read_8(dev);
+ ret = len;
+ }
+
+ return ret;
+}
+
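+/*
+ * Issue a reset pulse (bus held low for 480us) and sample the
+ * presence detect ~70us after release.  Returns 0 if at least one
+ * slave answered, 1 otherwise.
+ */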
+int w1_reset_bus(struct w1_master *dev)
+{
+ int result = 0;
+
+ if (dev->bus_master->reset_bus)
+ result = dev->bus_master->reset_bus(dev->bus_master->data) & 0x1;
+ else {
+ dev->bus_master->write_bit(dev->bus_master->data, 0);
+ w1_delay(480);
+ dev->bus_master->write_bit(dev->bus_master->data, 1);
+ w1_delay(70);
+
+ result = dev->bus_master->read_bit(dev->bus_master->data) & 0x1;
+ w1_delay(410);
+ }
+
+ return result;
+}
+
+u8 w1_calc_crc8(u8 *data, int len)
+{
+ u8 crc = 0;
+
+ while (len--)
+ crc = w1_crc8_table[crc ^ *data++];
+
+ return crc;
+}
+
+EXPORT_SYMBOL(w1_write_bit);
+EXPORT_SYMBOL(w1_write_8);
+EXPORT_SYMBOL(w1_read_bit);
+EXPORT_SYMBOL(w1_read_8);
+EXPORT_SYMBOL(w1_reset_bus);
+EXPORT_SYMBOL(w1_calc_crc8);
+EXPORT_SYMBOL(w1_delay);
+EXPORT_SYMBOL(w1_read_block);
+EXPORT_SYMBOL(w1_write_block);
--- /dev/null
+/*
+ * w1_io.h
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_IO_H
+#define __W1_IO_H
+
+#include "w1.h"
+
+void w1_delay(unsigned long);
+u8 w1_touch_bit(struct w1_master *, int);
+void w1_write_bit(struct w1_master *, int);
+void w1_write_8(struct w1_master *, u8);
+u8 w1_read_bit(struct w1_master *);
+u8 w1_read_8(struct w1_master *);
+int w1_reset_bus(struct w1_master *);
+u8 w1_calc_crc8(u8 *, int);
+void w1_write_block(struct w1_master *, u8 *, int);
+u8 w1_read_block(struct w1_master *, u8 *, int);
+
+#endif /* __W1_IO_H */
--- /dev/null
+/*
+ * w1_netlink.c
+ *
+ * Copyright (c) 2003 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/skbuff.h>
+#include <linux/netlink.h>
+
+#include "w1.h"
+#include "w1_log.h"
+#include "w1_netlink.h"
+
+#ifndef NETLINK_DISABLED
+void w1_netlink_send(struct w1_master *dev, struct w1_netlink_msg *msg)
+{
+ unsigned int size;
+ struct sk_buff *skb;
+ struct w1_netlink_msg *data;
+ struct nlmsghdr *nlh;
+
+ if (!dev->nls)
+ return;
+
+ size = NLMSG_SPACE(sizeof(struct w1_netlink_msg));
+
+ skb = alloc_skb(size, GFP_ATOMIC);
+ if (!skb) {
+		dev_err(&dev->dev, "alloc_skb() failed.\n");
+ return;
+ }
+
+ nlh = NLMSG_PUT(skb, 0, dev->seq++, NLMSG_DONE, size - sizeof(*nlh));
+
+ data = (struct w1_netlink_msg *)NLMSG_DATA(nlh);
+
+ memcpy(data, msg, sizeof(struct w1_netlink_msg));
+
+ NETLINK_CB(skb).dst_groups = dev->groups;
+ netlink_broadcast(dev->nls, skb, 0, dev->groups, GFP_ATOMIC);
+
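+/* NLMSG_PUT() jumps here when the skb cannot hold the message. */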
+nlmsg_failure:
+ return;
+}
+#else
+#warning Netlink support is disabled. Please compile with NET support enabled.
+
+void w1_netlink_send(struct w1_master *dev, struct w1_netlink_msg *msg)
+{
+}
+#endif
--- /dev/null
+/*
+ * w1_netlink.h
+ *
+ * Copyright (c) 2003 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __W1_NETLINK_H
+#define __W1_NETLINK_H
+
+#include <asm/types.h>
+
+#include "w1.h"
+
+enum w1_netlink_message_types {
+ W1_SLAVE_ADD = 0,
+ W1_SLAVE_REMOVE,
+ W1_MASTER_ADD,
+ W1_MASTER_REMOVE,
+};
+
+struct w1_netlink_msg
+{
+ __u8 type;
+ __u8 reserved[3];
+ union
+ {
+ struct w1_reg_num id;
+ __u64 w1_id;
+ struct
+ {
+ __u32 id;
+ __u32 pid;
+ } mst;
+ } id;
+};
+
+#ifdef __KERNEL__
+
+void w1_netlink_send(struct w1_master *, struct w1_netlink_msg *);
+
+#endif /* __KERNEL__ */
+#endif /* __W1_NETLINK_H */
--- /dev/null
+/*
+ * w1_therm.c
+ *
+ * Copyright (c) 2004 Evgeniy Polyakov <johnpol@2ka.mipt.ru>
+ *
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <asm/types.h>
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/device.h>
+#include <linux/types.h>
+
+#include "w1.h"
+#include "w1_io.h"
+#include "w1_int.h"
+#include "w1_family.h"
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Evgeniy Polyakov <johnpol@2ka.mipt.ru>");
+MODULE_DESCRIPTION("Driver for 1-wire Dallas network protocol, temperature family.");
+
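+/*
+ * Scratchpads matching these patterns are rejected: 0xaa in byte 0 is
+ * the +85C power-on value a DS18S20 reports before any conversion.
+ */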
+static u8 bad_roms[][9] = {
+ {0xaa, 0x00, 0x4b, 0x46, 0xff, 0xff, 0x0c, 0x10, 0x87},
+ {}
+};
+
+static ssize_t w1_therm_read_name(struct device *, char *);
+static ssize_t w1_therm_read_temp(struct device *, char *);
+static ssize_t w1_therm_read_bin(struct kobject *, char *, loff_t, size_t);
+
+static struct w1_family_ops w1_therm_fops = {
+ .rname = &w1_therm_read_name,
+ .rbin = &w1_therm_read_bin,
+ .rval = &w1_therm_read_temp,
+ .rvalname = "temp1_input",
+};
+
+static ssize_t w1_therm_read_name(struct device *dev, char *buf)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+
+ return sprintf(buf, "%s\n", sl->name);
+}
+
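+/*
+ * Convert a DS18S20 scratchpad into millidegrees Celsius.  The 9-bit
+ * reading in rom[0]/rom[1] has 0.5 C resolution; per the datasheet it
+ * is extended using COUNT_REMAIN (rom[6]) and COUNT_PER_C (rom[7]):
+ *
+ *	T = temp_read - 0.25 + (count_per_c - count_remain) / count_per_c
+ */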
+static inline int w1_convert_temp(u8 rom[9])
+{
+ int t, h;
+
+ if (rom[1] == 0)
+ t = ((s32)rom[0] >> 1)*1000;
+ else
+ t = 1000*(-1*(s32)(0x100-rom[0]) >> 1);
+
+ t -= 250;
+ h = 1000*((s32)rom[7] - (s32)rom[6]);
+ h /= (s32)rom[7];
+ t += h;
+
+ return t;
+}
+
+static ssize_t w1_therm_read_temp(struct device *dev, char *buf)
+{
+ struct w1_slave *sl = container_of(dev, struct w1_slave, dev);
+
+ return sprintf(buf, "%d\n", w1_convert_temp(sl->rom));
+}
+
+static int w1_therm_check_rom(u8 rom[9])
+{
+ int i;
+
+	for (i = 0; i < sizeof(bad_roms)/9; ++i)
+ if (!memcmp(bad_roms[i], rom, 9))
+ return 1;
+
+ return 0;
+}
+
+static ssize_t w1_therm_read_bin(struct kobject *kobj, char *buf, loff_t off, size_t count)
+{
+ struct w1_slave *sl = container_of(container_of(kobj, struct device, kobj),
+ struct w1_slave, dev);
+ struct w1_master *dev = sl->master;
+ u8 rom[9], crc, verdict;
+ int i, max_trying = 10;
+
+ atomic_inc(&sl->refcnt);
+ if (down_interruptible(&sl->master->mutex)) {
+ count = 0;
+ goto out_dec;
+ }
+
+ if (off > W1_SLAVE_DATA_SIZE) {
+ count = 0;
+ goto out;
+ }
+ if (off + count > W1_SLAVE_DATA_SIZE) {
+ count = 0;
+ goto out;
+ }
+
+ memset(buf, 0, count);
+ memset(rom, 0, sizeof(rom));
+
+ count = 0;
+ verdict = 0;
+ crc = 0;
+
+ while (max_trying--) {
+		if (!w1_reset_bus(dev)) {
+			int count = 0;
+			u8 match[9] = {W1_MATCH_ROM, };
+			unsigned long tm;
+
+			memcpy(&match[1], &sl->reg_num, 8);
+
+ w1_write_block(dev, match, 9);
+
+ w1_write_8(dev, W1_CONVERT_TEMP);
+
+ tm = jiffies + msecs_to_jiffies(750);
+			while (time_before(jiffies, tm)) {
+				set_current_state(TASK_INTERRUPTIBLE);
+				schedule_timeout(tm - jiffies);
+
+ if (signal_pending(current))
+ flush_signals(current);
+ }
+
+			if (!w1_reset_bus(dev)) {
+ w1_write_block(dev, match, 9);
+
+ w1_write_8(dev, W1_READ_SCRATCHPAD);
+ if ((count = w1_read_block(dev, rom, 9)) != 9) {
+ dev_warn(&dev->dev, "w1_read_block() returned %d instead of 9.\n", count);
+ }
+
+ crc = w1_calc_crc8(rom, 8);
+
+ if (rom[8] == crc && rom[0])
+ verdict = 1;
+
+ }
+ }
+
+ if (!w1_therm_check_rom(rom))
+ break;
+ }
+
+ for (i = 0; i < 9; ++i)
+ count += sprintf(buf + count, "%02x ", rom[i]);
+ count += sprintf(buf + count, ": crc=%02x %s\n",
+ crc, (verdict) ? "YES" : "NO");
+ if (verdict)
+ memcpy(sl->rom, rom, sizeof(sl->rom));
+ else
+		dev_warn(&dev->dev, "DS18S20 doesn't respond to CONVERT_TEMP.\n");
+
+ for (i = 0; i < 9; ++i)
+ count += sprintf(buf + count, "%02x ", sl->rom[i]);
+
+ count += sprintf(buf + count, "t=%d\n", w1_convert_temp(rom));
+out:
+ up(&dev->mutex);
+out_dec:
+ atomic_dec(&sl->refcnt);
+
+ return count;
+}
+
+static struct w1_family w1_therm_family = {
+ .fid = W1_FAMILY_THERM,
+ .fops = &w1_therm_fops,
+};
+
+static int __init w1_therm_init(void)
+{
+ return w1_register_family(&w1_therm_family);
+}
+
+static void __exit w1_therm_fini(void)
+{
+ w1_unregister_family(&w1_therm_family);
+}
+
+module_init(w1_therm_init);
+module_exit(w1_therm_fini);
--- /dev/null
+/*
+ * linux/fs/nls_ascii.c
+ *
+ * Charset ascii translation tables.
+ * Generated automatically from the Unicode and charset
+ * tables from the Unicode Organization (www.unicode.org).
+ * The Unicode to charset table has only exact mappings.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/nls.h>
+#include <linux/errno.h>
+
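+/* Only 0x00-0x7f are mapped; bytes with the high bit set are invalid in ASCII. */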
+static wchar_t charset2uni[256] = {
+ /* 0x00*/
+ 0x0000, 0x0001, 0x0002, 0x0003,
+ 0x0004, 0x0005, 0x0006, 0x0007,
+ 0x0008, 0x0009, 0x000a, 0x000b,
+ 0x000c, 0x000d, 0x000e, 0x000f,
+ /* 0x10*/
+ 0x0010, 0x0011, 0x0012, 0x0013,
+ 0x0014, 0x0015, 0x0016, 0x0017,
+ 0x0018, 0x0019, 0x001a, 0x001b,
+ 0x001c, 0x001d, 0x001e, 0x001f,
+ /* 0x20*/
+ 0x0020, 0x0021, 0x0022, 0x0023,
+ 0x0024, 0x0025, 0x0026, 0x0027,
+ 0x0028, 0x0029, 0x002a, 0x002b,
+ 0x002c, 0x002d, 0x002e, 0x002f,
+ /* 0x30*/
+ 0x0030, 0x0031, 0x0032, 0x0033,
+ 0x0034, 0x0035, 0x0036, 0x0037,
+ 0x0038, 0x0039, 0x003a, 0x003b,
+ 0x003c, 0x003d, 0x003e, 0x003f,
+ /* 0x40*/
+ 0x0040, 0x0041, 0x0042, 0x0043,
+ 0x0044, 0x0045, 0x0046, 0x0047,
+ 0x0048, 0x0049, 0x004a, 0x004b,
+ 0x004c, 0x004d, 0x004e, 0x004f,
+ /* 0x50*/
+ 0x0050, 0x0051, 0x0052, 0x0053,
+ 0x0054, 0x0055, 0x0056, 0x0057,
+ 0x0058, 0x0059, 0x005a, 0x005b,
+ 0x005c, 0x005d, 0x005e, 0x005f,
+ /* 0x60*/
+ 0x0060, 0x0061, 0x0062, 0x0063,
+ 0x0064, 0x0065, 0x0066, 0x0067,
+ 0x0068, 0x0069, 0x006a, 0x006b,
+ 0x006c, 0x006d, 0x006e, 0x006f,
+ /* 0x70*/
+ 0x0070, 0x0071, 0x0072, 0x0073,
+ 0x0074, 0x0075, 0x0076, 0x0077,
+ 0x0078, 0x0079, 0x007a, 0x007b,
+ 0x007c, 0x007d, 0x007e, 0x007f,
+};
+
+static unsigned char page00[256] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x40-0x47 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x48-0x4f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x50-0x57 */
+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x60-0x67 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x68-0x6f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x70-0x77 */
+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static unsigned char *page_uni2charset[256] = {
+ page00,
+};
+
+static unsigned char charset2lower[256] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x40-0x47 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x48-0x4f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x50-0x57 */
+ 0x78, 0x79, 0x7a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, /* 0x60-0x67 */
+ 0x68, 0x69, 0x6a, 0x6b, 0x6c, 0x6d, 0x6e, 0x6f, /* 0x68-0x6f */
+ 0x70, 0x71, 0x72, 0x73, 0x74, 0x75, 0x76, 0x77, /* 0x70-0x77 */
+ 0x78, 0x79, 0x7a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static unsigned char charset2upper[256] = {
+ 0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, /* 0x00-0x07 */
+ 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f, /* 0x08-0x0f */
+ 0x10, 0x11, 0x12, 0x13, 0x14, 0x15, 0x16, 0x17, /* 0x10-0x17 */
+ 0x18, 0x19, 0x1a, 0x1b, 0x1c, 0x1d, 0x1e, 0x1f, /* 0x18-0x1f */
+ 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26, 0x27, /* 0x20-0x27 */
+ 0x28, 0x29, 0x2a, 0x2b, 0x2c, 0x2d, 0x2e, 0x2f, /* 0x28-0x2f */
+ 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, /* 0x30-0x37 */
+ 0x38, 0x39, 0x3a, 0x3b, 0x3c, 0x3d, 0x3e, 0x3f, /* 0x38-0x3f */
+ 0x40, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x40-0x47 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x48-0x4f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x50-0x57 */
+ 0x58, 0x59, 0x5a, 0x5b, 0x5c, 0x5d, 0x5e, 0x5f, /* 0x58-0x5f */
+ 0x60, 0x41, 0x42, 0x43, 0x44, 0x45, 0x46, 0x47, /* 0x60-0x67 */
+ 0x48, 0x49, 0x4a, 0x4b, 0x4c, 0x4d, 0x4e, 0x4f, /* 0x68-0x6f */
+ 0x50, 0x51, 0x52, 0x53, 0x54, 0x55, 0x56, 0x57, /* 0x70-0x77 */
+ 0x58, 0x59, 0x5a, 0x7b, 0x7c, 0x7d, 0x7e, 0x7f, /* 0x78-0x7f */
+};
+
+static int uni2char(wchar_t uni, unsigned char *out, int boundlen)
+{
+ unsigned char *uni2charset;
+ unsigned char cl = uni & 0x00ff;
+ unsigned char ch = (uni & 0xff00) >> 8;
+
+ if (boundlen <= 0)
+ return -ENAMETOOLONG;
+
+ uni2charset = page_uni2charset[ch];
+ if (uni2charset && uni2charset[cl])
+ out[0] = uni2charset[cl];
+ else
+ return -EINVAL;
+ return 1;
+}
+
+static int char2uni(const unsigned char *rawstring, int boundlen, wchar_t *uni)
+{
+ *uni = charset2uni[*rawstring];
+ if (*uni == 0x0000)
+ return -EINVAL;
+ return 1;
+}
+
+static struct nls_table table = {
+ .charset = "ascii",
+ .uni2char = uni2char,
+ .char2uni = char2uni,
+ .charset2lower = charset2lower,
+ .charset2upper = charset2upper,
+ .owner = THIS_MODULE,
+};
+
+static int __init init_nls_ascii(void)
+{
+ return register_nls(&table);
+}
+
+static void __exit exit_nls_ascii(void)
+{
+ unregister_nls(&table);
+}
+
+module_init(init_nls_ascii)
+module_exit(exit_nls_ascii)
+
+MODULE_LICENSE("Dual BSD/GPL");
--- /dev/null
+/*
+ * collate.c - NTFS kernel collation handling. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation,Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include "collate.h"
+#include "debug.h"
+#include "ntfs.h"
+
+static int ntfs_collate_binary(ntfs_volume *vol,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len)
+{
+ int rc;
+
+ ntfs_debug("Entering.");
+ rc = memcmp(data1, data2, min(data1_len, data2_len));
+ if (!rc && (data1_len != data2_len)) {
+ if (data1_len < data2_len)
+ rc = -1;
+ else
+ rc = 1;
+ }
+ ntfs_debug("Done, returning %i", rc);
+ return rc;
+}
+
+static int ntfs_collate_ntofs_ulong(ntfs_volume *vol,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len)
+{
+ int rc;
+ u32 d1, d2;
+
+ ntfs_debug("Entering.");
+	/* FIXME: We don't really want to BUG() here. */
+ BUG_ON(data1_len != data2_len);
+ BUG_ON(data1_len != 4);
+ d1 = le32_to_cpup(data1);
+ d2 = le32_to_cpup(data2);
+ if (d1 < d2)
+ rc = -1;
+ else {
+ if (d1 == d2)
+ rc = 0;
+ else
+ rc = 1;
+ }
+ ntfs_debug("Done, returning %i", rc);
+ return rc;
+}
+
+typedef int (*ntfs_collate_func_t)(ntfs_volume *, const void *, const int,
+ const void *, const int);
+
+static ntfs_collate_func_t ntfs_do_collate0x0[3] = {
+ ntfs_collate_binary,
+ NULL/*ntfs_collate_file_name*/,
+ NULL/*ntfs_collate_unicode_string*/,
+};
+
+static ntfs_collate_func_t ntfs_do_collate0x1[4] = {
+ ntfs_collate_ntofs_ulong,
+ NULL/*ntfs_collate_ntofs_sid*/,
+ NULL/*ntfs_collate_ntofs_security_hash*/,
+ NULL/*ntfs_collate_ntofs_ulongs*/,
+};
+
+/**
+ * ntfs_collate - collate two data items using a specified collation rule
+ * @vol: ntfs volume to which the data items belong
+ * @cr: collation rule to use when comparing the items
+ * @data1: first data item to collate
+ * @data1_len: length in bytes of @data1
+ * @data2: second data item to collate
+ * @data2_len: length in bytes of @data2
+ *
+ * Collate the two data items @data1 and @data2 using the collation rule @cr
+ * and return -1, 0, or 1 if @data1 is found, respectively, to collate before,
+ * to match, or to collate after @data2.
+ *
+ * For speed we use the collation rule @cr as an index into two tables of
+ * function pointers to call the appropriate collation function.
+ */
+int ntfs_collate(ntfs_volume *vol, COLLATION_RULE cr,
+		const void *data1, const int data1_len,
+		const void *data2, const int data2_len)
+{
+	int i;
+
+ ntfs_debug("Entering.");
+ /*
+ * FIXME: At the moment we only support COLLATION_BINARY and
+ * COLLATION_NTOFS_ULONG, so we BUG() for everything else for now.
+ */
+ BUG_ON(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG);
+ i = le32_to_cpu(cr);
+ BUG_ON(i < 0);
+ if (i <= 0x02)
+ return ntfs_do_collate0x0[i](vol, data1, data1_len,
+ data2, data2_len);
+ BUG_ON(i < 0x10);
+ i -= 0x10;
+ if (likely(i <= 3))
+ return ntfs_do_collate0x1[i](vol, data1, data1_len,
+ data2, data2_len);
+ BUG();
+ return 0;
+}
--- /dev/null
+/*
+ * collate.h - Defines for NTFS kernel collation handling. Part of the
+ * Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation,Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_NTFS_COLLATE_H
+#define _LINUX_NTFS_COLLATE_H
+
+#include "types.h"
+#include "volume.h"
+
+static inline BOOL ntfs_is_collation_rule_supported(COLLATION_RULE cr)
+{
+ int i;
+
+ /*
+ * FIXME: At the moment we only support COLLATION_BINARY and
+ * COLLATION_NTOFS_ULONG, so we return false for everything else for
+ * now.
+ */
+ if (unlikely(cr != COLLATION_BINARY && cr != COLLATION_NTOFS_ULONG))
+ return FALSE;
+ i = le32_to_cpu(cr);
+ if (likely(((i >= 0) && (i <= 0x02)) ||
+ ((i >= 0x10) && (i <= 0x13))))
+ return TRUE;
+ return FALSE;
+}
+
+extern int ntfs_collate(ntfs_volume *vol, COLLATION_RULE cr,
+ const void *data1, const int data1_len,
+ const void *data2, const int data2_len);
+
+#endif /* _LINUX_NTFS_COLLATE_H */
--- /dev/null
+/*
+ * index.c - NTFS kernel index handling. Part of the Linux-NTFS project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation,Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include "aops.h"
+#include "collate.h"
+#include "debug.h"
+#include "index.h"
+#include "ntfs.h"
+
+/**
+ * ntfs_index_ctx_get - allocate and initialize a new index context
+ * @idx_ni: ntfs index inode with which to initialize the context
+ *
+ * Allocate a new index context, initialize it with @idx_ni and return it.
+ * Return NULL if allocation failed.
+ *
+ * Locking: Caller must hold i_sem on the index inode.
+ */
+ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni)
+{
+ ntfs_index_context *ictx;
+
+ ictx = kmem_cache_alloc(ntfs_index_ctx_cache, SLAB_NOFS);
+ if (ictx) {
+ ictx->idx_ni = idx_ni;
+ ictx->entry = NULL;
+ ictx->data = NULL;
+ ictx->data_len = 0;
+ ictx->is_in_root = 0;
+ ictx->ir = NULL;
+ ictx->actx = NULL;
+ ictx->base_ni = NULL;
+ ictx->ia = NULL;
+ ictx->page = NULL;
+ }
+ return ictx;
+}
+
+/**
+ * ntfs_index_ctx_put - release an index context
+ * @ictx: index context to free
+ *
+ * Release the index context @ictx, releasing all associated resources.
+ *
+ * Locking: Caller must hold i_sem on the index inode.
+ */
+void ntfs_index_ctx_put(ntfs_index_context *ictx)
+{
+ if (ictx->entry) {
+ if (ictx->is_in_root) {
+ if (ictx->actx)
+ ntfs_attr_put_search_ctx(ictx->actx);
+ if (ictx->base_ni)
+ unmap_mft_record(ictx->base_ni);
+ } else {
+ struct page *page = ictx->page;
+ if (page) {
+ BUG_ON(!PageLocked(page));
+ unlock_page(page);
+ ntfs_unmap_page(page);
+ }
+ }
+ }
+ kmem_cache_free(ntfs_index_ctx_cache, ictx);
+ return;
+}
+
+/**
+ * ntfs_index_lookup - find a key in an index and return its index entry
+ * @key: [IN] key for which to search in the index
+ * @key_len: [IN] length of @key in bytes
+ * @ictx: [IN/OUT] context describing the index and the returned entry
+ *
+ * Before calling ntfs_index_lookup(), @ictx must have been obtained from a
+ * call to ntfs_index_ctx_get().
+ *
+ * Look for the @key in the index specified by the index lookup context @ictx.
+ * ntfs_index_lookup() walks the contents of the index looking for the @key.
+ *
+ * If the @key is found in the index, 0 is returned and @ictx is setup to
+ * describe the index entry containing the matching @key. @ictx->entry is the
+ * index entry and @ictx->data and @ictx->data_len are the index entry data and
+ * its length in bytes, respectively.
+ *
+ * If the @key is not found in the index, -ENOENT is returned and @ictx is
+ * setup to describe the index entry whose key collates immediately after the
+ * search @key, i.e. this is the position in the index at which an index entry
+ * with a key of @key would need to be inserted.
+ *
+ * If an error occurs return the negative error code and @ictx is left
+ * untouched.
+ *
+ * When finished with the entry and its data, call ntfs_index_ctx_put() to free
+ * the context and other associated resources.
+ *
+ * If the index entry was modified, call flush_dcache_index_entry_page()
+ * immediately after the modification and either ntfs_index_entry_mark_dirty()
+ * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
+ * ensure that the changes are written to disk.
+ *
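+ * A typical lookup might look like this (a sketch; process() stands in
+ * for whatever the caller does with the returned entry data):
+ *
+ *	ictx = ntfs_index_ctx_get(idx_ni);
+ *	if (!ictx)
+ *		return -ENOMEM;
+ *	err = ntfs_index_lookup(&key, sizeof(key), ictx);
+ *	if (!err)
+ *		process(ictx->data, ictx->data_len);
+ *	ntfs_index_ctx_put(ictx);
+ *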
+ * Locking: - Caller must hold i_sem on the index inode.
+ * - Each page cache page in the index allocation mapping must be
+ * locked whilst being accessed otherwise we may find a corrupt
+ * page due to it being under ->writepage at the moment which
+ * applies the mst protection fixups before writing out and then
+ * removes them again after the write is complete after which it
+ * unlocks the page.
+ */
+int ntfs_index_lookup(const void *key, const int key_len,
+ ntfs_index_context *ictx)
+{
+ VCN vcn, old_vcn;
+ ntfs_inode *idx_ni = ictx->idx_ni;
+ ntfs_volume *vol = idx_ni->vol;
+ struct super_block *sb = vol->sb;
+ ntfs_inode *base_ni = idx_ni->ext.base_ntfs_ino;
+ MFT_RECORD *m;
+ INDEX_ROOT *ir;
+ INDEX_ENTRY *ie;
+ INDEX_ALLOCATION *ia;
+ u8 *index_end, *kaddr;
+ ntfs_attr_search_ctx *actx;
+ struct address_space *ia_mapping;
+ struct page *page;
+ int rc, err = 0;
+
+ ntfs_debug("Entering.");
+ BUG_ON(!NInoAttr(idx_ni));
+ BUG_ON(idx_ni->type != AT_INDEX_ALLOCATION);
+ BUG_ON(idx_ni->nr_extents != -1);
+ BUG_ON(!base_ni);
+ BUG_ON(!key);
+ BUG_ON(key_len <= 0);
+ if (!ntfs_is_collation_rule_supported(
+ idx_ni->itype.index.collation_rule)) {
+ ntfs_error(sb, "Index uses unsupported collation rule 0x%x. "
+ "Aborting lookup.", le32_to_cpu(
+ idx_ni->itype.index.collation_rule));
+ return -EOPNOTSUPP;
+ }
+ /* Get hold of the mft record for the index inode. */
+ m = map_mft_record(base_ni);
+ if (IS_ERR(m)) {
+ ntfs_error(sb, "map_mft_record() failed with error code %ld.",
+ -PTR_ERR(m));
+ return PTR_ERR(m);
+ }
+ actx = ntfs_attr_get_search_ctx(base_ni, m);
+ if (unlikely(!actx)) {
+ err = -ENOMEM;
+ goto err_out;
+ }
+ /* Find the index root attribute in the mft record. */
+ err = ntfs_attr_lookup(AT_INDEX_ROOT, idx_ni->name, idx_ni->name_len,
+ CASE_SENSITIVE, 0, NULL, 0, actx);
+ if (unlikely(err)) {
+ if (err == -ENOENT) {
+ ntfs_error(sb, "Index root attribute missing in inode "
+ "0x%lx.", idx_ni->mft_no);
+ err = -EIO;
+ }
+ goto err_out;
+ }
+ /* Get to the index root value (it has been verified in read_inode). */
+ ir = (INDEX_ROOT*)((u8*)actx->attr +
+ le16_to_cpu(actx->attr->data.resident.value_offset));
+ index_end = (u8*)&ir->index + le32_to_cpu(ir->index.index_length);
+ /* The first index entry. */
+ ie = (INDEX_ENTRY*)((u8*)&ir->index +
+ le32_to_cpu(ir->index.entries_offset));
+ /*
+ * Loop until we exceed valid memory (corruption case) or until we
+ * reach the last entry.
+ */
+ for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
+ /* Bounds checks. */
+ if ((u8*)ie < (u8*)actx->mrec || (u8*)ie +
+ sizeof(INDEX_ENTRY_HEADER) > index_end ||
+ (u8*)ie + le16_to_cpu(ie->length) > index_end)
+ goto idx_err_out;
+ /*
+ * The last entry cannot contain a key. It can, however, contain
+ * a pointer to a child node in the B+tree so we just break out.
+ */
+ if (ie->flags & INDEX_ENTRY_END)
+ break;
+ /* Further bounds checks. */
+ if ((u32)sizeof(INDEX_ENTRY_HEADER) +
+ le16_to_cpu(ie->key_length) >
+ le16_to_cpu(ie->data.vi.data_offset) ||
+ (u32)le16_to_cpu(ie->data.vi.data_offset) +
+ le16_to_cpu(ie->data.vi.data_length) >
+ le16_to_cpu(ie->length))
+ goto idx_err_out;
+ /* If the keys match perfectly, we set up @ictx and return 0. */
+ if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
+ &ie->key, key_len)) {
+ir_done:
+ ictx->is_in_root = TRUE;
+ ictx->actx = actx;
+ ictx->base_ni = base_ni;
+ ictx->ia = NULL;
+ ictx->page = NULL;
+done:
+ ictx->entry = ie;
+ ictx->data = (u8*)ie +
+ le16_to_cpu(ie->data.vi.data_offset);
+ ictx->data_len = le16_to_cpu(ie->data.vi.data_length);
+ ntfs_debug("Done.");
+ return err;
+ }
+ /*
+ * Not a perfect match, so we need to do full-blown collation to
+ * know which way in the B+tree we have to go.
+ */
+ rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
+ key_len, &ie->key, le16_to_cpu(ie->key_length));
+ /*
+ * If @key collates before the key of the current entry, there
+ * is definitely no such key in this index but we might need to
+ * descend into the B+tree so we just break out of the loop.
+ */
+ if (rc == -1)
+ break;
+ /*
+ * A match should never happen as the memcmp() call should have
+ * caught it, but we still treat it correctly.
+ */
+ if (!rc)
+ goto ir_done;
+ /* The keys are not equal, continue the search. */
+ }
+ /*
+ * We have finished with this index without success. Check for the
+ * presence of a child node and, if not present, set up @ictx and return
+ * -ENOENT.
+ */
+ if (!(ie->flags & INDEX_ENTRY_NODE)) {
+ ntfs_debug("Entry not found.");
+ err = -ENOENT;
+ goto ir_done;
+ } /* Child node present, descend into it. */
+ /* Consistency check: Verify that an index allocation exists. */
+ if (!NInoIndexAllocPresent(idx_ni)) {
+ ntfs_error(sb, "No index allocation attribute but index entry "
+ "requires one. Inode 0x%lx is corrupt or "
+ "driver bug.", idx_ni->mft_no);
+ goto err_out;
+ }
+ /* Get the starting vcn of the index_block holding the child node. */
+ vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
+ ia_mapping = VFS_I(idx_ni)->i_mapping;
+ /*
+ * We are done with the index root and the mft record. Release them,
+ * otherwise we deadlock with ntfs_map_page().
+ */
+ ntfs_attr_put_search_ctx(actx);
+ unmap_mft_record(base_ni);
+ m = NULL;
+ actx = NULL;
+descend_into_child_node:
+ /*
+ * Convert vcn to index into the index allocation attribute in units
+ * of PAGE_CACHE_SIZE and map the page cache page, reading it from
+ * disk if necessary.
+ */
+ page = ntfs_map_page(ia_mapping, vcn <<
+ idx_ni->itype.index.vcn_size_bits >> PAGE_CACHE_SHIFT);
+ if (IS_ERR(page)) {
+ ntfs_error(sb, "Failed to map index page, error %ld.",
+ -PTR_ERR(page));
+ err = PTR_ERR(page);
+ goto err_out;
+ }
+ lock_page(page);
+ kaddr = (u8*)page_address(page);
+fast_descend_into_child_node:
+ /* Get to the index allocation block. */
+ ia = (INDEX_ALLOCATION*)(kaddr + ((vcn <<
+ idx_ni->itype.index.vcn_size_bits) & ~PAGE_CACHE_MASK));
+ /* Bounds checks. */
+ if ((u8*)ia < kaddr || (u8*)ia > kaddr + PAGE_CACHE_SIZE) {
+ ntfs_error(sb, "Out of bounds check failed. Corrupt inode "
+ "0x%lx or driver bug.", idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ /* Catch multi sector transfer fixup errors. */
+ if (unlikely(!ntfs_is_indx_record(ia->magic))) {
+ ntfs_error(sb, "Index record with vcn 0x%llx is corrupt. "
+ "Corrupt inode 0x%lx. Run chkdsk.",
+ (long long)vcn, idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ if (sle64_to_cpu(ia->index_block_vcn) != vcn) {
+ ntfs_error(sb, "Actual VCN (0x%llx) of index buffer is "
+ "different from expected VCN (0x%llx). Inode "
+ "0x%lx is corrupt or driver bug.",
+ (unsigned long long)
+ sle64_to_cpu(ia->index_block_vcn),
+ (unsigned long long)vcn, idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ if (le32_to_cpu(ia->index.allocated_size) + 0x18 !=
+ idx_ni->itype.index.block_size) {
+ ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx has "
+ "a size (%u) differing from the index "
+ "specified size (%u). Inode is corrupt or "
+ "driver bug.", (unsigned long long)vcn,
+ idx_ni->mft_no,
+ le32_to_cpu(ia->index.allocated_size) + 0x18,
+ idx_ni->itype.index.block_size);
+ goto unm_err_out;
+ }
+ index_end = (u8*)ia + idx_ni->itype.index.block_size;
+ if (index_end > kaddr + PAGE_CACHE_SIZE) {
+ ntfs_error(sb, "Index buffer (VCN 0x%llx) of inode 0x%lx "
+ "crosses page boundary. Impossible! Cannot "
+ "access! This is probably a bug in the "
+ "driver.", (unsigned long long)vcn,
+ idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ index_end = (u8*)&ia->index + le32_to_cpu(ia->index.index_length);
+ if (index_end > (u8*)ia + idx_ni->itype.index.block_size) {
+ ntfs_error(sb, "Size of index buffer (VCN 0x%llx) of inode "
+ "0x%lx exceeds maximum size.",
+ (unsigned long long)vcn, idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ /* The first index entry. */
+ ie = (INDEX_ENTRY*)((u8*)&ia->index +
+ le32_to_cpu(ia->index.entries_offset));
+ /*
+ * Iterate as in the big loop above, but applied to the index buffer:
+ * loop until we exceed valid memory (corruption case) or until we
+ * reach the last entry.
+ */
+ for (;; ie = (INDEX_ENTRY*)((u8*)ie + le16_to_cpu(ie->length))) {
+ /* Bounds checks. */
+ if ((u8*)ie < (u8*)ia || (u8*)ie +
+ sizeof(INDEX_ENTRY_HEADER) > index_end ||
+ (u8*)ie + le16_to_cpu(ie->length) > index_end) {
+ ntfs_error(sb, "Index entry out of bounds in inode "
+ "0x%lx.", idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ /*
+ * The last entry cannot contain a key. It can, however, contain
+ * a pointer to a child node in the B+tree so we just break out.
+ */
+ if (ie->flags & INDEX_ENTRY_END)
+ break;
+ /* Further bounds checks. */
+ if ((u32)sizeof(INDEX_ENTRY_HEADER) +
+ le16_to_cpu(ie->key_length) >
+ le16_to_cpu(ie->data.vi.data_offset) ||
+ (u32)le16_to_cpu(ie->data.vi.data_offset) +
+ le16_to_cpu(ie->data.vi.data_length) >
+ le16_to_cpu(ie->length)) {
+ ntfs_error(sb, "Index entry out of bounds in inode "
+ "0x%lx.", idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ /* If the keys match perfectly, we set up @ictx and return 0. */
+ if ((key_len == le16_to_cpu(ie->key_length)) && !memcmp(key,
+ &ie->key, key_len)) {
+ia_done:
+ ictx->is_in_root = FALSE;
+ ictx->actx = NULL;
+ ictx->base_ni = NULL;
+ ictx->ia = ia;
+ ictx->page = page;
+ goto done;
+ }
+ /*
+ * Not a perfect match, so we need to do full-blown collation to
+ * know which way in the B+tree we have to go.
+ */
+ rc = ntfs_collate(vol, idx_ni->itype.index.collation_rule, key,
+ key_len, &ie->key, le16_to_cpu(ie->key_length));
+ /*
+ * If @key collates before the key of the current entry, there
+ * is definitely no such key in this index but we might need to
+ * descend into the B+tree so we just break out of the loop.
+ */
+ if (rc == -1)
+ break;
+ /*
+ * A match should never happen as the memcmp() call should have
+ * caught it, but we still treat it correctly.
+ */
+ if (!rc)
+ goto ia_done;
+ /* The keys are not equal, continue the search. */
+ }
+ /*
+ * We have finished with this index buffer without success. Check for
+ * the presence of a child node and if not present return -ENOENT.
+ */
+ if (!(ie->flags & INDEX_ENTRY_NODE)) {
+ ntfs_debug("Entry not found.");
+ err = -ENOENT;
+ goto ia_done;
+ }
+ if ((ia->index.flags & NODE_MASK) == LEAF_NODE) {
+ ntfs_error(sb, "Index entry with child node found in a leaf "
+ "node in inode 0x%lx.", idx_ni->mft_no);
+ goto unm_err_out;
+ }
+ /* Child node present, descend into it. */
+ old_vcn = vcn;
+ vcn = sle64_to_cpup((sle64*)((u8*)ie + le16_to_cpu(ie->length) - 8));
+ if (vcn >= 0) {
+ /*
+ * If vcn is in the same page cache page as old_vcn we recycle
+ * the mapped page.
+ */
+ if (old_vcn << vol->cluster_size_bits >>
+ PAGE_CACHE_SHIFT == vcn <<
+ vol->cluster_size_bits >>
+ PAGE_CACHE_SHIFT)
+ goto fast_descend_into_child_node;
+ unlock_page(page);
+ ntfs_unmap_page(page);
+ goto descend_into_child_node;
+ }
+ ntfs_error(sb, "Negative child node vcn in inode 0x%lx.",
+ idx_ni->mft_no);
+unm_err_out:
+ unlock_page(page);
+ ntfs_unmap_page(page);
+err_out:
+ if (!err)
+ err = -EIO;
+ if (actx)
+ ntfs_attr_put_search_ctx(actx);
+ if (m)
+ unmap_mft_record(base_ni);
+ return err;
+idx_err_out:
+ ntfs_error(sb, "Corrupt index. Aborting lookup.");
+ goto err_out;
+}
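+
+/*
+ * Illustrative usage sketch (not part of the driver): a typical caller
+ * pairs the lookup with ntfs_index_ctx_get()/ntfs_index_ctx_put() and the
+ * i_sem locking rule documented above. The key type and value here are
+ * hypothetical; compare ntfs_mark_quotas_out_of_date() in quota.c for a
+ * real caller.
+ *
+ *	ntfs_index_context *ictx;
+ *	le32 key = SOME_KEY;	// hypothetical fixed-size key
+ *	int err;
+ *
+ *	down(&VFS_I(idx_ni)->i_sem);
+ *	ictx = ntfs_index_ctx_get(idx_ni);
+ *	if (ictx) {
+ *		err = ntfs_index_lookup(&key, sizeof(key), ictx);
+ *		if (!err) {
+ *			// ictx->entry, ictx->data and ictx->data_len now
+ *			// describe the matching entry; if it is modified,
+ *			// flush and mark it dirty before putting the ctx.
+ *		}
+ *		ntfs_index_ctx_put(ictx);
+ *	}
+ *	up(&VFS_I(idx_ni)->i_sem);
+ */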
--- /dev/null
+/*
+ * index.h - Defines for NTFS kernel index handling. Part of the Linux-NTFS
+ * project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _LINUX_NTFS_INDEX_H
+#define _LINUX_NTFS_INDEX_H
+
+#include <linux/fs.h>
+
+#include "types.h"
+#include "layout.h"
+#include "inode.h"
+#include "attrib.h"
+#include "mft.h"
+#include "aops.h"
+
+/**
+ * ntfs_index_context - ntfs index context describing a looked-up index entry
+ * @idx_ni: index inode containing the @entry described by this context
+ * @entry: index entry (points into @ir or @ia)
+ * @data: index entry data (points into @entry)
+ * @data_len: length in bytes of @data
+ * @is_in_root: TRUE if @entry is in @ir and FALSE if it is in @ia
+ * @ir: index root if @is_in_root and NULL otherwise
+ * @actx: attribute search context if @is_in_root and NULL otherwise
+ * @base_ni: base inode if @is_in_root and NULL otherwise
+ * @ia: index block if @is_in_root is FALSE and NULL otherwise
+ * @page: page if @is_in_root is FALSE and NULL otherwise
+ *
+ * @idx_ni is the index inode this context belongs to.
+ *
+ * @entry is the index entry described by this context. @data and @data_len
+ * are the index entry data and its length in bytes, respectively. @data
+ * simply points into @entry. This is probably what the user is interested in.
+ *
+ * If @is_in_root is TRUE, @entry is in the index root attribute @ir described
+ * by the attribute search context @actx and the base inode @base_ni. @ia and
+ * @page are NULL in this case.
+ *
+ * If @is_in_root is FALSE, @entry is in the index allocation attribute and @ia
+ * and @page point to the index allocation block and the mapped, locked page it
+ * is in, respectively. @ir, @actx and @base_ni are NULL in this case.
+ *
+ * To obtain a context call ntfs_index_ctx_get().
+ *
+ * We use this context to allow ntfs_index_lookup() to return the found index
+ * @entry and its @data without having to allocate a buffer and copy the @entry
+ * and/or its @data into it.
+ *
+ * When finished with the @entry and its @data, call ntfs_index_ctx_put() to
+ * free the context and other associated resources.
+ *
+ * If the index entry was modified, call ntfs_index_entry_flush_dcache_page()
+ * immediately after the modification and either ntfs_index_entry_mark_dirty()
+ * or ntfs_index_entry_write() before the call to ntfs_index_ctx_put() to
+ * ensure that the changes are written to disk.
+ */
+typedef struct {
+ ntfs_inode *idx_ni;
+ INDEX_ENTRY *entry;
+ void *data;
+ u16 data_len;
+ BOOL is_in_root;
+ INDEX_ROOT *ir;
+ ntfs_attr_search_ctx *actx;
+ ntfs_inode *base_ni;
+ INDEX_ALLOCATION *ia;
+ struct page *page;
+} ntfs_index_context;
+
+extern ntfs_index_context *ntfs_index_ctx_get(ntfs_inode *idx_ni);
+extern void ntfs_index_ctx_put(ntfs_index_context *ictx);
+
+extern int ntfs_index_lookup(const void *key, const int key_len,
+ ntfs_index_context *ictx);
+
+#ifdef NTFS_RW
+
+/**
+ * ntfs_index_entry_flush_dcache_page - flush_dcache_page() for index entries
+ * @ictx: ntfs index context describing the index entry
+ *
+ * Call flush_dcache_page() for the page in which an index entry resides.
+ *
+ * This must be called every time an index entry is modified, just after the
+ * modification.
+ *
+ * If the index entry is in the index root attribute, simply flush the page
+ * containing the mft record containing the index root attribute.
+ *
+ * If the index entry is in an index block belonging to the index allocation
+ * attribute, simply flush the page cache page containing the index block.
+ */
+static inline void ntfs_index_entry_flush_dcache_page(ntfs_index_context *ictx)
+{
+ if (ictx->is_in_root)
+ flush_dcache_mft_record_page(ictx->actx->ntfs_ino);
+ else
+ flush_dcache_page(ictx->page);
+}
+
+/**
+ * ntfs_index_entry_mark_dirty - mark an index entry dirty
+ * @ictx: ntfs index context describing the index entry
+ *
+ * Mark the index entry described by the index entry context @ictx dirty.
+ *
+ * If the index entry is in the index root attribute, simply mark the mft
+ * record containing the index root attribute dirty. This ensures the mft
+ * record, and hence the index root attribute, will be written out to disk
+ * later.
+ *
+ * If the index entry is in an index block belonging to the index allocation
+ * attribute, mark the buffers belonging to the index record as well as the
+ * page cache page the index block is in dirty. This automatically marks the
+ * VFS inode of the ntfs index inode to which the index entry belongs dirty,
+ * too (I_DIRTY_PAGES) and this in turn ensures the page buffers, and hence the
+ * dirty index block, will be written out to disk later.
+ */
+static inline void ntfs_index_entry_mark_dirty(ntfs_index_context *ictx)
+{
+ if (ictx->is_in_root)
+ mark_mft_record_dirty(ictx->actx->ntfs_ino);
+ else
+ mark_ntfs_record_dirty(ictx->page,
+ (u8*)ictx->ia - (u8*)page_address(ictx->page));
+}
+
+#endif /* NTFS_RW */
+
+#endif /* _LINUX_NTFS_INDEX_H */
--- /dev/null
+/*
+ * quota.c - NTFS kernel quota ($Quota) handling. Part of the Linux-NTFS
+ * project.
+ *
+ * Copyright (c) 2004 Anton Altaparmakov
+ *
+ * This program/include file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as published
+ * by the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program/include file is distributed in the hope that it will be
+ * useful, but WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program (in the main directory of the Linux-NTFS
+ * distribution in the file COPYING); if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifdef NTFS_RW
+
+#include "index.h"
+#include "quota.h"
+#include "debug.h"
+#include "ntfs.h"
+
+/**
+ * ntfs_mark_quotas_out_of_date - mark the quotas out of date on an ntfs volume
+ * @vol: ntfs volume on which to mark the quotas out of date
+ *
+ * Mark the quotas out of date on the ntfs volume @vol and return TRUE on
+ * success and FALSE on error.
+ */
+BOOL ntfs_mark_quotas_out_of_date(ntfs_volume *vol)
+{
+ ntfs_index_context *ictx;
+ QUOTA_CONTROL_ENTRY *qce;
+ const le32 qid = QUOTA_DEFAULTS_ID;
+ int err;
+
+ ntfs_debug("Entering.");
+ if (NVolQuotaOutOfDate(vol))
+ goto done;
+ if (!vol->quota_ino || !vol->quota_q_ino) {
+ ntfs_error(vol->sb, "Quota inodes are not open.");
+ return FALSE;
+ }
+ down(&vol->quota_q_ino->i_sem);
+ ictx = ntfs_index_ctx_get(NTFS_I(vol->quota_q_ino));
+ if (!ictx) {
+ ntfs_error(vol->sb, "Failed to get index context.");
+ goto err_out;
+ }
+ err = ntfs_index_lookup(&qid, sizeof(qid), ictx);
+ if (err) {
+ if (err == -ENOENT)
+ ntfs_error(vol->sb, "Quota defaults entry is not "
+ "present.");
+ else
+ ntfs_error(vol->sb, "Lookup of quota defaults entry "
+ "failed.");
+ goto err_out;
+ }
+ if (ictx->data_len < offsetof(QUOTA_CONTROL_ENTRY, sid)) {
+ ntfs_error(vol->sb, "Quota defaults entry size is invalid. "
+ "Run chkdsk.");
+ goto err_out;
+ }
+ qce = (QUOTA_CONTROL_ENTRY*)ictx->data;
+ if (le32_to_cpu(qce->version) != QUOTA_VERSION) {
+ ntfs_error(vol->sb, "Quota defaults entry version 0x%x is not "
+ "supported.", le32_to_cpu(qce->version));
+ goto err_out;
+ }
+ ntfs_debug("Quota defaults flags = 0x%x.", le32_to_cpu(qce->flags));
+ /* If quotas are already marked out of date, no need to do anything. */
+ if (qce->flags & QUOTA_FLAG_OUT_OF_DATE)
+ goto set_done;
+ /*
+ * If quota tracking is neither requested nor enabled and there are no
+ * pending deletes, no need to mark the quotas out of date.
+ */
+ if (!(qce->flags & (QUOTA_FLAG_TRACKING_ENABLED |
+ QUOTA_FLAG_TRACKING_REQUESTED |
+ QUOTA_FLAG_PENDING_DELETES)))
+ goto set_done;
+ /*
+ * Set the QUOTA_FLAG_OUT_OF_DATE bit thus marking quotas out of date.
+ * This is verified on WinXP to be sufficient to cause Windows to
+ * rescan the volume on boot and update all quota entries.
+ */
+ qce->flags |= QUOTA_FLAG_OUT_OF_DATE;
+ /* Ensure the modified flags are written to disk. */
+ ntfs_index_entry_flush_dcache_page(ictx);
+ ntfs_index_entry_mark_dirty(ictx);
+set_done:
+ ntfs_index_ctx_put(ictx);
+ up(&vol->quota_q_ino->i_sem);
+ /*
+ * We set the flag so we do not try to mark the quotas out of date
+ * again on remount.
+ */
+ NVolSetQuotaOutOfDate(vol);
+done:
+ ntfs_debug("Done.");
+ return TRUE;
+err_out:
+ if (ictx)
+ ntfs_index_ctx_put(ictx);
+ up(&vol->quota_q_ino->i_sem);
+ return FALSE;
+}
+
+#endif /* NTFS_RW */
--- /dev/null
+#include <linux/fs.h>
+#include <linux/posix_acl.h>
+#include <linux/reiserfs_fs.h>
+#include <linux/errno.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/xattr_acl.h>
+#include <linux/reiserfs_xattr.h>
+#include <linux/reiserfs_acl.h>
+#include <asm/uaccess.h>
+
+static int
+xattr_set_acl(struct inode *inode, int type, const void *value, size_t size)
+{
+ struct posix_acl *acl;
+ int error;
+
+ if (!reiserfs_posixacl(inode->i_sb))
+ return -EOPNOTSUPP;
+ if ((current->fsuid != inode->i_uid) && !capable(CAP_FOWNER))
+ return -EPERM;
+
+ if (value) {
+ acl = posix_acl_from_xattr(value, size);
+ if (IS_ERR(acl)) {
+ return PTR_ERR(acl);
+ } else if (acl) {
+ error = posix_acl_valid(acl);
+ if (error)
+ goto release_and_out;
+ }
+ } else
+ acl = NULL;
+
+ error = reiserfs_set_acl (inode, type, acl);
+
+release_and_out:
+ posix_acl_release(acl);
+ return error;
+}
+
+
+static int
+xattr_get_acl(struct inode *inode, int type, void *buffer, size_t size)
+{
+ struct posix_acl *acl;
+ int error;
+
+ if (!reiserfs_posixacl(inode->i_sb))
+ return -EOPNOTSUPP;
+
+ acl = reiserfs_get_acl (inode, type);
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+ if (acl == NULL)
+ return -ENODATA;
+ error = posix_acl_to_xattr(acl, buffer, size);
+ posix_acl_release(acl);
+
+ return error;
+}
+
+
+/*
+ * Convert from filesystem to in-memory representation.
+ */
+static struct posix_acl *
+posix_acl_from_disk(const void *value, size_t size)
+{
+ const char *end = (char *)value + size;
+ int n, count;
+ struct posix_acl *acl;
+
+ if (!value)
+ return NULL;
+ if (size < sizeof(reiserfs_acl_header))
+ return ERR_PTR(-EINVAL);
+ if (((reiserfs_acl_header *)value)->a_version !=
+ cpu_to_le32(REISERFS_ACL_VERSION))
+ return ERR_PTR(-EINVAL);
+ value = (char *)value + sizeof(reiserfs_acl_header);
+ count = reiserfs_acl_count(size);
+ if (count < 0)
+ return ERR_PTR(-EINVAL);
+ if (count == 0)
+ return NULL;
+ acl = posix_acl_alloc(count, GFP_NOFS);
+ if (!acl)
+ return ERR_PTR(-ENOMEM);
+ for (n=0; n < count; n++) {
+ reiserfs_acl_entry *entry =
+ (reiserfs_acl_entry *)value;
+ if ((char *)value + sizeof(reiserfs_acl_entry_short) > end)
+ goto fail;
+ acl->a_entries[n].e_tag = le16_to_cpu(entry->e_tag);
+ acl->a_entries[n].e_perm = le16_to_cpu(entry->e_perm);
+ switch(acl->a_entries[n].e_tag) {
+ case ACL_USER_OBJ:
+ case ACL_GROUP_OBJ:
+ case ACL_MASK:
+ case ACL_OTHER:
+ value = (char *)value +
+ sizeof(reiserfs_acl_entry_short);
+ acl->a_entries[n].e_id = ACL_UNDEFINED_ID;
+ break;
+
+ case ACL_USER:
+ case ACL_GROUP:
+ value = (char *)value + sizeof(reiserfs_acl_entry);
+ if ((char *)value > end)
+ goto fail;
+ acl->a_entries[n].e_id =
+ le32_to_cpu(entry->e_id);
+ break;
+
+ default:
+ goto fail;
+ }
+ }
+ if (value != end)
+ goto fail;
+ return acl;
+
+fail:
+ posix_acl_release(acl);
+ return ERR_PTR(-EINVAL);
+}
+
+/*
+ * Convert from in-memory to filesystem representation.
+ */
+static void *
+posix_acl_to_disk(const struct posix_acl *acl, size_t *size)
+{
+ reiserfs_acl_header *ext_acl;
+ char *e;
+ int n;
+
+ *size = reiserfs_acl_size(acl->a_count);
+ ext_acl = (reiserfs_acl_header *)kmalloc(sizeof(reiserfs_acl_header) +
+ acl->a_count * sizeof(reiserfs_acl_entry), GFP_NOFS);
+ if (!ext_acl)
+ return ERR_PTR(-ENOMEM);
+ ext_acl->a_version = cpu_to_le32(REISERFS_ACL_VERSION);
+ e = (char *)ext_acl + sizeof(reiserfs_acl_header);
+ for (n=0; n < acl->a_count; n++) {
+ reiserfs_acl_entry *entry = (reiserfs_acl_entry *)e;
+ entry->e_tag = cpu_to_le16(acl->a_entries[n].e_tag);
+ entry->e_perm = cpu_to_le16(acl->a_entries[n].e_perm);
+ switch(acl->a_entries[n].e_tag) {
+ case ACL_USER:
+ case ACL_GROUP:
+ entry->e_id =
+ cpu_to_le32(acl->a_entries[n].e_id);
+ e += sizeof(reiserfs_acl_entry);
+ break;
+
+ case ACL_USER_OBJ:
+ case ACL_GROUP_OBJ:
+ case ACL_MASK:
+ case ACL_OTHER:
+ e += sizeof(reiserfs_acl_entry_short);
+ break;
+
+ default:
+ goto fail;
+ }
+ }
+ return (char *)ext_acl;
+
+fail:
+ kfree(ext_acl);
+ return ERR_PTR(-EINVAL);
+}
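+
+/*
+ * Layout illustration (assumed example, following the conversion above):
+ * an ACL of "user::rw- user:500:rw- group::r-- mask::rw- other::r--" is
+ * stored as a reiserfs_acl_header followed by five entries. Only the
+ * named ACL_USER entry carries an e_id; the object/mask/other entries
+ * use the short form, so
+ *
+ *	size = sizeof(reiserfs_acl_header)
+ *	     + 4 * sizeof(reiserfs_acl_entry_short)	// OBJ/MASK/OTHER
+ *	     + 1 * sizeof(reiserfs_acl_entry);		// the named user
+ */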
+
+/*
+ * Inode operation get_posix_acl().
+ *
+ * inode->i_sem: down
+ * BKL held [before 2.5.x]
+ */
+struct posix_acl *
+reiserfs_get_acl(struct inode *inode, int type)
+{
+ char *name, *value;
+ struct posix_acl *acl, **p_acl;
+ size_t size;
+ int retval;
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ name = XATTR_NAME_ACL_ACCESS;
+ p_acl = &reiserfs_i->i_acl_access;
+ break;
+ case ACL_TYPE_DEFAULT:
+ name = XATTR_NAME_ACL_DEFAULT;
+ p_acl = &reiserfs_i->i_acl_default;
+ break;
+ default:
+ return ERR_PTR (-EINVAL);
+ }
+
+ if (IS_ERR (*p_acl)) {
+ if (PTR_ERR (*p_acl) == -ENODATA)
+ return NULL;
+ } else if (*p_acl != NULL)
+ return posix_acl_dup (*p_acl);
+
+ size = reiserfs_xattr_get (inode, name, NULL, 0);
+ if ((int)size < 0) {
+ if (size == -ENODATA || size == -ENOSYS) {
+ *p_acl = ERR_PTR (-ENODATA);
+ return NULL;
+ }
+ return ERR_PTR (size);
+ }
+
+ value = kmalloc (size, GFP_NOFS);
+ if (!value)
+ return ERR_PTR (-ENOMEM);
+
+ retval = reiserfs_xattr_get(inode, name, value, size);
+ if (retval == -ENODATA || retval == -ENOSYS) {
+ /* This shouldn't actually happen as it should have
+ been caught above, but just in case. */
+ acl = NULL;
+ *p_acl = ERR_PTR (-ENODATA);
+ } else if (retval < 0) {
+ acl = ERR_PTR(retval);
+ } else {
+ acl = posix_acl_from_disk(value, retval);
+ /* Don't cache or dup an ERR_PTR from a corrupt on-disk ACL. */
+ if (!IS_ERR (acl))
+ *p_acl = posix_acl_dup (acl);
+ }
+
+ kfree(value);
+ return acl;
+}
+
+/*
+ * Inode operation set_posix_acl().
+ *
+ * inode->i_sem: down
+ * BKL held [before 2.5.x]
+ */
+int
+reiserfs_set_acl(struct inode *inode, int type, struct posix_acl *acl)
+{
+ char *name;
+ void *value = NULL;
+ struct posix_acl **p_acl;
+ size_t size;
+ int error;
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
+ switch (type) {
+ case ACL_TYPE_ACCESS:
+ name = XATTR_NAME_ACL_ACCESS;
+ p_acl = &reiserfs_i->i_acl_access;
+ if (acl) {
+ mode_t mode = inode->i_mode;
+ error = posix_acl_equiv_mode (acl, &mode);
+ if (error < 0)
+ return error;
+ else {
+ inode->i_mode = mode;
+ if (error == 0)
+ acl = NULL;
+ }
+ }
+ break;
+ case ACL_TYPE_DEFAULT:
+ name = XATTR_NAME_ACL_DEFAULT;
+ p_acl = &reiserfs_i->i_acl_default;
+ if (!S_ISDIR (inode->i_mode))
+ return acl ? -EACCES : 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (acl) {
+ value = posix_acl_to_disk(acl, &size);
+ if (IS_ERR(value))
+ return (int)PTR_ERR(value);
+ error = reiserfs_xattr_set(inode, name, value, size, 0);
+ } else {
+ error = reiserfs_xattr_del (inode, name);
+ if (error == -ENODATA) {
+ /* This may seem odd here, but it means that the ACL was set
+ * with a value representable with mode bits. If there was
+ * an ACL before, reiserfs_xattr_del already dirtied the inode.
+ */
+ mark_inode_dirty (inode);
+ error = 0;
+ }
+ }
+
+ if (value)
+ kfree(value);
+
+ if (!error) {
+ /* Release the old one */
+ if (!IS_ERR (*p_acl) && *p_acl)
+ posix_acl_release (*p_acl);
+
+ if (acl == NULL)
+ *p_acl = ERR_PTR (-ENODATA);
+ else
+ *p_acl = posix_acl_dup (acl);
+ }
+
+ return error;
+}
+
+/* dir->i_sem: down,
+ * inode is new and not released into the wild yet */
+int
+reiserfs_inherit_default_acl (struct inode *dir, struct dentry *dentry, struct inode *inode)
+{
+ struct posix_acl *acl;
+ int err = 0;
+
+ /* ACLs only get applied to files and directories */
+ if (S_ISLNK (inode->i_mode))
+ return 0;
+
+ /* ACLs can only be used on "new" objects, so if it's an old object
+ * there is nothing to inherit from */
+ if (get_inode_sd_version (dir) == STAT_DATA_V1)
+ goto apply_umask;
+
+ /* Don't apply ACLs to objects in the .reiserfs_priv tree. This
+ * would be useless since permissions are ignored, and a pain because
+ * it introduces locking cycles */
+ if (is_reiserfs_priv_object (dir)) {
+ REISERFS_I(inode)->i_flags |= i_priv_object;
+ goto apply_umask;
+ }
+
+ acl = reiserfs_get_acl (dir, ACL_TYPE_DEFAULT);
+ if (IS_ERR (acl)) {
+ if (PTR_ERR (acl) == -ENODATA)
+ goto apply_umask;
+ return PTR_ERR (acl);
+ }
+
+ if (acl) {
+ struct posix_acl *acl_copy;
+ mode_t mode = inode->i_mode;
+ int need_acl;
+
+ /* Copy the default ACL to the default ACL of a new directory */
+ if (S_ISDIR (inode->i_mode)) {
+ err = reiserfs_set_acl (inode, ACL_TYPE_DEFAULT, acl);
+ if (err)
+ goto cleanup;
+ }
+
+ /* Now we reconcile the new ACL and the mode,
+ potentially modifying both */
+ acl_copy = posix_acl_clone (acl, GFP_NOFS);
+ if (!acl_copy) {
+ err = -ENOMEM;
+ goto cleanup;
+ }
+
+
+ need_acl = posix_acl_create_masq (acl_copy, &mode);
+ if (need_acl >= 0) {
+ if (mode != inode->i_mode) {
+ inode->i_mode = mode;
+ }
+
+ /* If we need an ACL.. */
+ if (need_acl > 0) {
+ err = reiserfs_set_acl (inode, ACL_TYPE_ACCESS, acl_copy);
+ if (err)
+ goto cleanup_copy;
+ }
+ }
+cleanup_copy:
+ posix_acl_release (acl_copy);
+cleanup:
+ posix_acl_release (acl);
+ } else {
+apply_umask:
+ /* no ACL, apply umask */
+ inode->i_mode &= ~current->fs->umask;
+ }
+
+ return err;
+}
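+
+/*
+ * Worked example of the reconciliation above (generic VFS helper
+ * behaviour, not reiserfs-specific): with a parent default ACL of
+ * "u::rwx g::r-x o::r-x" and a create mode of 0666,
+ * posix_acl_create_masq() masks each class with the matching mode bits,
+ * yielding i_mode 0644; since no named entries remain, need_acl is 0 and
+ * no access ACL has to be written. Named user/group entries would make
+ * need_acl positive and trigger the reiserfs_set_acl() call.
+ */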
+
+/* Looks up and caches the result of the default ACL.
+ * We do this so that we don't need to carry the xattr_sem into
+ * reiserfs_new_inode unless it is actually needed. */
+int
+reiserfs_cache_default_acl (struct inode *inode)
+{
+ int ret = 0;
+ if (reiserfs_posixacl (inode->i_sb) &&
+ !is_reiserfs_priv_object (inode)) {
+ struct posix_acl *acl;
+ reiserfs_read_lock_xattr_i (inode);
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ acl = reiserfs_get_acl (inode, ACL_TYPE_DEFAULT);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ reiserfs_read_unlock_xattr_i (inode);
+ ret = acl ? 1 : 0;
+ posix_acl_release (acl);
+ }
+
+ return ret;
+}
+
+int
+reiserfs_acl_chmod (struct inode *inode)
+{
+ struct posix_acl *acl, *clone;
+ int error;
+
+ if (S_ISLNK(inode->i_mode))
+ return -EOPNOTSUPP;
+
+ if (get_inode_sd_version (inode) == STAT_DATA_V1 ||
+ !reiserfs_posixacl(inode->i_sb))
+ {
+ return 0;
+ }
+
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ acl = reiserfs_get_acl(inode, ACL_TYPE_ACCESS);
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ if (!acl)
+ return 0;
+ if (IS_ERR(acl))
+ return PTR_ERR(acl);
+ clone = posix_acl_clone(acl, GFP_NOFS);
+ posix_acl_release(acl);
+ if (!clone)
+ return -ENOMEM;
+ error = posix_acl_chmod_masq(clone, inode->i_mode);
+ if (!error) {
+ int lock = !has_xattr_dir (inode);
+ reiserfs_write_lock_xattr_i (inode);
+ if (lock)
+ reiserfs_write_lock_xattrs (inode->i_sb);
+ else
+ reiserfs_read_lock_xattrs (inode->i_sb);
+ error = reiserfs_set_acl(inode, ACL_TYPE_ACCESS, clone);
+ if (lock)
+ reiserfs_write_unlock_xattrs (inode->i_sb);
+ else
+ reiserfs_read_unlock_xattrs (inode->i_sb);
+ reiserfs_write_unlock_xattr_i (inode);
+ }
+ posix_acl_release(clone);
+ return error;
+}
+
+static int
+posix_acl_access_get(struct inode *inode, const char *name,
+ void *buffer, size_t size)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ return xattr_get_acl(inode, ACL_TYPE_ACCESS, buffer, size);
+}
+
+static int
+posix_acl_access_set(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ return xattr_set_acl(inode, ACL_TYPE_ACCESS, value, size);
+}
+
+static int
+posix_acl_access_del (struct inode *inode, const char *name)
+{
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+ struct posix_acl **acl = &reiserfs_i->i_acl_access;
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_ACCESS)-1)
+ return -EINVAL;
+ if (!IS_ERR (*acl) && *acl) {
+ posix_acl_release (*acl);
+ *acl = ERR_PTR (-ENODATA);
+ }
+
+ return 0;
+}
+
+static int
+posix_acl_access_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+ if (!reiserfs_posixacl (inode->i_sb))
+ return 0;
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+struct reiserfs_xattr_handler posix_acl_access_handler = {
+ .prefix = XATTR_NAME_ACL_ACCESS,
+ .get = posix_acl_access_get,
+ .set = posix_acl_access_set,
+ .del = posix_acl_access_del,
+ .list = posix_acl_access_list,
+};
+
+static int
+posix_acl_default_get (struct inode *inode, const char *name,
+ void *buffer, size_t size)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ return xattr_get_acl(inode, ACL_TYPE_DEFAULT, buffer, size);
+}
+
+static int
+posix_acl_default_set(struct inode *inode, const char *name,
+ const void *value, size_t size, int flags)
+{
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ return xattr_set_acl(inode, ACL_TYPE_DEFAULT, value, size);
+}
+
+static int
+posix_acl_default_del (struct inode *inode, const char *name)
+{
+ struct reiserfs_inode_info *reiserfs_i = REISERFS_I(inode);
+ struct posix_acl **acl = &reiserfs_i->i_acl_default;
+ if (strlen(name) != sizeof(XATTR_NAME_ACL_DEFAULT)-1)
+ return -EINVAL;
+ if (!IS_ERR (*acl) && *acl) {
+ posix_acl_release (*acl);
+ *acl = ERR_PTR (-ENODATA);
+ }
+
+ return 0;
+}
+
+static int
+posix_acl_default_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+ if (!reiserfs_posixacl (inode->i_sb))
+ return 0;
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+struct reiserfs_xattr_handler posix_acl_default_handler = {
+ .prefix = XATTR_NAME_ACL_DEFAULT,
+ .get = posix_acl_default_get,
+ .set = posix_acl_default_set,
+ .del = posix_acl_default_del,
+ .list = posix_acl_default_list,
+};
--- /dev/null
+#include <linux/reiserfs_fs.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/reiserfs_xattr.h>
+#include <asm/uaccess.h>
+
+#define XATTR_SECURITY_PREFIX "security."
+
+static int
+security_get (struct inode *inode, const char *name, void *buffer, size_t size)
+{
+ if (strlen(name) < sizeof(XATTR_SECURITY_PREFIX))
+ return -EINVAL;
+
+ if (is_reiserfs_priv_object(inode))
+ return -EPERM;
+
+ return reiserfs_xattr_get (inode, name, buffer, size);
+}
+
+static int
+security_set (struct inode *inode, const char *name, const void *buffer,
+ size_t size, int flags)
+{
+ if (strlen(name) < sizeof(XATTR_SECURITY_PREFIX))
+ return -EINVAL;
+
+ if (is_reiserfs_priv_object(inode))
+ return -EPERM;
+
+ return reiserfs_xattr_set (inode, name, buffer, size, flags);
+}
+
+static int
+security_del (struct inode *inode, const char *name)
+{
+ if (strlen(name) < sizeof(XATTR_SECURITY_PREFIX))
+ return -EINVAL;
+
+ if (is_reiserfs_priv_object(inode))
+ return -EPERM;
+
+ return 0;
+}
+
+static int
+security_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+
+ if (is_reiserfs_priv_object(inode))
+ return 0;
+
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+
+struct reiserfs_xattr_handler security_handler = {
+ .prefix = XATTR_SECURITY_PREFIX,
+ .get = security_get,
+ .set = security_set,
+ .del = security_del,
+ .list = security_list,
+};
--- /dev/null
+#include <linux/reiserfs_fs.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/reiserfs_xattr.h>
+#include <asm/uaccess.h>
+
+#define XATTR_TRUSTED_PREFIX "trusted."
+
+static int
+trusted_get (struct inode *inode, const char *name, void *buffer, size_t size)
+{
+ if (strlen(name) < sizeof(XATTR_TRUSTED_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ if (!(capable(CAP_SYS_ADMIN) || is_reiserfs_priv_object(inode)))
+ return -EPERM;
+
+ return reiserfs_xattr_get (inode, name, buffer, size);
+}
+
+static int
+trusted_set (struct inode *inode, const char *name, const void *buffer,
+ size_t size, int flags)
+{
+ if (strlen(name) < sizeof(XATTR_TRUSTED_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ if (!(capable(CAP_SYS_ADMIN) || is_reiserfs_priv_object(inode)))
+ return -EPERM;
+
+ return reiserfs_xattr_set (inode, name, buffer, size, flags);
+}
+
+static int
+trusted_del (struct inode *inode, const char *name)
+{
+ if (strlen(name) < sizeof(XATTR_TRUSTED_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ if (!(capable(CAP_SYS_ADMIN) || is_reiserfs_priv_object(inode)))
+ return -EPERM;
+
+ return 0;
+}
+
+static int
+trusted_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+
+ if (!reiserfs_xattrs (inode->i_sb))
+ return 0;
+
+ if (!(capable(CAP_SYS_ADMIN) || is_reiserfs_priv_object(inode)))
+ return 0;
+
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+
+struct reiserfs_xattr_handler trusted_handler = {
+ .prefix = XATTR_TRUSTED_PREFIX,
+ .get = trusted_get,
+ .set = trusted_set,
+ .del = trusted_del,
+ .list = trusted_list,
+};
--- /dev/null
+#include <linux/reiserfs_fs.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/pagemap.h>
+#include <linux/xattr.h>
+#include <linux/reiserfs_xattr.h>
+#include <asm/uaccess.h>
+
+#ifdef CONFIG_REISERFS_FS_POSIX_ACL
+# include <linux/reiserfs_acl.h>
+#endif
+
+#define XATTR_USER_PREFIX "user."
+
+static int
+user_get (struct inode *inode, const char *name, void *buffer, size_t size)
+{
+ int error;
+
+ if (strlen(name) < sizeof(XATTR_USER_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs_user (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ error = reiserfs_permission_locked (inode, MAY_READ, NULL);
+ if (error)
+ return error;
+
+ return reiserfs_xattr_get (inode, name, buffer, size);
+}
+
+static int
+user_set (struct inode *inode, const char *name, const void *buffer,
+ size_t size, int flags)
+{
+ int error;
+
+ if (strlen(name) < sizeof(XATTR_USER_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs_user (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ if (!S_ISREG (inode->i_mode) &&
+ (!S_ISDIR (inode->i_mode) || inode->i_mode & S_ISVTX))
+ return -EPERM;
+
+ error = reiserfs_permission_locked (inode, MAY_WRITE, NULL);
+ if (error)
+ return error;
+
+ return reiserfs_xattr_set (inode, name, buffer, size, flags);
+}
+
+static int
+user_del (struct inode *inode, const char *name)
+{
+ int error;
+
+ if (strlen(name) < sizeof(XATTR_USER_PREFIX))
+ return -EINVAL;
+
+ if (!reiserfs_xattrs_user (inode->i_sb))
+ return -EOPNOTSUPP;
+
+ if (!S_ISREG (inode->i_mode) &&
+ (!S_ISDIR (inode->i_mode) || inode->i_mode & S_ISVTX))
+ return -EPERM;
+
+ error = reiserfs_permission_locked (inode, MAY_WRITE, NULL);
+ if (error)
+ return error;
+
+ return 0;
+}
+
+static int
+user_list (struct inode *inode, const char *name, int namelen, char *out)
+{
+ int len = namelen;
+ if (!reiserfs_xattrs_user (inode->i_sb))
+ return 0;
+
+ if (out)
+ memcpy (out, name, len);
+
+ return len;
+}
+
+struct reiserfs_xattr_handler user_handler = {
+ .prefix = XATTR_USER_PREFIX,
+ .get = user_get,
+ .set = user_set,
+ .del = user_del,
+ .list = user_list,
+};
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include <linux/sched.h>
+#include <linux/mm.h>
+#include <linux/vmalloc.h>
+#include <linux/highmem.h>
+#include <linux/swap.h>
+#include <linux/blkdev.h>
+
+#include "time.h"
+#include "kmem.h"
+
+#define MAX_VMALLOCS 6
+#define MAX_SLAB_SIZE 0x20000
+
+
+void *
+kmem_alloc(size_t size, int flags)
+{
+ int retries = 0;
+ int lflags = kmem_flags_convert(flags);
+ void *ptr;
+
+ do {
+ if (size < MAX_SLAB_SIZE || retries > MAX_VMALLOCS)
+ ptr = kmalloc(size, lflags);
+ else
+ ptr = __vmalloc(size, lflags, PAGE_KERNEL);
+ if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
+ return ptr;
+ if (!(++retries % 100))
+ printk(KERN_ERR "XFS: possible memory allocation "
+ "deadlock in %s (mode:0x%x)\n",
+ __FUNCTION__, lflags);
+ blk_congestion_wait(WRITE, HZ/50);
+ } while (1);
+}
+
+void *
+kmem_zalloc(size_t size, int flags)
+{
+ void *ptr;
+
+ ptr = kmem_alloc(size, flags);
+ if (ptr)
+ memset((char *)ptr, 0, (int)size);
+ return ptr;
+}
+
+void
+kmem_free(void *ptr, size_t size)
+{
+ if (((unsigned long)ptr < VMALLOC_START) ||
+ ((unsigned long)ptr >= VMALLOC_END)) {
+ kfree(ptr);
+ } else {
+ vfree(ptr);
+ }
+}
+
+void *
+kmem_realloc(void *ptr, size_t newsize, size_t oldsize, int flags)
+{
+ void *new;
+
+ new = kmem_alloc(newsize, flags);
+ if (ptr) {
+ if (new)
+ memcpy(new, ptr,
+ ((oldsize < newsize) ? oldsize : newsize));
+ kmem_free(ptr, oldsize);
+ }
+ return new;
+}
+
+void *
+kmem_zone_alloc(kmem_zone_t *zone, int flags)
+{
+ int retries = 0;
+ int lflags = kmem_flags_convert(flags);
+ void *ptr;
+
+ do {
+ ptr = kmem_cache_alloc(zone, lflags);
+ if (ptr || (flags & (KM_MAYFAIL|KM_NOSLEEP)))
+ return ptr;
+ if (!(++retries % 100))
+ printk(KERN_ERR "XFS: possible memory allocation "
+ "deadlock in %s (mode:0x%x)\n",
+ __FUNCTION__, lflags);
+ blk_congestion_wait(WRITE, HZ/50);
+ } while (1);
+}
+
+void *
+kmem_zone_zalloc(kmem_zone_t *zone, int flags)
+{
+ void *ptr;
+
+ ptr = kmem_zone_alloc(zone, flags);
+ if (ptr)
+ memset((char *)ptr, 0, kmem_cache_size(zone));
+ return ptr;
+}
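+
+/*
+ * Usage sketch (illustrative; xfs_foo_t is a hypothetical type): callers
+ * allocate with the KM_* flags from kmem.h and hand the original size
+ * back to kmem_free(), which picks kfree() or vfree() by address range:
+ *
+ *	xfs_foo_t *fp = kmem_zalloc(sizeof(*fp), KM_SLEEP);
+ *	...
+ *	kmem_free(fp, sizeof(*fp));
+ *
+ * With KM_SLEEP the allocation retries forever, so a NULL check is only
+ * required for KM_MAYFAIL or KM_NOSLEEP allocations.
+ */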
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUPPORT_KMEM_H__
+#define __XFS_SUPPORT_KMEM_H__
+
+#include <linux/slab.h>
+#include <linux/sched.h>
+#include <linux/mm.h>
+
+/*
+ * memory management routines
+ */
+#define KM_SLEEP 0x0001
+#define KM_NOSLEEP 0x0002
+#define KM_NOFS 0x0004
+#define KM_MAYFAIL 0x0008
+
+#define kmem_zone kmem_cache_s
+#define kmem_zone_t kmem_cache_t
+
+typedef unsigned long xfs_pflags_t;
+
+#define PFLAGS_TEST_NOIO() (current->flags & PF_NOIO)
+#define PFLAGS_TEST_FSTRANS() (current->flags & PF_FSTRANS)
+
+#define PFLAGS_SET_NOIO() do { \
+ current->flags |= PF_NOIO; \
+} while (0)
+
+#define PFLAGS_CLEAR_NOIO() do { \
+ current->flags &= ~PF_NOIO; \
+} while (0)
+
+/* these could be nested, so we save state */
+#define PFLAGS_SET_FSTRANS(STATEP) do { \
+ *(STATEP) = current->flags; \
+ current->flags |= PF_FSTRANS; \
+} while (0)
+
+#define PFLAGS_CLEAR_FSTRANS(STATEP) do { \
+ *(STATEP) = current->flags; \
+ current->flags &= ~PF_FSTRANS; \
+} while (0)
+
+/* Restore the PF_FSTRANS state to what was saved in STATEP */
+#define PFLAGS_RESTORE_FSTRANS(STATEP) do { \
+ current->flags = ((current->flags & ~PF_FSTRANS) | \
+ (*(STATEP) & PF_FSTRANS)); \
+} while (0)
+
+#define PFLAGS_DUP(OSTATEP, NSTATEP) do { \
+ *(NSTATEP) = *(OSTATEP); \
+} while (0)
+
+static __inline unsigned int kmem_flags_convert(int flags)
+{
+ int lflags = __GFP_NOWARN; /* we'll report problems, if need be */
+
+#ifdef DEBUG
+ if (unlikely(flags & ~(KM_SLEEP|KM_NOSLEEP|KM_NOFS|KM_MAYFAIL))) {
+ printk(KERN_WARNING
+ "XFS: memory allocation with wrong flags (%x)\n", flags);
+ BUG();
+ }
+#endif
+
+ if (flags & KM_NOSLEEP) {
+ lflags |= GFP_ATOMIC; /* |= keeps the __GFP_NOWARN set above */
+ } else {
+ lflags |= GFP_KERNEL;
+
+ /* avoid recursive callbacks to the filesystem during transactions */
+ if (PFLAGS_TEST_FSTRANS() || (flags & KM_NOFS))
+ lflags &= ~__GFP_FS;
+ }
+
+ return lflags;
+}
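+
+/*
+ * For reference, the resulting mapping (derived from the code above):
+ * KM_NOSLEEP -> GFP_ATOMIC | __GFP_NOWARN; KM_SLEEP -> GFP_KERNEL |
+ * __GFP_NOWARN, with __GFP_FS cleared for KM_NOFS or when the task is
+ * inside a transaction (PF_FSTRANS). KM_MAYFAIL does not alter the gfp
+ * mask; it is only consulted by the retry loops in kmem.c.
+ */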
+
+static __inline kmem_zone_t *
+kmem_zone_init(int size, char *zone_name)
+{
+ return kmem_cache_create(zone_name, size, 0, 0, NULL, NULL);
+}
+
+static __inline void
+kmem_zone_free(kmem_zone_t *zone, void *ptr)
+{
+ kmem_cache_free(zone, ptr);
+}
+
+static __inline void
+kmem_zone_destroy(kmem_zone_t *zone)
+{
+ if (zone && kmem_cache_destroy(zone))
+ BUG();
+}
+
+static __inline int
+kmem_zone_shrink(kmem_zone_t *zone)
+{
+ return kmem_cache_shrink(zone);
+}
+
+extern void *kmem_zone_zalloc(kmem_zone_t *, int);
+extern void *kmem_zone_alloc(kmem_zone_t *, int);
+
+extern void *kmem_alloc(size_t, int);
+extern void *kmem_realloc(void *, size_t, size_t, int);
+extern void *kmem_zalloc(size_t, int);
+extern void kmem_free(void *, size_t);
+
+typedef struct shrinker *kmem_shaker_t;
+typedef int (*kmem_shake_func_t)(int, unsigned int);
+
+static __inline kmem_shaker_t
+kmem_shake_register(kmem_shake_func_t sfunc)
+{
+ return set_shrinker(DEFAULT_SEEKS, sfunc);
+}
+
+static __inline void
+kmem_shake_deregister(kmem_shaker_t shrinker)
+{
+ remove_shrinker(shrinker);
+}
+
+static __inline int
+kmem_shake_allow(unsigned int gfp_mask)
+{
+ return (gfp_mask & __GFP_WAIT);
+}
+
+#endif /* __XFS_SUPPORT_KMEM_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_trans.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_alloc.h"
+#include "xfs_btree.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_error.h"
+#include "xfs_rw.h"
+#include "xfs_iomap.h"
+#include <linux/mpage.h>
+#include <linux/writeback.h>
+
+STATIC void xfs_count_page_state(struct page *, int *, int *, int *);
+STATIC void xfs_convert_page(struct inode *, struct page *, xfs_iomap_t *,
+ struct writeback_control *wbc, void *, int, int);
+
+#if defined(XFS_RW_TRACE)
+void
+xfs_page_trace(
+ int tag,
+ struct inode *inode,
+ struct page *page,
+ int mask)
+{
+ xfs_inode_t *ip;
+ bhv_desc_t *bdp;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ loff_t isize = i_size_read(inode);
+ loff_t offset = (loff_t)page->index << PAGE_CACHE_SHIFT; /* widen before shifting */
+ int delalloc = -1, unmapped = -1, unwritten = -1;
+
+ if (page_has_buffers(page))
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+
+ bdp = vn_bhv_lookup(VN_BHV_HEAD(vp), &xfs_vnodeops);
+ ip = XFS_BHVTOI(bdp);
+ if (!ip->i_rwtrace)
+ return;
+
+ ktrace_enter(ip->i_rwtrace,
+ (void *)((unsigned long)tag),
+ (void *)ip,
+ (void *)inode,
+ (void *)page,
+ (void *)((unsigned long)mask),
+ (void *)((unsigned long)((ip->i_d.di_size >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(ip->i_d.di_size & 0xffffffff)),
+ (void *)((unsigned long)((isize >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(isize & 0xffffffff)),
+ (void *)((unsigned long)((offset >> 32) & 0xffffffff)),
+ (void *)((unsigned long)(offset & 0xffffffff)),
+ (void *)((unsigned long)delalloc),
+ (void *)((unsigned long)unmapped),
+ (void *)((unsigned long)unwritten),
+ (void *)NULL,
+ (void *)NULL);
+}
+#else
+#define xfs_page_trace(tag, inode, page, mask)
+#endif
+
+void
+linvfs_unwritten_done(
+ struct buffer_head *bh,
+ int uptodate)
+{
+ xfs_buf_t *pb = (xfs_buf_t *)bh->b_private;
+
+ ASSERT(buffer_unwritten(bh));
+ bh->b_end_io = NULL;
+ clear_buffer_unwritten(bh);
+ if (!uptodate)
+ pagebuf_ioerror(pb, EIO);
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pagebuf_iodone(pb, 1, 1);
+ }
+ end_buffer_async_write(bh, uptodate);
+}
+
+/*
+ * Issue transactions to convert a buffer range from unwritten
+ * to written extents (buffered IO).
+ */
+STATIC void
+linvfs_unwritten_convert(
+ xfs_buf_t *bp)
+{
+ vnode_t *vp = XFS_BUF_FSPRIVATE(bp, vnode_t *);
+ int error;
+
+ BUG_ON(atomic_read(&bp->pb_hold) < 1);
+ VOP_BMAP(vp, XFS_BUF_OFFSET(bp), XFS_BUF_SIZE(bp),
+ BMAPI_UNWRITTEN, NULL, NULL, error);
+ XFS_BUF_SET_FSPRIVATE(bp, NULL);
+ XFS_BUF_CLR_IODONE_FUNC(bp);
+ XFS_BUF_UNDATAIO(bp);
+ iput(LINVFS_GET_IP(vp));
+ pagebuf_iodone(bp, 0, 0);
+}
+
+/*
+ * Issue transactions to convert a buffer range from unwritten
+ * to written extents (direct IO).
+ */
+STATIC void
+linvfs_unwritten_convert_direct(
+ struct inode *inode,
+ loff_t offset,
+ ssize_t size,
+ void *private)
+{
+ ASSERT(!private || inode == (struct inode *)private);
+
+ /* private indicates an unwritten extent lies beneath this IO;
+ * see linvfs_get_block_core.
+ */
+ if (private && size > 0) {
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ VOP_BMAP(vp, offset, size, BMAPI_UNWRITTEN, NULL, NULL, error);
+ }
+}
+
+STATIC int
+xfs_map_blocks(
+ struct inode *inode,
+ loff_t offset,
+ ssize_t count,
+ xfs_iomap_t *mapp,
+ int flags)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error, nmaps = 1;
+
+ VOP_BMAP(vp, offset, count, flags, mapp, &nmaps, error);
+ if (!error && (flags & (BMAPI_WRITE|BMAPI_ALLOCATE)))
+ VMODIFY(vp);
+ return -error;
+}
+
+/*
+ * Find the mapping in the @iomapp array that corresponds to the
+ * given @offset within a @page.
+ */
+STATIC xfs_iomap_t *
+xfs_offset_to_map(
+ struct page *page,
+ xfs_iomap_t *iomapp,
+ unsigned long offset)
+{
+ loff_t full_offset; /* offset from start of file */
+
+ ASSERT(offset < PAGE_CACHE_SIZE);
+
+ full_offset = page->index; /* NB: using 64bit number */
+ full_offset <<= PAGE_CACHE_SHIFT; /* offset from file start */
+ full_offset += offset; /* offset from page start */
+
+ if (full_offset < iomapp->iomap_offset)
+ return NULL;
+ if (iomapp->iomap_offset + (iomapp->iomap_bsize -1) >= full_offset)
+ return iomapp;
+ return NULL;
+}
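+
+/*
+ * Worked example (assumed values): with PAGE_CACHE_SIZE of 4096,
+ * page->index == 3 and offset == 512, full_offset is 3 * 4096 + 512 =
+ * 12800; the iomap is returned iff iomap_offset <= 12800 and
+ * iomap_offset + iomap_bsize - 1 >= 12800.
+ */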
+
+STATIC void
+xfs_map_at_offset(
+ struct page *page,
+ struct buffer_head *bh,
+ unsigned long offset,
+ int block_bits,
+ xfs_iomap_t *iomapp)
+{
+ xfs_daddr_t bn;
+ loff_t delta;
+ int sector_shift;
+
+ ASSERT(!(iomapp->iomap_flags & IOMAP_HOLE));
+ ASSERT(!(iomapp->iomap_flags & IOMAP_DELAY));
+ ASSERT(iomapp->iomap_bn != IOMAP_DADDR_NULL);
+
+ delta = page->index;
+ delta <<= PAGE_CACHE_SHIFT;
+ delta += offset;
+ delta -= iomapp->iomap_offset;
+ delta >>= block_bits;
+
+ sector_shift = block_bits - BBSHIFT;
+ bn = iomapp->iomap_bn >> sector_shift;
+ bn += delta;
+ BUG_ON(!bn && !(iomapp->iomap_flags & IOMAP_REALTIME));
+ ASSERT((bn << sector_shift) >= iomapp->iomap_bn);
+
+ lock_buffer(bh);
+ bh->b_blocknr = bn;
+ bh->b_bdev = iomapp->iomap_target->pbr_bdev;
+ set_buffer_mapped(bh);
+ clear_buffer_delay(bh);
+}
+
+/*
+ * Look for a page at index which is unlocked and contains our
+ * unwritten-extent-flagged buffers at its head. Returns the page
+ * locked and with an extra reference count, and the length of the
+ * unwritten extent component on this page that we can write, in
+ * units of filesystem blocks.
+ */
+STATIC struct page *
+xfs_probe_unwritten_page(
+ struct address_space *mapping,
+ pgoff_t index,
+ xfs_iomap_t *iomapp,
+ xfs_buf_t *pb,
+ unsigned long max_offset,
+ unsigned long *fsbs,
+ unsigned int bbits)
+{
+ struct page *page;
+
+ page = find_trylock_page(mapping, index);
+ if (!page)
+ return NULL;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+ unsigned long p_offset = 0;
+
+ *fsbs = 0;
+ bh = head = page_buffers(page);
+ do {
+ if (!buffer_unwritten(bh) || !buffer_uptodate(bh))
+ break;
+ if (!xfs_offset_to_map(page, iomapp, p_offset))
+ break;
+ if (p_offset >= max_offset)
+ break;
+ xfs_map_at_offset(page, bh, p_offset, bbits, iomapp);
+ set_buffer_unwritten_io(bh);
+ bh->b_private = pb;
+ p_offset += bh->b_size;
+ (*fsbs)++;
+ } while ((bh = bh->b_this_page) != head);
+
+ if (p_offset)
+ return page;
+ }
+
+out:
+ unlock_page(page);
+ return NULL;
+}
+
+/*
+ * Look for a page at index which is unlocked and not mapped
+ * yet - clustering for mmap write case.
+ */
+STATIC unsigned int
+xfs_probe_unmapped_page(
+ struct address_space *mapping,
+ pgoff_t index,
+ unsigned int pg_offset)
+{
+ struct page *page;
+ int ret = 0;
+
+ page = find_trylock_page(mapping, index);
+ if (!page)
+ return 0;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && PageDirty(page)) {
+ if (page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_mapped(bh) || !buffer_uptodate(bh))
+ break;
+ ret += bh->b_size;
+ if (ret >= pg_offset)
+ break;
+ } while ((bh = bh->b_this_page) != head);
+ } else
+ ret = PAGE_CACHE_SIZE;
+ }
+
+out:
+ unlock_page(page);
+ return ret;
+}
+
+STATIC unsigned int
+xfs_probe_unmapped_cluster(
+ struct inode *inode,
+ struct page *startpage,
+ struct buffer_head *bh,
+ struct buffer_head *head)
+{
+ pgoff_t tindex, tlast, tloff;
+ unsigned int pg_offset, len, total = 0;
+ struct address_space *mapping = inode->i_mapping;
+
+ /* First sum forwards in this page */
+ do {
+ if (buffer_mapped(bh))
+ break;
+ total += bh->b_size;
+ } while ((bh = bh->b_this_page) != head);
+
+ /* If we reached the end of the page, sum forwards in
+ * following pages.
+ */
+ if (bh == head) {
+ tlast = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ /* Prune this back to avoid pathological behavior */
+ tloff = min(tlast, startpage->index + 64);
+ for (tindex = startpage->index + 1; tindex < tloff; tindex++) {
+ len = xfs_probe_unmapped_page(mapping, tindex,
+ PAGE_CACHE_SIZE);
+ if (!len)
+ return total;
+ total += len;
+ }
+ if (tindex == tlast &&
+ (pg_offset = i_size_read(inode) & (PAGE_CACHE_SIZE - 1))) {
+ total += xfs_probe_unmapped_page(mapping,
+ tindex, pg_offset);
+ }
+ }
+ return total;
+}
+
+/*
+ * Probe for a given page (index) in the inode and test if it is delayed
+ * and without unwritten buffers. Returns page locked and with an extra
+ * reference count.
+ */
+STATIC struct page *
+xfs_probe_delalloc_page(
+ struct inode *inode,
+ pgoff_t index)
+{
+ struct page *page;
+
+ page = find_trylock_page(inode->i_mapping, index);
+ if (!page)
+ return NULL;
+ if (PageWriteback(page))
+ goto out;
+
+ if (page->mapping && page_has_buffers(page)) {
+ struct buffer_head *bh, *head;
+ int acceptable = 0;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_unwritten(bh)) {
+ acceptable = 0;
+ break;
+ } else if (buffer_delay(bh)) {
+ acceptable = 1;
+ }
+ } while ((bh = bh->b_this_page) != head);
+
+ if (acceptable)
+ return page;
+ }
+
+out:
+ unlock_page(page);
+ return NULL;
+}
+
+STATIC int
+xfs_map_unwritten(
+ struct inode *inode,
+ struct page *start_page,
+ struct buffer_head *head,
+ struct buffer_head *curr,
+ unsigned long p_offset,
+ int block_bits,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ int startio,
+ int all_bh)
+{
+ struct buffer_head *bh = curr;
+ xfs_iomap_t *tmp;
+ xfs_buf_t *pb;
+ loff_t offset, size;
+ unsigned long nblocks = 0;
+
+ offset = start_page->index;
+ offset <<= PAGE_CACHE_SHIFT;
+ offset += p_offset;
+
+ /* get an "empty" pagebuf to manage IO completion
+ * Proper values will be set before returning */
+ pb = pagebuf_lookup(iomapp->iomap_target, 0, 0, 0);
+ if (!pb)
+ return -EAGAIN;
+
+ /* Take a reference to the inode to prevent it from
+ * being reclaimed while we have outstanding unwritten
+ * extent IO on it.
+ */
+ if ((igrab(inode)) != inode) {
+ pagebuf_free(pb);
+ return -EAGAIN;
+ }
+
+ /* Set the count to 1 initially; this stops an I/O
+ * completion callout, which happens before we have started
+ * all the I/O, from calling pagebuf_iodone too early.
+ */
+ atomic_set(&pb->pb_io_remaining, 1);
+
+ /* First map forwards in the page consecutive buffers
+ * covering this unwritten extent
+ */
+ do {
+ if (!buffer_unwritten(bh))
+ break;
+ tmp = xfs_offset_to_map(start_page, iomapp, p_offset);
+ if (!tmp)
+ break;
+ xfs_map_at_offset(start_page, bh, p_offset, block_bits, iomapp);
+ set_buffer_unwritten_io(bh);
+ bh->b_private = pb;
+ p_offset += bh->b_size;
+ nblocks++;
+ } while ((bh = bh->b_this_page) != head);
+
+ atomic_add(nblocks, &pb->pb_io_remaining);
+
+ /* If we reached the end of the page, map forwards in any
+ * following pages which are also covered by this extent.
+ */
+ if (bh == head) {
+ struct address_space *mapping = inode->i_mapping;
+ pgoff_t tindex, tloff, tlast;
+ unsigned long bs;
+ unsigned int pg_offset, bbits = inode->i_blkbits;
+ struct page *page;
+
+ tlast = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ tloff = (iomapp->iomap_offset + iomapp->iomap_bsize) >> PAGE_CACHE_SHIFT;
+ tloff = min(tlast, tloff);
+ for (tindex = start_page->index + 1; tindex < tloff; tindex++) {
+ page = xfs_probe_unwritten_page(mapping,
+ tindex, iomapp, pb,
+ PAGE_CACHE_SIZE, &bs, bbits);
+ if (!page)
+ break;
+ nblocks += bs;
+ atomic_add(bs, &pb->pb_io_remaining);
+ xfs_convert_page(inode, page, iomapp, wbc, pb,
+ startio, all_bh);
+ /* stop if converting the next page might add
+ * enough blocks that the corresponding byte
+ * count won't fit in our ulong page buf length */
+ if (nblocks >= ((ULONG_MAX - PAGE_SIZE) >> block_bits))
+ goto enough;
+ }
+
+ if (tindex == tlast &&
+ (pg_offset = (i_size_read(inode) & (PAGE_CACHE_SIZE - 1)))) {
+ page = xfs_probe_unwritten_page(mapping,
+ tindex, iomapp, pb,
+ pg_offset, &bs, bbits);
+ if (page) {
+ nblocks += bs;
+ atomic_add(bs, &pb->pb_io_remaining);
+ xfs_convert_page(inode, page, iomapp, wbc, pb,
+ startio, all_bh);
+ if (nblocks >= ((ULONG_MAX - PAGE_SIZE) >> block_bits))
+ goto enough;
+ }
+ }
+ }
+
+enough:
+ size = nblocks; /* NB: using 64bit number here */
+ size <<= block_bits; /* convert fsb's to byte range */
+
+ XFS_BUF_DATAIO(pb);
+ XFS_BUF_ASYNC(pb);
+ XFS_BUF_SET_SIZE(pb, size);
+ XFS_BUF_SET_COUNT(pb, size);
+ XFS_BUF_SET_OFFSET(pb, offset);
+ XFS_BUF_SET_FSPRIVATE(pb, LINVFS_GET_VP(inode));
+ XFS_BUF_SET_IODONE_FUNC(pb, linvfs_unwritten_convert);
+
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pagebuf_iodone(pb, 1, 1);
+ }
+
+ return 0;
+}
+
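+/*
+ * Move the page into writeback state and submit the locked buffers
+ * collected in bh_arr. If no buffers were collected, the page is
+ * taken straight back out of writeback and reported as skipped in
+ * the writeback_control accounting.
+ */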
+STATIC void
+xfs_submit_page(
+ struct page *page,
+ struct writeback_control *wbc,
+ struct buffer_head *bh_arr[],
+ int bh_count,
+ int probed_page,
+ int clear_dirty)
+{
+ struct buffer_head *bh;
+ int i;
+
+ BUG_ON(PageWriteback(page));
+ set_page_writeback(page);
+ if (clear_dirty)
+ clear_page_dirty(page);
+ unlock_page(page);
+
+ if (bh_count) {
+ for (i = 0; i < bh_count; i++) {
+ bh = bh_arr[i];
+ mark_buffer_async_write(bh);
+ if (buffer_unwritten(bh))
+ set_buffer_unwritten_io(bh);
+ set_buffer_uptodate(bh);
+ clear_buffer_dirty(bh);
+ }
+
+ for (i = 0; i < bh_count; i++)
+ submit_bh(WRITE, bh_arr[i]);
+
+ if (probed_page && clear_dirty)
+ wbc->nr_to_write--; /* Wrote an "extra" page */
+ } else {
+ end_page_writeback(page);
+ wbc->pages_skipped++; /* We didn't write this page */
+ }
+}
+
+/*
+ * Allocate & map buffers for page given the extent map and write it out.
+ * Except for the original page of a writepage, this is called on
+ * delalloc/unwritten pages only; for the original page it is possible
+ * that the page has no mapping at all.
+ */
+STATIC void
+xfs_convert_page(
+ struct inode *inode,
+ struct page *page,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ void *private,
+ int startio,
+ int all_bh)
+{
+ struct buffer_head *bh_arr[MAX_BUF_PER_PAGE], *bh, *head;
+ xfs_iomap_t *mp = iomapp, *tmp;
+ unsigned long end, offset;
+ pgoff_t end_index;
+ int i = 0, index = 0;
+ int bbits = inode->i_blkbits;
+
+ end_index = i_size_read(inode) >> PAGE_CACHE_SHIFT;
+ if (page->index < end_index) {
+ end = PAGE_CACHE_SIZE;
+ } else {
+ end = i_size_read(inode) & (PAGE_CACHE_SIZE-1);
+ }
+ bh = head = page_buffers(page);
+ do {
+ offset = i << bbits;
+ if (offset >= end)
+ break;
+ if (!(PageUptodate(page) || buffer_uptodate(bh)))
+ continue;
+ if (buffer_mapped(bh) && all_bh &&
+ !(buffer_unwritten(bh) || buffer_delay(bh))) {
+ if (startio) {
+ lock_buffer(bh);
+ bh_arr[index++] = bh;
+ }
+ continue;
+ }
+ tmp = xfs_offset_to_map(page, mp, offset);
+ if (!tmp)
+ continue;
+ ASSERT(!(tmp->iomap_flags & IOMAP_HOLE));
+ ASSERT(!(tmp->iomap_flags & IOMAP_DELAY));
+
+ /* If this is a new unwritten extent buffer (i.e. one
+ * that we haven't passed in private data for), we must
+ * now map this buffer too.
+ */
+ if (buffer_unwritten(bh) && !bh->b_end_io) {
+ ASSERT(tmp->iomap_flags & IOMAP_UNWRITTEN);
+ xfs_map_unwritten(inode, page, head, bh, offset,
+ bbits, tmp, wbc, startio, all_bh);
+ } else if (! (buffer_unwritten(bh) && buffer_locked(bh))) {
+ xfs_map_at_offset(page, bh, offset, bbits, tmp);
+ if (buffer_unwritten(bh)) {
+ set_buffer_unwritten_io(bh);
+ bh->b_private = private;
+ ASSERT(private);
+ }
+ }
+ if (startio) {
+ bh_arr[index++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ } while (i++, (bh = bh->b_this_page) != head);
+
+ if (startio) {
+ xfs_submit_page(page, wbc, bh_arr, index, 1, index == i);
+ } else {
+ unlock_page(page);
+ }
+}
+
+/*
+ * Convert & write out a cluster of pages in the same extent as defined
+ * by mp and following the start page.
+ */
+STATIC void
+xfs_cluster_write(
+ struct inode *inode,
+ pgoff_t tindex,
+ xfs_iomap_t *iomapp,
+ struct writeback_control *wbc,
+ int startio,
+ int all_bh,
+ pgoff_t tlast)
+{
+ struct page *page;
+
+ for (; tindex <= tlast; tindex++) {
+ page = xfs_probe_delalloc_page(inode, tindex);
+ if (!page)
+ break;
+ xfs_convert_page(inode, page, iomapp, wbc, NULL,
+ startio, all_bh);
+ }
+}
+
+/*
+ * Calling this without startio set means we are being asked to make a dirty
+ * page ready for freeing its buffers. When called with startio set then
+ * we are coming from writepage.
+ *
+ * When called with startio set it is important that we write the WHOLE
+ * page if possible.
+ * The bh->b_state flags cannot tell us which blocks, if any, are dirty
+ * due to mmap writes, and therefore bh uptodate is only valid if the
+ * page itself isn't completely uptodate. Some layers may clear the
+ * page dirty flag prior to calling writepage, under the assumption the
+ * entire page will be written out; by not writing out the whole page
+ * the page can be reused before all valid dirty data is written out.
+ * Note: in the case of a page that has been dirtied by mmap write but
+ * only partially set up by block_prepare_write, the buffer states will
+ * not agree and only the ones set up by block_prepare_write and
+ * block_commit_write will have valid state; thus the whole page must
+ * be written out.
+ */
+
+STATIC int
+xfs_page_state_convert(
+ struct inode *inode,
+ struct page *page,
+ struct writeback_control *wbc,
+ int startio,
+ int unmapped) /* also implies page uptodate */
+{
+ struct buffer_head *bh_arr[MAX_BUF_PER_PAGE], *bh, *head;
+ xfs_iomap_t *iomp, iomap;
+ loff_t offset;
+ unsigned long p_offset = 0;
+ __uint64_t end_offset;
+ pgoff_t end_index, last_index, tlast;
+ int len, err, i, cnt = 0, uptodate = 1;
+ int flags = startio ? 0 : BMAPI_TRYLOCK;
+ int page_dirty = 1;
+ int delalloc = 0;
+
+
+ /* Are we off the end of the file ? */
+ offset = i_size_read(inode);
+ end_index = offset >> PAGE_CACHE_SHIFT;
+ last_index = (offset - 1) >> PAGE_CACHE_SHIFT;
+ if (page->index >= end_index) {
+ if ((page->index >= end_index + 1) ||
+ !(i_size_read(inode) & (PAGE_CACHE_SIZE - 1))) {
+ err = -EIO;
+ goto error;
+ }
+ }
+
+ offset = (loff_t)page->index << PAGE_CACHE_SHIFT;
+ end_offset = min_t(unsigned long long,
+ offset + PAGE_CACHE_SIZE, i_size_read(inode));
+
+ bh = head = page_buffers(page);
+ iomp = NULL;
+
+ len = bh->b_size;
+ do {
+ if (offset >= end_offset)
+ break;
+ if (!buffer_uptodate(bh))
+ uptodate = 0;
+ if (!(PageUptodate(page) || buffer_uptodate(bh)) && !startio)
+ continue;
+
+ if (iomp) {
+ iomp = xfs_offset_to_map(page, &iomap, p_offset);
+ }
+
+ /*
+ * First case, map an unwritten extent and prepare for
+ * extent state conversion transaction on completion.
+ */
+ if (buffer_unwritten(bh)) {
+ if (!startio)
+ continue;
+ if (!iomp) {
+ err = xfs_map_blocks(inode, offset, len, &iomap,
+ BMAPI_READ|BMAPI_IGNSTATE);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp) {
+ if (!bh->b_end_io) {
+ err = xfs_map_unwritten(inode, page,
+ head, bh, p_offset,
+ inode->i_blkbits, iomp,
+ wbc, startio, unmapped);
+ if (err) {
+ goto error;
+ }
+ } else {
+ set_bit(BH_Lock, &bh->b_state);
+ }
+ BUG_ON(!buffer_locked(bh));
+ bh_arr[cnt++] = bh;
+ page_dirty = 0;
+ }
+ /*
+ * Second case, allocate space for a delalloc buffer.
+ * We can return EAGAIN here in the release page case.
+ */
+ } else if (buffer_delay(bh)) {
+ if (!iomp) {
+ delalloc = 1;
+ err = xfs_map_blocks(inode, offset, len, &iomap,
+ BMAPI_ALLOCATE | flags);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp) {
+ xfs_map_at_offset(page, bh, p_offset,
+ inode->i_blkbits, iomp);
+ if (startio) {
+ bh_arr[cnt++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ page_dirty = 0;
+ }
+ } else if ((buffer_uptodate(bh) || PageUptodate(page)) &&
+ (unmapped || startio)) {
+
+ if (!buffer_mapped(bh)) {
+ int size;
+
+ /*
+ * Getting here implies an unmapped buffer
+ * was found, and we are in a path where we
+ * need to write the whole page out.
+ */
+ if (!iomp) {
+ size = xfs_probe_unmapped_cluster(
+ inode, page, bh, head);
+ err = xfs_map_blocks(inode, offset,
+ size, &iomap,
+ BMAPI_WRITE|BMAPI_MMAP);
+ if (err) {
+ goto error;
+ }
+ iomp = xfs_offset_to_map(page, &iomap,
+ p_offset);
+ }
+ if (iomp) {
+ xfs_map_at_offset(page,
+ bh, p_offset,
+ inode->i_blkbits, iomp);
+ if (startio) {
+ bh_arr[cnt++] = bh;
+ } else {
+ set_buffer_dirty(bh);
+ unlock_buffer(bh);
+ mark_buffer_dirty(bh);
+ }
+ page_dirty = 0;
+ }
+ } else if (startio) {
+ if (buffer_uptodate(bh) &&
+ !test_and_set_bit(BH_Lock, &bh->b_state)) {
+ bh_arr[cnt++] = bh;
+ page_dirty = 0;
+ }
+ }
+ }
+ } while (offset += len, p_offset += len,
+ ((bh = bh->b_this_page) != head));
+
+ if (uptodate && bh == head)
+ SetPageUptodate(page);
+
+ if (startio)
+ xfs_submit_page(page, wbc, bh_arr, cnt, 0, 1);
+
+ if (iomp) {
+ tlast = (iomp->iomap_offset + iomp->iomap_bsize - 1) >>
+ PAGE_CACHE_SHIFT;
+ if (delalloc && (tlast > last_index))
+ tlast = last_index;
+ xfs_cluster_write(inode, page->index + 1, iomp, wbc,
+ startio, unmapped, tlast);
+ }
+
+ return page_dirty;
+
+error:
+ for (i = 0; i < cnt; i++) {
+ unlock_buffer(bh_arr[i]);
+ }
+
+ /*
+ * If it's delalloc and we have nowhere to put it,
+ * throw it away, unless the lower layers told
+ * us to try again.
+ */
+ if (err != -EAGAIN) {
+ if (!unmapped) {
+ block_invalidatepage(page, 0);
+ }
+ ClearPageUptodate(page);
+ }
+ return err;
+}
+
+STATIC int
+linvfs_get_block_core(
+ struct inode *inode,
+ sector_t iblock,
+ unsigned long blocks,
+ struct buffer_head *bh_result,
+ int create,
+ int direct,
+ bmapi_flags_t flags)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ xfs_iomap_t iomap;
+ int retpbbm = 1;
+ int error;
+ ssize_t size;
+ loff_t offset = (loff_t)iblock << inode->i_blkbits;
+
+ if (blocks)
+ size = blocks << inode->i_blkbits;
+ else
+ size = 1 << inode->i_blkbits;
+
+ VOP_BMAP(vp, offset, size,
+ create ? flags : BMAPI_READ, &iomap, &retpbbm, error);
+ if (error)
+ return -error;
+
+ if (retpbbm == 0)
+ return 0;
+
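+ /*
+ * iomap_bn is expressed in 512-byte basic blocks (BBSHIFT);
+ * convert it to units of the inode's block size, then add the
+ * block offset of this request within the mapping to obtain
+ * the value for bh_result->b_blocknr.
+ */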
+ if (iomap.iomap_bn != IOMAP_DADDR_NULL) {
+ xfs_daddr_t bn;
+ loff_t delta;
+
+ /* For unwritten extents do not report a disk address on
+ * the read case (treat as if we're reading into a hole).
+ */
+ if (create || !(iomap.iomap_flags & IOMAP_UNWRITTEN)) {
+ delta = offset - iomap.iomap_offset;
+ delta >>= inode->i_blkbits;
+
+ bn = iomap.iomap_bn >> (inode->i_blkbits - BBSHIFT);
+ bn += delta;
+ BUG_ON(!bn && !(iomap.iomap_flags & IOMAP_REALTIME));
+ bh_result->b_blocknr = bn;
+ set_buffer_mapped(bh_result);
+ }
+ if (create && (iomap.iomap_flags & IOMAP_UNWRITTEN)) {
+ if (direct)
+ bh_result->b_private = inode;
+ set_buffer_unwritten(bh_result);
+ set_buffer_delay(bh_result);
+ }
+ }
+
+ /* If this is a realtime file, data might be on a new device */
+ bh_result->b_bdev = iomap.iomap_target->pbr_bdev;
+
+ /* If we previously allocated a block out beyond eof and
+ * we are now coming back to use it then we will need to
+ * flag it as new even if it has a disk address.
+ */
+ if (create &&
+ ((!buffer_mapped(bh_result) && !buffer_uptodate(bh_result)) ||
+ (offset >= i_size_read(inode)) || (iomap.iomap_flags & IOMAP_NEW))) {
+ set_buffer_new(bh_result);
+ }
+
+ if (iomap.iomap_flags & IOMAP_DELAY) {
+ BUG_ON(direct);
+ if (create) {
+ set_buffer_mapped(bh_result);
+ set_buffer_uptodate(bh_result);
+ }
+ set_buffer_delay(bh_result);
+ }
+
+ if (blocks) {
+ bh_result->b_size = (ssize_t)min(
+ (loff_t)(iomap.iomap_bsize - iomap.iomap_delta),
+ (loff_t)(blocks << inode->i_blkbits));
+ }
+
+ return 0;
+}
+
+int
+linvfs_get_block(
+ struct inode *inode,
+ sector_t iblock,
+ struct buffer_head *bh_result,
+ int create)
+{
+ return linvfs_get_block_core(inode, iblock, 0, bh_result,
+ create, 0, BMAPI_WRITE);
+}
+
+STATIC int
+linvfs_get_blocks_direct(
+ struct inode *inode,
+ sector_t iblock,
+ unsigned long max_blocks,
+ struct buffer_head *bh_result,
+ int create)
+{
+ return linvfs_get_block_core(inode, iblock, max_blocks, bh_result,
+ create, 1, BMAPI_WRITE|BMAPI_DIRECT);
+}
+
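+/*
+ * For direct I/O, first ask VOP_BMAP with BMAPI_DEVICE for the right
+ * target (the data or realtime device), then hand the request to the
+ * generic blockdev direct I/O code with our own get_blocks and end_io
+ * callbacks so that unwritten extents are converted on completion.
+ */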
+STATIC ssize_t
+linvfs_direct_IO(
+ int rw,
+ struct kiocb *iocb,
+ const struct iovec *iov,
+ loff_t offset,
+ unsigned long nr_segs)
+{
+ struct file *file = iocb->ki_filp;
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ xfs_iomap_t iomap;
+ int maps = 1;
+ int error;
+
+ VOP_BMAP(vp, offset, 0, BMAPI_DEVICE, &iomap, &maps, error);
+ if (error)
+ return -error;
+
+ return blockdev_direct_IO_own_locking(rw, iocb, inode,
+ iomap.iomap_target->pbr_bdev,
+ iov, offset, nr_segs,
+ linvfs_get_blocks_direct,
+ linvfs_unwritten_convert_direct);
+}
+
+
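+/*
+ * bmap cannot represent delayed-allocate extents, so flush any
+ * outstanding delalloc pages to convert them to real extents before
+ * handing off to the generic block mapping code.
+ */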
+STATIC sector_t
+linvfs_bmap(
+ struct address_space *mapping,
+ sector_t block)
+{
+ struct inode *inode = (struct inode *)mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ vn_trace_entry(vp, "linvfs_bmap", (inst_t *)__return_address);
+
+ VOP_RWLOCK(vp, VRWLOCK_READ);
+ VOP_FLUSH_PAGES(vp, (xfs_off_t)0, -1, 0, FI_REMAPF, error);
+ VOP_RWUNLOCK(vp, VRWLOCK_READ);
+ return generic_block_bmap(mapping, block, linvfs_get_block);
+}
+
+STATIC int
+linvfs_readpage(
+ struct file *unused,
+ struct page *page)
+{
+ return mpage_readpage(page, linvfs_get_block);
+}
+
+STATIC int
+linvfs_readpages(
+ struct file *unused,
+ struct address_space *mapping,
+ struct list_head *pages,
+ unsigned nr_pages)
+{
+ return mpage_readpages(mapping, pages, nr_pages, linvfs_get_block);
+}
+
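+/*
+ * Classify the buffer heads on a page: report whether any are
+ * uptodate-but-unmapped, unwritten, or delalloc. An unwritten buffer
+ * without the delay bit set is not counted; its unwritten flag is
+ * simply cleared here.
+ */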
+STATIC void
+xfs_count_page_state(
+ struct page *page,
+ int *delalloc,
+ int *unmapped,
+ int *unwritten)
+{
+ struct buffer_head *bh, *head;
+
+ *delalloc = *unmapped = *unwritten = 0;
+
+ bh = head = page_buffers(page);
+ do {
+ if (buffer_uptodate(bh) && !buffer_mapped(bh))
+ (*unmapped) = 1;
+ else if (buffer_unwritten(bh) && !buffer_delay(bh))
+ clear_buffer_unwritten(bh);
+ else if (buffer_unwritten(bh))
+ (*unwritten) = 1;
+ else if (buffer_delay(bh))
+ (*delalloc) = 1;
+ } while ((bh = bh->b_this_page) != head);
+}
+
+
+/*
+ * writepage: Called from one of two places:
+ *
+ * 1. we are flushing a delalloc buffer head.
+ *
+ * 2. we are writing out a dirty page. Typically the page dirty
+ * state is cleared before we get here. In this case it is
+ * conceivable we have no buffer heads.
+ *
+ * For delalloc space on the page we need to allocate space and
+ * flush it. For unmapped buffer heads on the page we should
+ * allocate space if the page is uptodate. For any other dirty
+ * buffer heads on the page we should flush them.
+ *
+ * If we detect that a transaction would be required to flush
+ * the page, we have to check the process flags first, if we
+ * are already in a transaction or disk I/O during allocations
+ * is off, we need to fail the writepage and redirty the page.
+ */
+
+STATIC int
+linvfs_writepage(
+ struct page *page,
+ struct writeback_control *wbc)
+{
+ int error;
+ int need_trans;
+ int delalloc, unmapped, unwritten;
+ struct inode *inode = page->mapping->host;
+
+ xfs_page_trace(XFS_WRITEPAGE_ENTER, inode, page, 0);
+
+ /*
+ * We need a transaction if:
+ * 1. There are delalloc buffers on the page
+ * 2. The page is uptodate and we have unmapped buffers
+ * 3. The page is uptodate and we have no buffers
+ * 4. There are unwritten buffers on the page
+ */
+
+ if (!page_has_buffers(page)) {
+ unmapped = 1;
+ need_trans = 1;
+ } else {
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+ if (!PageUptodate(page))
+ unmapped = 0;
+ need_trans = delalloc + unmapped + unwritten;
+ }
+
+ /*
+ * If we need a transaction and the process flags say
+ * we are already in a transaction, or no IO is allowed
+ * then mark the page dirty again and leave the page
+ * as is.
+ */
+ if (PFLAGS_TEST_FSTRANS() && need_trans)
+ goto out_fail;
+
+ /*
+ * Delay hooking up buffer heads until we have
+ * made our go/no-go decision.
+ */
+ if (!page_has_buffers(page))
+ create_empty_buffers(page, 1 << inode->i_blkbits, 0);
+
+ /*
+ * Convert delayed allocate, unwritten or unmapped space
+ * to real space and flush out to disk.
+ */
+ error = xfs_page_state_convert(inode, page, wbc, 1, unmapped);
+ if (error == -EAGAIN)
+ goto out_fail;
+ if (unlikely(error < 0))
+ goto out_unlock;
+
+ return 0;
+
+out_fail:
+ redirty_page_for_writepage(wbc, page);
+ unlock_page(page);
+ return 0;
+out_unlock:
+ unlock_page(page);
+ return error;
+}
+
+/*
+ * Called to move a page into cleanable state - and from there
+ * to be released. Possibly the page is already clean. We always
+ * have buffer heads in this call.
+ *
+ * Returns 0 if the page is ok to release, 1 otherwise.
+ *
+ * Possible scenarios are:
+ *
+ * 1. We are being called to release a page which has been written
+ * to via regular I/O. Buffer heads will be dirty and possibly
+ * delalloc. If there are no delalloc buffer heads in this case
+ * then we can just return zero.
+ *
+ * 2. We are called to release a page which has been written via
+ * mmap; all we need to do is ensure there is no delalloc state
+ * in the buffer heads. If there is none we can let the caller
+ * free them, and we will come back later via writepage.
+ */
+STATIC int
+linvfs_release_page(
+ struct page *page,
+ int gfp_mask)
+{
+ struct inode *inode = page->mapping->host;
+ int dirty, delalloc, unmapped, unwritten;
+ struct writeback_control wbc = {
+ .sync_mode = WB_SYNC_ALL,
+ .nr_to_write = 1,
+ };
+
+ xfs_page_trace(XFS_RELEASEPAGE_ENTER, inode, page, gfp_mask);
+
+ xfs_count_page_state(page, &delalloc, &unmapped, &unwritten);
+ if (!delalloc && !unwritten)
+ goto free_buffers;
+
+ if (!(gfp_mask & __GFP_FS))
+ return 0;
+
+ /* If we are already inside a transaction or the thread cannot
+ * do I/O, we cannot release this page.
+ */
+ if (PFLAGS_TEST_FSTRANS())
+ return 0;
+
+ /*
+ * Convert delalloc space to real space, do not flush the
+ * data out to disk, that will be done by the caller.
+ * Never need to allocate space here - we will always
+ * come back to writepage in that case.
+ */
+ dirty = xfs_page_state_convert(inode, page, &wbc, 0, 0);
+ if (dirty == 0 && !unwritten)
+ goto free_buffers;
+ return 0;
+
+free_buffers:
+ return try_to_free_buffers(page);
+}
+
+STATIC int
+linvfs_prepare_write(
+ struct file *file,
+ struct page *page,
+ unsigned int from,
+ unsigned int to)
+{
+ return block_prepare_write(page, from, to, linvfs_get_block);
+}
+
+struct address_space_operations linvfs_aops = {
+ .readpage = linvfs_readpage,
+ .readpages = linvfs_readpages,
+ .writepage = linvfs_writepage,
+ .sync_page = block_sync_page,
+ .releasepage = linvfs_release_page,
+ .prepare_write = linvfs_prepare_write,
+ .commit_write = generic_commit_write,
+ .bmap = linvfs_bmap,
+ .direct_IO = linvfs_direct_IO,
+};
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * The xfs_buf.c code provides an abstract buffer cache model on top
+ * of the Linux page cache. Cached metadata blocks for a file system
+ * are hashed to the inode for the block device. xfs_buf.c assembles
+ * buffers (xfs_buf_t) on demand to aggregate such cached pages for I/O.
+ *
+ * Written by Steve Lord, Jim Mostek, Russell Cattelan
+ * and Rajagopal Ananthanarayanan ("ananth") at SGI.
+ *
+ */
+
+#include <linux/stddef.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/pagemap.h>
+#include <linux/init.h>
+#include <linux/vmalloc.h>
+#include <linux/bio.h>
+#include <linux/sysctl.h>
+#include <linux/proc_fs.h>
+#include <linux/workqueue.h>
+#include <linux/suspend.h>
+#include <linux/percpu.h>
+#include <linux/blkdev.h>
+
+#include "xfs_linux.h"
+
+/*
+ * File wide globals
+ */
+
+STATIC kmem_cache_t *pagebuf_cache;
+STATIC kmem_shaker_t pagebuf_shake;
+STATIC int pagebuf_daemon_wakeup(int, unsigned int);
+STATIC void pagebuf_delwri_queue(xfs_buf_t *, int);
+STATIC struct workqueue_struct *pagebuf_logio_workqueue;
+STATIC struct workqueue_struct *pagebuf_dataio_workqueue;
+
+/*
+ * Pagebuf debugging
+ */
+
+#ifdef PAGEBUF_TRACE
+void
+pagebuf_trace(
+ xfs_buf_t *pb,
+ char *id,
+ void *data,
+ void *ra)
+{
+ ktrace_enter(pagebuf_trace_buf,
+ pb, id,
+ (void *)(unsigned long)pb->pb_flags,
+ (void *)(unsigned long)pb->pb_hold.counter,
+ (void *)(unsigned long)pb->pb_sema.count.counter,
+ (void *)current,
+ data, ra,
+ (void *)(unsigned long)((pb->pb_file_offset>>32) & 0xffffffff),
+ (void *)(unsigned long)(pb->pb_file_offset & 0xffffffff),
+ (void *)(unsigned long)pb->pb_buffer_length,
+ NULL, NULL, NULL, NULL, NULL);
+}
+ktrace_t *pagebuf_trace_buf;
+#define PAGEBUF_TRACE_SIZE 4096
+#define PB_TRACE(pb, id, data) \
+ pagebuf_trace(pb, id, (void *)data, (void *)__builtin_return_address(0))
+#else
+#define PB_TRACE(pb, id, data) do { } while (0)
+#endif
+
+#ifdef PAGEBUF_LOCK_TRACKING
+# define PB_SET_OWNER(pb) ((pb)->pb_last_holder = current->pid)
+# define PB_CLEAR_OWNER(pb) ((pb)->pb_last_holder = -1)
+# define PB_GET_OWNER(pb) ((pb)->pb_last_holder)
+#else
+# define PB_SET_OWNER(pb) do { } while (0)
+# define PB_CLEAR_OWNER(pb) do { } while (0)
+# define PB_GET_OWNER(pb) do { } while (0)
+#endif
+
+/*
+ * Pagebuf allocation / freeing.
+ */
+
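+/*
+ * Translate pagebuf flags into allocation flags: readahead requests
+ * should fail fast rather than retry (__GFP_NORETRY), callers that
+ * must not recurse into the filesystem get GFP_NOFS/KM_NOFS, and
+ * everyone else may block normally (GFP_KERNEL/KM_SLEEP).
+ */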
+#define pb_to_gfp(flags) \
+ ((((flags) & PBF_READ_AHEAD) ? __GFP_NORETRY : \
+ ((flags) & PBF_DONT_BLOCK) ? GFP_NOFS : GFP_KERNEL) | __GFP_NOWARN)
+
+#define pb_to_km(flags) \
+ (((flags) & PBF_DONT_BLOCK) ? KM_NOFS : KM_SLEEP)
+
+
+#define pagebuf_allocate(flags) \
+ kmem_zone_alloc(pagebuf_cache, pb_to_km(flags))
+#define pagebuf_deallocate(pb) \
+ kmem_zone_free(pagebuf_cache, (pb))
+
+/*
+ * Pagebuf hashing
+ */
+
+#define NBITS 8
+#define NHASH (1<<NBITS)
+
+typedef struct {
+ struct list_head pb_hash;
+ spinlock_t pb_hash_lock;
+} pb_hash_t;
+
+STATIC pb_hash_t pbhash[NHASH];
+#define pb_hash(pb) &pbhash[pb->pb_hash_index]
+
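+/*
+ * Hash a (block device, offset) pair to one of the NHASH buckets:
+ * the offset is reduced to 512-byte block units, xor'd with the
+ * device pointer, and then folded NBITS bits at a time. For example,
+ * with NBITS = 8 a 32-bit value is folded down by xor'ing its four
+ * bytes together (illustrative; the loop runs until base is
+ * exhausted).
+ */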
+STATIC int
+_bhash(
+ struct block_device *bdev,
+ loff_t base)
+{
+ int bit, hval;
+
+ base >>= 9;
+ base ^= (unsigned long)bdev / L1_CACHE_BYTES;
+ for (bit = hval = 0; base && bit < sizeof(base) * 8; bit += NBITS) {
+ hval ^= (int)base & (NHASH-1);
+ base >>= NBITS;
+ }
+ return hval;
+}
+
+/*
+ * Mapping of multi-page buffers into contiguous virtual space
+ */
+
+typedef struct a_list {
+ void *vm_addr;
+ struct a_list *next;
+} a_list_t;
+
+STATIC a_list_t *as_free_head;
+STATIC int as_list_len;
+STATIC spinlock_t as_lock = SPIN_LOCK_UNLOCKED;
+
+/*
+ * Try to batch vunmaps because they are costly.
+ */
+STATIC void
+free_address(
+ void *addr)
+{
+ a_list_t *aentry;
+
+ aentry = kmalloc(sizeof(a_list_t), GFP_ATOMIC);
+ if (aentry) {
+ spin_lock(&as_lock);
+ aentry->next = as_free_head;
+ aentry->vm_addr = addr;
+ as_free_head = aentry;
+ as_list_len++;
+ spin_unlock(&as_lock);
+ } else {
+ vunmap(addr);
+ }
+}
+
+STATIC void
+purge_addresses(void)
+{
+ a_list_t *aentry, *old;
+
+ if (as_free_head == NULL)
+ return;
+
+ spin_lock(&as_lock);
+ aentry = as_free_head;
+ as_free_head = NULL;
+ as_list_len = 0;
+ spin_unlock(&as_lock);
+
+ while ((old = aentry) != NULL) {
+ vunmap(aentry->vm_addr);
+ aentry = aentry->next;
+ kfree(old);
+ }
+}
+
+/*
+ * Internal pagebuf object manipulation
+ */
+
+STATIC void
+_pagebuf_initialize(
+ xfs_buf_t *pb,
+ xfs_buftarg_t *target,
+ loff_t range_base,
+ size_t range_length,
+ page_buf_flags_t flags)
+{
+ /*
+ * We don't want certain flags to appear in pb->pb_flags.
+ */
+ flags &= ~(PBF_LOCK|PBF_MAPPED|PBF_DONT_BLOCK|PBF_READ_AHEAD);
+
+ memset(pb, 0, sizeof(xfs_buf_t));
+ atomic_set(&pb->pb_hold, 1);
+ init_MUTEX_LOCKED(&pb->pb_iodonesema);
+ INIT_LIST_HEAD(&pb->pb_list);
+ INIT_LIST_HEAD(&pb->pb_hash_list);
+ init_MUTEX_LOCKED(&pb->pb_sema); /* held, no waiters */
+ PB_SET_OWNER(pb);
+ pb->pb_target = target;
+ pb->pb_file_offset = range_base;
+ /*
+ * Set buffer_length and count_desired to the same value initially.
+ * I/O routines should use count_desired, which will be the same in
+ * most cases but may be reset (e.g. XFS recovery).
+ */
+ pb->pb_buffer_length = pb->pb_count_desired = range_length;
+ pb->pb_flags = flags | PBF_NONE;
+ pb->pb_bn = XFS_BUF_DADDR_NULL;
+ atomic_set(&pb->pb_pin_count, 0);
+ init_waitqueue_head(&pb->pb_waiters);
+
+ XFS_STATS_INC(pb_create);
+ PB_TRACE(pb, "initialize", target);
+}
+
+/*
+ * Allocate a page array capable of holding a specified number
+ * of pages, and point the page buf at it.
+ */
+STATIC int
+_pagebuf_get_pages(
+ xfs_buf_t *pb,
+ int page_count,
+ page_buf_flags_t flags)
+{
+ /* Make sure that we have a page list */
+ if (pb->pb_pages == NULL) {
+ pb->pb_offset = page_buf_poff(pb->pb_file_offset);
+ pb->pb_page_count = page_count;
+ if (page_count <= PB_PAGES) {
+ pb->pb_pages = pb->pb_page_array;
+ } else {
+ pb->pb_pages = kmem_alloc(sizeof(struct page *) *
+ page_count, pb_to_km(flags));
+ if (pb->pb_pages == NULL)
+ return -ENOMEM;
+ }
+ memset(pb->pb_pages, 0, sizeof(struct page *) * page_count);
+ }
+ return 0;
+}
+
+/*
+ * Frees pb_pages if it was malloced.
+ */
+STATIC void
+_pagebuf_free_pages(
+ xfs_buf_t *bp)
+{
+ if (bp->pb_pages != bp->pb_page_array) {
+ kmem_free(bp->pb_pages,
+ bp->pb_page_count * sizeof(struct page *));
+ }
+}
+
+/*
+ * Releases the specified buffer.
+ *
+ * The modification state of any associated pages is left unchanged.
+ * The buffer must not be on any hash - use pagebuf_rele instead for
+ * hashed and refcounted buffers.
+ */
+void
+pagebuf_free(
+ xfs_buf_t *bp)
+{
+ PB_TRACE(bp, "free", 0);
+
+ ASSERT(list_empty(&bp->pb_hash_list));
+
+ if (bp->pb_flags & _PBF_PAGE_CACHE) {
+ uint i;
+
+ if ((bp->pb_flags & PBF_MAPPED) && (bp->pb_page_count > 1))
+ free_address(bp->pb_addr - bp->pb_offset);
+
+ for (i = 0; i < bp->pb_page_count; i++)
+ page_cache_release(bp->pb_pages[i]);
+ _pagebuf_free_pages(bp);
+ } else if (bp->pb_flags & _PBF_KMEM_ALLOC) {
+ /*
+ * XXX(hch): bp->pb_count_desired might be incorrect (see
+ * pagebuf_associate_memory for details), but fortunately
+ * the Linux version of kmem_free ignores the len argument..
+ */
+ kmem_free(bp->pb_addr, bp->pb_count_desired);
+ _pagebuf_free_pages(bp);
+ }
+
+ pagebuf_deallocate(bp);
+}
+
+/*
+ * Finds all pages for buffer in question and builds its page list.
+ */
+STATIC int
+_pagebuf_lookup_pages(
+ xfs_buf_t *bp,
+ uint flags)
+{
+ struct address_space *mapping = bp->pb_target->pbr_mapping;
+ unsigned int sectorshift = bp->pb_target->pbr_sshift;
+ size_t blocksize = bp->pb_target->pbr_bsize;
+ size_t size = bp->pb_count_desired;
+ size_t nbytes, offset;
+ int gfp_mask = pb_to_gfp(flags);
+ unsigned short page_count, i;
+ pgoff_t first;
+ loff_t end;
+ int error;
+
+ end = bp->pb_file_offset + bp->pb_buffer_length;
+ page_count = page_buf_btoc(end) - page_buf_btoct(bp->pb_file_offset);
+
+ error = _pagebuf_get_pages(bp, page_count, flags);
+ if (unlikely(error))
+ return error;
+ bp->pb_flags |= _PBF_PAGE_CACHE;
+
+ offset = bp->pb_offset;
+ first = bp->pb_file_offset >> PAGE_CACHE_SHIFT;
+
+ for (i = 0; i < bp->pb_page_count; i++) {
+ struct page *page;
+ uint retries = 0;
+
+ retry:
+ page = find_or_create_page(mapping, first + i, gfp_mask);
+ if (unlikely(page == NULL)) {
+ if (flags & PBF_READ_AHEAD) {
+ bp->pb_page_count = i;
+ for (i = 0; i < bp->pb_page_count; i++)
+ unlock_page(bp->pb_pages[i]);
+ return -ENOMEM;
+ }
+
+ /*
+ * This could deadlock.
+ *
+ * But until all the XFS lowlevel code is revamped to
+ * handle buffer allocation failures we can't do much.
+ */
+ if (!(++retries % 100))
+ printk(KERN_ERR
+ "XFS: possible memory allocation "
+ "deadlock in %s (mode:0x%x)\n",
+ __FUNCTION__, gfp_mask);
+
+ XFS_STATS_INC(pb_page_retries);
+ pagebuf_daemon_wakeup(0, gfp_mask);
+ blk_congestion_wait(WRITE, HZ/50);
+ goto retry;
+ }
+
+ XFS_STATS_INC(pb_page_found);
+
+ nbytes = min_t(size_t, size, PAGE_CACHE_SIZE - offset);
+ size -= nbytes;
+
+ if (!PageUptodate(page)) {
+ page_count--;
+ if (blocksize == PAGE_CACHE_SIZE) {
+ if (flags & PBF_READ)
+ bp->pb_locked = 1;
+ } else if (!PagePrivate(page)) {
+ unsigned long j, range;
+
+ /*
+ * In this case page->private holds a bitmap
+ * of uptodate sectors within the page
+ */
+ ASSERT(blocksize < PAGE_CACHE_SIZE);
+ range = (offset + nbytes) >> sectorshift;
+ for (j = offset >> sectorshift; j < range; j++)
+ if (!test_bit(j, &page->private))
+ break;
+ if (j == range)
+ page_count++;
+ }
+ }
+
+ bp->pb_pages[i] = page;
+ offset = 0;
+ }
+
+ if (!bp->pb_locked) {
+ for (i = 0; i < bp->pb_page_count; i++)
+ unlock_page(bp->pb_pages[i]);
+ }
+
+ if (page_count) {
+ /* if we have any uptodate pages, mark that in the buffer */
+ bp->pb_flags &= ~PBF_NONE;
+
+ /* if some pages aren't uptodate, mark that in the buffer */
+ if (page_count != bp->pb_page_count)
+ bp->pb_flags |= PBF_PARTIAL;
+ }
+
+ PB_TRACE(bp, "lookup_pages", (long)page_count);
+ return error;
+}
+
+/*
+ * Map buffer into kernel address-space if necessary.
+ */
+STATIC int
+_pagebuf_map_pages(
+ xfs_buf_t *bp,
+ uint flags)
+{
+ /* A single page buffer is always mappable */
+ if (bp->pb_page_count == 1) {
+ bp->pb_addr = page_address(bp->pb_pages[0]) + bp->pb_offset;
+ bp->pb_flags |= PBF_MAPPED;
+ } else if (flags & PBF_MAPPED) {
+ if (as_list_len > 64)
+ purge_addresses();
+ bp->pb_addr = vmap(bp->pb_pages, bp->pb_page_count,
+ VM_MAP, PAGE_KERNEL);
+ if (unlikely(bp->pb_addr == NULL))
+ return -ENOMEM;
+ bp->pb_addr += bp->pb_offset;
+ bp->pb_flags |= PBF_MAPPED;
+ }
+
+ return 0;
+}
+
+/*
+ * Finding and Reading Buffers
+ */
+
+/*
+ * _pagebuf_find
+ *
+ * Looks up, and creates if absent, a lockable buffer for
+ * a given range of an inode. The buffer is returned
+ * locked. If other overlapping buffers exist, they are
+ * released before the new buffer is created and locked,
+ * which may imply that this call will block until those buffers
+ * are unlocked. No I/O is implied by this call.
+ */
+xfs_buf_t *
+_pagebuf_find( /* find buffer for block */
+ xfs_buftarg_t *target,/* target for block */
+ loff_t ioff, /* starting offset of range */
+ size_t isize, /* length of range */
+ page_buf_flags_t flags, /* PBF_TRYLOCK */
+ xfs_buf_t *new_pb)/* newly allocated buffer */
+{
+ loff_t range_base;
+ size_t range_length;
+ int hval;
+ pb_hash_t *h;
+ xfs_buf_t *pb, *n;
+ int not_locked;
+
+ range_base = (ioff << BBSHIFT);
+ range_length = (isize << BBSHIFT);
+
+ /* Ensure we never do IOs smaller than the sector size */
+ BUG_ON(range_length < (1 << target->pbr_sshift));
+
+ /* Ensure we never do IOs that are not sector aligned */
+ BUG_ON(range_base & (loff_t)target->pbr_smask);
+
+ hval = _bhash(target->pbr_bdev, range_base);
+ h = &pbhash[hval];
+
+ spin_lock(&h->pb_hash_lock);
+ list_for_each_entry_safe(pb, n, &h->pb_hash, pb_hash_list) {
+ if (pb->pb_target == target &&
+ pb->pb_file_offset == range_base &&
+ pb->pb_buffer_length == range_length) {
+ /* If we look at something bring it to the
+ * front of the list for next time
+ */
+ atomic_inc(&pb->pb_hold);
+ list_move(&pb->pb_hash_list, &h->pb_hash);
+ goto found;
+ }
+ }
+
+ /* No match found */
+ if (new_pb) {
+ _pagebuf_initialize(new_pb, target, range_base,
+ range_length, flags);
+ new_pb->pb_hash_index = hval;
+ list_add(&new_pb->pb_hash_list, &h->pb_hash);
+ } else {
+ XFS_STATS_INC(pb_miss_locked);
+ }
+
+ spin_unlock(&h->pb_hash_lock);
+ return (new_pb);
+
+found:
+ spin_unlock(&h->pb_hash_lock);
+
+ /* Attempt to get the semaphore without sleeping,
+ * if this does not work then we need to drop the
+ * spinlock and do a hard attempt on the semaphore.
+ */
+ not_locked = down_trylock(&pb->pb_sema);
+ if (not_locked) {
+ if (!(flags & PBF_TRYLOCK)) {
+ /* wait for buffer ownership */
+ PB_TRACE(pb, "get_lock", 0);
+ pagebuf_lock(pb);
+ XFS_STATS_INC(pb_get_locked_waited);
+ } else {
+ /* We asked for a trylock and failed; no need
+ * to look at file offset and length here, as we
+ * know that this pagebuf at least overlaps our
+ * pagebuf and is locked. Therefore our buffer
+ * either does not exist, or is this buffer.
+ */
+
+ pagebuf_rele(pb);
+ XFS_STATS_INC(pb_busy_locked);
+ return (NULL);
+ }
+ } else {
+ /* trylock worked */
+ PB_SET_OWNER(pb);
+ }
+
+ if (pb->pb_flags & PBF_STALE)
+ pb->pb_flags &= PBF_MAPPED;
+ PB_TRACE(pb, "got_lock", 0);
+ XFS_STATS_INC(pb_get_locked);
+ return (pb);
+}
+
+/*
+ * xfs_buf_get_flags assembles a buffer covering the specified range.
+ *
+ * Storage in memory for all portions of the buffer will be allocated,
+ * although backing storage may not be.
+ */
+xfs_buf_t *
+xfs_buf_get_flags( /* allocate a buffer */
+ xfs_buftarg_t *target,/* target for buffer */
+ loff_t ioff, /* starting offset of range */
+ size_t isize, /* length of range */
+ page_buf_flags_t flags) /* PBF_TRYLOCK */
+{
+ xfs_buf_t *pb, *new_pb;
+ int error = 0, i;
+
+ new_pb = pagebuf_allocate(flags);
+ if (unlikely(!new_pb))
+ return NULL;
+
+ pb = _pagebuf_find(target, ioff, isize, flags, new_pb);
+ if (pb == new_pb) {
+ error = _pagebuf_lookup_pages(pb, flags);
+ if (error)
+ goto no_buffer;
+ } else {
+ pagebuf_deallocate(new_pb);
+ if (unlikely(pb == NULL))
+ return NULL;
+ }
+
+ for (i = 0; i < pb->pb_page_count; i++)
+ mark_page_accessed(pb->pb_pages[i]);
+
+ if (!(pb->pb_flags & PBF_MAPPED)) {
+ error = _pagebuf_map_pages(pb, flags);
+ if (unlikely(error)) {
+ printk(KERN_WARNING "%s: failed to map pages\n",
+ __FUNCTION__);
+ goto no_buffer;
+ }
+ }
+
+ XFS_STATS_INC(pb_get);
+
+ /*
+ * Always fill in the block number now, the mapped cases can do
+ * their own overlay of this later.
+ */
+ pb->pb_bn = ioff;
+ pb->pb_count_desired = pb->pb_buffer_length;
+
+ PB_TRACE(pb, "get", (unsigned long)flags);
+ return pb;
+
+ no_buffer:
+ if (flags & (PBF_LOCK | PBF_TRYLOCK))
+ pagebuf_unlock(pb);
+ pagebuf_rele(pb);
+ return NULL;
+}
+
+xfs_buf_t *
+xfs_buf_read_flags(
+ xfs_buftarg_t *target,
+ loff_t ioff,
+ size_t isize,
+ page_buf_flags_t flags)
+{
+ xfs_buf_t *pb;
+
+ flags |= PBF_READ;
+
+ pb = xfs_buf_get_flags(target, ioff, isize, flags);
+ if (pb) {
+ if (PBF_NOT_DONE(pb)) {
+ PB_TRACE(pb, "read", (unsigned long)flags);
+ XFS_STATS_INC(pb_get_read);
+ pagebuf_iostart(pb, flags);
+ } else if (flags & PBF_ASYNC) {
+ PB_TRACE(pb, "read_async", (unsigned long)flags);
+ /*
+ * Read ahead call which is already satisfied,
+ * drop the buffer
+ */
+ goto no_buffer;
+ } else {
+ PB_TRACE(pb, "read_done", (unsigned long)flags);
+ /* We do not want read in the flags */
+ pb->pb_flags &= ~PBF_READ;
+ }
+ }
+
+ return pb;
+
+ no_buffer:
+ if (flags & (PBF_LOCK | PBF_TRYLOCK))
+ pagebuf_unlock(pb);
+ pagebuf_rele(pb);
+ return NULL;
+}
+
+/*
+ * Create a skeletal pagebuf (no pages associated with it).
+ */
+xfs_buf_t *
+pagebuf_lookup(
+ xfs_buftarg_t *target,
+ loff_t ioff,
+ size_t isize,
+ page_buf_flags_t flags)
+{
+ xfs_buf_t *pb;
+
+ pb = pagebuf_allocate(flags);
+ if (pb) {
+ _pagebuf_initialize(pb, target, ioff, isize, flags);
+ }
+ return pb;
+}
+
+/*
+ * If we are not low on memory then do the readahead in a deadlock
+ * safe manner.
+ */
+void
+pagebuf_readahead(
+ xfs_buftarg_t *target,
+ loff_t ioff,
+ size_t isize,
+ page_buf_flags_t flags)
+{
+ struct backing_dev_info *bdi;
+
+ bdi = target->pbr_mapping->backing_dev_info;
+ if (bdi_read_congested(bdi))
+ return;
+ if (bdi_write_congested(bdi))
+ return;
+
+ flags |= (PBF_TRYLOCK|PBF_ASYNC|PBF_READ_AHEAD);
+ xfs_buf_read_flags(target, ioff, isize, flags);
+}
+
+xfs_buf_t *
+pagebuf_get_empty(
+ size_t len,
+ xfs_buftarg_t *target)
+{
+ xfs_buf_t *pb;
+
+ pb = pagebuf_allocate(0);
+ if (pb)
+ _pagebuf_initialize(pb, target, 0, len, 0);
+ return pb;
+}
+
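+/*
+ * Translate a kernel virtual address to its struct page, handling
+ * both directly-mapped addresses (virt_to_page) and addresses that
+ * live in the vmalloc area (vmalloc_to_page).
+ */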
+static inline struct page *
+mem_to_page(
+ void *addr)
+{
+ if (((unsigned long)addr < VMALLOC_START) ||
+ ((unsigned long)addr >= VMALLOC_END)) {
+ return virt_to_page(addr);
+ } else {
+ return vmalloc_to_page(addr);
+ }
+}
+
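+/*
+ * Attach caller-supplied memory to a pagebuf instead of page cache
+ * pages. The page count must cover a possibly misaligned start: for
+ * example, 8192 bytes starting 512 bytes into a 4096-byte page span
+ * three pages, hence the extra increment when the start offset is
+ * non-zero and len exceeds one page.
+ */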
+int
+pagebuf_associate_memory(
+ xfs_buf_t *pb,
+ void *mem,
+ size_t len)
+{
+ int rval;
+ int i = 0;
+ size_t ptr;
+ size_t end, end_cur;
+ off_t offset;
+ int page_count;
+
+ page_count = PAGE_CACHE_ALIGN(len) >> PAGE_CACHE_SHIFT;
+ offset = (off_t) mem - ((off_t)mem & PAGE_CACHE_MASK);
+ if (offset && (len > PAGE_CACHE_SIZE))
+ page_count++;
+
+ /* Free any previous set of page pointers */
+ if (pb->pb_pages)
+ _pagebuf_free_pages(pb);
+
+ pb->pb_pages = NULL;
+ pb->pb_addr = mem;
+
+ rval = _pagebuf_get_pages(pb, page_count, 0);
+ if (rval)
+ return rval;
+
+ pb->pb_offset = offset;
+ ptr = (size_t) mem & PAGE_CACHE_MASK;
+ end = PAGE_CACHE_ALIGN((size_t) mem + len);
+ end_cur = end;
+ /* set up first page */
+ pb->pb_pages[0] = mem_to_page(mem);
+
+ ptr += PAGE_CACHE_SIZE;
+ pb->pb_page_count = ++i;
+ while (ptr < end) {
+ pb->pb_pages[i] = mem_to_page((void *)ptr);
+ pb->pb_page_count = ++i;
+ ptr += PAGE_CACHE_SIZE;
+ }
+ pb->pb_locked = 0;
+
+ pb->pb_count_desired = pb->pb_buffer_length = len;
+ pb->pb_flags |= PBF_MAPPED;
+
+ return 0;
+}
+
+xfs_buf_t *
+pagebuf_get_no_daddr(
+ size_t len,
+ xfs_buftarg_t *target)
+{
+ size_t malloc_len = len;
+ xfs_buf_t *bp;
+ void *data;
+ int error;
+
+ bp = pagebuf_allocate(0);
+ if (unlikely(bp == NULL))
+ goto fail;
+ _pagebuf_initialize(bp, target, 0, len, PBF_FORCEIO);
+
+ try_again:
+ data = kmem_alloc(malloc_len, KM_SLEEP | KM_MAYFAIL);
+ if (unlikely(data == NULL))
+ goto fail_free_buf;
+
+ /* check whether alignment matches.. */
+ if ((__psunsigned_t)data !=
+ ((__psunsigned_t)data & ~target->pbr_smask)) {
+ /* .. else double the size and try again */
+ kmem_free(data, malloc_len);
+ malloc_len <<= 1;
+ goto try_again;
+ }
+
+ error = pagebuf_associate_memory(bp, data, len);
+ if (error)
+ goto fail_free_mem;
+ bp->pb_flags |= _PBF_KMEM_ALLOC;
+
+ pagebuf_unlock(bp);
+
+ PB_TRACE(bp, "no_daddr", data);
+ return bp;
+ fail_free_mem:
+ kmem_free(data, malloc_len);
+ fail_free_buf:
+ pagebuf_free(bp);
+ fail:
+ return NULL;
+}
+
+/*
+ * pagebuf_hold
+ *
+ * Increment reference count on buffer, to hold the buffer concurrently
+ * with another thread which may release (free) the buffer asynchronously.
+ *
+ * Must hold the buffer already to call this function.
+ */
+void
+pagebuf_hold(
+ xfs_buf_t *pb)
+{
+ atomic_inc(&pb->pb_hold);
+ PB_TRACE(pb, "hold", 0);
+}
+
+/*
+ * pagebuf_rele
+ *
+ * pagebuf_rele releases a hold on the specified buffer. If the
+ * hold count is 1, pagebuf_rele calls pagebuf_free.
+ */
+void
+pagebuf_rele(
+ xfs_buf_t *pb)
+{
+ pb_hash_t *hash = pb_hash(pb);
+
+ PB_TRACE(pb, "rele", pb->pb_relse);
+
+ if (atomic_dec_and_lock(&pb->pb_hold, &hash->pb_hash_lock)) {
+ int do_free = 1;
+
+ if (pb->pb_relse) {
+ atomic_inc(&pb->pb_hold);
+ spin_unlock(&hash->pb_hash_lock);
+ (*(pb->pb_relse)) (pb);
+ spin_lock(&hash->pb_hash_lock);
+ do_free = 0;
+ }
+
+ if (pb->pb_flags & PBF_DELWRI) {
+ pb->pb_flags |= PBF_ASYNC;
+ atomic_inc(&pb->pb_hold);
+ pagebuf_delwri_queue(pb, 0);
+ do_free = 0;
+ } else if (pb->pb_flags & PBF_FS_MANAGED) {
+ do_free = 0;
+ }
+
+ if (do_free) {
+ list_del_init(&pb->pb_hash_list);
+ spin_unlock(&hash->pb_hash_lock);
+ pagebuf_free(pb);
+ } else {
+ spin_unlock(&hash->pb_hash_lock);
+ }
+ }
+}
+
+
+/*
+ * Mutual exclusion on buffers. Locking model:
+ *
+ * Buffers associated with inodes for which buffer locking
+ * is not enabled are not protected by semaphores, and are
+ * assumed to be exclusively owned by the caller. There is a
+ * spinlock in the buffer, used by the caller when concurrent
+ * access is possible.
+ */
+
+/*
+ * pagebuf_cond_lock
+ *
+ * pagebuf_cond_lock locks a buffer object, if it is not already locked.
+ * Note that this in no way
+ * locks the underlying pages, so it is only useful for synchronizing
+ * concurrent use of page buffer objects, not for synchronizing independent
+ * access to the underlying pages.
+ */
+int
+pagebuf_cond_lock( /* lock buffer, if not locked */
+ /* (returns -EBUSY if locked) */
+ xfs_buf_t *pb)
+{
+ int locked;
+
+ locked = down_trylock(&pb->pb_sema) == 0;
+ if (locked) {
+ PB_SET_OWNER(pb);
+ }
+ PB_TRACE(pb, "cond_lock", (long)locked);
+ return(locked ? 0 : -EBUSY);
+}
+
+/*
+ * pagebuf_lock_value
+ *
+ * Return lock value for a pagebuf
+ */
+int
+pagebuf_lock_value(
+ xfs_buf_t *pb)
+{
+ return(atomic_read(&pb->pb_sema.count));
+}
+
+/*
+ * pagebuf_lock
+ *
+ * pagebuf_lock locks a buffer object. Note that this in no way
+ * locks the underlying pages, so it is only useful for synchronizing
+ * concurrent use of page buffer objects, not for synchronizing independent
+ * access to the underlying pages.
+ */
+int
+pagebuf_lock(
+ xfs_buf_t *pb)
+{
+ PB_TRACE(pb, "lock", 0);
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ down(&pb->pb_sema);
+ PB_SET_OWNER(pb);
+ PB_TRACE(pb, "locked", 0);
+ return 0;
+}
+
+/*
+ * pagebuf_unlock
+ *
+ * pagebuf_unlock releases the lock on the buffer object created by
+ * pagebuf_lock or pagebuf_cond_lock (not any
+ * pinning of underlying pages created by pagebuf_pin).
+ */
+void
+pagebuf_unlock( /* unlock buffer */
+ xfs_buf_t *pb) /* buffer to unlock */
+{
+ PB_CLEAR_OWNER(pb);
+ up(&pb->pb_sema);
+ PB_TRACE(pb, "unlock", 0);
+}
+
+
+/*
+ * Pinning Buffer Storage in Memory
+ */
+
+/*
+ * pagebuf_pin
+ *
+ * pagebuf_pin locks all of the memory represented by a buffer in
+ * memory. Multiple calls to pagebuf_pin and pagebuf_unpin, for
+ * the same or different buffers affecting a given page, will
+ * properly count the number of outstanding "pin" requests. The
+ * buffer may be released after the pagebuf_pin and a different
+ * buffer used when calling pagebuf_unpin, if desired.
+ * pagebuf_pin should be used by the file system when it wants to be
+ * assured that no attempt will be made to force the affected
+ * memory to disk. It does not assure that a given logical page
+ * will not be moved to a different physical page.
+ */
+void
+pagebuf_pin(
+ xfs_buf_t *pb)
+{
+ atomic_inc(&pb->pb_pin_count);
+ PB_TRACE(pb, "pin", (long)pb->pb_pin_count.counter);
+}
+
+/*
+ * pagebuf_unpin
+ *
+ * pagebuf_unpin reverses the locking of memory performed by
+ * pagebuf_pin. Note that both functions affect the logical
+ * pages associated with the buffer, not the buffer itself.
+ */
+void
+pagebuf_unpin(
+ xfs_buf_t *pb)
+{
+ if (atomic_dec_and_test(&pb->pb_pin_count)) {
+ wake_up_all(&pb->pb_waiters);
+ }
+ PB_TRACE(pb, "unpin", (long)pb->pb_pin_count.counter);
+}
+
+int
+pagebuf_ispin(
+ xfs_buf_t *pb)
+{
+ return atomic_read(&pb->pb_pin_count);
+}
+
+/*
+ * _pagebuf_wait_unpin
+ *
+ * _pagebuf_wait_unpin waits until all of the memory associated
+ * with the buffer is no longer locked in memory. It returns
+ * immediately if none of the affected pages are locked.
+ */
+static inline void
+_pagebuf_wait_unpin(
+ xfs_buf_t *pb)
+{
+ DECLARE_WAITQUEUE (wait, current);
+
+ if (atomic_read(&pb->pb_pin_count) == 0)
+ return;
+
+ add_wait_queue(&pb->pb_waiters, &wait);
+ for (;;) {
+ set_current_state(TASK_UNINTERRUPTIBLE);
+ if (atomic_read(&pb->pb_pin_count) == 0)
+ break;
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ schedule();
+ }
+ remove_wait_queue(&pb->pb_waiters, &wait);
+ set_current_state(TASK_RUNNING);
+}
+
+/*
+ * Buffer Utility Routines
+ */
+
+/*
+ * pagebuf_iodone
+ *
+ * pagebuf_iodone marks a buffer for which I/O is in progress
+ * done with respect to that I/O. The pb_iodone routine, if
+ * present, will be called as a side-effect.
+ */
+STATIC void
+pagebuf_iodone_work(
+ void *v)
+{
+ xfs_buf_t *bp = (xfs_buf_t *)v;
+
+ if (bp->pb_iodone)
+ (*(bp->pb_iodone))(bp);
+ else if (bp->pb_flags & PBF_ASYNC)
+ xfs_buf_relse(bp);
+}
+
+void
+pagebuf_iodone(
+ xfs_buf_t *pb,
+ int dataio,
+ int schedule)
+{
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE);
+ if (pb->pb_error == 0) {
+ pb->pb_flags &= ~(PBF_PARTIAL | PBF_NONE);
+ }
+
+ PB_TRACE(pb, "iodone", pb->pb_iodone);
+
+ if ((pb->pb_iodone) || (pb->pb_flags & PBF_ASYNC)) {
+ if (schedule) {
+ INIT_WORK(&pb->pb_iodone_work, pagebuf_iodone_work, pb);
+ queue_work(dataio ? pagebuf_dataio_workqueue :
+ pagebuf_logio_workqueue, &pb->pb_iodone_work);
+ } else {
+ pagebuf_iodone_work(pb);
+ }
+ } else {
+ up(&pb->pb_iodonesema);
+ }
+}
+
+/*
+ * pagebuf_ioerror
+ *
+ * pagebuf_ioerror sets the error code for a buffer.
+ */
+void
+pagebuf_ioerror( /* mark/clear buffer error flag */
+ xfs_buf_t *pb, /* buffer to mark */
+ int error) /* error to store (0 if none) */
+{
+ ASSERT(error >= 0 && error <= 0xffff);
+ pb->pb_error = (unsigned short)error;
+ PB_TRACE(pb, "ioerror", (unsigned long)error);
+}
+
+/*
+ * pagebuf_iostart
+ *
+ * pagebuf_iostart initiates I/O on a buffer, based on the flags supplied.
+ * If necessary, it will arrange for any disk space allocation required,
+ * and it will break up the request if the block mappings require it.
+ * The pb_iodone routine in the buffer supplied will only be called
+ * when all of the subsidiary I/O requests, if any, have been completed.
+ * pagebuf_iostart calls the pagebuf_ioinitiate routine or
+ * pagebuf_iorequest, if the former routine is not defined, to start
+ * the I/O on a given low-level request.
+ */
+int
+pagebuf_iostart( /* start I/O on a buffer */
+ xfs_buf_t *pb, /* buffer to start */
+ page_buf_flags_t flags) /* PBF_LOCK, PBF_ASYNC, PBF_READ, */
+ /* PBF_WRITE, PBF_DELWRI, */
+ /* PBF_DONT_BLOCK */
+{
+ int status = 0;
+
+ PB_TRACE(pb, "iostart", (unsigned long)flags);
+
+ if (flags & PBF_DELWRI) {
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE | PBF_ASYNC);
+ pb->pb_flags |= flags & (PBF_DELWRI | PBF_ASYNC);
+ pagebuf_delwri_queue(pb, 1);
+ return status;
+ }
+
+ pb->pb_flags &= ~(PBF_READ | PBF_WRITE | PBF_ASYNC | PBF_DELWRI | \
+ PBF_READ_AHEAD | _PBF_RUN_QUEUES);
+ pb->pb_flags |= flags & (PBF_READ | PBF_WRITE | PBF_ASYNC | \
+ PBF_READ_AHEAD | _PBF_RUN_QUEUES);
+
+ BUG_ON(pb->pb_bn == XFS_BUF_DADDR_NULL);
+
+ /* For writes allow an alternate strategy routine to precede
+ * the actual I/O request (which may not be issued at all in
+ * a shutdown situation, for example).
+ */
+ status = (flags & PBF_WRITE) ?
+ pagebuf_iostrategy(pb) : pagebuf_iorequest(pb);
+
+ /* Wait for I/O if we are not an async request.
+ * Note: async I/O request completion will release the buffer,
+ * and that can already be done by this point. So using the
+ * buffer pointer from here on, after async I/O, is invalid.
+ */
+ if (!status && !(flags & PBF_ASYNC))
+ status = pagebuf_iowait(pb);
+
+ return status;
+}
+
+/*
+ * Helper routine for pagebuf_iorequest
+ */
+
+STATIC __inline__ int
+_pagebuf_iolocked(
+ xfs_buf_t *pb)
+{
+ ASSERT(pb->pb_flags & (PBF_READ|PBF_WRITE));
+ if (pb->pb_flags & PBF_READ)
+ return pb->pb_locked;
+ return 0;
+}
+
+STATIC __inline__ void
+_pagebuf_iodone(
+ xfs_buf_t *pb,
+ int schedule)
+{
+ if (atomic_dec_and_test(&pb->pb_io_remaining) == 1) {
+ pb->pb_locked = 0;
+ pagebuf_iodone(pb, (pb->pb_flags & PBF_FS_DATAIOD), schedule);
+ }
+}
+
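+/*
+ * Completion handler for the bios built in _pagebuf_ioapply. For
+ * targets with a block size smaller than the page size, per-sector
+ * uptodate state is tracked as a bitmap in page->private, and the
+ * page itself is only marked uptodate once the bitmap shows the
+ * whole page as valid.
+ */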
+STATIC int
+bio_end_io_pagebuf(
+ struct bio *bio,
+ unsigned int bytes_done,
+ int error)
+{
+ xfs_buf_t *pb = (xfs_buf_t *)bio->bi_private;
+ unsigned int i, blocksize = pb->pb_target->pbr_bsize;
+ unsigned int sectorshift = pb->pb_target->pbr_sshift;
+ struct bio_vec *bvec = bio->bi_io_vec;
+
+ if (bio->bi_size)
+ return 1;
+
+ if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
+ pb->pb_error = EIO;
+
+ for (i = 0; i < bio->bi_vcnt; i++, bvec++) {
+ struct page *page = bvec->bv_page;
+
+ if (pb->pb_error) {
+ SetPageError(page);
+ } else if (blocksize == PAGE_CACHE_SIZE) {
+ SetPageUptodate(page);
+ } else if (!PagePrivate(page) &&
+ (pb->pb_flags & _PBF_PAGE_CACHE)) {
+ unsigned long j, range;
+
+ ASSERT(blocksize < PAGE_CACHE_SIZE);
+ range = (bvec->bv_offset + bvec->bv_len) >> sectorshift;
+ for (j = bvec->bv_offset >> sectorshift; j < range; j++)
+ set_bit(j, &page->private);
+ if (page->private == (unsigned long)(PAGE_CACHE_SIZE-1))
+ SetPageUptodate(page);
+ }
+
+ if (_pagebuf_iolocked(pb)) {
+ unlock_page(page);
+ }
+ }
+
+ _pagebuf_iodone(pb, 1);
+ bio_put(bio);
+ return 0;
+}
+
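+/*
+ * Build and submit bios covering the buffer's pages. Pages are added
+ * to a bio until bio_add_page refuses more, at which point the bio
+ * is submitted and a new chunk is started; pb_io_remaining is bumped
+ * once per bio so completion is only signalled after the last one
+ * finishes.
+ */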
+STATIC void
+_pagebuf_ioapply(
+ xfs_buf_t *pb)
+{
+ int i, map_i, total_nr_pages, nr_pages;
+ struct bio *bio;
+ int offset = pb->pb_offset;
+ int size = pb->pb_count_desired;
+ sector_t sector = pb->pb_bn;
+ unsigned int blocksize = pb->pb_target->pbr_bsize;
+ int locking = _pagebuf_iolocked(pb);
+
+ total_nr_pages = pb->pb_page_count;
+ map_i = 0;
+
+ /* Special code path for reading a sub-page-size pagebuf in --
+ * we populate the whole page, and hence the other metadata
+ * in the same page. This optimization is only valid when the
+ * filesystem block size and the page size are equal.
+ */
+ if ((pb->pb_buffer_length < PAGE_CACHE_SIZE) &&
+ (pb->pb_flags & PBF_READ) && locking &&
+ (blocksize == PAGE_CACHE_SIZE)) {
+ bio = bio_alloc(GFP_NOIO, 1);
+
+ bio->bi_bdev = pb->pb_target->pbr_bdev;
+ bio->bi_sector = sector - (offset >> BBSHIFT);
+ bio->bi_end_io = bio_end_io_pagebuf;
+ bio->bi_private = pb;
+
+ bio_add_page(bio, pb->pb_pages[0], PAGE_CACHE_SIZE, 0);
+ size = 0;
+
+ atomic_inc(&pb->pb_io_remaining);
+
+ goto submit_io;
+ }
+
+ /* Lock down the pages which we need to for the request */
+ if (locking && (pb->pb_flags & PBF_WRITE) && (pb->pb_locked == 0)) {
+ for (i = 0; size; i++) {
+ int nbytes = PAGE_CACHE_SIZE - offset;
+ struct page *page = pb->pb_pages[i];
+
+ if (nbytes > size)
+ nbytes = size;
+
+ lock_page(page);
+
+ size -= nbytes;
+ offset = 0;
+ }
+ offset = pb->pb_offset;
+ size = pb->pb_count_desired;
+ }
+
+next_chunk:
+ atomic_inc(&pb->pb_io_remaining);
+ nr_pages = BIO_MAX_SECTORS >> (PAGE_SHIFT - BBSHIFT);
+ if (nr_pages > total_nr_pages)
+ nr_pages = total_nr_pages;
+
+ bio = bio_alloc(GFP_NOIO, nr_pages);
+ bio->bi_bdev = pb->pb_target->pbr_bdev;
+ bio->bi_sector = sector;
+ bio->bi_end_io = bio_end_io_pagebuf;
+ bio->bi_private = pb;
+
+ for (; size && nr_pages; nr_pages--, map_i++) {
+ int nbytes = PAGE_CACHE_SIZE - offset;
+
+ if (nbytes > size)
+ nbytes = size;
+
+ if (bio_add_page(bio, pb->pb_pages[map_i],
+ nbytes, offset) < nbytes)
+ break;
+
+ offset = 0;
+ sector += nbytes >> BBSHIFT;
+ size -= nbytes;
+ total_nr_pages--;
+ }
+
+submit_io:
+ if (likely(bio->bi_size)) {
+ submit_bio((pb->pb_flags & PBF_READ) ? READ : WRITE, bio);
+ if (size)
+ goto next_chunk;
+ } else {
+ bio_put(bio);
+ pagebuf_ioerror(pb, EIO);
+ }
+
+ if (pb->pb_flags & _PBF_RUN_QUEUES) {
+ pb->pb_flags &= ~_PBF_RUN_QUEUES;
+ if (atomic_read(&pb->pb_io_remaining) > 1)
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ }
+}
+
+/*
+ * pagebuf_iorequest -- the core I/O request routine.
+ */
+int
+pagebuf_iorequest( /* start real I/O */
+ xfs_buf_t *pb) /* buffer to convey to device */
+{
+ PB_TRACE(pb, "iorequest", 0);
+
+ if (pb->pb_flags & PBF_DELWRI) {
+ pagebuf_delwri_queue(pb, 1);
+ return 0;
+ }
+
+ if (pb->pb_flags & PBF_WRITE) {
+ _pagebuf_wait_unpin(pb);
+ }
+
+ pagebuf_hold(pb);
+
+ /* Set the count to 1 initially, this will stop an I/O
+ * completion callout which happens before we have started
+ * all the I/O from calling pagebuf_iodone too early.
+ */
+ atomic_set(&pb->pb_io_remaining, 1);
+ _pagebuf_ioapply(pb);
+ _pagebuf_iodone(pb, 0);
+
+ pagebuf_rele(pb);
+ return 0;
+}
+
+/*
+ * pagebuf_iowait
+ *
+ * pagebuf_iowait waits for I/O to complete on the buffer supplied.
+ * It returns immediately if no I/O is pending. In any case, it returns
+ * the error code, if any, or 0 if there is no error.
+ */
+int
+pagebuf_iowait(
+ xfs_buf_t *pb)
+{
+ PB_TRACE(pb, "iowait", 0);
+ if (atomic_read(&pb->pb_io_remaining))
+ blk_run_address_space(pb->pb_target->pbr_mapping);
+ down(&pb->pb_iodonesema);
+ PB_TRACE(pb, "iowaited", (long)pb->pb_error);
+ return pb->pb_error;
+}
+
+caddr_t
+pagebuf_offset(
+ xfs_buf_t *pb,
+ size_t offset)
+{
+ struct page *page;
+
+ offset += pb->pb_offset;
+
+ page = pb->pb_pages[offset >> PAGE_CACHE_SHIFT];
+ return (caddr_t) page_address(page) + (offset & (PAGE_CACHE_SIZE - 1));
+}
+
+/*
+ * pagebuf_iomove
+ *
+ * Move data into or out of a buffer.
+ */
+void
+pagebuf_iomove(
+ xfs_buf_t *pb, /* buffer to process */
+ size_t boff, /* starting buffer offset */
+ size_t bsize, /* length to copy */
+ caddr_t data, /* data address */
+ page_buf_rw_t mode) /* read/write flag */
+{
+ size_t bend, cpoff, csize;
+ struct page *page;
+
+ bend = boff + bsize;
+ while (boff < bend) {
+ page = pb->pb_pages[page_buf_btoct(boff + pb->pb_offset)];
+ cpoff = page_buf_poff(boff + pb->pb_offset);
+ csize = min_t(size_t,
+ PAGE_CACHE_SIZE-cpoff, pb->pb_count_desired-boff);
+
+ ASSERT(((csize + cpoff) <= PAGE_CACHE_SIZE));
+
+ switch (mode) {
+ case PBRW_ZERO:
+ memset(page_address(page) + cpoff, 0, csize);
+ break;
+ case PBRW_READ:
+ memcpy(data, page_address(page) + cpoff, csize);
+ break;
+ case PBRW_WRITE:
+ memcpy(page_address(page) + cpoff, data, csize);
+ }
+
+ boff += csize;
+ data += csize;
+ }
+}
+
+/*
+ * Handling of buftargs.
+ */
+
+/*
+ * Wait for any bufs with callbacks that have been submitted but
+ * have not yet returned... walk the hash list for the target.
+ */
+void
+xfs_wait_buftarg(
+ xfs_buftarg_t *target)
+{
+ xfs_buf_t *pb, *n;
+ pb_hash_t *h;
+ int i;
+
+ for (i = 0; i < NHASH; i++) {
+ h = &pbhash[i];
+again:
+ spin_lock(&h->pb_hash_lock);
+ list_for_each_entry_safe(pb, n, &h->pb_hash, pb_hash_list) {
+ if (pb->pb_target == target &&
+ !(pb->pb_flags & PBF_FS_MANAGED)) {
+ spin_unlock(&h->pb_hash_lock);
+ delay(100);
+ goto again;
+ }
+ }
+ spin_unlock(&h->pb_hash_lock);
+ }
+}
+
+void
+xfs_free_buftarg(
+ xfs_buftarg_t *btp,
+ int external)
+{
+ xfs_flush_buftarg(btp, 1);
+ if (external)
+ xfs_blkdev_put(btp->pbr_bdev);
+ iput(btp->pbr_mapping->host);
+ kmem_free(btp, sizeof(*btp));
+}
+
+void
+xfs_incore_relse(
+ xfs_buftarg_t *btp,
+ int delwri_only,
+ int wait)
+{
+ invalidate_bdev(btp->pbr_bdev, 1);
+ truncate_inode_pages(btp->pbr_mapping, 0LL);
+}
+
+int
+xfs_setsize_buftarg(
+ xfs_buftarg_t *btp,
+ unsigned int blocksize,
+ unsigned int sectorsize)
+{
+ btp->pbr_bsize = blocksize;
+ btp->pbr_sshift = ffs(sectorsize) - 1;
+ btp->pbr_smask = sectorsize - 1;
+
+ if (set_blocksize(btp->pbr_bdev, sectorsize)) {
+ printk(KERN_WARNING
+ "XFS: Cannot set_blocksize to %u on device %s\n",
+ sectorsize, XFS_BUFTARG_NAME(btp));
+ return EINVAL;
+ }
+ return 0;
+}
+
+STATIC int
+xfs_mapping_buftarg(
+ xfs_buftarg_t *btp,
+ struct block_device *bdev)
+{
+ struct backing_dev_info *bdi;
+ struct inode *inode;
+ struct address_space *mapping;
+ static struct address_space_operations mapping_aops = {
+ .sync_page = block_sync_page,
+ };
+
+ inode = new_inode(bdev->bd_inode->i_sb);
+ if (!inode) {
+ printk(KERN_WARNING
+ "XFS: Cannot allocate mapping inode for device %s\n",
+ XFS_BUFTARG_NAME(btp));
+ return ENOMEM;
+ }
+ inode->i_mode = S_IFBLK;
+ inode->i_bdev = bdev;
+ inode->i_rdev = bdev->bd_dev;
+ bdi = blk_get_backing_dev_info(bdev);
+ if (!bdi)
+ bdi = &default_backing_dev_info;
+ mapping = &inode->i_data;
+ mapping->a_ops = &mapping_aops;
+ mapping->backing_dev_info = bdi;
+ mapping_set_gfp_mask(mapping, GFP_KERNEL);
+ btp->pbr_mapping = mapping;
+ return 0;
+}
+
+xfs_buftarg_t *
+xfs_alloc_buftarg(
+ struct block_device *bdev)
+{
+ xfs_buftarg_t *btp;
+
+ btp = kmem_zalloc(sizeof(*btp), KM_SLEEP);
+
+ btp->pbr_dev = bdev->bd_dev;
+ btp->pbr_bdev = bdev;
+ if (xfs_setsize_buftarg(btp, PAGE_CACHE_SIZE, bdev_hardsect_size(bdev)))
+ goto error;
+ if (xfs_mapping_buftarg(btp, bdev))
+ goto error;
+ return btp;
+
+error:
+ kmem_free(btp, sizeof(*btp));
+ return NULL;
+}
+
+
+/*
+ * Pagebuf delayed write buffer handling
+ */
+
+STATIC LIST_HEAD(pbd_delwrite_queue);
+STATIC spinlock_t pbd_delwrite_lock = SPIN_LOCK_UNLOCKED;
+
+STATIC void
+pagebuf_delwri_queue(
+ xfs_buf_t *pb,
+ int unlock)
+{
+ PB_TRACE(pb, "delwri_q", (long)unlock);
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+
+ spin_lock(&pbd_delwrite_lock);
+ /* If already in the queue, dequeue and place at tail */
+ if (!list_empty(&pb->pb_list)) {
+ if (unlock) {
+ atomic_dec(&pb->pb_hold);
+ }
+ list_del(&pb->pb_list);
+ }
+
+ list_add_tail(&pb->pb_list, &pbd_delwrite_queue);
+ pb->pb_queuetime = jiffies;
+ spin_unlock(&pbd_delwrite_lock);
+
+ if (unlock)
+ pagebuf_unlock(pb);
+}
+
+void
+pagebuf_delwri_dequeue(
+ xfs_buf_t *pb)
+{
+ int dequeued = 0;
+
+ spin_lock(&pbd_delwrite_lock);
+ if ((pb->pb_flags & PBF_DELWRI) && !list_empty(&pb->pb_list)) {
+ list_del_init(&pb->pb_list);
+ dequeued = 1;
+ }
+ pb->pb_flags &= ~PBF_DELWRI;
+ spin_unlock(&pbd_delwrite_lock);
+
+ if (dequeued)
+ pagebuf_rele(pb);
+
+ PB_TRACE(pb, "delwri_dq", (long)dequeued);
+}
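+
+/*
+ * Note: as the dequeue path above shows, a buffer sitting on the
+ * delayed write queue holds an extra reference, which is dropped via
+ * pagebuf_rele() once the buffer has been taken off the list.
+ */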
+
+STATIC void
+pagebuf_runall_queues(
+ struct workqueue_struct *queue)
+{
+ flush_workqueue(queue);
+}
+
+/* Defines for pagebuf daemon */
+STATIC DECLARE_COMPLETION(pagebuf_daemon_done);
+STATIC struct task_struct *pagebuf_daemon_task;
+STATIC int pagebuf_daemon_active;
+STATIC int force_flush;
+
+
+STATIC int
+pagebuf_daemon_wakeup(
+ int priority,
+ unsigned int mask)
+{
+ force_flush = 1;
+ barrier();
+ wake_up_process(pagebuf_daemon_task);
+ return 0;
+}
+
+STATIC int
+pagebuf_daemon(
+ void *data)
+{
+ struct list_head tmp;
+ unsigned long age;
+ xfs_buftarg_t *target;
+ xfs_buf_t *pb, *n;
+
+ /* Set up the thread */
+ daemonize("xfsbufd");
+ current->flags |= PF_MEMALLOC;
+
+ pagebuf_daemon_task = current;
+ pagebuf_daemon_active = 1;
+ barrier();
+
+ INIT_LIST_HEAD(&tmp);
+ do {
+ /* swsusp */
+ if (current->flags & PF_FREEZE)
+ refrigerator(PF_FREEZE);
+
+ set_current_state(TASK_INTERRUPTIBLE);
+ schedule_timeout((xfs_buf_timer_centisecs * HZ) / 100);
+
+ age = (xfs_buf_age_centisecs * HZ) / 100;
+ spin_lock(&pbd_delwrite_lock);
+ list_for_each_entry_safe(pb, n, &pbd_delwrite_queue, pb_list) {
+ PB_TRACE(pb, "walkq1", (long)pagebuf_ispin(pb));
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+
+ if (!pagebuf_ispin(pb) && !pagebuf_cond_lock(pb)) {
+ if (!force_flush &&
+ time_before(jiffies,
+ pb->pb_queuetime + age)) {
+ pagebuf_unlock(pb);
+ break;
+ }
+
+ pb->pb_flags &= ~PBF_DELWRI;
+ pb->pb_flags |= PBF_WRITE;
+ list_move(&pb->pb_list, &tmp);
+ }
+ }
+ spin_unlock(&pbd_delwrite_lock);
+
+ while (!list_empty(&tmp)) {
+ pb = list_entry(tmp.next, xfs_buf_t, pb_list);
+ target = pb->pb_target;
+
+ list_del_init(&pb->pb_list);
+ pagebuf_iostrategy(pb);
+
+ blk_run_address_space(target->pbr_mapping);
+ }
+
+ if (as_list_len > 0)
+ purge_addresses();
+
+ force_flush = 0;
+ } while (pagebuf_daemon_active);
+
+ complete_and_exit(&pagebuf_daemon_done, 0);
+}
+
+/*
+ * Write out (and, when waiting, release) all incore delayed write
+ * buffers that belong to the given target. This is used in filesystem
+ * error handling to preserve the consistency of its metadata.
+ */
+int
+xfs_flush_buftarg(
+ xfs_buftarg_t *target,
+ int wait)
+{
+ struct list_head tmp;
+ xfs_buf_t *pb, *n;
+ int pincount = 0;
+
+ pagebuf_runall_queues(pagebuf_dataio_workqueue);
+ pagebuf_runall_queues(pagebuf_logio_workqueue);
+
+ INIT_LIST_HEAD(&tmp);
+ spin_lock(&pbd_delwrite_lock);
+ list_for_each_entry_safe(pb, n, &pbd_delwrite_queue, pb_list) {
+
+ if (pb->pb_target != target)
+ continue;
+
+ ASSERT(pb->pb_flags & PBF_DELWRI);
+ PB_TRACE(pb, "walkq2", (long)pagebuf_ispin(pb));
+ if (pagebuf_ispin(pb)) {
+ pincount++;
+ continue;
+ }
+
+ pb->pb_flags &= ~PBF_DELWRI;
+ pb->pb_flags |= PBF_WRITE;
+ list_move(&pb->pb_list, &tmp);
+ }
+ spin_unlock(&pbd_delwrite_lock);
+
+ /*
+ * Dropped the delayed write list lock, now walk the temporary list
+ */
+ list_for_each_entry_safe(pb, n, &tmp, pb_list) {
+ if (wait)
+ pb->pb_flags &= ~PBF_ASYNC;
+ else
+ list_del_init(&pb->pb_list);
+
+ pagebuf_lock(pb);
+ pagebuf_iostrategy(pb);
+ }
+
+ /*
+ * Remaining list items must be flushed before returning
+ */
+ while (!list_empty(&tmp)) {
+ pb = list_entry(tmp.next, xfs_buf_t, pb_list);
+
+ list_del_init(&pb->pb_list);
+ xfs_iowait(pb);
+ xfs_buf_relse(pb);
+ }
+
+ if (wait)
+ blk_run_address_space(target->pbr_mapping);
+
+ return pincount;
+}
+
+STATIC int
+pagebuf_daemon_start(void)
+{
+ int rval;
+
+ pagebuf_logio_workqueue = create_workqueue("xfslogd");
+ if (!pagebuf_logio_workqueue)
+ return -ENOMEM;
+
+ pagebuf_dataio_workqueue = create_workqueue("xfsdatad");
+ if (!pagebuf_dataio_workqueue) {
+ destroy_workqueue(pagebuf_logio_workqueue);
+ return -ENOMEM;
+ }
+
+ rval = kernel_thread(pagebuf_daemon, NULL, CLONE_FS|CLONE_FILES);
+ if (rval < 0) {
+ destroy_workqueue(pagebuf_logio_workqueue);
+ destroy_workqueue(pagebuf_dataio_workqueue);
+ }
+
+ return rval;
+}
+
+/*
+ * pagebuf_daemon_stop
+ *
+ * Note: do not mark as __exit, it is called from pagebuf_terminate.
+ */
+STATIC void
+pagebuf_daemon_stop(void)
+{
+ pagebuf_daemon_active = 0;
+ barrier();
+ wait_for_completion(&pagebuf_daemon_done);
+
+ destroy_workqueue(pagebuf_logio_workqueue);
+ destroy_workqueue(pagebuf_dataio_workqueue);
+}
+
+/*
+ * Initialization and Termination
+ */
+
+int __init
+pagebuf_init(void)
+{
+ int i;
+
+ pagebuf_cache = kmem_cache_create("xfs_buf_t", sizeof(xfs_buf_t), 0,
+ SLAB_HWCACHE_ALIGN, NULL, NULL);
+ if (pagebuf_cache == NULL) {
+ printk("XFS: couldn't init xfs_buf_t cache\n");
+ pagebuf_terminate();
+ return -ENOMEM;
+ }
+
+#ifdef PAGEBUF_TRACE
+ pagebuf_trace_buf = ktrace_alloc(PAGEBUF_TRACE_SIZE, KM_SLEEP);
+#endif
+
+ pagebuf_daemon_start();
+
+ pagebuf_shake = kmem_shake_register(pagebuf_daemon_wakeup);
+ if (pagebuf_shake == NULL) {
+ pagebuf_terminate();
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < NHASH; i++) {
+ spin_lock_init(&pbhash[i].pb_hash_lock);
+ INIT_LIST_HEAD(&pbhash[i].pb_hash);
+ }
+
+ return 0;
+}
+
+
+/*
+ * pagebuf_terminate.
+ *
+ * Note: do not mark as __exit, this is also called from the __init code.
+ */
+void
+pagebuf_terminate(void)
+{
+ pagebuf_daemon_stop();
+
+#ifdef PAGEBUF_TRACE
+ ktrace_free(pagebuf_trace_buf);
+#endif
+
+ kmem_zone_destroy(pagebuf_cache);
+ kmem_shake_deregister(pagebuf_shake);
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * Written by Steve Lord, Jim Mostek, Russell Cattelan at SGI
+ */
+
+#ifndef __XFS_BUF_H__
+#define __XFS_BUF_H__
+
+#include <linux/config.h>
+#include <linux/list.h>
+#include <linux/types.h>
+#include <linux/spinlock.h>
+#include <asm/system.h>
+#include <linux/mm.h>
+#include <linux/fs.h>
+#include <linux/buffer_head.h>
+#include <linux/uio.h>
+
+/*
+ * Base types
+ */
+
+#define XFS_BUF_DADDR_NULL ((xfs_daddr_t) (-1LL))
+
+#define page_buf_ctob(pp) ((pp) * PAGE_CACHE_SIZE)
+#define page_buf_btoc(dd) (((dd) + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT)
+#define page_buf_btoct(dd) ((dd) >> PAGE_CACHE_SHIFT)
+#define page_buf_poff(aa) ((aa) & ~PAGE_CACHE_MASK)
+
+typedef enum page_buf_rw_e {
+ PBRW_READ = 1, /* transfer into target memory */
+ PBRW_WRITE = 2, /* transfer from target memory */
+ PBRW_ZERO = 3 /* Zero target memory */
+} page_buf_rw_t;
+
+
+typedef enum page_buf_flags_e { /* pb_flags values */
+ PBF_READ = (1 << 0), /* buffer intended for reading from device */
+ PBF_WRITE = (1 << 1), /* buffer intended for writing to device */
+ PBF_MAPPED = (1 << 2), /* buffer mapped (pb_addr valid) */
+ PBF_PARTIAL = (1 << 3), /* buffer partially read */
+ PBF_ASYNC = (1 << 4), /* initiator will not wait for completion */
+ PBF_NONE = (1 << 5), /* buffer not read at all */
+ PBF_DELWRI = (1 << 6), /* buffer has dirty pages */
+ PBF_STALE = (1 << 7), /* buffer has been staled, do not find it */
+ PBF_FS_MANAGED = (1 << 8), /* filesystem controls freeing memory */
+ PBF_FS_DATAIOD = (1 << 9), /* schedule IO completion on fs datad */
+ PBF_FORCEIO = (1 << 10), /* ignore any cache state */
+ PBF_FLUSH = (1 << 11), /* flush disk write cache */
+ PBF_READ_AHEAD = (1 << 12), /* asynchronous read-ahead */
+
+ /* flags used only as arguments to access routines */
+ PBF_LOCK = (1 << 14), /* lock requested */
+ PBF_TRYLOCK = (1 << 15), /* lock requested, but do not wait */
+ PBF_DONT_BLOCK = (1 << 16), /* do not block in current thread */
+
+ /* flags used only internally */
+ _PBF_PAGE_CACHE = (1 << 17),/* backed by pagecache */
+ _PBF_KMEM_ALLOC = (1 << 18),/* backed by kmem_alloc() */
+ _PBF_RUN_QUEUES = (1 << 19),/* run block device task queue */
+} page_buf_flags_t;
+
+#define PBF_UPDATE (PBF_READ | PBF_WRITE)
+#define PBF_NOT_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) != 0)
+#define PBF_DONE(pb) (((pb)->pb_flags & (PBF_PARTIAL|PBF_NONE)) == 0)
+
+typedef struct xfs_buftarg {
+ dev_t pbr_dev;
+ struct block_device *pbr_bdev;
+ struct address_space *pbr_mapping;
+ unsigned int pbr_bsize;
+ unsigned int pbr_sshift;
+ size_t pbr_smask;
+} xfs_buftarg_t;
+
+/*
+ * xfs_buf_t: Buffer structure for page cache-based buffers
+ *
+ * This buffer structure is used by the page cache buffer management routines
+ * to refer to an assembly of pages forming a logical buffer. The actual
+ * I/O is performed with buffer_head or bio structures, as required by the
+ * underlying drivers, which do not understand this buffer structure. The
+ * buffer structure is used on a temporary basis only, and discarded when
+ * released.
+ *
+ * The real data storage is recorded in the page cache. Metadata is
+ * hashed to the inode for the block device on which the filesystem resides.
+ * File data is hashed to the inode for the file. Pages which are only
+ * partially filled with data have bits set in their block_map entry
+ * to indicate which disk blocks in the page are not valid.
+ */
+
+struct xfs_buf;
+typedef void (*page_buf_iodone_t)(struct xfs_buf *);
+ /* call-back function on I/O completion */
+typedef void (*page_buf_relse_t)(struct xfs_buf *);
+ /* call-back function on buffer release */
+typedef int (*page_buf_bdstrat_t)(struct xfs_buf *);
+
+#define PB_PAGES 4
+
+typedef struct xfs_buf {
+ struct semaphore pb_sema; /* semaphore for lockables */
+ unsigned long pb_queuetime; /* time buffer was queued */
+ atomic_t pb_pin_count; /* pin count */
+ wait_queue_head_t pb_waiters; /* unpin waiters */
+ struct list_head pb_list;
+ page_buf_flags_t pb_flags; /* status flags */
+ struct list_head pb_hash_list;
+ xfs_buftarg_t *pb_target; /* logical object */
+ atomic_t pb_hold; /* reference count */
+ xfs_daddr_t pb_bn; /* block number for I/O */
+ loff_t pb_file_offset; /* offset in file */
+ size_t pb_buffer_length; /* size of buffer in bytes */
+ size_t pb_count_desired; /* desired transfer size */
+ void *pb_addr; /* virtual address of buffer */
+ struct work_struct pb_iodone_work;
+ atomic_t pb_io_remaining;/* #outstanding I/O requests */
+ page_buf_iodone_t pb_iodone; /* I/O completion function */
+ page_buf_relse_t pb_relse; /* releasing function */
+ page_buf_bdstrat_t pb_strat; /* pre-write function */
+ struct semaphore pb_iodonesema; /* Semaphore for I/O waiters */
+ void *pb_fspriv;
+ void *pb_fspriv2;
+ void *pb_fspriv3;
+ unsigned short pb_error; /* error code on I/O */
+ unsigned short pb_page_count; /* size of page array */
+ unsigned short pb_offset; /* page offset in first page */
+ unsigned char pb_locked; /* page array is locked */
+ unsigned char pb_hash_index; /* hash table index */
+ struct page **pb_pages; /* array of page pointers */
+ struct page *pb_page_array[PB_PAGES]; /* inline pages */
+#ifdef PAGEBUF_LOCK_TRACKING
+ int pb_last_holder;
+#endif
+} xfs_buf_t;
+
+
+/* Finding and Reading Buffers */
+
+extern xfs_buf_t *_pagebuf_find( /* find buffer for block if */
+ /* the block is in memory */
+ xfs_buftarg_t *, /* inode for block */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t, /* PBF_LOCK */
+ xfs_buf_t *); /* newly allocated buffer */
+
+#define xfs_incore(buftarg,blkno,len,lockit) \
+ _pagebuf_find(buftarg, blkno, len, lockit, NULL)
+
+extern xfs_buf_t *xfs_buf_get_flags( /* allocate a buffer */
+ xfs_buftarg_t *, /* inode for buffer */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_LOCK, PBF_READ, */
+ /* PBF_ASYNC */
+
+#define xfs_buf_get(target, blkno, len, flags) \
+ xfs_buf_get_flags((target), (blkno), (len), PBF_LOCK | PBF_MAPPED)
+
+extern xfs_buf_t *xfs_buf_read_flags( /* allocate and read a buffer */
+ xfs_buftarg_t *, /* inode for buffer */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_LOCK, PBF_ASYNC */
+
+#define xfs_buf_read(target, blkno, len, flags) \
+ xfs_buf_read_flags((target), (blkno), (len), PBF_LOCK | PBF_MAPPED)
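+
+/*
+ * Illustrative usage sketch (target, blkno and len are placeholders):
+ * read a block range synchronously, check for errors, then drop the
+ * buffer again:
+ *
+ *	bp = xfs_buf_read(target, blkno, len, 0);
+ *	if (bp && !XFS_BUF_GETERROR(bp)) {
+ *		... use XFS_BUF_PTR(bp) ...
+ *	}
+ *	if (bp)
+ *		xfs_buf_relse(bp);
+ */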
+
+extern xfs_buf_t *pagebuf_lookup(
+ xfs_buftarg_t *,
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* PBF_READ, PBF_WRITE, */
+ /* PBF_FORCEIO, */
+
+extern xfs_buf_t *pagebuf_get_empty( /* allocate pagebuf struct with */
+ /* no memory or disk address */
+ size_t len,
+ xfs_buftarg_t *); /* mount point "fake" inode */
+
+extern xfs_buf_t *pagebuf_get_no_daddr(/* allocate pagebuf struct */
+ /* without disk address */
+ size_t len,
+ xfs_buftarg_t *); /* mount point "fake" inode */
+
+extern int pagebuf_associate_memory(
+ xfs_buf_t *,
+ void *,
+ size_t);
+
+extern void pagebuf_hold( /* increment reference count */
+ xfs_buf_t *); /* buffer to hold */
+
+extern void pagebuf_readahead( /* read ahead into cache */
+ xfs_buftarg_t *, /* target for buffer (or NULL) */
+ loff_t, /* starting offset of range */
+ size_t, /* length of range */
+ page_buf_flags_t); /* additional read flags */
+
+/* Releasing Buffers */
+
+extern void pagebuf_free( /* deallocate a buffer */
+ xfs_buf_t *); /* buffer to deallocate */
+
+extern void pagebuf_rele( /* release hold on a buffer */
+ xfs_buf_t *); /* buffer to release */
+
+/* Locking and Unlocking Buffers */
+
+extern int pagebuf_cond_lock( /* lock buffer, if not locked */
+ /* (returns -EBUSY if locked) */
+ xfs_buf_t *); /* buffer to lock */
+
+extern int pagebuf_lock_value( /* return count on lock */
+ xfs_buf_t *); /* buffer to check */
+
+extern int pagebuf_lock( /* lock buffer */
+ xfs_buf_t *); /* buffer to lock */
+
+extern void pagebuf_unlock( /* unlock buffer */
+ xfs_buf_t *); /* buffer to unlock */
+
+/* Buffer Read and Write Routines */
+
+extern void pagebuf_iodone( /* mark buffer I/O complete */
+ xfs_buf_t *, /* buffer to mark */
+ int, /* use data/log helper thread. */
+ int); /* run completion locally, or in
+ * a helper thread. */
+
+extern void pagebuf_ioerror( /* mark buffer in error (or not) */
+ xfs_buf_t *, /* buffer to mark */
+ int); /* error to store (0 if none) */
+
+extern int pagebuf_iostart( /* start I/O on a buffer */
+ xfs_buf_t *, /* buffer to start */
+ page_buf_flags_t); /* PBF_LOCK, PBF_ASYNC, */
+ /* PBF_READ, PBF_WRITE, */
+ /* PBF_DELWRI */
+
+extern int pagebuf_iorequest( /* start real I/O */
+ xfs_buf_t *); /* buffer to convey to device */
+
+extern int pagebuf_iowait( /* wait for buffer I/O done */
+ xfs_buf_t *); /* buffer to wait on */
+
+extern void pagebuf_iomove( /* move data in/out of pagebuf */
+ xfs_buf_t *, /* buffer to manipulate */
+ size_t, /* starting buffer offset */
+ size_t, /* length in buffer */
+ caddr_t, /* data pointer */
+ page_buf_rw_t); /* direction */
+
+static inline int pagebuf_iostrategy(xfs_buf_t *pb)
+{
+ return pb->pb_strat ? pb->pb_strat(pb) : pagebuf_iorequest(pb);
+}
+
+static inline int pagebuf_geterror(xfs_buf_t *pb)
+{
+ return pb ? pb->pb_error : ENOMEM;
+}
+
+/* Buffer Utility Routines */
+
+extern caddr_t pagebuf_offset( /* pointer at offset in buffer */
+ xfs_buf_t *, /* buffer to offset into */
+ size_t); /* offset */
+
+/* Pinning Buffer Storage in Memory */
+
+extern void pagebuf_pin( /* pin buffer in memory */
+ xfs_buf_t *); /* buffer to pin */
+
+extern void pagebuf_unpin( /* unpin buffered data */
+ xfs_buf_t *); /* buffer to unpin */
+
+extern int pagebuf_ispin( /* check if buffer is pinned */
+ xfs_buf_t *); /* buffer to check */
+
+/* Delayed Write Buffer Routines */
+
+extern void pagebuf_delwri_dequeue(xfs_buf_t *);
+
+/* Buffer Daemon Setup Routines */
+
+extern int pagebuf_init(void);
+extern void pagebuf_terminate(void);
+
+
+#ifdef PAGEBUF_TRACE
+extern ktrace_t *pagebuf_trace_buf;
+extern void pagebuf_trace(
+ xfs_buf_t *, /* buffer being traced */
+ char *, /* description of operation */
+ void *, /* arbitrary diagnostic value */
+ void *); /* return address */
+#else
+# define pagebuf_trace(pb, id, ptr, ra) do { } while (0)
+#endif
+
+#define pagebuf_target_name(target) \
+ ({ char __b[BDEVNAME_SIZE]; bdevname((target)->pbr_bdev, __b); __b; })
+
+
+
+
+
+/* These are just for xfs_syncsub... it sets an internal variable
+ * then passes it to VOP_FLUSH_PAGES or adds the flags to a newly gotten buf_t
+ */
+#define XFS_B_ASYNC PBF_ASYNC
+#define XFS_B_DELWRI PBF_DELWRI
+#define XFS_B_READ PBF_READ
+#define XFS_B_WRITE PBF_WRITE
+#define XFS_B_STALE PBF_STALE
+
+#define XFS_BUF_TRYLOCK PBF_TRYLOCK
+#define XFS_INCORE_TRYLOCK PBF_TRYLOCK
+#define XFS_BUF_LOCK PBF_LOCK
+#define XFS_BUF_MAPPED PBF_MAPPED
+
+#define BUF_BUSY PBF_DONT_BLOCK
+
+#define XFS_BUF_BFLAGS(x) ((x)->pb_flags)
+#define XFS_BUF_ZEROFLAGS(x) \
+ ((x)->pb_flags &= ~(PBF_READ|PBF_WRITE|PBF_ASYNC|PBF_DELWRI))
+
+#define XFS_BUF_STALE(x) ((x)->pb_flags |= XFS_B_STALE)
+#define XFS_BUF_UNSTALE(x) ((x)->pb_flags &= ~XFS_B_STALE)
+#define XFS_BUF_ISSTALE(x) ((x)->pb_flags & XFS_B_STALE)
+#define XFS_BUF_SUPER_STALE(x) do { \
+ XFS_BUF_STALE(x); \
+ pagebuf_delwri_dequeue(x); \
+ XFS_BUF_DONE(x); \
+ } while (0)
+
+#define XFS_BUF_MANAGE PBF_FS_MANAGED
+#define XFS_BUF_UNMANAGE(x) ((x)->pb_flags &= ~PBF_FS_MANAGED)
+
+#define XFS_BUF_DELAYWRITE(x) ((x)->pb_flags |= PBF_DELWRI)
+#define XFS_BUF_UNDELAYWRITE(x) pagebuf_delwri_dequeue(x)
+#define XFS_BUF_ISDELAYWRITE(x) ((x)->pb_flags & PBF_DELWRI)
+
+#define XFS_BUF_ERROR(x,no) pagebuf_ioerror(x,no)
+#define XFS_BUF_GETERROR(x) pagebuf_geterror(x)
+#define XFS_BUF_ISERROR(x) (pagebuf_geterror(x)?1:0)
+
+#define XFS_BUF_DONE(x) ((x)->pb_flags &= ~(PBF_PARTIAL|PBF_NONE))
+#define XFS_BUF_UNDONE(x) ((x)->pb_flags |= PBF_PARTIAL|PBF_NONE)
+#define XFS_BUF_ISDONE(x) (!(PBF_NOT_DONE(x)))
+
+#define XFS_BUF_BUSY(x) ((x)->pb_flags |= PBF_FORCEIO)
+#define XFS_BUF_UNBUSY(x) ((x)->pb_flags &= ~PBF_FORCEIO)
+#define XFS_BUF_ISBUSY(x) (1)
+
+#define XFS_BUF_ASYNC(x) ((x)->pb_flags |= PBF_ASYNC)
+#define XFS_BUF_UNASYNC(x) ((x)->pb_flags &= ~PBF_ASYNC)
+#define XFS_BUF_ISASYNC(x) ((x)->pb_flags & PBF_ASYNC)
+
+#define XFS_BUF_FLUSH(x) ((x)->pb_flags |= PBF_FLUSH)
+#define XFS_BUF_UNFLUSH(x) ((x)->pb_flags &= ~PBF_FLUSH)
+#define XFS_BUF_ISFLUSH(x) ((x)->pb_flags & PBF_FLUSH)
+
+#define XFS_BUF_SHUT(x) printk("XFS_BUF_SHUT not implemented yet\n")
+#define XFS_BUF_UNSHUT(x) printk("XFS_BUF_UNSHUT not implemented yet\n")
+#define XFS_BUF_ISSHUT(x) (0)
+
+#define XFS_BUF_HOLD(x) pagebuf_hold(x)
+#define XFS_BUF_READ(x) ((x)->pb_flags |= PBF_READ)
+#define XFS_BUF_UNREAD(x) ((x)->pb_flags &= ~PBF_READ)
+#define XFS_BUF_ISREAD(x) ((x)->pb_flags & PBF_READ)
+
+#define XFS_BUF_WRITE(x) ((x)->pb_flags |= PBF_WRITE)
+#define XFS_BUF_UNWRITE(x) ((x)->pb_flags &= ~PBF_WRITE)
+#define XFS_BUF_ISWRITE(x) ((x)->pb_flags & PBF_WRITE)
+
+#define XFS_BUF_ISUNINITIAL(x) (0)
+#define XFS_BUF_UNUNINITIAL(x) (0)
+
+#define XFS_BUF_BP_ISMAPPED(bp) 1
+
+#define XFS_BUF_DATAIO(x) ((x)->pb_flags |= PBF_FS_DATAIOD)
+#define XFS_BUF_UNDATAIO(x) ((x)->pb_flags &= ~PBF_FS_DATAIOD)
+
+#define XFS_BUF_IODONE_FUNC(buf) (buf)->pb_iodone
+#define XFS_BUF_SET_IODONE_FUNC(buf, func) \
+ (buf)->pb_iodone = (func)
+#define XFS_BUF_CLR_IODONE_FUNC(buf) \
+ (buf)->pb_iodone = NULL
+#define XFS_BUF_SET_BDSTRAT_FUNC(buf, func) \
+ (buf)->pb_strat = (func)
+#define XFS_BUF_CLR_BDSTRAT_FUNC(buf) \
+ (buf)->pb_strat = NULL
+
+#define XFS_BUF_FSPRIVATE(buf, type) \
+ ((type)(buf)->pb_fspriv)
+#define XFS_BUF_SET_FSPRIVATE(buf, value) \
+ (buf)->pb_fspriv = (void *)(value)
+#define XFS_BUF_FSPRIVATE2(buf, type) \
+ ((type)(buf)->pb_fspriv2)
+#define XFS_BUF_SET_FSPRIVATE2(buf, value) \
+ (buf)->pb_fspriv2 = (void *)(value)
+#define XFS_BUF_FSPRIVATE3(buf, type) \
+ ((type)(buf)->pb_fspriv3)
+#define XFS_BUF_SET_FSPRIVATE3(buf, value) \
+ (buf)->pb_fspriv3 = (void *)(value)
+#define XFS_BUF_SET_START(buf)
+
+#define XFS_BUF_SET_BRELSE_FUNC(buf, value) \
+ (buf)->pb_relse = (value)
+
+#define XFS_BUF_PTR(bp) (xfs_caddr_t)((bp)->pb_addr)
+
+extern inline xfs_caddr_t xfs_buf_offset(xfs_buf_t *bp, size_t offset)
+{
+ if (bp->pb_flags & PBF_MAPPED)
+ return XFS_BUF_PTR(bp) + offset;
+ return (xfs_caddr_t) pagebuf_offset(bp, offset);
+}
+
+#define XFS_BUF_SET_PTR(bp, val, count) \
+ pagebuf_associate_memory(bp, val, count)
+#define XFS_BUF_ADDR(bp) ((bp)->pb_bn)
+#define XFS_BUF_SET_ADDR(bp, blk) \
+ ((bp)->pb_bn = (blk))
+#define XFS_BUF_OFFSET(bp) ((bp)->pb_file_offset)
+#define XFS_BUF_SET_OFFSET(bp, off) \
+ ((bp)->pb_file_offset = (off))
+#define XFS_BUF_COUNT(bp) ((bp)->pb_count_desired)
+#define XFS_BUF_SET_COUNT(bp, cnt) \
+ ((bp)->pb_count_desired = (cnt))
+#define XFS_BUF_SIZE(bp) ((bp)->pb_buffer_length)
+#define XFS_BUF_SET_SIZE(bp, cnt) \
+ ((bp)->pb_buffer_length = (cnt))
+#define XFS_BUF_SET_VTYPE_REF(bp, type, ref)
+#define XFS_BUF_SET_VTYPE(bp, type)
+#define XFS_BUF_SET_REF(bp, ref)
+
+#define XFS_BUF_ISPINNED(bp) pagebuf_ispin(bp)
+
+#define XFS_BUF_VALUSEMA(bp) pagebuf_lock_value(bp)
+#define XFS_BUF_CPSEMA(bp) (pagebuf_cond_lock(bp) == 0)
+#define XFS_BUF_VSEMA(bp) pagebuf_unlock(bp)
+#define XFS_BUF_PSEMA(bp,x) pagebuf_lock(bp)
+#define XFS_BUF_V_IODONESEMA(bp) up(&bp->pb_iodonesema);
+
+/* setup the buffer target from a buftarg structure */
+#define XFS_BUF_SET_TARGET(bp, target) \
+ (bp)->pb_target = (target)
+#define XFS_BUF_TARGET(bp) ((bp)->pb_target)
+#define XFS_BUFTARG_NAME(target) \
+ pagebuf_target_name(target)
+
+
+static inline int xfs_bawrite(void *mp, xfs_buf_t *bp)
+{
+ bp->pb_fspriv3 = mp;
+ bp->pb_strat = xfs_bdstrat_cb;
+ pagebuf_delwri_dequeue(bp);
+ return pagebuf_iostart(bp, PBF_WRITE | PBF_ASYNC | _PBF_RUN_QUEUES);
+}
+
+static inline void xfs_buf_relse(xfs_buf_t *bp)
+{
+ if (!bp->pb_relse)
+ pagebuf_unlock(bp);
+ pagebuf_rele(bp);
+}
+
+#define xfs_bpin(bp) pagebuf_pin(bp)
+#define xfs_bunpin(bp) pagebuf_unpin(bp)
+
+#define xfs_buftrace(id, bp) \
+ pagebuf_trace(bp, id, NULL, (void *)__builtin_return_address(0))
+
+#define xfs_biodone(pb) \
+ pagebuf_iodone(pb, (pb->pb_flags & PBF_FS_DATAIOD), 0)
+
+#define xfs_biomove(pb, off, len, data, rw) \
+ pagebuf_iomove((pb), (off), (len), (data), \
+ ((rw) == XFS_B_WRITE) ? PBRW_WRITE : PBRW_READ)
+
+#define xfs_biozero(pb, off, len) \
+ pagebuf_iomove((pb), (off), (len), NULL, PBRW_ZERO)
+
+
+static inline int XFS_bwrite(xfs_buf_t *pb)
+{
+ int iowait = (pb->pb_flags & PBF_ASYNC) == 0;
+ int error = 0;
+
+ if (!iowait)
+ pb->pb_flags |= _PBF_RUN_QUEUES;
+
+ pagebuf_delwri_dequeue(pb);
+ pagebuf_iostrategy(pb);
+ if (iowait) {
+ error = pagebuf_iowait(pb);
+ xfs_buf_relse(pb);
+ }
+ return error;
+}
+
+#define XFS_bdwrite(pb) \
+ pagebuf_iostart(pb, PBF_DELWRI | PBF_ASYNC)
+
+static inline int xfs_bdwrite(void *mp, xfs_buf_t *bp)
+{
+ bp->pb_strat = xfs_bdstrat_cb;
+ bp->pb_fspriv3 = mp;
+
+ return pagebuf_iostart(bp, PBF_DELWRI | PBF_ASYNC);
+}
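+
+/*
+ * Note: XFS_bdwrite() and xfs_bdwrite() both queue a delayed
+ * asynchronous write; the lower-case variant additionally wires up
+ * the xfs_bdstrat_cb strategy routine and stashes the mount pointer
+ * for it in pb_fspriv3.
+ */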
+
+#define XFS_bdstrat(bp) pagebuf_iorequest(bp)
+
+#define xfs_iowait(pb) pagebuf_iowait(pb)
+
+#define xfs_baread(target, rablkno, ralen) \
+ pagebuf_readahead((target), (rablkno), (ralen), PBF_DONT_BLOCK)
+
+#define xfs_buf_get_empty(len, target) pagebuf_get_empty((len), (target))
+#define xfs_buf_get_noaddr(len, target) pagebuf_get_no_daddr((len), (target))
+#define xfs_buf_free(bp) pagebuf_free(bp)
+
+
+/*
+ * Handling of buftargs.
+ */
+
+extern xfs_buftarg_t *xfs_alloc_buftarg(struct block_device *);
+extern void xfs_free_buftarg(xfs_buftarg_t *, int);
+extern void xfs_wait_buftarg(xfs_buftarg_t *);
+extern int xfs_setsize_buftarg(xfs_buftarg_t *, unsigned int, unsigned int);
+extern void xfs_incore_relse(xfs_buftarg_t *, int, int);
+extern int xfs_flush_buftarg(xfs_buftarg_t *, int);
+
+#define xfs_getsize_buftarg(buftarg) \
+ block_size((buftarg)->pbr_bdev)
+#define xfs_readonly_buftarg(buftarg) \
+ bdev_read_only((buftarg)->pbr_bdev)
+#define xfs_binval(buftarg) \
+ xfs_flush_buftarg(buftarg, 1)
+#define XFS_bflush(buftarg) \
+ xfs_flush_buftarg(buftarg, 1)
+
+#endif /* __XFS_BUF_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_sb.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_trans.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_bmap_btree.h"
+#include "xfs_alloc_btree.h"
+#include "xfs_ialloc_btree.h"
+#include "xfs_alloc.h"
+#include "xfs_btree.h"
+#include "xfs_attr_sf.h"
+#include "xfs_dir_sf.h"
+#include "xfs_dir2_sf.h"
+#include "xfs_dinode.h"
+#include "xfs_inode.h"
+#include "xfs_error.h"
+#include "xfs_rw.h"
+
+#include <linux/dcache.h>
+#include <linux/smp_lock.h>
+
+static struct vm_operations_struct linvfs_file_vm_ops;
+
+
+STATIC inline ssize_t
+__linvfs_read(
+ struct kiocb *iocb,
+ char __user *buf,
+ int ioflags,
+ size_t count,
+ loff_t pos)
+{
+ struct iovec iov = {buf, count};
+ struct file *file = iocb->ki_filp;
+ vnode_t *vp = LINVFS_GET_VP(file->f_dentry->d_inode);
+ ssize_t rval;
+
+ BUG_ON(iocb->ki_pos != pos);
+
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+ VOP_READ(vp, iocb, &iov, 1, &iocb->ki_pos, ioflags, NULL, rval);
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_read(
+ struct kiocb *iocb,
+ char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_read(iocb, buf, 0, count, pos);
+}
+
+STATIC ssize_t
+linvfs_read_invis(
+ struct kiocb *iocb,
+ char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_read(iocb, buf, IO_INVIS, count, pos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_write(
+ struct kiocb *iocb,
+ const char __user *buf,
+ int ioflags,
+ size_t count,
+ loff_t pos)
+{
+ struct iovec iov = {(void __user *)buf, count};
+ struct file *file = iocb->ki_filp;
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ ssize_t rval;
+
+ BUG_ON(iocb->ki_pos != pos);
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+
+ VOP_WRITE(vp, iocb, &iov, 1, &iocb->ki_pos, ioflags, NULL, rval);
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_write(
+ struct kiocb *iocb,
+ const char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_write(iocb, buf, 0, count, pos);
+}
+
+STATIC ssize_t
+linvfs_write_invis(
+ struct kiocb *iocb,
+ const char __user *buf,
+ size_t count,
+ loff_t pos)
+{
+ return __linvfs_write(iocb, buf, IO_INVIS, count, pos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_readv(
+ struct file *file,
+ const struct iovec *iov,
+ int ioflags,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ struct kiocb kiocb;
+ ssize_t rval;
+
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = *ppos;
+
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+ VOP_READ(vp, &kiocb, iov, nr_segs, &kiocb.ki_pos, ioflags, NULL, rval);
+
+ *ppos = kiocb.ki_pos;
+ return rval;
+}
+
+STATIC ssize_t
+linvfs_readv(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_readv(file, iov, 0, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_readv_invis(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_readv(file, iov, IO_INVIS, nr_segs, ppos);
+}
+
+
+STATIC inline ssize_t
+__linvfs_writev(
+ struct file *file,
+ const struct iovec *iov,
+ int ioflags,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ struct inode *inode = file->f_mapping->host;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ struct kiocb kiocb;
+ ssize_t rval;
+
+ init_sync_kiocb(&kiocb, file);
+ kiocb.ki_pos = *ppos;
+ if (unlikely(file->f_flags & O_DIRECT))
+ ioflags |= IO_ISDIRECT;
+
+ VOP_WRITE(vp, &kiocb, iov, nr_segs, &kiocb.ki_pos, ioflags, NULL, rval);
+
+ *ppos = kiocb.ki_pos;
+ return rval;
+}
+
+
+STATIC ssize_t
+linvfs_writev(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_writev(file, iov, 0, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_writev_invis(
+ struct file *file,
+ const struct iovec *iov,
+ unsigned long nr_segs,
+ loff_t *ppos)
+{
+ return __linvfs_writev(file, iov, IO_INVIS, nr_segs, ppos);
+}
+
+STATIC ssize_t
+linvfs_sendfile(
+ struct file *filp,
+ loff_t *ppos,
+ size_t count,
+ read_actor_t actor,
+ void *target)
+{
+ vnode_t *vp = LINVFS_GET_VP(filp->f_dentry->d_inode);
+ ssize_t rval;
+
+ VOP_SENDFILE(vp, filp, ppos, 0, count, actor, target, NULL, rval);
+ return rval;
+}
+
+
+STATIC int
+linvfs_open(
+ struct inode *inode,
+ struct file *filp)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+
+ if (!(filp->f_flags & O_LARGEFILE) && i_size_read(inode) > MAX_NON_LFS)
+ return -EFBIG;
+
+ ASSERT(vp);
+ VOP_OPEN(vp, NULL, error);
+ return -error;
+}
+
+
+STATIC int
+linvfs_release(
+ struct inode *inode,
+ struct file *filp)
+{
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error = 0;
+
+ if (vp)
+ VOP_RELEASE(vp, error);
+ return -error;
+}
+
+
+STATIC int
+linvfs_fsync(
+ struct file *filp,
+ struct dentry *dentry,
+ int datasync)
+{
+ struct inode *inode = dentry->d_inode;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+ int error;
+ int flags = FSYNC_WAIT;
+
+ if (datasync)
+ flags |= FSYNC_DATA;
+
+ ASSERT(vp);
+ VOP_FSYNC(vp, flags, NULL, (xfs_off_t)0, (xfs_off_t)-1, error);
+ return -error;
+}
+
+/*
+ * linvfs_readdir maps to VOP_READDIR().
+ * We need to build a uio, cred, ...
+ */
+
+#define nextdp(dp) ((struct xfs_dirent *)((char *)(dp) + (dp)->d_reclen))
+
+STATIC int
+linvfs_readdir(
+ struct file *filp,
+ void *dirent,
+ filldir_t filldir)
+{
+ int error = 0;
+ vnode_t *vp;
+ uio_t uio;
+ iovec_t iov;
+ int eof = 0;
+ caddr_t read_buf;
+ int namelen, size = 0;
+ size_t rlen = PAGE_CACHE_SIZE;
+ xfs_off_t start_offset, curr_offset;
+ xfs_dirent_t *dbp = NULL;
+
+ vp = LINVFS_GET_VP(filp->f_dentry->d_inode);
+ ASSERT(vp);
+
+ /* Try fairly hard to get memory */
+ do {
+ if ((read_buf = (caddr_t)kmalloc(rlen, GFP_KERNEL)))
+ break;
+ rlen >>= 1;
+ } while (rlen >= 1024);
+
+ if (read_buf == NULL)
+ return -ENOMEM;
+
+ uio.uio_iov = &iov;
+ uio.uio_segflg = UIO_SYSSPACE;
+ curr_offset = filp->f_pos;
+ if (filp->f_pos != 0x7fffffff)
+ uio.uio_offset = filp->f_pos;
+ else
+ uio.uio_offset = 0xffffffff;
+
+ while (!eof) {
+ uio.uio_resid = iov.iov_len = rlen;
+ iov.iov_base = read_buf;
+ uio.uio_iovcnt = 1;
+
+ start_offset = uio.uio_offset;
+
+ VOP_READDIR(vp, &uio, NULL, &eof, error);
+ if ((uio.uio_offset == start_offset) || error) {
+ size = 0;
+ break;
+ }
+
+ size = rlen - uio.uio_resid;
+ dbp = (xfs_dirent_t *)read_buf;
+ while (size > 0) {
+ namelen = strlen(dbp->d_name);
+
+ if (filldir(dirent, dbp->d_name, namelen,
+ (loff_t) curr_offset & 0x7fffffff,
+ (ino_t) dbp->d_ino,
+ DT_UNKNOWN)) {
+ goto done;
+ }
+ size -= dbp->d_reclen;
+ curr_offset = (loff_t)dbp->d_off /* & 0x7fffffff */;
+ dbp = nextdp(dbp);
+ }
+ }
+done:
+ if (!error) {
+ if (size == 0)
+ filp->f_pos = uio.uio_offset & 0x7fffffff;
+ else if (dbp)
+ filp->f_pos = curr_offset;
+ }
+
+ kfree(read_buf);
+ return -error;
+}
+
+
+STATIC int
+linvfs_file_mmap(
+ struct file *filp,
+ struct vm_area_struct *vma)
+{
+ struct inode *ip = filp->f_dentry->d_inode;
+ vnode_t *vp = LINVFS_GET_VP(ip);
+ vattr_t va = { .va_mask = XFS_AT_UPDATIME };
+ int error;
+
+ if (vp->v_vfsp->vfs_flag & VFS_DMI) {
+ xfs_mount_t *mp = XFS_VFSTOM(vp->v_vfsp);
+
+ error = -XFS_SEND_MMAP(mp, vma, 0);
+ if (error)
+ return error;
+ }
+
+ vma->vm_ops = &linvfs_file_vm_ops;
+
+ VOP_SETATTR(vp, &va, XFS_AT_UPDATIME, NULL, error);
+ if (!error)
+ vn_revalidate(vp); /* update Linux inode flags */
+ return 0;
+}
+
+
+STATIC int
+linvfs_ioctl(
+ struct inode *inode,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int error;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ unlock_kernel();
+ VOP_IOCTL(vp, inode, filp, 0, cmd, (void __user *)arg, error);
+ VMODIFY(vp);
+ lock_kernel();
+
+ /* NOTE: some of the ioctls return positive numbers as a
+ * byte count indicating success, such as
+ * readlink_by_handle. So we don't "sign flip"
+ * like most other routines. This means true
+ * errors need to be returned as a negative value.
+ */
+ return error;
+}
+
+STATIC int
+linvfs_ioctl_invis(
+ struct inode *inode,
+ struct file *filp,
+ unsigned int cmd,
+ unsigned long arg)
+{
+ int error;
+ vnode_t *vp = LINVFS_GET_VP(inode);
+
+ unlock_kernel();
+ ASSERT(vp);
+ VOP_IOCTL(vp, inode, filp, IO_INVIS, cmd, (void __user *)arg, error);
+ VMODIFY(vp);
+ lock_kernel();
+
+ /* NOTE: some of the ioctls return positive numbers as a
+ * byte count indicating success, such as
+ * readlink_by_handle. So we don't "sign flip"
+ * like most other routines. This means true
+ * errors need to be returned as a negative value.
+ */
+ return error;
+}
+
+#ifdef HAVE_VMOP_MPROTECT
+STATIC int
+linvfs_mprotect(
+ struct vm_area_struct *vma,
+ unsigned int newflags)
+{
+ vnode_t *vp = LINVFS_GET_VP(vma->vm_file->f_dentry->d_inode);
+ int error = 0;
+
+ if (vp->v_vfsp->vfs_flag & VFS_DMI) {
+ if ((vma->vm_flags & VM_MAYSHARE) &&
+ (newflags & VM_WRITE) && !(vma->vm_flags & VM_WRITE)) {
+ xfs_mount_t *mp = XFS_VFSTOM(vp->v_vfsp);
+
+ error = XFS_SEND_MMAP(mp, vma, VM_WRITE);
+ }
+ }
+ return error;
+}
+#endif /* HAVE_VMOP_MPROTECT */
+
+
+struct file_operations linvfs_file_operations = {
+ .llseek = generic_file_llseek,
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .readv = linvfs_readv,
+ .writev = linvfs_writev,
+ .aio_read = linvfs_read,
+ .aio_write = linvfs_write,
+ .sendfile = linvfs_sendfile,
+ .ioctl = linvfs_ioctl,
+ .mmap = linvfs_file_mmap,
+ .open = linvfs_open,
+ .release = linvfs_release,
+ .fsync = linvfs_fsync,
+};
+
+struct file_operations linvfs_invis_file_operations = {
+ .llseek = generic_file_llseek,
+ .read = do_sync_read,
+ .write = do_sync_write,
+ .readv = linvfs_readv_invis,
+ .writev = linvfs_writev_invis,
+ .aio_read = linvfs_read_invis,
+ .aio_write = linvfs_write_invis,
+ .sendfile = linvfs_sendfile,
+ .ioctl = linvfs_ioctl_invis,
+ .mmap = linvfs_file_mmap,
+ .open = linvfs_open,
+ .release = linvfs_release,
+ .fsync = linvfs_fsync,
+};
+
+
+struct file_operations linvfs_dir_operations = {
+ .read = generic_read_dir,
+ .readdir = linvfs_readdir,
+ .ioctl = linvfs_ioctl,
+ .fsync = linvfs_fsync,
+};
+
+static struct vm_operations_struct linvfs_file_vm_ops = {
+ .nopage = filemap_nopage,
+#ifdef HAVE_VMOP_MPROTECT
+ .mprotect = linvfs_mprotect,
+#endif
+};
--- /dev/null
+/*
+ * Copyright (c) 2000, 2002 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_SUBR_H__
+#define __XFS_SUBR_H__
+
+/*
+ * Utilities shared among file system implementations.
+ */
+
+struct cred;
+
+extern int fs_noerr(void);
+extern int fs_nosys(void);
+extern void fs_noval(void);
+extern void fs_tosspages(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+extern void fs_flushinval_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+extern int fs_flush_pages(bhv_desc_t *, xfs_off_t, xfs_off_t, uint64_t, int);
+
+#endif /* __XFS_SUBR_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+/*
+ * This file contains globals needed by XFS that were normally defined
+ * somewhere else in IRIX.
+ */
+
+#include "xfs.h"
+#include "xfs_cred.h"
+#include "xfs_sysctl.h"
+
+/*
+ * System memory size - used to scale certain data structures in XFS.
+ */
+unsigned long xfs_physmem;
+
+/*
+ * Tunable XFS parameters. xfs_params is required even when CONFIG_SYSCTL=n,
+ * because other XFS code uses these values. Times are measured in centisecs
+ * (i.e. 100ths of a second).
+ */
+xfs_param_t xfs_params = {
+ /* MIN DFLT MAX */
+ .restrict_chown = { 0, 1, 1 },
+ .sgid_inherit = { 0, 0, 1 },
+ .symlink_mode = { 0, 0, 1 },
+ .panic_mask = { 0, 0, 127 },
+ .error_level = { 0, 3, 11 },
+ .syncd_timer = { 1*100, 30*100, 7200*100},
+ .stats_clear = { 0, 0, 1 },
+ .inherit_sync = { 0, 1, 1 },
+ .inherit_nodump = { 0, 1, 1 },
+ .inherit_noatim = { 0, 1, 1 },
+ .xfs_buf_timer = { 100/2, 1*100, 30*100 },
+ .xfs_buf_age = { 1*100, 15*100, 7200*100},
+ .inherit_nosym = { 0, 0, 1 },
+ .rotorstep = { 1, 1, 255 },
+};
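+
+/*
+ * Example reading of the table above: with times in centisecs,
+ * .xfs_buf_timer = { 100/2, 1*100, 30*100 } lets the buffer flush
+ * interval range from half a second to 30 seconds, defaulting to
+ * one second.
+ */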
+
+/*
+ * Global system credential structure.
+ */
+cred_t sys_cred_val, *sys_cred = &sys_cred_val;
+
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_IOPS_H__
+#define __XFS_IOPS_H__
+
+extern struct inode_operations linvfs_file_inode_operations;
+extern struct inode_operations linvfs_dir_inode_operations;
+extern struct inode_operations linvfs_symlink_inode_operations;
+
+extern struct file_operations linvfs_file_operations;
+extern struct file_operations linvfs_invis_file_operations;
+extern struct file_operations linvfs_dir_operations;
+
+extern struct address_space_operations linvfs_aops;
+
+extern int linvfs_get_block(struct inode *, sector_t, struct buffer_head *, int);
+extern void linvfs_unwritten_done(struct buffer_head *, int);
+
+extern int xfs_ioctl(struct bhv_desc *, struct inode *, struct file *,
+ int, unsigned int, void __user *);
+
+#endif /* __XFS_IOPS_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_LINUX__
+#define __XFS_LINUX__
+
+#include <linux/types.h>
+#include <linux/config.h>
+
+/*
+ * Some types are conditional depending on the target system.
+ * XFS_BIG_BLKNOS needs block layer disk addresses to be 64 bits.
+ * XFS_BIG_INUMS needs the VFS inode number to be 64 bits, as well
+ * as requiring XFS_BIG_BLKNOS to be set.
+ */
+#if defined(CONFIG_LBD) || (BITS_PER_LONG == 64)
+# define XFS_BIG_BLKNOS 1
+# if BITS_PER_LONG == 64
+# define XFS_BIG_INUMS 1
+# else
+# define XFS_BIG_INUMS 0
+# endif
+#else
+# define XFS_BIG_BLKNOS 0
+# define XFS_BIG_INUMS 0
+#endif
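+
+/*
+ * In other words: a 64-bit kernel always gets both XFS_BIG_BLKNOS and
+ * XFS_BIG_INUMS; a 32-bit kernel gets XFS_BIG_BLKNOS only with
+ * CONFIG_LBD, and never gets XFS_BIG_INUMS.
+ */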
+
+#include <xfs_types.h>
+#include <xfs_arch.h>
+
+#include <kmem.h>
+#include <mrlock.h>
+#include <spin.h>
+#include <sv.h>
+#include <mutex.h>
+#include <sema.h>
+#include <time.h>
+
+#include <support/qsort.h>
+#include <support/ktrace.h>
+#include <support/debug.h>
+#include <support/move.h>
+#include <support/uuid.h>
+
+#include <linux/mm.h>
+#include <linux/kernel.h>
+#include <linux/blkdev.h>
+#include <linux/slab.h>
+#include <linux/module.h>
+#include <linux/file.h>
+#include <linux/swap.h>
+#include <linux/errno.h>
+#include <linux/sched.h>
+#include <linux/bitops.h>
+#include <linux/major.h>
+#include <linux/pagemap.h>
+#include <linux/vfs.h>
+#include <linux/seq_file.h>
+#include <linux/init.h>
+#include <linux/list.h>
+#include <linux/proc_fs.h>
+#include <linux/version.h>
+
+#include <asm/page.h>
+#include <asm/div64.h>
+#include <asm/param.h>
+#include <asm/uaccess.h>
+#include <asm/byteorder.h>
+#include <asm/unaligned.h>
+
+#include <xfs_behavior.h>
+#include <xfs_vfs.h>
+#include <xfs_cred.h>
+#include <xfs_vnode.h>
+#include <xfs_stats.h>
+#include <xfs_sysctl.h>
+#include <xfs_iops.h>
+#include <xfs_super.h>
+#include <xfs_globals.h>
+#include <xfs_fs_subr.h>
+#include <xfs_lrw.h>
+#include <xfs_buf.h>
+
+/*
+ * Feature macros (disable/enable)
+ */
+#undef HAVE_REFCACHE /* reference cache not needed for NFS in 2.6 */
+#define HAVE_SENDFILE /* sendfile(2) exists in 2.6, but not in 2.4 */
+
+/*
+ * State flag for unwritten extent buffers.
+ *
+ * We need to be able to distinguish between these and delayed
+ * allocate buffers within XFS. The generic IO path code does
+ * not need to distinguish - we use the BH_Delay flag for both
+ * delalloc and these ondisk-uninitialised buffers.
+ */
+BUFFER_FNS(PrivateStart, unwritten);
+static inline void set_buffer_unwritten_io(struct buffer_head *bh)
+{
+ bh->b_end_io = linvfs_unwritten_done;
+}
+
+#define restricted_chown xfs_params.restrict_chown.val
+#define irix_sgid_inherit xfs_params.sgid_inherit.val
+#define irix_symlink_mode xfs_params.symlink_mode.val
+#define xfs_panic_mask xfs_params.panic_mask.val
+#define xfs_error_level xfs_params.error_level.val
+#define xfs_syncd_centisecs xfs_params.syncd_timer.val
+#define xfs_stats_clear xfs_params.stats_clear.val
+#define xfs_inherit_sync xfs_params.inherit_sync.val
+#define xfs_inherit_nodump xfs_params.inherit_nodump.val
+#define xfs_inherit_noatime xfs_params.inherit_noatim.val
+#define xfs_buf_timer_centisecs xfs_params.xfs_buf_timer.val
+#define xfs_buf_age_centisecs xfs_params.xfs_buf_age.val
+#define xfs_inherit_nosymlinks xfs_params.inherit_nosym.val
+#define xfs_rotorstep xfs_params.rotorstep.val
+
+#define current_cpu() smp_processor_id()
+#define current_pid() (current->pid)
+#define current_fsuid(cred) (current->fsuid)
+#define current_fsgid(cred) (current->fsgid)
+
+#define NBPP PAGE_SIZE
+#define DPPSHFT (PAGE_SHIFT - 9)
+#define NDPP (1 << (PAGE_SHIFT - 9))
+#define dtop(DD) (((DD) + NDPP - 1) >> DPPSHFT)
+#define dtopt(DD) ((DD) >> DPPSHFT)
+#define dpoff(DD) ((DD) & (NDPP-1))
+
+#define NBBY 8 /* number of bits per byte */
+#define NBPC PAGE_SIZE /* Number of bytes per click */
+#define BPCSHIFT PAGE_SHIFT /* LOG2(NBPC) if exact */
+
+/*
+ * Size of block device i/o is parameterized here.
+ * Currently the system supports page-sized i/o.
+ */
+#define BLKDEV_IOSHIFT BPCSHIFT
+#define BLKDEV_IOSIZE (1<<BLKDEV_IOSHIFT)
+/* number of BB's per block device block */
+#define BLKDEV_BB BTOBB(BLKDEV_IOSIZE)
+
+/* bytes to clicks */
+#define btoc(x) (((__psunsigned_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define btoct(x) ((__psunsigned_t)(x)>>BPCSHIFT)
+#define btoc64(x) (((__uint64_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define btoct64(x) ((__uint64_t)(x)>>BPCSHIFT)
+#define io_btoc(x) (((__psunsigned_t)(x)+(IO_NBPC-1))>>IO_BPCSHIFT)
+#define io_btoct(x) ((__psunsigned_t)(x)>>IO_BPCSHIFT)
+
+/* off_t bytes to clicks */
+#define offtoc(x) (((__uint64_t)(x)+(NBPC-1))>>BPCSHIFT)
+#define offtoct(x) ((xfs_off_t)(x)>>BPCSHIFT)
+
+/* clicks to off_t bytes */
+#define ctooff(x) ((xfs_off_t)(x)<<BPCSHIFT)
+
+/* clicks to bytes */
+#define ctob(x) ((__psunsigned_t)(x)<<BPCSHIFT)
+#define btoct(x) ((__psunsigned_t)(x)>>BPCSHIFT)
+#define ctob64(x) ((__uint64_t)(x)<<BPCSHIFT)
+#define io_ctob(x) ((__psunsigned_t)(x)<<IO_BPCSHIFT)
+
+/* bytes to clicks */
+#define btoc(x) (((__psunsigned_t)(x)+(NBPC-1))>>BPCSHIFT)
+
+#ifndef CELL_CAPABLE
+#define FSC_NOTIFY_NAME_CHANGED(vp)
+#endif
+
+#ifndef ENOATTR
+#define ENOATTR ENODATA /* Attribute not found */
+#endif
+
+/* Note: EWRONGFS never visible outside the kernel */
+#define EWRONGFS EINVAL /* Mount with wrong filesystem type */
+
+/*
+ * XXX EFSCORRUPTED needs a real value in errno.h. asm-i386/errno.h won't
+ * return codes out of its known range in errno.
+ * XXX Also note: needs to be < 1000 and fairly unique on Linux (mustn't
+ * conflict with any code we use already or any code a driver may use)
+ * XXX Some options (currently we do #2):
+ * 1/ New error code ["Filesystem is corrupted", _after_ glibc updated]
+ * 2/ 990 ["Unknown error 990"]
+ * 3/ EUCLEAN ["Structure needs cleaning"]
+ * 4/ Convert EFSCORRUPTED to EIO [just prior to return into userspace]
+ */
+#define EFSCORRUPTED 990 /* Filesystem is corrupted */
+
+#define SYNCHRONIZE() barrier()
+#define __return_address __builtin_return_address(0)
+
+/*
+ * IRIX (BSD) quotactl makes use of separate commands for user/group,
+ * whereas on Linux the syscall encodes this information into the cmd
+ * field (see the QCMD macro in quota.h). These macros help keep the
+ * code portable - they are not visible from the syscall interface.
+ */
+#define Q_XSETGQLIM XQM_CMD(0x8) /* set groups disk limits */
+#define Q_XGETGQUOTA XQM_CMD(0x9) /* get groups disk limits */
+
+/* IRIX uses a dynamic sizing algorithm (ndquot = 200 + numprocs*2) */
+/* we may well need to fine-tune this if it ever becomes an issue. */
+#define DQUOT_MAX_HEURISTIC 1024 /* NR_DQUOTS */
+#define ndquot DQUOT_MAX_HEURISTIC
+
+/* IRIX uses the current size of the name cache to guess a good value */
+/* - this isn't the same but is a good enough starting point for now. */
+#define DQUOT_HASH_HEURISTIC files_stat.nr_files
+
+/* IRIX inodes maintain the project ID also, zero this field on Linux */
+#define DEFAULT_PROJID 0
+#define dfltprid DEFAULT_PROJID
+
+#define MAXPATHLEN 1024
+
+#define MIN(a,b) (min(a,b))
+#define MAX(a,b) (max(a,b))
+#define howmany(x, y) (((x)+((y)-1))/(y))
+#define roundup(x, y) ((((x)+((y)-1))/(y))*(y))
+
+#define xfs_stack_trace() dump_stack()
+
+#define xfs_itruncate_data(ip, off) \
+ (-vmtruncate(LINVFS_GET_IP(XFS_ITOV(ip)), (off)))
+
+
+/* Move the kernel do_div definition off to one side */
+
+#if defined __i386__
+/* For ia32 we need to pull some tricks to get past various versions
+ * of the compiler which do not like us using do_div in the middle
+ * of large functions.
+ */
+static inline __u32 xfs_do_div(void *a, __u32 b, int n)
+{
+ __u32 mod;
+
+ switch (n) {
+ case 4:
+ mod = *(__u32 *)a % b;
+ *(__u32 *)a = *(__u32 *)a / b;
+ return mod;
+ case 8:
+ {
+ unsigned long __upper, __low, __high, __mod;
+ __u64 c = *(__u64 *)a;
+ __upper = __high = c >> 32;
+ __low = c;
+ if (__high) {
+ __upper = __high % (b);
+ __high = __high / (b);
+ }
+ asm("divl %2":"=a" (__low), "=d" (__mod):"rm" (b), "0" (__low), "1" (__upper));
+ asm("":"=A" (c):"a" (__low),"d" (__high));
+ *(__u64 *)a = c;
+ return __mod;
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+
+/* Side effect free 64 bit mod operation */
+static inline __u32 xfs_do_mod(void *a, __u32 b, int n)
+{
+ switch (n) {
+ case 4:
+ return *(__u32 *)a % b;
+ case 8:
+ {
+ unsigned long __upper, __low, __high, __mod;
+ __u64 c = *(__u64 *)a;
+ __upper = __high = c >> 32;
+ __low = c;
+ if (__high) {
+ __upper = __high % (b);
+ __high = __high / (b);
+ }
+ asm("divl %2":"=a" (__low), "=d" (__mod):"rm" (b), "0" (__low), "1" (__upper));
+ asm("":"=A" (c):"a" (__low),"d" (__high));
+ return __mod;
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+#else
+static inline __u32 xfs_do_div(void *a, __u32 b, int n)
+{
+ __u32 mod;
+
+ switch (n) {
+ case 4:
+ mod = *(__u32 *)a % b;
+ *(__u32 *)a = *(__u32 *)a / b;
+ return mod;
+ case 8:
+ mod = do_div(*(__u64 *)a, b);
+ return mod;
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+
+/* Side effect free 64 bit mod operation */
+static inline __u32 xfs_do_mod(void *a, __u32 b, int n)
+{
+ switch (n) {
+ case 4:
+ return *(__u32 *)a % b;
+ case 8:
+ {
+ __u64 c = *(__u64 *)a;
+ return do_div(c, b);
+ }
+ }
+
+ /* NOTREACHED */
+ return 0;
+}
+#endif
+
+#undef do_div
+#define do_div(a, b) xfs_do_div(&(a), (b), sizeof(a))
+#define do_mod(a, b) xfs_do_mod(&(a), (b), sizeof(a))
+
+static inline __uint64_t roundup_64(__uint64_t x, __uint32_t y)
+{
+ x += y - 1;
+ do_div(x, y);
+ return(x * y);
+}
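+
+/*
+ * Worked example for the redefined do_div()/do_mod(): do_div()
+ * divides a 64-bit value in place and returns the remainder, while
+ * do_mod() leaves its argument untouched:
+ *
+ *	__uint64_t x = 1000;
+ *	__u32 r = do_div(x, 512);	now x == 1, r == 488
+ *	r = do_mod(x, 512);		x unchanged, r == x % 512
+ */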
+
+#endif /* __XFS_LINUX__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_LRW_H__
+#define __XFS_LRW_H__
+
+struct vnode;
+struct bhv_desc;
+struct xfs_mount;
+struct xfs_iocore;
+struct xfs_inode;
+struct xfs_bmbt_irec;
+struct xfs_buf;
+struct xfs_iomap;
+
+#if defined(XFS_RW_TRACE)
+/*
+ * Defines for the trace mechanisms in xfs_lrw.c.
+ */
+#define XFS_RW_KTRACE_SIZE 128
+
+#define XFS_READ_ENTER 1
+#define XFS_WRITE_ENTER 2
+#define XFS_IOMAP_READ_ENTER 3
+#define XFS_IOMAP_WRITE_ENTER 4
+#define XFS_IOMAP_READ_MAP 5
+#define XFS_IOMAP_WRITE_MAP 6
+#define XFS_IOMAP_WRITE_NOSPACE 7
+#define XFS_ITRUNC_START 8
+#define XFS_ITRUNC_FINISH1 9
+#define XFS_ITRUNC_FINISH2 10
+#define XFS_CTRUNC1 11
+#define XFS_CTRUNC2 12
+#define XFS_CTRUNC3 13
+#define XFS_CTRUNC4 14
+#define XFS_CTRUNC5 15
+#define XFS_CTRUNC6 16
+#define XFS_BUNMAPI 17
+#define XFS_INVAL_CACHED 18
+#define XFS_DIORD_ENTER 19
+#define XFS_DIOWR_ENTER 20
+#define XFS_SENDFILE_ENTER 21
+#define XFS_WRITEPAGE_ENTER 22
+#define XFS_RELEASEPAGE_ENTER 23
+#define XFS_IOMAP_ALLOC_ENTER 24
+#define XFS_IOMAP_ALLOC_MAP 25
+#define XFS_IOMAP_UNWRITTEN 26
+extern void xfs_rw_enter_trace(int, struct xfs_iocore *,
+ void *, size_t, loff_t, int);
+extern void xfs_inval_cached_trace(struct xfs_iocore *,
+ xfs_off_t, xfs_off_t, xfs_off_t, xfs_off_t);
+#else
+#define xfs_rw_enter_trace(tag, io, data, size, offset, ioflags)
+#define xfs_inval_cached_trace(io, offset, len, first, last)
+#endif
+
+/*
+ * Maximum count of bmaps used by read and write paths.
+ */
+#define XFS_MAX_RW_NBMAPS 4
+
+extern int xfs_bmap(struct bhv_desc *, xfs_off_t, ssize_t, int,
+ struct xfs_iomap *, int *);
+extern int xfsbdstrat(struct xfs_mount *, struct xfs_buf *);
+extern int xfs_bdstrat_cb(struct xfs_buf *);
+
+extern int xfs_zero_eof(struct vnode *, struct xfs_iocore *, xfs_off_t,
+ xfs_fsize_t, xfs_fsize_t);
+extern void xfs_inval_cached_pages(struct vnode *, struct xfs_iocore *,
+ xfs_off_t, int, int);
+extern ssize_t xfs_read(struct bhv_desc *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+extern ssize_t xfs_write(struct bhv_desc *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+extern ssize_t xfs_sendfile(struct bhv_desc *, struct file *,
+ loff_t *, int, size_t, read_actor_t,
+ void *, struct cred *);
+
+extern int xfs_dev_is_read_only(struct xfs_mount *, char *);
+
+#define XFS_FSB_TO_DB_IO(io,fsb) \
+ (((io)->io_flags & XFS_IOCORE_RT) ? \
+ XFS_FSB_TO_BB((io)->io_mount, (fsb)) : \
+ XFS_FSB_TO_DADDR((io)->io_mount, (fsb)))
+
+#endif /* __XFS_LRW_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include <linux/proc_fs.h>
+
+DEFINE_PER_CPU(struct xfsstats, xfsstats);
+
+STATIC int
+xfs_read_xfsstats(
+ char *buffer,
+ char **start,
+ off_t offset,
+ int count,
+ int *eof,
+ void *data)
+{
+ int c, i, j, len, val;
+ __uint64_t xs_xstrat_bytes = 0;
+ __uint64_t xs_write_bytes = 0;
+ __uint64_t xs_read_bytes = 0;
+
+ static struct xstats_entry {
+ char *desc;
+ int endpoint;
+ } xstats[] = {
+ { "extent_alloc", XFSSTAT_END_EXTENT_ALLOC },
+ { "abt", XFSSTAT_END_ALLOC_BTREE },
+ { "blk_map", XFSSTAT_END_BLOCK_MAPPING },
+ { "bmbt", XFSSTAT_END_BLOCK_MAP_BTREE },
+ { "dir", XFSSTAT_END_DIRECTORY_OPS },
+ { "trans", XFSSTAT_END_TRANSACTIONS },
+ { "ig", XFSSTAT_END_INODE_OPS },
+ { "log", XFSSTAT_END_LOG_OPS },
+ { "push_ail", XFSSTAT_END_TAIL_PUSHING },
+ { "xstrat", XFSSTAT_END_WRITE_CONVERT },
+ { "rw", XFSSTAT_END_READ_WRITE_OPS },
+ { "attr", XFSSTAT_END_ATTRIBUTE_OPS },
+ { "icluster", XFSSTAT_END_INODE_CLUSTER },
+ { "vnodes", XFSSTAT_END_VNODE_OPS },
+ { "buf", XFSSTAT_END_BUF },
+ };
+
+ /* Loop over all stats groups */
+ for (i=j=len = 0; i < sizeof(xstats)/sizeof(struct xstats_entry); i++) {
+ len += sprintf(buffer + len, "%s", xstats[i].desc);
+ /* inner loop does each group */
+ while (j < xstats[i].endpoint) {
+ val = 0;
+ /* sum over all cpus */
+ for (c = 0; c < NR_CPUS; c++) {
+ if (!cpu_possible(c)) continue;
+ val += *(((__u32*)&per_cpu(xfsstats, c) + j));
+ }
+ len += sprintf(buffer + len, " %u", val);
+ j++;
+ }
+ buffer[len++] = '\n';
+ }
+ /* extra precision counters */
+ for (i = 0; i < NR_CPUS; i++) {
+ if (!cpu_possible(i)) continue;
+ xs_xstrat_bytes += per_cpu(xfsstats, i).xs_xstrat_bytes;
+ xs_write_bytes += per_cpu(xfsstats, i).xs_write_bytes;
+ xs_read_bytes += per_cpu(xfsstats, i).xs_read_bytes;
+ }
+
+ len += sprintf(buffer + len, "xpc %Lu %Lu %Lu\n",
+ xs_xstrat_bytes, xs_write_bytes, xs_read_bytes);
+ len += sprintf(buffer + len, "debug %u\n",
+#if defined(DEBUG)
+ 1);
+#else
+ 0);
+#endif
+
+ if (offset >= len) {
+ *start = buffer;
+ *eof = 1;
+ return 0;
+ }
+ *start = buffer + offset;
+ if ((len -= offset) > count)
+ return count;
+ *eof = 1;
+
+ return len;
+}
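+
+/*
+ * Illustrative sample (made-up values, not from the original patch) of
+ * the /proc/fs/xfs/stat layout produced above: one line per stats group,
+ * each counter summed across all possible CPUs, followed by the 64-bit
+ * "xpc" byte counters and the debug flag:
+ *
+ *	extent_alloc 4 28 0 0
+ *	...
+ *	xpc 410652 1961472 3219328
+ *	debug 0
+ */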
+
+void
+xfs_init_procfs(void)
+{
+ if (!proc_mkdir("fs/xfs", NULL))
+ return;
+ create_proc_read_entry("fs/xfs/stat", 0, NULL, xfs_read_xfsstats, NULL);
+}
+
+void
+xfs_cleanup_procfs(void)
+{
+ remove_proc_entry("fs/xfs/stat", NULL);
+ remove_proc_entry("fs/xfs", NULL);
+}
--- /dev/null
+/*
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_rw.h"
+#include <linux/sysctl.h>
+#include <linux/proc_fs.h>
+
+
+static struct ctl_table_header *xfs_table_header;
+
+
+#ifdef CONFIG_PROC_FS
+STATIC int
+xfs_stats_clear_proc_handler(
+ ctl_table *ctl,
+ int write,
+ struct file *filp,
+ void __user *buffer,
+ size_t *lenp,
+ loff_t *ppos)
+{
+ int c, ret, *valp = ctl->data;
+ __uint32_t vn_active;
+
+ ret = proc_dointvec_minmax(ctl, write, filp, buffer, lenp, ppos);
+
+ if (!ret && write && *valp) {
+ printk("XFS Clearing xfsstats\n");
+ for (c = 0; c < NR_CPUS; c++) {
+ if (!cpu_possible(c)) continue;
+ preempt_disable();
+ /* save vn_active, it's a universal truth! */
+ vn_active = per_cpu(xfsstats, c).vn_active;
+ memset(&per_cpu(xfsstats, c), 0,
+ sizeof(struct xfsstats));
+ per_cpu(xfsstats, c).vn_active = vn_active;
+ preempt_enable();
+ }
+ xfs_stats_clear = 0;
+ }
+
+ return ret;
+}
+#endif /* CONFIG_PROC_FS */
+
+STATIC ctl_table xfs_table[] = {
+ {XFS_RESTRICT_CHOWN, "restrict_chown", &xfs_params.restrict_chown.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.restrict_chown.min, &xfs_params.restrict_chown.max},
+
+ {XFS_SGID_INHERIT, "irix_sgid_inherit", &xfs_params.sgid_inherit.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.sgid_inherit.min, &xfs_params.sgid_inherit.max},
+
+ {XFS_SYMLINK_MODE, "irix_symlink_mode", &xfs_params.symlink_mode.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.symlink_mode.min, &xfs_params.symlink_mode.max},
+
+ {XFS_PANIC_MASK, "panic_mask", &xfs_params.panic_mask.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.panic_mask.min, &xfs_params.panic_mask.max},
+
+ {XFS_ERRLEVEL, "error_level", &xfs_params.error_level.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.error_level.min, &xfs_params.error_level.max},
+
+ {XFS_SYNCD_TIMER, "xfssyncd_centisecs", &xfs_params.syncd_timer.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.syncd_timer.min, &xfs_params.syncd_timer.max},
+
+ {XFS_INHERIT_SYNC, "inherit_sync", &xfs_params.inherit_sync.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_sync.min, &xfs_params.inherit_sync.max},
+
+ {XFS_INHERIT_NODUMP, "inherit_nodump", &xfs_params.inherit_nodump.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_nodump.min, &xfs_params.inherit_nodump.max},
+
+ {XFS_INHERIT_NOATIME, "inherit_noatime", &xfs_params.inherit_noatim.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_noatim.min, &xfs_params.inherit_noatim.max},
+
+ {XFS_BUF_TIMER, "xfsbufd_centisecs", &xfs_params.xfs_buf_timer.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.xfs_buf_timer.min, &xfs_params.xfs_buf_timer.max},
+
+ {XFS_BUF_AGE, "age_buffer_centisecs", &xfs_params.xfs_buf_age.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.xfs_buf_age.min, &xfs_params.xfs_buf_age.max},
+
+ {XFS_INHERIT_NOSYM, "inherit_nosymlinks", &xfs_params.inherit_nosym.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.inherit_nosym.min, &xfs_params.inherit_nosym.max},
+
+ {XFS_ROTORSTEP, "rotorstep", &xfs_params.rotorstep.val,
+ sizeof(int), 0644, NULL, &proc_dointvec_minmax,
+ &sysctl_intvec, NULL,
+ &xfs_params.rotorstep.min, &xfs_params.rotorstep.max},
+
+ /* please keep this the last entry */
+#ifdef CONFIG_PROC_FS
+ {XFS_STATS_CLEAR, "stats_clear", &xfs_params.stats_clear.val,
+ sizeof(int), 0644, NULL, &xfs_stats_clear_proc_handler,
+ &sysctl_intvec, NULL,
+ &xfs_params.stats_clear.min, &xfs_params.stats_clear.max},
+#endif /* CONFIG_PROC_FS */
+
+ {0}
+};
+
+STATIC ctl_table xfs_dir_table[] = {
+ {FS_XFS, "xfs", NULL, 0, 0555, xfs_table},
+ {0}
+};
+
+STATIC ctl_table xfs_root_table[] = {
+ {CTL_FS, "fs", NULL, 0, 0555, xfs_dir_table},
+ {0}
+};
+
+void
+xfs_sysctl_register(void)
+{
+ xfs_table_header = register_sysctl_table(xfs_root_table, 1);
+}
+
+void
+xfs_sysctl_unregister(void)
+{
+ if (xfs_table_header)
+ unregister_sysctl_table(xfs_table_header);
+}
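+
+/*
+ * Hedged usage note (not in the original patch): with CONFIG_SYSCTL the
+ * table above surfaces under /proc/sys/fs/xfs/, so the knobs can be
+ * exercised from userspace, e.g.
+ *
+ *	# cat /proc/sys/fs/xfs/error_level
+ *	# echo 1 > /proc/sys/fs/xfs/stats_clear	(zeroes the counters)
+ */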
--- /dev/null
+/*
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#ifndef __XFS_SYSCTL_H__
+#define __XFS_SYSCTL_H__
+
+#include <linux/sysctl.h>
+
+/*
+ * Tunable xfs parameters
+ */
+
+typedef struct xfs_sysctl_val {
+ int min;
+ int val;
+ int max;
+} xfs_sysctl_val_t;
+
+typedef struct xfs_param {
+ xfs_sysctl_val_t restrict_chown;/* Root/non-root can give away files.*/
+ xfs_sysctl_val_t sgid_inherit; /* Inherit S_ISGID if process' GID is
+ * not a member of parent dir GID. */
+ xfs_sysctl_val_t symlink_mode; /* Link creat mode affected by umask */
+ xfs_sysctl_val_t panic_mask; /* bitmask to cause panic on errors. */
+ xfs_sysctl_val_t error_level; /* Degree of reporting for problems */
+ xfs_sysctl_val_t syncd_timer; /* Interval between xfssyncd wakeups */
+ xfs_sysctl_val_t stats_clear; /* Reset all XFS statistics to zero. */
+ xfs_sysctl_val_t inherit_sync; /* Inherit the "sync" inode flag. */
+ xfs_sysctl_val_t inherit_nodump;/* Inherit the "nodump" inode flag. */
+ xfs_sysctl_val_t inherit_noatim;/* Inherit the "noatime" inode flag. */
+ xfs_sysctl_val_t xfs_buf_timer; /* Interval between xfsbufd wakeups. */
+ xfs_sysctl_val_t xfs_buf_age; /* Metadata buffer age before flush. */
+ xfs_sysctl_val_t inherit_nosym; /* Inherit the "nosymlinks" flag. */
+ xfs_sysctl_val_t rotorstep; /* inode32 AG rotoring control knob */
+} xfs_param_t;
+
+/*
+ * xfs_error_level:
+ *
+ * How much error reporting will be done when internal problems are
+ * encountered. These problems normally return an EFSCORRUPTED to their
+ * caller, with no other information reported.
+ *
+ * 0 No error reports
+ * 1 Report EFSCORRUPTED errors that will cause a filesystem shutdown
+ * 5 Report all EFSCORRUPTED errors (all of the above errors, plus any
+ * additional errors that are known to not cause shutdowns)
+ *
+ * xfs_panic_mask bit 0x8 turns the error reports into panics
+ */
+
+enum {
+ /* XFS_REFCACHE_SIZE = 1 */
+ /* XFS_REFCACHE_PURGE = 2 */
+ XFS_RESTRICT_CHOWN = 3,
+ XFS_SGID_INHERIT = 4,
+ XFS_SYMLINK_MODE = 5,
+ XFS_PANIC_MASK = 6,
+ XFS_ERRLEVEL = 7,
+ XFS_SYNCD_TIMER = 8,
+ /* XFS_PROBE_DMAPI = 9 */
+ /* XFS_PROBE_IOOPS = 10 */
+ /* XFS_PROBE_QUOTA = 11 */
+ XFS_STATS_CLEAR = 12,
+ XFS_INHERIT_SYNC = 13,
+ XFS_INHERIT_NODUMP = 14,
+ XFS_INHERIT_NOATIME = 15,
+ XFS_BUF_TIMER = 16,
+ XFS_BUF_AGE = 17,
+ /* XFS_IO_BYPASS = 18 */
+ XFS_INHERIT_NOSYM = 19,
+ XFS_ROTORSTEP = 20,
+};
+
+extern xfs_param_t xfs_params;
+
+#ifdef CONFIG_SYSCTL
+extern void xfs_sysctl_register(void);
+extern void xfs_sysctl_unregister(void);
+#else
+# define xfs_sysctl_register() do { } while (0)
+# define xfs_sysctl_unregister() do { } while (0)
+#endif /* CONFIG_SYSCTL */
+
+#endif /* __XFS_SYSCTL_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+
+#include "xfs.h"
+#include "xfs_fs.h"
+#include "xfs_macros.h"
+#include "xfs_inum.h"
+#include "xfs_log.h"
+#include "xfs_clnt.h"
+#include "xfs_trans.h"
+#include "xfs_sb.h"
+#include "xfs_ag.h"
+#include "xfs_dir.h"
+#include "xfs_dir2.h"
+#include "xfs_imap.h"
+#include "xfs_alloc.h"
+#include "xfs_dmapi.h"
+#include "xfs_mount.h"
+#include "xfs_quota.h"
+
+int
+vfs_mount(
+ struct bhv_desc *bdp,
+ struct xfs_mount_args *args,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_mount)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_mount)(next, args, cr));
+}
+
+int
+vfs_parseargs(
+ struct bhv_desc *bdp,
+ char *s,
+ struct xfs_mount_args *args,
+ int f)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_parseargs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_parseargs)(next, s, args, f));
+}
+
+int
+vfs_showargs(
+ struct bhv_desc *bdp,
+ struct seq_file *m)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_showargs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_showargs)(next, m));
+}
+
+int
+vfs_unmount(
+ struct bhv_desc *bdp,
+ int fl,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_unmount)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_unmount)(next, fl, cr));
+}
+
+int
+vfs_mntupdate(
+ struct bhv_desc *bdp,
+ int *fl,
+ struct xfs_mount_args *args)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_mntupdate)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_mntupdate)(next, fl, args));
+}
+
+int
+vfs_root(
+ struct bhv_desc *bdp,
+ struct vnode **vpp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_root)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_root)(next, vpp));
+}
+
+int
+vfs_statvfs(
+ struct bhv_desc *bdp,
+ xfs_statfs_t *sp,
+ struct vnode *vp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_statvfs)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_statvfs)(next, sp, vp));
+}
+
+int
+vfs_sync(
+ struct bhv_desc *bdp,
+ int fl,
+ struct cred *cr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_sync)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_sync)(next, fl, cr));
+}
+
+int
+vfs_vget(
+ struct bhv_desc *bdp,
+ struct vnode **vpp,
+ struct fid *fidp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_vget)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_vget)(next, vpp, fidp));
+}
+
+int
+vfs_dmapiops(
+ struct bhv_desc *bdp,
+ caddr_t addr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_dmapiops)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_dmapiops)(next, addr));
+}
+
+int
+vfs_quotactl(
+ struct bhv_desc *bdp,
+ int cmd,
+ int id,
+ caddr_t addr)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_quotactl)
+ next = BHV_NEXT(next);
+ return ((*bhvtovfsops(next)->vfs_quotactl)(next, cmd, id, addr));
+}
+
+void
+vfs_init_vnode(
+ struct bhv_desc *bdp,
+ struct vnode *vp,
+ struct bhv_desc *bp,
+ int unlock)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_init_vnode)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_init_vnode)(next, vp, bp, unlock));
+}
+
+void
+vfs_force_shutdown(
+ struct bhv_desc *bdp,
+ int fl,
+ char *file,
+ int line)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_force_shutdown)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_force_shutdown)(next, fl, file, line));
+}
+
+void
+vfs_freeze(
+ struct bhv_desc *bdp)
+{
+ struct bhv_desc *next = bdp;
+
+ ASSERT(next);
+ while (! (bhvtovfsops(next))->vfs_freeze)
+ next = BHV_NEXT(next);
+ ((*bhvtovfsops(next)->vfs_freeze)(next));
+}
+
+vfs_t *
+vfs_allocate( void )
+{
+ struct vfs *vfsp;
+
+ vfsp = kmem_zalloc(sizeof(vfs_t), KM_SLEEP);
+ bhv_head_init(VFS_BHVHEAD(vfsp), "vfs");
+ INIT_LIST_HEAD(&vfsp->vfs_sync_list);
+ vfsp->vfs_sync_lock = SPIN_LOCK_UNLOCKED;
+ init_waitqueue_head(&vfsp->vfs_wait_sync_task);
+ init_waitqueue_head(&vfsp->vfs_wait_single_sync_task);
+ return vfsp;
+}
+
+void
+vfs_deallocate(
+ struct vfs *vfsp)
+{
+ bhv_head_destroy(VFS_BHVHEAD(vfsp));
+ kmem_free(vfsp, sizeof(vfs_t));
+}
+
+void
+vfs_insertops(
+ struct vfs *vfsp,
+ struct bhv_vfsops *vfsops)
+{
+ struct bhv_desc *bdp;
+
+ bdp = kmem_alloc(sizeof(struct bhv_desc), KM_SLEEP);
+ bhv_desc_init(bdp, NULL, vfsp, vfsops);
+ bhv_insert(&vfsp->vfs_bh, bdp);
+}
+
+void
+vfs_insertbhv(
+ struct vfs *vfsp,
+ struct bhv_desc *bdp,
+ struct vfsops *vfsops,
+ void *mount)
+{
+ bhv_desc_init(bdp, mount, vfsp, vfsops);
+ bhv_insert_initial(&vfsp->vfs_bh, bdp);
+}
+
+void
+bhv_remove_vfsops(
+ struct vfs *vfsp,
+ int pos)
+{
+ struct bhv_desc *bhv;
+
+ bhv = bhv_lookup_range(&vfsp->vfs_bh, pos, pos);
+ if (!bhv)
+ return;
+ bhv_remove(&vfsp->vfs_bh, bhv);
+ kmem_free(bhv, sizeof(*bhv));
+}
+
+void
+bhv_remove_all_vfsops(
+ struct vfs *vfsp,
+ int freebase)
+{
+ struct xfs_mount *mp;
+
+ bhv_remove_vfsops(vfsp, VFS_POSITION_QM);
+ bhv_remove_vfsops(vfsp, VFS_POSITION_DM);
+ if (!freebase)
+ return;
+ mp = XFS_BHVTOM(bhv_lookup(VFS_BHVHEAD(vfsp), &xfs_vfsops));
+ VFS_REMOVEBHV(vfsp, &mp->m_bhv);
+ xfs_mount_free(mp, 0);
+}
+
+void
+bhv_insert_all_vfsops(
+ struct vfs *vfsp)
+{
+ struct xfs_mount *mp;
+
+ mp = xfs_mount_init();
+ vfs_insertbhv(vfsp, &mp->m_bhv, &xfs_vfsops, mp);
+ vfs_insertdmapi(vfsp);
+ vfs_insertquota(vfsp);
+}
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ */
+#ifndef __XFS_VFS_H__
+#define __XFS_VFS_H__
+
+#include <linux/vfs.h>
+#include "xfs_fs.h"
+
+struct fid;
+struct vfs;
+struct cred;
+struct vnode;
+struct kstatfs;
+struct seq_file;
+struct super_block;
+struct xfs_mount_args;
+
+typedef struct kstatfs xfs_statfs_t;
+
+typedef struct vfs_sync_work {
+ struct list_head w_list;
+ struct vfs *w_vfs;
+ void *w_data; /* syncer routine argument */
+ void (*w_syncer)(struct vfs *, void *);
+} vfs_sync_work_t;
+
+typedef struct vfs {
+ u_int vfs_flag; /* flags */
+ xfs_fsid_t vfs_fsid; /* file system ID */
+ xfs_fsid_t *vfs_altfsid; /* An ID fixed for life of FS */
+ bhv_head_t vfs_bh; /* head of vfs behavior chain */
+ struct super_block *vfs_super; /* generic superblock pointer */
+ struct task_struct *vfs_sync_task; /* generalised sync thread */
+ vfs_sync_work_t vfs_sync_work; /* work item for VFS_SYNC */
+ struct list_head vfs_sync_list; /* sync thread work item list */
+ spinlock_t vfs_sync_lock; /* work item list lock */
+ int vfs_sync_seq; /* sync thread generation no. */
+ wait_queue_head_t vfs_wait_single_sync_task;
+ wait_queue_head_t vfs_wait_sync_task;
+} vfs_t;
+
+#define vfs_fbhv vfs_bh.bh_first /* 1st on vfs behavior chain */
+
+#define bhvtovfs(bdp) ( (struct vfs *)BHV_VOBJ(bdp) )
+#define bhvtovfsops(bdp) ( (struct vfsops *)BHV_OPS(bdp) )
+#define VFS_BHVHEAD(vfs) ( &(vfs)->vfs_bh )
+#define VFS_REMOVEBHV(vfs, bdp) ( bhv_remove(VFS_BHVHEAD(vfs), bdp) )
+
+#define VFS_POSITION_BASE BHV_POSITION_BASE /* chain bottom */
+#define VFS_POSITION_TOP BHV_POSITION_TOP /* chain top */
+#define VFS_POSITION_INVALID BHV_POSITION_INVALID /* invalid pos. num */
+
+typedef enum {
+ VFS_BHV_UNKNOWN, /* not specified */
+ VFS_BHV_XFS, /* xfs */
+ VFS_BHV_DM, /* data migration */
+ VFS_BHV_QM, /* quota manager */
+ VFS_BHV_IO, /* IO path */
+ VFS_BHV_END /* housekeeping end-of-range */
+} vfs_bhv_t;
+
+#define VFS_POSITION_XFS (BHV_POSITION_BASE)
+#define VFS_POSITION_DM (VFS_POSITION_BASE+10)
+#define VFS_POSITION_QM (VFS_POSITION_BASE+20)
+#define VFS_POSITION_IO (VFS_POSITION_BASE+30)
+
+#define VFS_RDONLY 0x0001 /* read-only vfs */
+#define VFS_GRPID 0x0002 /* group-ID assigned from directory */
+#define VFS_DMI 0x0004 /* filesystem has the DMI enabled */
+#define VFS_UMOUNT 0x0008 /* unmount in progress */
+#define VFS_END 0x0008 /* max flag */
+
+#define SYNC_ATTR 0x0001 /* sync attributes */
+#define SYNC_CLOSE 0x0002 /* close file system down */
+#define SYNC_DELWRI 0x0004 /* look at delayed writes */
+#define SYNC_WAIT 0x0008 /* wait for i/o to complete */
+#define SYNC_BDFLUSH 0x0010 /* BDFLUSH is calling -- don't block */
+#define SYNC_FSDATA 0x0020 /* flush fs data (e.g. superblocks) */
+#define SYNC_REFCACHE 0x0040 /* prune some of the nfs ref cache */
+#define SYNC_REMOUNT 0x0080 /* remount readonly, no dummy LRs */
+
+typedef int (*vfs_mount_t)(bhv_desc_t *,
+ struct xfs_mount_args *, struct cred *);
+typedef int (*vfs_parseargs_t)(bhv_desc_t *, char *,
+ struct xfs_mount_args *, int);
+typedef int (*vfs_showargs_t)(bhv_desc_t *, struct seq_file *);
+typedef int (*vfs_unmount_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vfs_mntupdate_t)(bhv_desc_t *, int *,
+ struct xfs_mount_args *);
+typedef int (*vfs_root_t)(bhv_desc_t *, struct vnode **);
+typedef int (*vfs_statvfs_t)(bhv_desc_t *, xfs_statfs_t *, struct vnode *);
+typedef int (*vfs_sync_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vfs_vget_t)(bhv_desc_t *, struct vnode **, struct fid *);
+typedef int (*vfs_dmapiops_t)(bhv_desc_t *, caddr_t);
+typedef int (*vfs_quotactl_t)(bhv_desc_t *, int, int, caddr_t);
+typedef void (*vfs_init_vnode_t)(bhv_desc_t *,
+ struct vnode *, bhv_desc_t *, int);
+typedef void (*vfs_force_shutdown_t)(bhv_desc_t *, int, char *, int);
+typedef void (*vfs_freeze_t)(bhv_desc_t *);
+
+typedef struct vfsops {
+ bhv_position_t vf_position; /* behavior chain position */
+ vfs_mount_t vfs_mount; /* mount file system */
+ vfs_parseargs_t vfs_parseargs; /* parse mount options */
+ vfs_showargs_t vfs_showargs; /* unparse mount options */
+ vfs_unmount_t vfs_unmount; /* unmount file system */
+ vfs_mntupdate_t vfs_mntupdate; /* update file system options */
+ vfs_root_t vfs_root; /* get root vnode */
+ vfs_statvfs_t vfs_statvfs; /* file system statistics */
+ vfs_sync_t vfs_sync; /* flush files */
+ vfs_vget_t vfs_vget; /* get vnode from fid */
+ vfs_dmapiops_t vfs_dmapiops; /* data migration */
+ vfs_quotactl_t vfs_quotactl; /* disk quota */
+ vfs_init_vnode_t vfs_init_vnode; /* initialize a new vnode */
+ vfs_force_shutdown_t vfs_force_shutdown; /* crash and burn */
+ vfs_freeze_t vfs_freeze; /* freeze fs for snapshot */
+} vfsops_t;
+
+/*
+ * VFS's. Operates on vfs structure pointers (starts at bhv head).
+ */
+#define VHEAD(v) ((v)->vfs_fbhv)
+#define VFS_MOUNT(v, ma,cr, rv) ((rv) = vfs_mount(VHEAD(v), ma,cr))
+#define VFS_PARSEARGS(v, o,ma,f, rv) ((rv) = vfs_parseargs(VHEAD(v), o,ma,f))
+#define VFS_SHOWARGS(v, m, rv) ((rv) = vfs_showargs(VHEAD(v), m))
+#define VFS_UNMOUNT(v, f, cr, rv) ((rv) = vfs_unmount(VHEAD(v), f,cr))
+#define VFS_MNTUPDATE(v, fl, args, rv) ((rv) = vfs_mntupdate(VHEAD(v), fl, args))
+#define VFS_ROOT(v, vpp, rv) ((rv) = vfs_root(VHEAD(v), vpp))
+#define VFS_STATVFS(v, sp,vp, rv) ((rv) = vfs_statvfs(VHEAD(v), sp,vp))
+#define VFS_SYNC(v, flag,cr, rv) ((rv) = vfs_sync(VHEAD(v), flag,cr))
+#define VFS_VGET(v, vpp,fidp, rv) ((rv) = vfs_vget(VHEAD(v), vpp,fidp))
+#define VFS_DMAPIOPS(v, p, rv) ((rv) = vfs_dmapiops(VHEAD(v), p))
+#define VFS_QUOTACTL(v, c,id,p, rv) ((rv) = vfs_quotactl(VHEAD(v), c,id,p))
+#define VFS_INIT_VNODE(v, vp,b,ul) ( vfs_init_vnode(VHEAD(v), vp,b,ul) )
+#define VFS_FORCE_SHUTDOWN(v, fl,f,l) ( vfs_force_shutdown(VHEAD(v), fl,f,l) )
+#define VFS_FREEZE(v) ( vfs_freeze(VHEAD(v)) )
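+
+/*
+ * Hedged usage sketch (illustrative only): the VFS_* macros hand their
+ * result back through the trailing rv argument rather than returning it,
+ * and dispatch walks the behavior chain until an implementation is found:
+ *
+ *	vnode_t *rootvp;
+ *	int error;
+ *
+ *	VFS_ROOT(vfsp, &rootvp, error);
+ *	if (error)
+ *		return error;
+ */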
+
+/*
+ * PVFS's. Operates on behavior descriptor pointers.
+ */
+#define PVFS_MOUNT(b, ma,cr, rv) ((rv) = vfs_mount(b, ma,cr))
+#define PVFS_PARSEARGS(b, o,ma,f, rv) ((rv) = vfs_parseargs(b, o,ma,f))
+#define PVFS_SHOWARGS(b, m, rv) ((rv) = vfs_showargs(b, m))
+#define PVFS_UNMOUNT(b, f,cr, rv) ((rv) = vfs_unmount(b, f,cr))
+#define PVFS_MNTUPDATE(b, fl, args, rv) ((rv) = vfs_mntupdate(b, fl, args))
+#define PVFS_ROOT(b, vpp, rv) ((rv) = vfs_root(b, vpp))
+#define PVFS_STATVFS(b, sp,vp, rv) ((rv) = vfs_statvfs(b, sp,vp))
+#define PVFS_SYNC(b, flag,cr, rv) ((rv) = vfs_sync(b, flag,cr))
+#define PVFS_VGET(b, vpp,fidp, rv) ((rv) = vfs_vget(b, vpp,fidp))
+#define PVFS_DMAPIOPS(b, p, rv) ((rv) = vfs_dmapiops(b, p))
+#define PVFS_QUOTACTL(b, c,id,p, rv) ((rv) = vfs_quotactl(b, c,id,p))
+#define PVFS_INIT_VNODE(b, vp,b2,ul) ( vfs_init_vnode(b, vp,b2,ul) )
+#define PVFS_FORCE_SHUTDOWN(b, fl,f,l) ( vfs_force_shutdown(b, fl,f,l) )
+#define PVFS_FREEZE(b) ( vfs_freeze(b) )
+
+extern int vfs_mount(bhv_desc_t *, struct xfs_mount_args *, struct cred *);
+extern int vfs_parseargs(bhv_desc_t *, char *, struct xfs_mount_args *, int);
+extern int vfs_showargs(bhv_desc_t *, struct seq_file *);
+extern int vfs_unmount(bhv_desc_t *, int, struct cred *);
+extern int vfs_mntupdate(bhv_desc_t *, int *, struct xfs_mount_args *);
+extern int vfs_root(bhv_desc_t *, struct vnode **);
+extern int vfs_statvfs(bhv_desc_t *, xfs_statfs_t *, struct vnode *);
+extern int vfs_sync(bhv_desc_t *, int, struct cred *);
+extern int vfs_vget(bhv_desc_t *, struct vnode **, struct fid *);
+extern int vfs_dmapiops(bhv_desc_t *, caddr_t);
+extern int vfs_quotactl(bhv_desc_t *, int, int, caddr_t);
+extern void vfs_init_vnode(bhv_desc_t *, struct vnode *, bhv_desc_t *, int);
+extern void vfs_force_shutdown(bhv_desc_t *, int, char *, int);
+extern void vfs_freeze(bhv_desc_t *);
+
+typedef struct bhv_vfsops {
+ struct vfsops bhv_common;
+ void * bhv_custom;
+} bhv_vfsops_t;
+
+#define vfs_bhv_lookup(v, id) ( bhv_lookup_range(&(v)->vfs_bh, (id), (id)) )
+#define vfs_bhv_custom(b) ( ((bhv_vfsops_t *)BHV_OPS(b))->bhv_custom )
+#define vfs_bhv_set_custom(b,o) ( (b)->bhv_custom = (void *)(o))
+#define vfs_bhv_clr_custom(b) ( (b)->bhv_custom = NULL )
+
+extern vfs_t *vfs_allocate(void);
+extern void vfs_deallocate(vfs_t *);
+extern void vfs_insertops(vfs_t *, bhv_vfsops_t *);
+extern void vfs_insertbhv(vfs_t *, bhv_desc_t *, vfsops_t *, void *);
+
+extern void bhv_insert_all_vfsops(struct vfs *);
+extern void bhv_remove_all_vfsops(struct vfs *, int);
+extern void bhv_remove_vfsops(struct vfs *, int);
+
+#define fs_frozen(vfsp) ((vfsp)->vfs_super->s_frozen)
+#define fs_check_frozen(vfsp, level) \
+ vfs_check_frozen(vfsp->vfs_super, level);
+
+#endif /* __XFS_VFS_H__ */
--- /dev/null
+/*
+ * Copyright (c) 2000-2003 Silicon Graphics, Inc. All Rights Reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it would be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ *
+ * Further, this software is distributed without any warranty that it is
+ * free of the rightful claim of any third person regarding infringement
+ * or the like. Any license provided herein, whether implied or
+ * otherwise, applies only to this software file. Patent licenses, if
+ * any, provided herein do not apply to combinations of this program with
+ * other software, or any other product whatsoever.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, write the Free Software Foundation, Inc., 59
+ * Temple Place - Suite 330, Boston MA 02111-1307, USA.
+ *
+ * Contact information: Silicon Graphics, Inc., 1600 Amphitheatre Pkwy,
+ * Mountain View, CA 94043, or:
+ *
+ * http://www.sgi.com
+ *
+ * For further information regarding this notice, see:
+ *
+ * http://oss.sgi.com/projects/GenInfo/SGIGPLNoticeExplan/
+ *
+ * Portions Copyright (c) 1989, 1993
+ * The Regents of the University of California. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the name of the University nor the names of its contributors
+ * may be used to endorse or promote products derived from this software
+ * without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND
+ * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE
+ * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
+ * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
+ * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
+ * SUCH DAMAGE.
+ */
+#ifndef __XFS_VNODE_H__
+#define __XFS_VNODE_H__
+
+struct uio;
+struct file;
+struct vattr;
+struct xfs_iomap;
+struct attrlist_cursor_kern;
+
+/*
+ * Vnode types. VNON means no type.
+ */
+enum vtype { VNON, VREG, VDIR, VBLK, VCHR, VLNK, VFIFO, VBAD, VSOCK };
+
+typedef xfs_ino_t vnumber_t;
+typedef struct dentry vname_t;
+typedef bhv_head_t vn_bhv_head_t;
+
+/*
+ * MP locking protocols:
+ * v_flag, v_vfsp VN_LOCK/VN_UNLOCK
+ * v_type read-only or fs-dependent
+ */
+typedef struct vnode {
+ __u32 v_flag; /* vnode flags (see below) */
+ enum vtype v_type; /* vnode type */
+ struct vfs *v_vfsp; /* ptr to containing VFS */
+ vnumber_t v_number; /* in-core vnode number */
+ vn_bhv_head_t v_bh; /* behavior head */
+ spinlock_t v_lock; /* VN_LOCK/VN_UNLOCK */
+ struct inode v_inode; /* Linux inode */
+#ifdef XFS_VNODE_TRACE
+ struct ktrace *v_trace; /* trace header structure */
+#endif
+} vnode_t;
+
+#define v_fbhv v_bh.bh_first /* first behavior */
+#define v_fops v_bh.bh_first->bd_ops /* first behavior ops */
+
+#define VNODE_POSITION_BASE BHV_POSITION_BASE /* chain bottom */
+#define VNODE_POSITION_TOP BHV_POSITION_TOP /* chain top */
+#define VNODE_POSITION_INVALID BHV_POSITION_INVALID /* invalid pos. num */
+
+typedef enum {
+ VN_BHV_UNKNOWN, /* not specified */
+ VN_BHV_XFS, /* xfs */
+ VN_BHV_DM, /* data migration */
+ VN_BHV_QM, /* quota manager */
+ VN_BHV_IO, /* IO path */
+ VN_BHV_END /* housekeeping end-of-range */
+} vn_bhv_t;
+
+#define VNODE_POSITION_XFS (VNODE_POSITION_BASE)
+#define VNODE_POSITION_DM (VNODE_POSITION_BASE+10)
+#define VNODE_POSITION_QM (VNODE_POSITION_BASE+20)
+#define VNODE_POSITION_IO (VNODE_POSITION_BASE+30)
+
+/*
+ * Macros for dealing with the behavior descriptor inside of the vnode.
+ */
+#define BHV_TO_VNODE(bdp) ((vnode_t *)BHV_VOBJ(bdp))
+#define BHV_TO_VNODE_NULL(bdp) ((vnode_t *)BHV_VOBJNULL(bdp))
+
+#define VN_BHV_HEAD(vp) ((bhv_head_t *)(&((vp)->v_bh)))
+#define vn_bhv_head_init(bhp,name) bhv_head_init(bhp,name)
+#define vn_bhv_remove(bhp,bdp) bhv_remove(bhp,bdp)
+#define vn_bhv_lookup(bhp,ops) bhv_lookup(bhp,ops)
+#define vn_bhv_lookup_unlocked(bhp,ops) bhv_lookup_unlocked(bhp,ops)
+
+/*
+ * Vnode to Linux inode mapping.
+ */
+#define LINVFS_GET_VP(inode) ((vnode_t *)list_entry(inode, vnode_t, v_inode))
+#define LINVFS_GET_IP(vp) (&(vp)->v_inode)
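+
+/*
+ * Illustrative note (not in the original patch): LINVFS_GET_VP() is a
+ * container_of()-style cast -- the Linux inode is embedded in the vnode,
+ * so either pointer recovers the other without any table lookup:
+ *
+ *	vnode_t *vp = LINVFS_GET_VP(inode);
+ *	ASSERT(LINVFS_GET_IP(vp) == inode);
+ */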
+
+/*
+ * Convert between vnode types and inode formats (since POSIX.1
+ * defines mode word of stat structure in terms of inode formats).
+ */
+extern enum vtype iftovt_tab[];
+extern u_short vttoif_tab[];
+#define IFTOVT(mode) (iftovt_tab[((mode) & S_IFMT) >> 12])
+#define VTTOIF(indx) (vttoif_tab[(int)(indx)])
+#define MAKEIMODE(indx, mode) (int)(VTTOIF(indx) | (mode))
+
+
+/*
+ * Vnode flags.
+ */
+#define VINACT 0x1 /* vnode is being inactivated */
+#define VRECLM 0x2 /* vnode is being reclaimed */
+#define VWAIT 0x4 /* waiting for VINACT/VRECLM to end */
+#define VMODIFIED 0x8 /* XFS inode state possibly differs */
+ /* to the Linux inode state. */
+
+/*
+ * Values for the VOP_RWLOCK and VOP_RWUNLOCK flags parameter.
+ */
+typedef enum vrwlock {
+ VRWLOCK_NONE,
+ VRWLOCK_READ,
+ VRWLOCK_WRITE,
+ VRWLOCK_WRITE_DIRECT,
+ VRWLOCK_TRY_READ,
+ VRWLOCK_TRY_WRITE
+} vrwlock_t;
+
+/*
+ * Return values for VOP_INACTIVE. A return value of
+ * VN_INACTIVE_NOCACHE implies that the file system behavior
+ * has disassociated its state and bhv_desc_t from the vnode.
+ */
+#define VN_INACTIVE_CACHE 0
+#define VN_INACTIVE_NOCACHE 1
+
+/*
+ * Values for the cmd code given to VOP_VNODE_CHANGE.
+ */
+typedef enum vchange {
+ VCHANGE_FLAGS_FRLOCKS = 0,
+ VCHANGE_FLAGS_ENF_LOCKING = 1,
+ VCHANGE_FLAGS_TRUNCATED = 2,
+ VCHANGE_FLAGS_PAGE_DIRTY = 3,
+ VCHANGE_FLAGS_IOEXCL_COUNT = 4
+} vchange_t;
+
+
+typedef int (*vop_open_t)(bhv_desc_t *, struct cred *);
+typedef ssize_t (*vop_read_t)(bhv_desc_t *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+typedef ssize_t (*vop_write_t)(bhv_desc_t *, struct kiocb *,
+ const struct iovec *, unsigned int,
+ loff_t *, int, struct cred *);
+typedef ssize_t (*vop_sendfile_t)(bhv_desc_t *, struct file *,
+ loff_t *, int, size_t, read_actor_t,
+ void *, struct cred *);
+typedef int (*vop_ioctl_t)(bhv_desc_t *, struct inode *, struct file *,
+ int, unsigned int, void __user *);
+typedef int (*vop_getattr_t)(bhv_desc_t *, struct vattr *, int,
+ struct cred *);
+typedef int (*vop_setattr_t)(bhv_desc_t *, struct vattr *, int,
+ struct cred *);
+typedef int (*vop_access_t)(bhv_desc_t *, int, struct cred *);
+typedef int (*vop_lookup_t)(bhv_desc_t *, vname_t *, vnode_t **,
+ int, vnode_t *, struct cred *);
+typedef int (*vop_create_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ vnode_t **, struct cred *);
+typedef int (*vop_remove_t)(bhv_desc_t *, vname_t *, struct cred *);
+typedef int (*vop_link_t)(bhv_desc_t *, vnode_t *, vname_t *,
+ struct cred *);
+typedef int (*vop_rename_t)(bhv_desc_t *, vname_t *, vnode_t *, vname_t *,
+ struct cred *);
+typedef int (*vop_mkdir_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ vnode_t **, struct cred *);
+typedef int (*vop_rmdir_t)(bhv_desc_t *, vname_t *, struct cred *);
+typedef int (*vop_readdir_t)(bhv_desc_t *, struct uio *, struct cred *,
+ int *);
+typedef int (*vop_symlink_t)(bhv_desc_t *, vname_t *, struct vattr *,
+ char *, vnode_t **, struct cred *);
+typedef int (*vop_readlink_t)(bhv_desc_t *, struct uio *, int,
+ struct cred *);
+typedef int (*vop_fsync_t)(bhv_desc_t *, int, struct cred *,
+ xfs_off_t, xfs_off_t);
+typedef int (*vop_inactive_t)(bhv_desc_t *, struct cred *);
+typedef int (*vop_fid2_t)(bhv_desc_t *, struct fid *);
+typedef int (*vop_release_t)(bhv_desc_t *);
+typedef int (*vop_rwlock_t)(bhv_desc_t *, vrwlock_t);
+typedef void (*vop_rwunlock_t)(bhv_desc_t *, vrwlock_t);
+typedef int (*vop_bmap_t)(bhv_desc_t *, xfs_off_t, ssize_t, int,
+ struct xfs_iomap *, int *);
+typedef int (*vop_reclaim_t)(bhv_desc_t *);
+typedef int (*vop_attr_get_t)(bhv_desc_t *, char *, char *, int *, int,
+ struct cred *);
+typedef int (*vop_attr_set_t)(bhv_desc_t *, char *, char *, int, int,
+ struct cred *);
+typedef int (*vop_attr_remove_t)(bhv_desc_t *, char *, int, struct cred *);
+typedef int (*vop_attr_list_t)(bhv_desc_t *, char *, int, int,
+ struct attrlist_cursor_kern *, struct cred *);
+typedef void (*vop_link_removed_t)(bhv_desc_t *, vnode_t *, int);
+typedef void (*vop_vnode_change_t)(bhv_desc_t *, vchange_t, __psint_t);
+typedef void (*vop_ptossvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+typedef void (*vop_pflushinvalvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t, int);
+typedef int (*vop_pflushvp_t)(bhv_desc_t *, xfs_off_t, xfs_off_t,
+ uint64_t, int);
+typedef int (*vop_iflush_t)(bhv_desc_t *, int);
+
+
+typedef struct vnodeops {
+ bhv_position_t vn_position; /* position within behavior chain */
+ vop_open_t vop_open;
+ vop_read_t vop_read;
+ vop_write_t vop_write;
+ vop_sendfile_t vop_sendfile;
+ vop_ioctl_t vop_ioctl;
+ vop_getattr_t vop_getattr;
+ vop_setattr_t vop_setattr;
+ vop_access_t vop_access;
+ vop_lookup_t vop_lookup;
+ vop_create_t vop_create;
+ vop_remove_t vop_remove;
+ vop_link_t vop_link;
+ vop_rename_t vop_rename;
+ vop_mkdir_t vop_mkdir;
+ vop_rmdir_t vop_rmdir;
+ vop_readdir_t vop_readdir;
+ vop_symlink_t vop_symlink;
+ vop_readlink_t vop_readlink;
+ vop_fsync_t vop_fsync;
+ vop_inactive_t vop_inactive;
+ vop_fid2_t vop_fid2;
+ vop_rwlock_t vop_rwlock;
+ vop_rwunlock_t vop_rwunlock;
+ vop_bmap_t vop_bmap;
+ vop_reclaim_t vop_reclaim;
+ vop_attr_get_t vop_attr_get;
+ vop_attr_set_t vop_attr_set;
+ vop_attr_remove_t vop_attr_remove;
+ vop_attr_list_t vop_attr_list;
+ vop_link_removed_t vop_link_removed;
+ vop_vnode_change_t vop_vnode_change;
+ vop_ptossvp_t vop_tosspages;
+ vop_pflushinvalvp_t vop_flushinval_pages;
+ vop_pflushvp_t vop_flush_pages;
+ vop_release_t vop_release;
+ vop_iflush_t vop_iflush;
+} vnodeops_t;
+
+/*
+ * VOP's.
+ */
+#define _VOP_(op, vp) (*((vnodeops_t *)(vp)->v_fops)->op)
+
+#define VOP_READ(vp,file,iov,segs,offset,ioflags,cr,rv) \
+ rv = _VOP_(vop_read, vp)((vp)->v_fbhv,file,iov,segs,offset,ioflags,cr)
+#define VOP_WRITE(vp,file,iov,segs,offset,ioflags,cr,rv) \
+ rv = _VOP_(vop_write, vp)((vp)->v_fbhv,file,iov,segs,offset,ioflags,cr)
+#define VOP_SENDFILE(vp,f,off,ioflags,cnt,act,targ,cr,rv) \
+ rv = _VOP_(vop_sendfile, vp)((vp)->v_fbhv,f,off,ioflags,cnt,act,targ,cr)
+#define VOP_BMAP(vp,of,sz,rw,b,n,rv) \
+ rv = _VOP_(vop_bmap, vp)((vp)->v_fbhv,of,sz,rw,b,n)
+#define VOP_OPEN(vp, cr, rv) \
+ rv = _VOP_(vop_open, vp)((vp)->v_fbhv, cr)
+#define VOP_GETATTR(vp, vap, f, cr, rv) \
+ rv = _VOP_(vop_getattr, vp)((vp)->v_fbhv, vap, f, cr)
+#define VOP_SETATTR(vp, vap, f, cr, rv) \
+ rv = _VOP_(vop_setattr, vp)((vp)->v_fbhv, vap, f, cr)
+#define VOP_ACCESS(vp, mode, cr, rv) \
+ rv = _VOP_(vop_access, vp)((vp)->v_fbhv, mode, cr)
+#define VOP_LOOKUP(vp,d,vpp,f,rdir,cr,rv) \
+ rv = _VOP_(vop_lookup, vp)((vp)->v_fbhv,d,vpp,f,rdir,cr)
+#define VOP_CREATE(dvp,d,vap,vpp,cr,rv) \
+ rv = _VOP_(vop_create, dvp)((dvp)->v_fbhv,d,vap,vpp,cr)
+#define VOP_REMOVE(dvp,d,cr,rv) \
+ rv = _VOP_(vop_remove, dvp)((dvp)->v_fbhv,d,cr)
+#define VOP_LINK(tdvp,fvp,d,cr,rv) \
+ rv = _VOP_(vop_link, tdvp)((tdvp)->v_fbhv,fvp,d,cr)
+#define VOP_RENAME(fvp,fnm,tdvp,tnm,cr,rv) \
+ rv = _VOP_(vop_rename, fvp)((fvp)->v_fbhv,fnm,tdvp,tnm,cr)
+#define VOP_MKDIR(dp,d,vap,vpp,cr,rv) \
+ rv = _VOP_(vop_mkdir, dp)((dp)->v_fbhv,d,vap,vpp,cr)
+#define VOP_RMDIR(dp,d,cr,rv) \
+ rv = _VOP_(vop_rmdir, dp)((dp)->v_fbhv,d,cr)
+#define VOP_READDIR(vp,uiop,cr,eofp,rv) \
+ rv = _VOP_(vop_readdir, vp)((vp)->v_fbhv,uiop,cr,eofp)
+#define VOP_SYMLINK(dvp,d,vap,tnm,vpp,cr,rv) \
+ rv = _VOP_(vop_symlink, dvp) ((dvp)->v_fbhv,d,vap,tnm,vpp,cr)
+#define VOP_READLINK(vp,uiop,fl,cr,rv) \
+ rv = _VOP_(vop_readlink, vp)((vp)->v_fbhv,uiop,fl,cr)
+#define VOP_FSYNC(vp,f,cr,b,e,rv) \
+ rv = _VOP_(vop_fsync, vp)((vp)->v_fbhv,f,cr,b,e)
+#define VOP_INACTIVE(vp, cr, rv) \
+ rv = _VOP_(vop_inactive, vp)((vp)->v_fbhv, cr)
+#define VOP_RELEASE(vp, rv) \
+ rv = _VOP_(vop_release, vp)((vp)->v_fbhv)
+#define VOP_FID2(vp, fidp, rv) \
+ rv = _VOP_(vop_fid2, vp)((vp)->v_fbhv, fidp)
+#define VOP_RWLOCK(vp,i) \
+ (void)_VOP_(vop_rwlock, vp)((vp)->v_fbhv, i)
+#define VOP_RWLOCK_TRY(vp,i) \
+ _VOP_(vop_rwlock, vp)((vp)->v_fbhv, i)
+#define VOP_RWUNLOCK(vp,i) \
+ (void)_VOP_(vop_rwunlock, vp)((vp)->v_fbhv, i)
+#define VOP_FRLOCK(vp,c,fl,flags,offset,fr,rv) \
+ rv = _VOP_(vop_frlock, vp)((vp)->v_fbhv,c,fl,flags,offset,fr)
+#define VOP_RECLAIM(vp, rv) \
+ rv = _VOP_(vop_reclaim, vp)((vp)->v_fbhv)
+#define VOP_ATTR_GET(vp, name, val, vallenp, fl, cred, rv) \
+ rv = _VOP_(vop_attr_get, vp)((vp)->v_fbhv,name,val,vallenp,fl,cred)
+#define VOP_ATTR_SET(vp, name, val, vallen, fl, cred, rv) \
+ rv = _VOP_(vop_attr_set, vp)((vp)->v_fbhv,name,val,vallen,fl,cred)
+#define VOP_ATTR_REMOVE(vp, name, flags, cred, rv) \
+ rv = _VOP_(vop_attr_remove, vp)((vp)->v_fbhv,name,flags,cred)
+#define VOP_ATTR_LIST(vp, buf, buflen, fl, cursor, cred, rv) \
+ rv = _VOP_(vop_attr_list, vp)((vp)->v_fbhv,buf,buflen,fl,cursor,cred)
+#define VOP_LINK_REMOVED(vp, dvp, linkzero) \
+ (void)_VOP_(vop_link_removed, vp)((vp)->v_fbhv, dvp, linkzero)
+#define VOP_VNODE_CHANGE(vp, cmd, val) \
+ (void)_VOP_(vop_vnode_change, vp)((vp)->v_fbhv,cmd,val)
+/*
+ * These are page cache functions that now go thru VOPs.
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_TOSS_PAGES(vp, first, last, fiopt) \
+ _VOP_(vop_tosspages, vp)((vp)->v_fbhv,first, last, fiopt)
+/*
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_FLUSHINVAL_PAGES(vp, first, last, fiopt) \
+ _VOP_(vop_flushinval_pages, vp)((vp)->v_fbhv,first,last,fiopt)
+/*
+ * 'last' parameter is unused and left in for IRIX compatibility
+ */
+#define VOP_FLUSH_PAGES(vp, first, last, flags, fiopt, rv) \
+ rv = _VOP_(vop_flush_pages, vp)((vp)->v_fbhv,first,last,flags,fiopt)
+#define VOP_IOCTL(vp, inode, filp, fl, cmd, arg, rv) \
+ rv = _VOP_(vop_ioctl, vp)((vp)->v_fbhv,inode,filp,fl,cmd,arg)
+#define VOP_IFLUSH(vp, flags, rv) \
+ rv = _VOP_(vop_iflush, vp)((vp)->v_fbhv, flags)
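+
+/*
+ * Hedged usage sketch (illustrative only): like the VFS_* macros, the
+ * VOP macros assign the return code through their final argument, e.g.
+ *
+ *	int error;
+ *
+ *	VOP_FSYNC(vp, FSYNC_WAIT, NULL, (xfs_off_t)0, (xfs_off_t)-1, error);
+ */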
+
+/*
+ * Flags for read/write calls - same values as IRIX
+ */
+#define IO_ISDIRECT 0x00004 /* bypass page cache */
+#define IO_INVIS 0x00020 /* don't update inode timestamps */
+
+/*
+ * Flags for VOP_IFLUSH call
+ */
+#define FLUSH_SYNC 1 /* wait for flush to complete */
+#define FLUSH_INODE 2 /* flush the inode itself */
+#define FLUSH_LOG 4 /* force the last log entry for
+ * this inode out to disk */
+
+/*
+ * Flush/Invalidate options for VOP_TOSS_PAGES, VOP_FLUSHINVAL_PAGES and
+ * VOP_FLUSH_PAGES.
+ */
+#define FI_NONE 0 /* none */
+#define FI_REMAPF 1 /* Do a remapf prior to the operation */
+#define FI_REMAPF_LOCKED 2 /* Do a remapf prior to the operation.
+ Prevent VM access to the pages until
+ the operation completes. */
+
+/*
+ * Vnode attributes. va_mask indicates those attributes the caller
+ * wants to set or extract.
+ */
+typedef struct vattr {
+ int va_mask; /* bit-mask of attributes present */
+ enum vtype va_type; /* vnode type (for create) */
+ mode_t va_mode; /* file access mode and type */
+ nlink_t va_nlink; /* number of references to file */
+ uid_t va_uid; /* owner user id */
+ gid_t va_gid; /* owner group id */
+ xfs_ino_t va_nodeid; /* file id */
+ xfs_off_t va_size; /* file size in bytes */
+ u_long va_blocksize; /* blocksize preferred for i/o */
+ struct timespec va_atime; /* time of last access */
+ struct timespec va_mtime; /* time of last modification */
+ struct timespec va_ctime; /* time file changed */
+ u_int va_gen; /* generation number of file */
+ xfs_dev_t va_rdev; /* device the special file represents */
+ __int64_t va_nblocks; /* number of blocks allocated */
+ u_long va_xflags; /* random extended file flags */
+ u_long va_extsize; /* file extent size */
+ u_long va_nextents; /* number of extents in file */
+ u_long va_anextents; /* number of attr extents in file */
+ int va_projid; /* project id */
+} vattr_t;
+
+/*
+ * setattr or getattr attributes
+ */
+#define XFS_AT_TYPE 0x00000001
+#define XFS_AT_MODE 0x00000002
+#define XFS_AT_UID 0x00000004
+#define XFS_AT_GID 0x00000008
+#define XFS_AT_FSID 0x00000010
+#define XFS_AT_NODEID 0x00000020
+#define XFS_AT_NLINK 0x00000040
+#define XFS_AT_SIZE 0x00000080
+#define XFS_AT_ATIME 0x00000100
+#define XFS_AT_MTIME 0x00000200
+#define XFS_AT_CTIME 0x00000400
+#define XFS_AT_RDEV 0x00000800
+#define XFS_AT_BLKSIZE 0x00001000
+#define XFS_AT_NBLOCKS 0x00002000
+#define XFS_AT_VCODE 0x00004000
+#define XFS_AT_MAC 0x00008000
+#define XFS_AT_UPDATIME 0x00010000
+#define XFS_AT_UPDMTIME 0x00020000
+#define XFS_AT_UPDCTIME 0x00040000
+#define XFS_AT_ACL 0x00080000
+#define XFS_AT_CAP 0x00100000
+#define XFS_AT_INF 0x00200000
+#define XFS_AT_XFLAGS 0x00400000
+#define XFS_AT_EXTSIZE 0x00800000
+#define XFS_AT_NEXTENTS 0x01000000
+#define XFS_AT_ANEXTENTS 0x02000000
+#define XFS_AT_PROJID 0x04000000
+#define XFS_AT_SIZE_NOPERM 0x08000000
+#define XFS_AT_GENCOUNT 0x10000000
+
+#define XFS_AT_ALL (XFS_AT_TYPE|XFS_AT_MODE|XFS_AT_UID|XFS_AT_GID|\
+ XFS_AT_FSID|XFS_AT_NODEID|XFS_AT_NLINK|XFS_AT_SIZE|\
+ XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME|XFS_AT_RDEV|\
+ XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_VCODE|XFS_AT_MAC|\
+ XFS_AT_ACL|XFS_AT_CAP|XFS_AT_INF|XFS_AT_XFLAGS|XFS_AT_EXTSIZE|\
+ XFS_AT_NEXTENTS|XFS_AT_ANEXTENTS|XFS_AT_PROJID|XFS_AT_GENCOUNT)
+
+#define XFS_AT_STAT (XFS_AT_TYPE|XFS_AT_MODE|XFS_AT_UID|XFS_AT_GID|\
+ XFS_AT_FSID|XFS_AT_NODEID|XFS_AT_NLINK|XFS_AT_SIZE|\
+ XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME|XFS_AT_RDEV|\
+ XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_PROJID)
+
+#define XFS_AT_TIMES (XFS_AT_ATIME|XFS_AT_MTIME|XFS_AT_CTIME)
+
+#define XFS_AT_UPDTIMES (XFS_AT_UPDATIME|XFS_AT_UPDMTIME|XFS_AT_UPDCTIME)
+
+#define XFS_AT_NOSET (XFS_AT_NLINK|XFS_AT_RDEV|XFS_AT_FSID|XFS_AT_NODEID|\
+ XFS_AT_TYPE|XFS_AT_BLKSIZE|XFS_AT_NBLOCKS|XFS_AT_VCODE|\
+ XFS_AT_NEXTENTS|XFS_AT_ANEXTENTS|XFS_AT_GENCOUNT)
+
+/*
+ * Modes.
+ */
+#define VSUID S_ISUID /* set user id on execution */
+#define VSGID S_ISGID /* set group id on execution */
+#define VSVTX S_ISVTX /* save swapped text even after use */
+#define VREAD S_IRUSR /* read, write, execute permissions */
+#define VWRITE S_IWUSR
+#define VEXEC S_IXUSR
+
+#define MODEMASK S_IALLUGO /* mode bits plus permission bits */
+
+/*
+ * Check whether mandatory file locking is enabled.
+ */
+#define MANDLOCK(vp, mode) \
+ ((vp)->v_type == VREG && ((mode) & (VSGID|(VEXEC>>3))) == VSGID)
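+
+/*
+ * Illustrative note: a regular file whose mode has the setgid bit set but
+ * group-execute clear (e.g. chmod 2644) is the traditional Unix marker
+ * for mandatory locking; that is the combination MANDLOCK() detects.
+ */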
+
+extern void vn_init(void);
+extern int vn_wait(struct vnode *);
+extern vnode_t *vn_initialize(struct inode *);
+
+/*
+ * Acquiring and invalidating vnodes:
+ *
+ * if (vn_get(vp, version, 0))
+ * ...;
+ * vn_purge(vp, version);
+ *
+ * vn_get and vn_purge must be called with vmap_t arguments, sampled
+ * while a lock that the vnode's VOP_RECLAIM function acquires is
+ * held, to ensure that the vnode sampled with the lock held isn't
+ * recycled (VOP_RECLAIMed) or deallocated between the release of the lock
+ * and the subsequent vn_get or vn_purge.
+ */
+
+/*
+ * vnode_map structures _must_ match vn_epoch and vnode structure sizes.
+ */
+typedef struct vnode_map {
+ vfs_t *v_vfsp;
+ vnumber_t v_number; /* in-core vnode number */
+ xfs_ino_t v_ino; /* inode # */
+} vmap_t;
+
+#define VMAP(vp, vmap) {(vmap).v_vfsp = (vp)->v_vfsp, \
+ (vmap).v_number = (vp)->v_number, \
+ (vmap).v_ino = (vp)->v_inode.i_ino; }
+
+extern void vn_purge(struct vnode *, vmap_t *);
+extern vnode_t *vn_get(struct vnode *, vmap_t *);
+extern int vn_revalidate(struct vnode *);
+extern void vn_revalidate_core(struct vnode *, vattr_t *);
+extern void vn_remove(struct vnode *);
+
+static inline int vn_count(struct vnode *vp)
+{
+ return atomic_read(&LINVFS_GET_IP(vp)->i_count);
+}
+
+/*
+ * Vnode reference counting functions (and macros for compatibility).
+ */
+extern vnode_t *vn_hold(struct vnode *);
+extern void vn_rele(struct vnode *);
+
+#if defined(XFS_VNODE_TRACE)
+#define VN_HOLD(vp) \
+ ((void)vn_hold(vp), \
+ vn_trace_hold(vp, __FILE__, __LINE__, (inst_t *)__return_address))
+#define VN_RELE(vp) \
+ (vn_trace_rele(vp, __FILE__, __LINE__, (inst_t *)__return_address), \
+ iput(LINVFS_GET_IP(vp)))
+#else
+#define VN_HOLD(vp) ((void)vn_hold(vp))
+#define VN_RELE(vp) (iput(LINVFS_GET_IP(vp)))
+#endif
+
+/*
+ * Vname handling macros.
+ */
+#define VNAME(dentry) ((char *) (dentry)->d_name.name)
+#define VNAMELEN(dentry) ((dentry)->d_name.len)
+#define VNAME_TO_VNODE(dentry) (LINVFS_GET_VP((dentry)->d_inode))
+
+/*
+ * Vnode spinlock manipulation.
+ */
+#define VN_LOCK(vp) mutex_spinlock(&(vp)->v_lock)
+#define VN_UNLOCK(vp, s) mutex_spinunlock(&(vp)->v_lock, s)
+#define VN_FLAGSET(vp,b) vn_flagset(vp,b)
+#define VN_FLAGCLR(vp,b) vn_flagclr(vp,b)
+
+static __inline__ void vn_flagset(struct vnode *vp, uint flag)
+{
+ spin_lock(&vp->v_lock);
+ vp->v_flag |= flag;
+ spin_unlock(&vp->v_lock);
+}
+
+static __inline__ void vn_flagclr(struct vnode *vp, uint flag)
+{
+ spin_lock(&vp->v_lock);
+ vp->v_flag &= ~flag;
+ spin_unlock(&vp->v_lock);
+}
+
+/*
+ * Update modify/access/change times on the vnode
+ */
+#define VN_MTIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_mtime = *(tvp))
+#define VN_ATIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_atime = *(tvp))
+#define VN_CTIMESET(vp, tvp) (LINVFS_GET_IP(vp)->i_ctime = *(tvp))
+
+/*
+ * Dealing with bad inodes
+ */
+static inline void vn_mark_bad(struct vnode *vp)
+{
+ make_bad_inode(LINVFS_GET_IP(vp));
+}
+
+static inline int VN_BAD(struct vnode *vp)
+{
+ return is_bad_inode(LINVFS_GET_IP(vp));
+}
+
+/*
+ * Some useful predicates.
+ */
+#define VN_MAPPED(vp) mapping_mapped(LINVFS_GET_IP(vp)->i_mapping)
+#define VN_CACHED(vp) (LINVFS_GET_IP(vp)->i_mapping->nrpages)
+#define VN_DIRTY(vp) mapping_tagged(LINVFS_GET_IP(vp)->i_mapping, \
+ PAGECACHE_TAG_DIRTY)
+#define VMODIFY(vp) VN_FLAGSET(vp, VMODIFIED)
+#define VUNMODIFY(vp) VN_FLAGCLR(vp, VMODIFIED)
+
+/*
+ * Flags to VOP_SETATTR/VOP_GETATTR.
+ */
+#define ATTR_UTIME 0x01 /* non-default utime(2) request */
+#define ATTR_DMI 0x08 /* invocation from a DMI function */
+#define ATTR_LAZY 0x80 /* set/get attributes lazily */
+#define ATTR_NONBLOCK 0x100 /* return EAGAIN if operation would block */
+
+/*
+ * Flags to VOP_FSYNC and VOP_RECLAIM.
+ */
+#define FSYNC_NOWAIT 0 /* asynchronous flush */
+#define FSYNC_WAIT 0x1 /* synchronous fsync or forced reclaim */
+#define FSYNC_INVAL 0x2 /* flush and invalidate cached data */
+#define FSYNC_DATA 0x4 /* synchronous fsync of data only */
+
+/*
+ * Tracking vnode activity.
+ */
+#if defined(XFS_VNODE_TRACE)
+
+#define VNODE_TRACE_SIZE 16 /* number of trace entries */
+#define VNODE_KTRACE_ENTRY 1
+#define VNODE_KTRACE_EXIT 2
+#define VNODE_KTRACE_HOLD 3
+#define VNODE_KTRACE_REF 4
+#define VNODE_KTRACE_RELE 5
+
+extern void vn_trace_entry(struct vnode *, char *, inst_t *);
+extern void vn_trace_exit(struct vnode *, char *, inst_t *);
+extern void vn_trace_hold(struct vnode *, char *, int, inst_t *);
+extern void vn_trace_ref(struct vnode *, char *, int, inst_t *);
+extern void vn_trace_rele(struct vnode *, char *, int, inst_t *);
+
+#define VN_TRACE(vp) \
+ vn_trace_ref(vp, __FILE__, __LINE__, (inst_t *)__return_address)
+#else
+#define vn_trace_entry(a,b,c)
+#define vn_trace_exit(a,b,c)
+#define vn_trace_hold(a,b,c,d)
+#define vn_trace_ref(a,b,c,d)
+#define vn_trace_rele(a,b,c,d)
+#define VN_TRACE(vp)
+#endif
+
+#endif /* __XFS_VNODE_H__ */
--- /dev/null
+/*
+ * linux/include/asm-arm/arch-ixp4xx/io.h
+ *
+ * Author: Deepak Saxena <dsaxena@plexity.net>
+ *
+ * Copyright (C) 2002-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __ASM_ARM_ARCH_IO_H
+#define __ASM_ARM_ARCH_IO_H
+
+#include <asm/hardware.h>
+
+#define IO_SPACE_LIMIT 0xffff0000
+
+#define BIT(x) ((1)<<(x))
+
+
+extern int (*ixp4xx_pci_read)(u32 addr, u32 cmd, u32* data);
+extern int ixp4xx_pci_write(u32 addr, u32 cmd, u32 data);
+
+
+/*
+ * IXP4xx provides two methods of accessing PCI memory space:
+ *
+ * 1) A direct mapped window from 0x48000000 to 0x4bffffff (64MB).
+ * To access PCI via this space, we simply ioremap() the BAR
+ * into the kernel and we can use the standard read[bwl]/write[bwl]
+ * macros. This is the preferred method due to speed but it
+ * limits the system to just 64MB of PCI memory. This can be
+ * problematic if using video cards and other memory-heavy
+ * targets.
+ *
+ * 2) If > 64MB of memory space is required, the IXP4xx can be configured
+ * to use indirect registers to access PCI (as we do below for I/O
+ * transactions). This allows for up to 128MB (0x48000000 to 0x4fffffff)
+ * of memory on the bus. The disadvantage of this is that every
+ * PCI access requires three local register accesses plus a spinlock,
+ * but in some cases the performance hit is acceptable. In addition,
+ * you cannot mmap() PCI devices in this case.
+ *
+ */
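+
+/*
+ * Illustrative sketch of method 1 (assumed driver-side usage with a
+ * hypothetical PCI device; the 0x10 register offset is made up):
+ *
+ *	void __iomem *regs = ioremap(pci_resource_start(dev, 0),
+ *				     pci_resource_len(dev, 0));
+ *	u32 status = readl(regs + 0x10);
+ *	writel(status | 1, regs + 0x10);
+ *	iounmap(regs);
+ */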
+#ifndef CONFIG_IXP4XX_INDIRECT_PCI
+
+#define __mem_pci(a) (a)
+
+#else
+
+#include <linux/mm.h>
+
+/*
+ * In the case of using indirect PCI, we simply return the actual PCI
+ * address and our read/write implementation use that to drive the
+ * access registers. If something outside of PCI is ioremap'd, we
+ * fallback to the default.
+ */
+static inline void __iomem *
+__ixp4xx_ioremap(unsigned long addr, size_t size, unsigned long flags, unsigned long align)
+{
+ extern void __iomem * __ioremap(unsigned long, size_t, unsigned long, unsigned long);
+ if((addr < 0x48000000) || (addr > 0x4fffffff))
+ return __ioremap(addr, size, flags, align);
+
+	return (void __iomem *)addr;
+}
+
+static inline void
+__ixp4xx_iounmap(void __iomem *addr)
+{
+ extern void __iounmap(void __iomem *addr);
+
+ if ((u32)addr >= VMALLOC_START)
+ __iounmap(addr);
+}
+
+#define __arch_ioremap(a, s, f, x) __ixp4xx_ioremap(a, s, f, x)
+#define __arch_iounmap(a) __ixp4xx_iounmap(a)
+
+#define writeb(p, v) __ixp4xx_writeb(p, v)
+#define writew(p, v) __ixp4xx_writew(p, v)
+#define writel(p, v) __ixp4xx_writel(p, v)
+
+#define writesb(p, v, l) __ixp4xx_writesb(p, v, l)
+#define writesw(p, v, l) __ixp4xx_writesw(p, v, l)
+#define writesl(p, v, l) __ixp4xx_writesl(p, v, l)
+
+#define readb(p) __ixp4xx_readb(p)
+#define readw(p) __ixp4xx_readw(p)
+#define readl(p) __ixp4xx_readl(p)
+
+#define readsb(p, v, l) __ixp4xx_readsb(p, v, l)
+#define readsw(p, v, l) __ixp4xx_readsw(p, v, l)
+#define readsl(p, v, l) __ixp4xx_readsl(p, v, l)
+
+static inline void
+__ixp4xx_writeb(u8 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr >= VMALLOC_START) {
+ __raw_writeb(value, addr);
+ return;
+ }
+
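+	/*
+	 * PCI byte enables are active low: clear the bit for the byte
+	 * lane being written (leaving the other three lanes masked off)
+	 * and shift the value onto that lane.
+	 */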
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_MEMWRITE, data);
+}
+
+static inline void
+__ixp4xx_writesb(u32 bus_addr, u8 *vaddr, int count)
+{
+ while (count--)
+ writeb(*vaddr++, bus_addr);
+}
+
+static inline void
+__ixp4xx_writew(u16 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr >= VMALLOC_START) {
+ __raw_writew(value, addr);
+ return;
+ }
+
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_MEMWRITE, data);
+}
+
+static inline void
+__ixp4xx_writesw(u32 bus_addr, u16 *vaddr, int count)
+{
+ while (count--)
+ writew(*vaddr++, bus_addr);
+}
+
+static inline void
+__ixp4xx_writel(u32 value, u32 addr)
+{
+ if (addr >= VMALLOC_START) {
+ __raw_writel(value, addr);
+ return;
+ }
+
+ ixp4xx_pci_write(addr, NP_CMD_MEMWRITE, value);
+}
+
+static inline void
+__ixp4xx_writesl(u32 bus_addr, u32 *vaddr, int count)
+{
+ while (count--)
+ writel(*vaddr++, bus_addr);
+}
+
+static inline unsigned char
+__ixp4xx_readb(u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr >= VMALLOC_START)
+ return __raw_readb(addr);
+
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_MEMREAD, &data))
+ return 0xff;
+
+ return data >> (8*n);
+}
+
+static inline void
+__ixp4xx_readsb(u32 bus_addr, u8 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readb(bus_addr);
+}
+
+static inline unsigned short
+__ixp4xx_readw(u32 addr)
+{
+ u32 n, byte_enables, data;
+
+ if (addr >= VMALLOC_START)
+ return __raw_readw(addr);
+
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_MEMREAD, &data))
+ return 0xffff;
+
+ return data>>(8*n);
+}
+
+static inline void
+__ixp4xx_readsw(u32 bus_addr, u16 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readw(bus_addr);
+}
+
+static inline unsigned long
+__ixp4xx_readl(u32 addr)
+{
+ u32 data;
+
+ if (addr >= VMALLOC_START)
+ return __raw_readl(addr);
+
+ if (ixp4xx_pci_read(addr, NP_CMD_MEMREAD, &data))
+ return 0xffffffff;
+
+ return data;
+}
+
+static inline void
+__ixp4xx_readsl(u32 bus_addr, u32 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = readl(bus_addr);
+}
+
+
+/*
+ * We can use the built-in functions because they end up calling writeb/readb
+ */
+#define memset_io(c,v,l) _memset_io((c),(v),(l))
+#define memcpy_fromio(a,c,l) _memcpy_fromio((a),(c),(l))
+#define memcpy_toio(c,a,l) _memcpy_toio((c),(a),(l))
+
+#define eth_io_copy_and_sum(s,c,l,b) \
+ eth_copy_and_sum((s),__mem_pci(c),(l),(b))
+
+static inline int
+check_signature(unsigned long bus_addr, const unsigned char *signature,
+ int length)
+{
+ int retval = 0;
+ do {
+ if (readb(bus_addr) != *signature)
+ goto out;
+ bus_addr++;
+ signature++;
+ length--;
+ } while (length);
+ retval = 1;
+out:
+ return retval;
+}
+
+#endif
+
+/*
+ * IXP4xx does not have a transparent cpu -> PCI I/O translation
+ * window. Instead, it has a set of registers that must be tweaked
+ * with the proper byte lanes, command types, and address for the
+ * transaction. This means that we need to override the default
+ * I/O functions.
+ */
+#define outb(p, v) __ixp4xx_outb(p, v)
+#define outw(p, v) __ixp4xx_outw(p, v)
+#define outl(p, v) __ixp4xx_outl(p, v)
+
+#define outsb(p, v, l) __ixp4xx_outsb(p, v, l)
+#define outsw(p, v, l) __ixp4xx_outsw(p, v, l)
+#define outsl(p, v, l) __ixp4xx_outsl(p, v, l)
+
+#define inb(p) __ixp4xx_inb(p)
+#define inw(p) __ixp4xx_inw(p)
+#define inl(p) __ixp4xx_inl(p)
+
+#define insb(p, v, l) __ixp4xx_insb(p, v, l)
+#define insw(p, v, l) __ixp4xx_insw(p, v, l)
+#define insl(p, v, l) __ixp4xx_insl(p, v, l)
+
+
+static inline void
+__ixp4xx_outb(u8 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_IOWRITE, data);
+}
+
+static inline void
+__ixp4xx_outsb(u32 io_addr, const u8 *vaddr, u32 count)
+{
+ while (count--)
+ outb(*vaddr++, io_addr);
+}
+
+static inline void
+__ixp4xx_outw(u16 value, u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ data = value << (8*n);
+ ixp4xx_pci_write(addr, byte_enables | NP_CMD_IOWRITE, data);
+}
+
+static inline void
+__ixp4xx_outsw(u32 io_addr, const u16 *vaddr, u32 count)
+{
+ while (count--)
+ outw(cpu_to_le16(*vaddr++), io_addr);
+}
+
+static inline void
+__ixp4xx_outl(u32 value, u32 addr)
+{
+ ixp4xx_pci_write(addr, NP_CMD_IOWRITE, value);
+}
+
+static inline void
+__ixp4xx_outsl(u32 io_addr, const u32 *vaddr, u32 count)
+{
+ while (count--)
+ outl(*vaddr++, io_addr);
+}
+
+static inline u8
+__ixp4xx_inb(u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~BIT(n)) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_IOREAD, &data))
+ return 0xff;
+
+ return data >> (8*n);
+}
+
+static inline void
+__ixp4xx_insb(u32 io_addr, u8 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = inb(io_addr);
+}
+
+static inline u16
+__ixp4xx_inw(u32 addr)
+{
+ u32 n, byte_enables, data;
+ n = addr % 4;
+ byte_enables = (0xf & ~(BIT(n) | BIT(n+1))) << IXP4XX_PCI_NP_CBE_BESL;
+ if (ixp4xx_pci_read(addr, byte_enables | NP_CMD_IOREAD, &data))
+ return 0xffff;
+
+ return data>>(8*n);
+}
+
+static inline void
+__ixp4xx_insw(u32 io_addr, u16 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = le16_to_cpu(inw(io_addr));
+}
+
+static inline u32
+__ixp4xx_inl(u32 addr)
+{
+ u32 data;
+ if (ixp4xx_pci_read(addr, NP_CMD_IOREAD, &data))
+ return 0xffffffff;
+
+ return data;
+}
+
+static inline void
+__ixp4xx_insl(u32 io_addr, u32 *vaddr, u32 count)
+{
+ while (count--)
+ *vaddr++ = inl(io_addr);
+}
+
+
+#endif // __ASM_ARM_ARCH_IO_H
+
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/ixp4xx-regs.h
+ *
+ * Register definitions for IXP4xx chipset. This file contains
+ * register location and bit definitions only. Platform specific
+ * definitions and helper function declarations are in platform.h
+ * and machine-name.h.
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H__
+#error "Do not include this directly, instead #include <asm/hardware.h>"
+#endif
+
+#ifndef _ASM_ARM_IXP4XX_H_
+#define _ASM_ARM_IXP4XX_H_
+
+/*
+ * IXP4xx Linux Memory Map:
+ *
+ * Phy Size Virt Description
+ * =========================================================================
+ *
+ * 0x00000000 0x10000000(max) PAGE_OFFSET System RAM
+ *
+ * 0x48000000 0x04000000 ioremap'd PCI Memory Space
+ *
+ * 0x50000000 0x10000000 ioremap'd EXP BUS
+ *
+ * 0x60000000 0x00004000 ioremap'd QMgr
+ *
+ * 0xC0000000 0x00001000 0xffbfe000 PCI CFG
+ *
+ * 0xC4000000 0x00001000 0xffbfd000 EXP CFG
+ *
+ * 0xC8000000 0x0000C000 0xffbf2000 On-Chip Peripherals
+ */
+
+
+/*
+ * Expansion BUS Configuration registers
+ */
+#define IXP4XX_EXP_CFG_BASE_PHYS (0xC4000000)
+#define IXP4XX_EXP_CFG_BASE_VIRT (0xFFBFD000)
+#define IXP4XX_EXP_CFG_REGION_SIZE (0x00001000)
+
+/*
+ * PCI Config registers
+ */
+#define IXP4XX_PCI_CFG_BASE_PHYS (0xC0000000)
+#define IXP4XX_PCI_CFG_BASE_VIRT (0xFFBFE000)
+#define IXP4XX_PCI_CFG_REGION_SIZE (0x00001000)
+
+/*
+ * Peripheral space
+ */
+#define IXP4XX_PERIPHERAL_BASE_PHYS (0xC8000000)
+#define IXP4XX_PERIPHERAL_BASE_VIRT (0xFFBF2000)
+#define IXP4XX_PERIPHERAL_REGION_SIZE (0x0000C000)
+
+#define IXP4XX_EXP_CS0_OFFSET 0x00
+#define IXP4XX_EXP_CS1_OFFSET 0x04
+#define IXP4XX_EXP_CS2_OFFSET 0x08
+#define IXP4XX_EXP_CS3_OFFSET 0x0C
+#define IXP4XX_EXP_CS4_OFFSET 0x10
+#define IXP4XX_EXP_CS5_OFFSET 0x14
+#define IXP4XX_EXP_CS6_OFFSET 0x18
+#define IXP4XX_EXP_CS7_OFFSET 0x1C
+#define IXP4XX_EXP_CFG0_OFFSET 0x20
+#define IXP4XX_EXP_CFG1_OFFSET 0x24
+#define IXP4XX_EXP_CFG2_OFFSET 0x28
+#define IXP4XX_EXP_CFG3_OFFSET 0x2C
+
+/*
+ * Expansion Bus Controller registers.
+ */
+#define IXP4XX_EXP_REG(x) ((volatile u32 *)(IXP4XX_EXP_CFG_BASE_VIRT+(x)))
+
+#define IXP4XX_EXP_CS0 IXP4XX_EXP_REG(IXP4XX_EXP_CS0_OFFSET)
+#define IXP4XX_EXP_CS1 IXP4XX_EXP_REG(IXP4XX_EXP_CS1_OFFSET)
+#define IXP4XX_EXP_CS2 IXP4XX_EXP_REG(IXP4XX_EXP_CS2_OFFSET)
+#define IXP4XX_EXP_CS3 IXP4XX_EXP_REG(IXP4XX_EXP_CS3_OFFSET)
+#define IXP4XX_EXP_CS4 IXP4XX_EXP_REG(IXP4XX_EXP_CS4_OFFSET)
+#define IXP4XX_EXP_CS5 IXP4XX_EXP_REG(IXP4XX_EXP_CS5_OFFSET)
+#define IXP4XX_EXP_CS6 IXP4XX_EXP_REG(IXP4XX_EXP_CS6_OFFSET)
+#define IXP4XX_EXP_CS7 IXP4XX_EXP_REG(IXP4XX_EXP_CS7_OFFSET)
+
+#define IXP4XX_EXP_CFG0 IXP4XX_EXP_REG(IXP4XX_EXP_CFG0_OFFSET)
+#define IXP4XX_EXP_CFG1 IXP4XX_EXP_REG(IXP4XX_EXP_CFG1_OFFSET)
+#define IXP4XX_EXP_CFG2 IXP4XX_EXP_REG(IXP4XX_EXP_CFG2_OFFSET)
+#define IXP4XX_EXP_CFG3 IXP4XX_EXP_REG(IXP4XX_EXP_CFG3_OFFSET)
+
+
+/*
+ * Peripheral Space Register Region Base Addresses
+ */
+#define IXP4XX_UART1_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x0000)
+#define IXP4XX_UART2_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x1000)
+#define IXP4XX_PMU_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x2000)
+#define IXP4XX_INTC_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x3000)
+#define IXP4XX_GPIO_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x4000)
+#define IXP4XX_TIMER_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0x5000)
+#define IXP4XX_USB_BASE_PHYS (IXP4XX_PERIPHERAL_BASE_PHYS + 0xB000)
+
+#define IXP4XX_UART1_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x0000)
+#define IXP4XX_UART2_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x1000)
+#define IXP4XX_PMU_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x2000)
+#define IXP4XX_INTC_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x3000)
+#define IXP4XX_GPIO_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x4000)
+#define IXP4XX_TIMER_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0x5000)
+#define IXP4XX_USB_BASE_VIRT (IXP4XX_PERIPHERAL_BASE_VIRT + 0xB000)
+
+/*
+ * Constants to make it easy to access Interrupt Controller registers
+ */
+#define IXP4XX_ICPR_OFFSET 0x00 /* Interrupt Status */
+#define IXP4XX_ICMR_OFFSET 0x04 /* Interrupt Enable */
+#define IXP4XX_ICLR_OFFSET 0x08 /* Interrupt IRQ/FIQ Select */
+#define IXP4XX_ICIP_OFFSET 0x0C /* IRQ Status */
+#define IXP4XX_ICFP_OFFSET 0x10 /* FIQ Status */
+#define IXP4XX_ICHR_OFFSET 0x14 /* Interrupt Priority */
+#define IXP4XX_ICIH_OFFSET 0x18 /* IRQ Highest Pri Int */
+#define IXP4XX_ICFH_OFFSET 0x1C /* FIQ Highest Pri Int */
+
+/*
+ * Interrupt Controller Register Definitions.
+ */
+
+#define IXP4XX_INTC_REG(x) ((volatile u32 *)(IXP4XX_INTC_BASE_VIRT+(x)))
+
+#define IXP4XX_ICPR IXP4XX_INTC_REG(IXP4XX_ICPR_OFFSET)
+#define IXP4XX_ICMR IXP4XX_INTC_REG(IXP4XX_ICMR_OFFSET)
+#define IXP4XX_ICLR IXP4XX_INTC_REG(IXP4XX_ICLR_OFFSET)
+#define IXP4XX_ICIP IXP4XX_INTC_REG(IXP4XX_ICIP_OFFSET)
+#define IXP4XX_ICFP IXP4XX_INTC_REG(IXP4XX_ICFP_OFFSET)
+#define IXP4XX_ICHR IXP4XX_INTC_REG(IXP4XX_ICHR_OFFSET)
+#define IXP4XX_ICIH IXP4XX_INTC_REG(IXP4XX_ICIH_OFFSET)
+#define IXP4XX_ICFH IXP4XX_INTC_REG(IXP4XX_ICFH_OFFSET)
+
+/*
+ * Constants to make it easy to access GPIO registers
+ */
+#define IXP4XX_GPIO_GPOUTR_OFFSET 0x00
+#define IXP4XX_GPIO_GPOER_OFFSET 0x04
+#define IXP4XX_GPIO_GPINR_OFFSET 0x08
+#define IXP4XX_GPIO_GPISR_OFFSET 0x0C
+#define IXP4XX_GPIO_GPIT1R_OFFSET 0x10
+#define IXP4XX_GPIO_GPIT2R_OFFSET 0x14
+#define IXP4XX_GPIO_GPCLKR_OFFSET 0x18
+#define IXP4XX_GPIO_GPDBSELR_OFFSET 0x1C
+
+/*
+ * GPIO Register Definitions.
+ * [Only perform 32bit reads/writes]
+ */
+#define IXP4XX_GPIO_REG(x) ((volatile u32 *)(IXP4XX_GPIO_BASE_VIRT+(x)))
+
+#define IXP4XX_GPIO_GPOUTR IXP4XX_GPIO_REG(IXP4XX_GPIO_GPOUTR_OFFSET)
+#define IXP4XX_GPIO_GPOER IXP4XX_GPIO_REG(IXP4XX_GPIO_GPOER_OFFSET)
+#define IXP4XX_GPIO_GPINR IXP4XX_GPIO_REG(IXP4XX_GPIO_GPINR_OFFSET)
+#define IXP4XX_GPIO_GPISR IXP4XX_GPIO_REG(IXP4XX_GPIO_GPISR_OFFSET)
+#define IXP4XX_GPIO_GPIT1R IXP4XX_GPIO_REG(IXP4XX_GPIO_GPIT1R_OFFSET)
+#define IXP4XX_GPIO_GPIT2R IXP4XX_GPIO_REG(IXP4XX_GPIO_GPIT2R_OFFSET)
+#define IXP4XX_GPIO_GPCLKR IXP4XX_GPIO_REG(IXP4XX_GPIO_GPCLKR_OFFSET)
+#define IXP4XX_GPIO_GPDBSELR IXP4XX_GPIO_REG(IXP4XX_GPIO_GPDBSELR_OFFSET)
+
+/*
+ * GPIO register bit definitions
+ */
+
+/* Interrupt styles
+ */
+#define IXP4XX_GPIO_STYLE_ACTIVE_HIGH 0x0
+#define IXP4XX_GPIO_STYLE_ACTIVE_LOW 0x1
+#define IXP4XX_GPIO_STYLE_RISING_EDGE 0x2
+#define IXP4XX_GPIO_STYLE_FALLING_EDGE 0x3
+#define IXP4XX_GPIO_STYLE_TRANSITIONAL 0x4
+
+/*
+ * Mask used to clear interrupt styles
+ */
+#define IXP4XX_GPIO_STYLE_CLEAR 0x7
+#define IXP4XX_GPIO_STYLE_SIZE 3
+
+/*
+ * Constants to make it easy to access Timer Control/Status registers
+ */
+#define IXP4XX_OSTS_OFFSET 0x00 /* Continuous Timestamp */
+#define IXP4XX_OST1_OFFSET 0x04 /* Timer 1 Timestamp */
+#define IXP4XX_OSRT1_OFFSET 0x08 /* Timer 1 Reload */
+#define IXP4XX_OST2_OFFSET 0x0C /* Timer 2 Timestamp */
+#define IXP4XX_OSRT2_OFFSET 0x10 /* Timer 2 Reload */
+#define IXP4XX_OSWT_OFFSET 0x14 /* Watchdog Timer */
+#define IXP4XX_OSWE_OFFSET 0x18 /* Watchdog Enable */
+#define IXP4XX_OSWK_OFFSET 0x1C /* Watchdog Key */
+#define IXP4XX_OSST_OFFSET 0x20 /* Timer Status */
+
+/*
+ * Operating System Timer Register Definitions.
+ */
+
+#define IXP4XX_TIMER_REG(x) ((volatile u32 *)(IXP4XX_TIMER_BASE_VIRT+(x)))
+
+#define IXP4XX_OSTS IXP4XX_TIMER_REG(IXP4XX_OSTS_OFFSET)
+#define IXP4XX_OST1 IXP4XX_TIMER_REG(IXP4XX_OST1_OFFSET)
+#define IXP4XX_OSRT1 IXP4XX_TIMER_REG(IXP4XX_OSRT1_OFFSET)
+#define IXP4XX_OST2 IXP4XX_TIMER_REG(IXP4XX_OST2_OFFSET)
+#define IXP4XX_OSRT2 IXP4XX_TIMER_REG(IXP4XX_OSRT2_OFFSET)
+#define IXP4XX_OSWT IXP4XX_TIMER_REG(IXP4XX_OSWT_OFFSET)
+#define IXP4XX_OSWE IXP4XX_TIMER_REG(IXP4XX_OSWE_OFFSET)
+#define IXP4XX_OSWK IXP4XX_TIMER_REG(IXP4XX_OSWK_OFFSET)
+#define IXP4XX_OSST IXP4XX_TIMER_REG(IXP4XX_OSST_OFFSET)
+
+/*
+ * Timer register values and bit definitions
+ */
+#define IXP4XX_OST_ENABLE 0x00000001
+#define IXP4XX_OST_ONE_SHOT 0x00000002
+/* Low order bits of reload value ignored */
+#define IXP4XX_OST_RELOAD_MASK 0x00000003
+#define IXP4XX_OST_DISABLED 0x00000000
+#define IXP4XX_OSST_TIMER_1_PEND 0x00000001
+#define IXP4XX_OSST_TIMER_2_PEND 0x00000002
+#define IXP4XX_OSST_TIMER_TS_PEND 0x00000004
+#define IXP4XX_OSST_TIMER_WDOG_PEND 0x00000008
+#define IXP4XX_OSST_TIMER_WARM_RESET 0x00000010
+
+#define IXP4XX_WDT_KEY 0x0000482E
+
+#define IXP4XX_WDT_RESET_ENABLE 0x00000001
+#define IXP4XX_WDT_IRQ_ENABLE 0x00000002
+#define IXP4XX_WDT_COUNT_ENABLE 0x00000004
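+
+/*
+ * Illustrative sketch of arming the watchdog with the definitions
+ * above (an assumption based on these registers, not part of this
+ * patch; timeout_ticks is a placeholder count in 66MHz clock ticks):
+ *
+ *	*IXP4XX_OSWK = IXP4XX_WDT_KEY;		unlock watchdog registers
+ *	*IXP4XX_OSWE = 0;			stop it while reloading
+ *	*IXP4XX_OSWT = timeout_ticks;		load the countdown value
+ *	*IXP4XX_OSWE = IXP4XX_WDT_COUNT_ENABLE | IXP4XX_WDT_RESET_ENABLE;
+ *	*IXP4XX_OSWK = 0;			relock
+ */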
+
+
+/*
+ * Constants to make it easy to access PCI Control/Status registers
+ */
+#define PCI_NP_AD_OFFSET 0x00
+#define PCI_NP_CBE_OFFSET 0x04
+#define PCI_NP_WDATA_OFFSET 0x08
+#define PCI_NP_RDATA_OFFSET 0x0c
+#define PCI_CRP_AD_CBE_OFFSET 0x10
+#define PCI_CRP_WDATA_OFFSET 0x14
+#define PCI_CRP_RDATA_OFFSET 0x18
+#define PCI_CSR_OFFSET 0x1c
+#define PCI_ISR_OFFSET 0x20
+#define PCI_INTEN_OFFSET 0x24
+#define PCI_DMACTRL_OFFSET 0x28
+#define PCI_AHBMEMBASE_OFFSET 0x2c
+#define PCI_AHBIOBASE_OFFSET 0x30
+#define PCI_PCIMEMBASE_OFFSET 0x34
+#define PCI_AHBDOORBELL_OFFSET 0x38
+#define PCI_PCIDOORBELL_OFFSET 0x3C
+#define PCI_ATPDMA0_AHBADDR_OFFSET 0x40
+#define PCI_ATPDMA0_PCIADDR_OFFSET 0x44
+#define PCI_ATPDMA0_LENADDR_OFFSET 0x48
+#define PCI_ATPDMA1_AHBADDR_OFFSET 0x4C
+#define PCI_ATPDMA1_PCIADDR_OFFSET 0x50
+#define PCI_ATPDMA1_LENADDR_OFFSET 0x54
+
+/*
+ * PCI Control/Status Registers
+ */
+#define IXP4XX_PCI_CSR(x) ((volatile u32 *)(IXP4XX_PCI_CFG_BASE_VIRT+(x)))
+
+#define PCI_NP_AD IXP4XX_PCI_CSR(PCI_NP_AD_OFFSET)
+#define PCI_NP_CBE IXP4XX_PCI_CSR(PCI_NP_CBE_OFFSET)
+#define PCI_NP_WDATA IXP4XX_PCI_CSR(PCI_NP_WDATA_OFFSET)
+#define PCI_NP_RDATA IXP4XX_PCI_CSR(PCI_NP_RDATA_OFFSET)
+#define PCI_CRP_AD_CBE IXP4XX_PCI_CSR(PCI_CRP_AD_CBE_OFFSET)
+#define PCI_CRP_WDATA IXP4XX_PCI_CSR(PCI_CRP_WDATA_OFFSET)
+#define PCI_CRP_RDATA IXP4XX_PCI_CSR(PCI_CRP_RDATA_OFFSET)
+#define PCI_CSR IXP4XX_PCI_CSR(PCI_CSR_OFFSET)
+#define PCI_ISR IXP4XX_PCI_CSR(PCI_ISR_OFFSET)
+#define PCI_INTEN IXP4XX_PCI_CSR(PCI_INTEN_OFFSET)
+#define PCI_DMACTRL IXP4XX_PCI_CSR(PCI_DMACTRL_OFFSET)
+#define PCI_AHBMEMBASE IXP4XX_PCI_CSR(PCI_AHBMEMBASE_OFFSET)
+#define PCI_AHBIOBASE IXP4XX_PCI_CSR(PCI_AHBIOBASE_OFFSET)
+#define PCI_PCIMEMBASE IXP4XX_PCI_CSR(PCI_PCIMEMBASE_OFFSET)
+#define PCI_AHBDOORBELL IXP4XX_PCI_CSR(PCI_AHBDOORBELL_OFFSET)
+#define PCI_PCIDOORBELL IXP4XX_PCI_CSR(PCI_PCIDOORBELL_OFFSET)
+#define PCI_ATPDMA0_AHBADDR IXP4XX_PCI_CSR(PCI_ATPDMA0_AHBADDR_OFFSET)
+#define PCI_ATPDMA0_PCIADDR IXP4XX_PCI_CSR(PCI_ATPDMA0_PCIADDR_OFFSET)
+#define PCI_ATPDMA0_LENADDR IXP4XX_PCI_CSR(PCI_ATPDMA0_LENADDR_OFFSET)
+#define PCI_ATPDMA1_AHBADDR IXP4XX_PCI_CSR(PCI_ATPDMA1_AHBADDR_OFFSET)
+#define PCI_ATPDMA1_PCIADDR IXP4XX_PCI_CSR(PCI_ATPDMA1_PCIADDR_OFFSET)
+#define PCI_ATPDMA1_LENADDR IXP4XX_PCI_CSR(PCI_ATPDMA1_LENADDR_OFFSET)
+
+/*
+ * PCI register values and bit definitions
+ */
+
+/* CSR bit definitions */
+#define PCI_CSR_HOST 0x00000001
+#define PCI_CSR_ARBEN 0x00000002
+#define PCI_CSR_ADS 0x00000004
+#define PCI_CSR_PDS 0x00000008
+#define PCI_CSR_ABE 0x00000010
+#define PCI_CSR_DBT 0x00000020
+#define PCI_CSR_ASE 0x00000100
+#define PCI_CSR_IC 0x00008000
+
+/* ISR (Interrupt status) Register bit definitions */
+#define PCI_ISR_PSE 0x00000001
+#define PCI_ISR_PFE 0x00000002
+#define PCI_ISR_PPE 0x00000004
+#define PCI_ISR_AHBE 0x00000008
+#define PCI_ISR_APDC 0x00000010
+#define PCI_ISR_PADC 0x00000020
+#define PCI_ISR_ADB 0x00000040
+#define PCI_ISR_PDB 0x00000080
+
+/* INTEN (Interrupt Enable) Register bit definitions */
+#define PCI_INTEN_PSE 0x00000001
+#define PCI_INTEN_PFE 0x00000002
+#define PCI_INTEN_PPE 0x00000004
+#define PCI_INTEN_AHBE 0x00000008
+#define PCI_INTEN_APDC 0x00000010
+#define PCI_INTEN_PADC 0x00000020
+#define PCI_INTEN_ADB 0x00000040
+#define PCI_INTEN_PDB 0x00000080
+
+/*
+ * Shift value for byte enable on NP cmd/byte enable register
+ */
+#define IXP4XX_PCI_NP_CBE_BESL 4
+
+/*
+ * PCI commands supported by NP access unit
+ */
+#define NP_CMD_IOREAD 0x2
+#define NP_CMD_IOWRITE 0x3
+#define NP_CMD_CONFIGREAD 0xa
+#define NP_CMD_CONFIGWRITE 0xb
+#define NP_CMD_MEMREAD 0x6
+#define NP_CMD_MEMWRITE 0x7
+
+/*
+ * Constants for CRP access into local config space
+ */
+#define CRP_AD_CBE_BESL 20
+#define CRP_AD_CBE_WRITE 0x00010000
+
+
+/*
+ * USB Device Controller
+ *
+ * These are used by the USB gadget driver, so they don't follow the
+ * IXP4XX_ naming conventions.
+ *
+ */
+#define IXP4XX_USB_REG(x) (*((volatile u32 *)(x)))
+
+/* UDC Undocumented - Reserved1 */
+#define UDC_RES1 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0004)
+/* UDC Undocumented - Reserved2 */
+#define UDC_RES2 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0008)
+/* UDC Undocumented - Reserved3 */
+#define UDC_RES3 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x000C)
+/* UDC Control Register */
+#define UDCCR IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0000)
+/* UDC Endpoint 0 Control/Status Register */
+#define UDCCS0 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0010)
+/* UDC Endpoint 1 (IN) Control/Status Register */
+#define UDCCS1 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0014)
+/* UDC Endpoint 2 (OUT) Control/Status Register */
+#define UDCCS2 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0018)
+/* UDC Endpoint 3 (IN) Control/Status Register */
+#define UDCCS3 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x001C)
+/* UDC Endpoint 4 (OUT) Control/Status Register */
+#define UDCCS4 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0020)
+/* UDC Endpoint 5 (Interrupt) Control/Status Register */
+#define UDCCS5 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0024)
+/* UDC Endpoint 6 (IN) Control/Status Register */
+#define UDCCS6 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0028)
+/* UDC Endpoint 7 (OUT) Control/Status Register */
+#define UDCCS7 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x002C)
+/* UDC Endpoint 8 (IN) Control/Status Register */
+#define UDCCS8 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0030)
+/* UDC Endpoint 9 (OUT) Control/Status Register */
+#define UDCCS9 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0034)
+/* UDC Endpoint 10 (Interrupt) Control/Status Register */
+#define UDCCS10 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0038)
+/* UDC Endpoint 11 (IN) Control/Status Register */
+#define UDCCS11 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x003C)
+/* UDC Endpoint 12 (OUT) Control/Status Register */
+#define UDCCS12 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0040)
+/* UDC Endpoint 13 (IN) Control/Status Register */
+#define UDCCS13 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0044)
+/* UDC Endpoint 14 (OUT) Control/Status Register */
+#define UDCCS14 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0048)
+/* UDC Endpoint 15 (Interrupt) Control/Status Register */
+#define UDCCS15 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x004C)
+/* UDC Frame Number Register High */
+#define UFNRH IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0060)
+/* UDC Frame Number Register Low */
+#define UFNRL IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0064)
+/* UDC Byte Count Reg 2 */
+#define UBCR2 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0068)
+/* UDC Byte Count Reg 4 */
+#define UBCR4 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x006c)
+/* UDC Byte Count Reg 7 */
+#define UBCR7 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0070)
+/* UDC Byte Count Reg 9 */
+#define UBCR9 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0074)
+/* UDC Byte Count Reg 12 */
+#define UBCR12 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0078)
+/* UDC Byte Count Reg 14 */
+#define UBCR14 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x007c)
+/* UDC Endpoint 0 Data Register */
+#define UDDR0 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0080)
+/* UDC Endpoint 1 Data Register */
+#define UDDR1 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0100)
+/* UDC Endpoint 2 Data Register */
+#define UDDR2 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0180)
+/* UDC Endpoint 3 Data Register */
+#define UDDR3 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0200)
+/* UDC Endpoint 4 Data Register */
+#define UDDR4 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0400)
+/* UDC Endpoint 5 Data Register */
+#define UDDR5 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x00A0)
+/* UDC Endpoint 6 Data Register */
+#define UDDR6 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0600)
+/* UDC Endpoint 7 Data Register */
+#define UDDR7 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0680)
+/* UDC Endpoint 8 Data Register */
+#define UDDR8 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0700)
+/* UDC Endpoint 9 Data Register */
+#define UDDR9 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0900)
+/* UDC Endpoint 10 Data Register */
+#define UDDR10 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x00C0)
+/* UDC Endpoint 11 Data Register */
+#define UDDR11 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0B00)
+/* UDC Endpoint 12 Data Register */
+#define UDDR12 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0B80)
+/* UDC Endpoint 13 Data Register */
+#define UDDR13 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0C00)
+/* UDC Endpoint 14 Data Register */
+#define UDDR14 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0E00)
+/* UDC Endpoint 15 Data Register */
+#define UDDR15 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x00E0)
+/* UDC Interrupt Control Register 0 */
+#define UICR0 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0050)
+/* UDC Interrupt Control Register 1 */
+#define UICR1 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0054)
+/* UDC Status Interrupt Register 0 */
+#define USIR0 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x0058)
+/* UDC Status Interrupt Register 1 */
+#define USIR1 IXP4XX_USB_REG(IXP4XX_USB_BASE_VIRT+0x005C)
+
+#define UDCCR_UDE (1 << 0) /* UDC enable */
+#define UDCCR_UDA (1 << 1) /* UDC active */
+#define UDCCR_RSM (1 << 2) /* Device resume */
+#define UDCCR_RESIR (1 << 3) /* Resume interrupt request */
+#define UDCCR_SUSIR (1 << 4) /* Suspend interrupt request */
+#define UDCCR_SRM (1 << 5) /* Suspend/resume interrupt mask */
+#define UDCCR_RSTIR (1 << 6) /* Reset interrupt request */
+#define UDCCR_REM (1 << 7) /* Reset interrupt mask */
+
+#define UDCCS0_OPR (1 << 0) /* OUT packet ready */
+#define UDCCS0_IPR (1 << 1) /* IN packet ready */
+#define UDCCS0_FTF (1 << 2) /* Flush Tx FIFO */
+#define UDCCS0_DRWF (1 << 3) /* Device remote wakeup feature */
+#define UDCCS0_SST (1 << 4) /* Sent stall */
+#define UDCCS0_FST (1 << 5) /* Force stall */
+#define UDCCS0_RNE (1 << 6) /* Receive FIFO not empty */
+#define UDCCS0_SA (1 << 7) /* Setup active */
+
+#define UDCCS_BI_TFS (1 << 0) /* Transmit FIFO service */
+#define UDCCS_BI_TPC (1 << 1) /* Transmit packet complete */
+#define UDCCS_BI_FTF (1 << 2) /* Flush Tx FIFO */
+#define UDCCS_BI_TUR (1 << 3) /* Transmit FIFO underrun */
+#define UDCCS_BI_SST (1 << 4) /* Sent stall */
+#define UDCCS_BI_FST (1 << 5) /* Force stall */
+#define UDCCS_BI_TSP (1 << 7) /* Transmit short packet */
+
+#define UDCCS_BO_RFS (1 << 0) /* Receive FIFO service */
+#define UDCCS_BO_RPC (1 << 1) /* Receive packet complete */
+#define UDCCS_BO_DME (1 << 3) /* DMA enable */
+#define UDCCS_BO_SST (1 << 4) /* Sent stall */
+#define UDCCS_BO_FST (1 << 5) /* Force stall */
+#define UDCCS_BO_RNE (1 << 6) /* Receive FIFO not empty */
+#define UDCCS_BO_RSP (1 << 7) /* Receive short packet */
+
+#define UDCCS_II_TFS (1 << 0) /* Transmit FIFO service */
+#define UDCCS_II_TPC (1 << 1) /* Transmit packet complete */
+#define UDCCS_II_FTF (1 << 2) /* Flush Tx FIFO */
+#define UDCCS_II_TUR (1 << 3) /* Transmit FIFO underrun */
+#define UDCCS_II_TSP (1 << 7) /* Transmit short packet */
+
+#define UDCCS_IO_RFS (1 << 0) /* Receive FIFO service */
+#define UDCCS_IO_RPC (1 << 1) /* Receive packet complete */
+#define UDCCS_IO_ROF (1 << 2) /* Receive overflow */
+#define UDCCS_IO_DME (1 << 3) /* DMA enable */
+#define UDCCS_IO_RNE (1 << 6) /* Receive FIFO not empty */
+#define UDCCS_IO_RSP (1 << 7) /* Receive short packet */
+
+#define UDCCS_INT_TFS (1 << 0) /* Transmit FIFO service */
+#define UDCCS_INT_TPC (1 << 1) /* Transmit packet complete */
+#define UDCCS_INT_FTF (1 << 2) /* Flush Tx FIFO */
+#define UDCCS_INT_TUR (1 << 3) /* Transmit FIFO underrun */
+#define UDCCS_INT_SST (1 << 4) /* Sent stall */
+#define UDCCS_INT_FST (1 << 5) /* Force stall */
+#define UDCCS_INT_TSP (1 << 7) /* Transmit short packet */
+
+#define UICR0_IM0 (1 << 0) /* Interrupt mask ep 0 */
+#define UICR0_IM1 (1 << 1) /* Interrupt mask ep 1 */
+#define UICR0_IM2 (1 << 2) /* Interrupt mask ep 2 */
+#define UICR0_IM3 (1 << 3) /* Interrupt mask ep 3 */
+#define UICR0_IM4 (1 << 4) /* Interrupt mask ep 4 */
+#define UICR0_IM5 (1 << 5) /* Interrupt mask ep 5 */
+#define UICR0_IM6 (1 << 6) /* Interrupt mask ep 6 */
+#define UICR0_IM7 (1 << 7) /* Interrupt mask ep 7 */
+
+#define UICR1_IM8 (1 << 0) /* Interrupt mask ep 8 */
+#define UICR1_IM9 (1 << 1) /* Interrupt mask ep 9 */
+#define UICR1_IM10 (1 << 2) /* Interrupt mask ep 10 */
+#define UICR1_IM11 (1 << 3) /* Interrupt mask ep 11 */
+#define UICR1_IM12 (1 << 4) /* Interrupt mask ep 12 */
+#define UICR1_IM13 (1 << 5) /* Interrupt mask ep 13 */
+#define UICR1_IM14 (1 << 6) /* Interrupt mask ep 14 */
+#define UICR1_IM15 (1 << 7) /* Interrupt mask ep 15 */
+
+#define USIR0_IR0 (1 << 0) /* Interrupt request ep 0 */
+#define USIR0_IR1 (1 << 1) /* Interrupt request ep 1 */
+#define USIR0_IR2 (1 << 2) /* Interrupt request ep 2 */
+#define USIR0_IR3 (1 << 3) /* Interrupt request ep 3 */
+#define USIR0_IR4 (1 << 4) /* Interrupt request ep 4 */
+#define USIR0_IR5 (1 << 5) /* Interrupt request ep 5 */
+#define USIR0_IR6 (1 << 6) /* Interrupt request ep 6 */
+#define USIR0_IR7 (1 << 7) /* Interrupt request ep 7 */
+
+#define USIR1_IR8 (1 << 0) /* Interrupt request ep 8 */
+#define USIR1_IR9 (1 << 1) /* Interrupt request ep 9 */
+#define USIR1_IR10 (1 << 2) /* Interrupt request ep 10 */
+#define USIR1_IR11 (1 << 3) /* Interrupt request ep 11 */
+#define USIR1_IR12 (1 << 4) /* Interrupt request ep 12 */
+#define USIR1_IR13 (1 << 5) /* Interrupt request ep 13 */
+#define USIR1_IR14 (1 << 6) /* Interrupt request ep 14 */
+#define USIR1_IR15 (1 << 7) /* Interrupt request ep 15 */
+
+#define DCMD_LENGTH 0x01fff /* length mask (max = 8K - 1) */
+
+#endif
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/platform.h
+ *
+ * Constants and functions that are useful to IXP4xx platform-specific code
+ * and device drivers.
+ *
+ * Copyright (C) 2004 MontaVista Software, Inc.
+ */
+
+#ifndef __ASM_ARCH_HARDWARE_H__
+#error "Do not include this directly, instead #include <asm/hardware.h>"
+#endif
+
+#ifndef __ASSEMBLY__
+
+#include <asm/types.h>
+
+/*
+ * Expansion bus memory regions
+ */
+#define IXP4XX_EXP_BUS_BASE_PHYS (0x50000000)
+
+#define IXP4XX_EXP_BUS_CSX_REGION_SIZE (0x01000000)
+
+#define IXP4XX_EXP_BUS_CS0_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x00000000)
+#define IXP4XX_EXP_BUS_CS1_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x01000000)
+#define IXP4XX_EXP_BUS_CS2_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x02000000)
+#define IXP4XX_EXP_BUS_CS3_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x03000000)
+#define IXP4XX_EXP_BUS_CS4_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x04000000)
+#define IXP4XX_EXP_BUS_CS5_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x05000000)
+#define IXP4XX_EXP_BUS_CS6_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x06000000)
+#define IXP4XX_EXP_BUS_CS7_BASE_PHYS (IXP4XX_EXP_BUS_BASE_PHYS + 0x07000000)
+
+#define IXP4XX_FLASH_WRITABLE (0x2)
+#define IXP4XX_FLASH_DEFAULT (0xbcd23c40)
+#define IXP4XX_FLASH_WRITE (0xbcd23c42)
+
+/*
+ * Clock Speed Definitions.
+ */
+#define IXP4XX_PERIPHERAL_BUS_CLOCK (66) /* 66MHz APB bus */
+#define IXP4XX_UART_XTAL 14745600
+
+/*
+ * The IXP4xx chips do not have an I2C unit, so GPIO lines are used to
+ * bit-bang the I2C bus instead. This structure is passed as
+ * platform_data to provide the GPIO pin information to the ixp42x
+ * I2C driver.
+ */
+struct ixp4xx_i2c_pins {
+ unsigned long sda_pin;
+ unsigned long scl_pin;
+};
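+
+/*
+ * Illustrative sketch (hypothetical board code with made-up pin
+ * numbers), which a board file would hang off its I2C platform_device
+ * as .dev.platform_data:
+ *
+ *	static struct ixp4xx_i2c_pins my_i2c_gpio_pins = {
+ *		.sda_pin	= 7,
+ *		.scl_pin	= 6,
+ *	};
+ */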
+
+
+struct sys_timer;
+
+/*
+ * Functions used by platform-level setup code
+ */
+extern void ixp4xx_map_io(void);
+extern void ixp4xx_init_irq(void);
+extern struct sys_timer ixp4xx_timer;
+extern void ixp4xx_pci_preinit(void);
+struct pci_sys_data;
+extern int ixp4xx_setup(int nr, struct pci_sys_data *sys);
+extern struct pci_bus *ixp4xx_scan_bus(int nr, struct pci_sys_data *sys);
+
+/*
+ * GPIO-functions
+ */
+/*
+ * The following are converted to the real HW bits by gpio_line_config().
+ */
+/* GPIO pin types */
+#define IXP4XX_GPIO_OUT 0x1
+#define IXP4XX_GPIO_IN 0x2
+
+#define IXP4XX_GPIO_INTSTYLE_MASK 0x7C /* Bits [6:2] define interrupt style */
+
+/*
+ * GPIO interrupt types.
+ */
+#define IXP4XX_GPIO_ACTIVE_HIGH 0x4 /* Default */
+#define IXP4XX_GPIO_ACTIVE_LOW 0x8
+#define IXP4XX_GPIO_RISING_EDGE 0x10
+#define IXP4XX_GPIO_FALLING_EDGE 0x20
+#define IXP4XX_GPIO_TRANSITIONAL 0x40
+
+/* GPIO signal types */
+#define IXP4XX_GPIO_LOW 0
+#define IXP4XX_GPIO_HIGH 1
+
+/* GPIO Clocks */
+#define IXP4XX_GPIO_CLK_0 14
+#define IXP4XX_GPIO_CLK_1 15
+
+extern void gpio_line_config(u8 line, u32 style);
+
+static inline void gpio_line_get(u8 line, int *value)
+{
+ *value = (*IXP4XX_GPIO_GPINR >> line) & 0x1;
+}
+
+static inline void gpio_line_set(u8 line, int value)
+{
+ if (value == IXP4XX_GPIO_HIGH)
+ *IXP4XX_GPIO_GPOUTR |= (1 << line);
+ else if (value == IXP4XX_GPIO_LOW)
+ *IXP4XX_GPIO_GPOUTR &= ~(1 << line);
+}
+
+static inline void gpio_line_isr_clear(u8 line)
+{
+ *IXP4XX_GPIO_GPISR = (1 << line);
+}
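+
+/*
+ * Illustrative sketch of the helpers above (the line number is
+ * arbitrary): configure a pin as a falling-edge interrupt source,
+ * read it, then acknowledge the latched interrupt.
+ *
+ *	int val;
+ *
+ *	gpio_line_config(5, IXP4XX_GPIO_IN | IXP4XX_GPIO_FALLING_EDGE);
+ *	gpio_line_get(5, &val);
+ *	gpio_line_isr_clear(5);
+ */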
+
+#endif // __ASSEMBLY__
+
--- /dev/null
+/*
+ * include/asm-arm/arch-ixp4xx/uncompress.h
+ *
+ * Copyright (C) 2002 Intel Corporation.
+ * Copyright (C) 2003-2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#ifndef _ARCH_UNCOMPRESS_H_
+#define _ARCH_UNCOMPRESS_H_
+
+#include <asm/hardware.h>
+#include <asm/mach-types.h>
+#include <linux/serial_reg.h>
+
+#define TX_DONE (UART_LSR_TEMT|UART_LSR_THRE)
+
+static volatile u32* uart_base;
+
+static __inline__ void putc(char c)
+{
+ /* Check THRE and TEMT bits before we transmit the character.
+ */
+ while ((uart_base[UART_LSR] & TX_DONE) != TX_DONE);
+ *uart_base = c;
+}
+
+/*
+ * This does not append a newline
+ */
+static void putstr(const char *s)
+{
+ while (*s)
+ {
+ putc(*s);
+ if (*s == '\n')
+ putc('\r');
+ s++;
+ }
+}
+
+static __inline__ void __arch_decomp_setup(unsigned long arch_id)
+{
+ /*
+ * Coyote only has UART2 connected
+ */
+ if (machine_is_adi_coyote())
+ uart_base = (volatile u32*) IXP4XX_UART2_BASE_PHYS;
+ else
+ uart_base = (volatile u32*) IXP4XX_UART1_BASE_PHYS;
+}
+
+/*
+ * arch_id is a variable in decompress_kernel()
+ */
+#define arch_decomp_setup() __arch_decomp_setup(arch_id)
+
+#define arch_decomp_wdog()
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/hardware/clock.h
+ *
+ * Copyright (C) 2004 ARM Limited.
+ * Written by Deep Blue Solutions Limited.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef ASMARM_CLOCK_H
+#define ASMARM_CLOCK_H
+
+struct device;
+
+/*
+ * The base API.
+ */
+
+
+/*
+ * struct clk - a machine-class-defined object / cookie.
+ */
+struct clk;
+
+/**
+ * clk_get - lookup and obtain a reference to a clock producer.
+ * @dev: device for clock "consumer"
+ * @id: device ID
+ *
+ * Returns a struct clk corresponding to the clock producer, or
+ * valid IS_ERR() condition containing errno.
+ */
+struct clk *clk_get(struct device *dev, const char *id);
+
+/**
+ * clk_enable - inform the system when the clock source should be running.
+ * @clk: clock source
+ *
+ * If the clock cannot be enabled/disabled, this should return success.
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_enable(struct clk *clk);
+
+/**
+ * clk_disable - inform the system when the clock source is no longer required.
+ * @clk: clock source
+ */
+void clk_disable(struct clk *clk);
+
+/**
+ * clk_use - increment the use count
+ * @clk: clock source
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_use(struct clk *clk);
+
+/**
+ * clk_unuse - decrement the use count
+ * @clk: clock source
+ */
+void clk_unuse(struct clk *clk);
+
+/**
+ * clk_get_rate - obtain the current clock rate (in Hz) for a clock source.
+ * This is only valid once the clock source has been enabled.
+ * @clk: clock source
+ */
+unsigned long clk_get_rate(struct clk *clk);
+
+/**
+ * clk_put - "free" the clock source
+ * @clk: clock source
+ */
+void clk_put(struct clk *clk);
+
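+/*
+ * Illustrative sketch of the base API (the "uart" id is a placeholder):
+ *
+ *	struct clk *clk = clk_get(dev, "uart");
+ *	if (!IS_ERR(clk)) {
+ *		clk_use(clk);
+ *		clk_enable(clk);
+ *		rate = clk_get_rate(clk);
+ *		...
+ *		clk_disable(clk);
+ *		clk_unuse(clk);
+ *		clk_put(clk);
+ *	}
+ */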
+
+/*
+ * The remaining APIs are optional for machine class support.
+ */
+
+
+/**
+ * clk_round_rate - adjust a rate to the exact rate a clock can provide
+ * @clk: clock source
+ * @rate: desired clock rate in Hz
+ *
+ * Returns rounded clock rate in Hz, or negative errno.
+ */
+long clk_round_rate(struct clk *clk, unsigned long rate);
+
+/**
+ * clk_set_rate - set the clock rate for a clock source
+ * @clk: clock source
+ * @rate: desired clock rate in Hz
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_set_rate(struct clk *clk, unsigned long rate);
+
+/**
+ * clk_set_parent - set the parent clock source for this clock
+ * @clk: clock source
+ * @parent: parent clock source
+ *
+ * Returns success (0) or negative errno.
+ */
+int clk_set_parent(struct clk *clk, struct clk *parent);
+
+/**
+ * clk_get_parent - get the parent clock source for this clock
+ * @clk: clock source
+ *
+ * Returns struct clk corresponding to parent clock source, or
+ * valid IS_ERR() condition containing errno.
+ */
+struct clk *clk_get_parent(struct clk *clk);
+
+#endif
--- /dev/null
+/*
+ * linux/include/asm-arm/mach/time.h
+ *
+ * Copyright (C) 2004 MontaVista Software, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#ifndef __ASM_ARM_MACH_TIME_H
+#define __ASM_ARM_MACH_TIME_H
+
+#include <linux/sysdev.h>
+
+/*
+ * This is our kernel timer structure.
+ *
+ * - init
+ *   Initialise the kernel's jiffy timer source and claim the interrupt
+ *   using setup_irq(). This is called early on during initialisation
+ * while interrupts are still disabled on the local CPU.
+ * - suspend
+ * Suspend the kernel jiffy timer source, if necessary. This
+ * is called with interrupts disabled, after all normal devices
+ * have been suspended. If no action is required, set this to
+ * NULL.
+ * - resume
+ * Resume the kernel jiffy timer source, if necessary. This
+ * is called with interrupts disabled before any normal devices
+ * are resumed. If no action is required, set this to NULL.
+ * - offset
+ * Return the timer offset in microseconds since the last timer
+ * interrupt. Note: this must take account of any unprocessed
+ * timer interrupt which may be pending.
+ */
+struct sys_timer {
+ struct sys_device dev;
+ void (*init)(void);
+ void (*suspend)(void);
+ void (*resume)(void);
+ unsigned long (*offset)(void);
+};
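+
+/*
+ * Illustrative sketch (hypothetical machine code, not from this patch):
+ * a platform fills one of these in and registers it during setup, e.g.
+ *
+ *	static void my_timer_init(void)
+ *	{
+ *		... program the hardware for HZ ticks, then
+ *		setup_irq(MY_TIMER_IRQ, &my_timer_irqaction);
+ *	}
+ *
+ *	struct sys_timer my_timer = {
+ *		.init	= my_timer_init,
+ *	};
+ */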
+
+extern struct sys_timer *system_timer;
+extern void timer_tick(struct pt_regs *);
+
+/*
+ * Kernel time keeping support.
+ */
+extern int (*set_rtc)(void);
+extern void save_time_delta(struct timespec *delta, struct timespec *rtc);
+extern void restore_time_delta(struct timespec *delta, struct timespec *rtc);
+
+#endif
--- /dev/null
+#ifndef _ASM_GENERIC_CRASHDUMP_H_
+#define _ASM_GENERIC_CRASHDUMP_H_
+
+/*
+ * linux/include/asm-generic/crashdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifdef __KERNEL__
+
+#define platform_timestamp(x) do { (x) = 0; } while (0)
+
+#define platform_fix_regs() do { } while (0)
+#define platform_init_stack(stackptr) do { } while (0)
+#define platform_cleanup_stack(stackptr) do { } while (0)
+#define platform_start_crashdump(stackptr,dumpfunc,regs) (0)
+
+#undef ELF_CORE_COPY_REGS
+#define ELF_CORE_COPY_REGS(x, y) do { struct pt_regs *z; z = (y); } while (0)
+
+#define show_mem() do {} while (0)
+
+#define show_state() do {} while (0)
+
+#define show_regs(x) do { struct pt_regs *z; z = (x); } while (0)
+
+#undef KM_CRASHDUMP
+#define KM_CRASHDUMP 0
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_GENERIC_CRASHDUMP_H */
--- /dev/null
+#ifndef _ASM_GENERIC_DISKDUMP_H_
+#define _ASM_GENERIC_DISKDUMP_H_
+
+#include <asm-generic/crashdump.h>
+
+const static int platform_supports_diskdump = 0;
+
+struct disk_dump_sub_header {};
+
+#define size_of_sub_header() 1
+#define write_sub_header() 1
+
+#endif /* _ASM_GENERIC_DISKDUMP_H */
--- /dev/null
+#ifndef _ASM_GENERIC_NETDUMP_H_
+#define _ASM_GENERIC_NETDUMP_H_
+
+/*
+ * linux/include/asm-generic/netdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#include <asm-generic/crashdump.h>
+
+#ifdef __KERNEL__
+
+#warning netdump is not supported on this platform
+const static int platform_supports_netdump = 0;
+
+static inline int page_is_ram(unsigned long x) { return 0; }
+
+#define platform_machine_type() (EM_NONE)
+#define platform_effective_version(x) (0)
+#define platform_next_available(x) ((u32)0)
+#define platform_freeze_cpu() do { } while (0)
+#define platform_jiffy_cycles(x) do { } while (0)
+#define platform_max_pfn() (0)
+#define platform_get_regs(x,y) (0)
+
+#undef kmap_atomic
+#undef kunmap_atomic
+static inline char *kmap_atomic(void *page, int idx) { return NULL; }
+#define kunmap_atomic(addr, idx) do { } while (0)
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_GENERIC_NETDUMP_H */
--- /dev/null
+#ifndef _ASM_I386_CRASHDUMP_H
+#define _ASM_I386_CRASHDUMP_H
+
+/*
+ * linux/include/asm-i386/crashdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/irq.h>
+
+/*
+ * Structure taken from arch/i386/kernel/irq.c. It used to live in asm/irq.h.
+ */
+#ifdef CONFIG_4KSTACKS
+union dump_irq_ctx {
+ struct thread_info tinfo;
+ u32 stack[THREAD_SIZE/sizeof(u32)];
+};
+#endif
+
+extern int page_is_ram (unsigned long);
+extern unsigned long next_ram_page (unsigned long);
+
+#define platform_timestamp(x) rdtscll(x)
+
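+/*
+ * pt_regs only holds a valid esp/ss for traps taken from user mode
+ * (regs->xcs & 3); for a kernel-mode trap the stack pointer is where
+ * the trap frame ends, so it is reconstructed from the frame address.
+ */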
+#define platform_fix_regs() \
+{ \
+ unsigned long esp; \
+ unsigned short ss; \
+ esp = (unsigned long) ((char *)regs + sizeof (struct pt_regs)); \
+ ss = __KERNEL_DS; \
+ if (regs->xcs & 3) { \
+ esp = regs->esp; \
+ ss = regs->xss & 0xffff; \
+ } \
+ myregs = *regs; \
+ myregs.esp = esp; \
+ myregs.xss = (myregs.xss & 0xffff0000) | ss; \
+};
+
+static inline void platform_init_stack(void **stackptr)
+{
+#ifdef CONFIG_4KSTACKS
+ *stackptr = (void *)kmalloc(sizeof(union dump_irq_ctx), GFP_KERNEL);
+ if (*stackptr)
+ memset(*stackptr, 0, sizeof(union dump_irq_ctx));
+ else
+ printk(KERN_WARNING
+ "crashdump: unable to allocate separate stack\n");
+#else
+ *stackptr = NULL;
+#endif
+}
+
+typedef asmlinkage void (*crashdump_func_t)(struct pt_regs *, void *);
+
+static inline void platform_start_crashdump(void *stackptr,
+ crashdump_func_t dumpfunc,
+ struct pt_regs *regs)
+{
+ if (!stackptr)
+ dumpfunc(regs, NULL);
+#ifdef CONFIG_4KSTACKS
+ else {
+ u32 *dsp;
+ union dump_irq_ctx * curctx;
+ union dump_irq_ctx * dumpctx;
+
+ curctx = (union dump_irq_ctx *) current_thread_info();
+ dumpctx = (union dump_irq_ctx *) stackptr;
+
+ /* build the stack frame on the IRQ stack */
+ dsp = (u32*) ((char*)dumpctx + sizeof(*dumpctx));
+ dumpctx->tinfo.task = curctx->tinfo.task;
+ dumpctx->tinfo.previous_esp = current_stack_pointer;
+
+ *--dsp = (u32) NULL;
+ *--dsp = (u32) regs;
+
+ asm volatile(
+ " xchgl %%ebx,%%esp \n"
+ " call *%%eax \n"
+ " xchgl %%ebx,%%esp \n"
+ : : "a"(dumpfunc), "b"(dsp)
+ : "memory", "cc", "edx", "ecx"
+ );
+ }
+#endif
+}
+
+#define platform_cleanup_stack(stackptr) \
+do { \
+ if (stackptr) \
+ kfree(stackptr); \
+} while (0)
+
+#define platform_freeze_cpu() \
+{ \
+ for (;;) local_irq_disable(); \
+}
+
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_I386_CRASHDUMP_H */
--- /dev/null
+#ifndef _ASM_I386_DISKDUMP_H
+#define _ASM_I386_DISKDUMP_H
+
+/*
+ * linux/include/asm-i386/diskdump.h
+ *
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ */
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <linux/elf.h>
+#include <asm/crashdump.h>
+
+const static int platform_supports_diskdump = 1;
+
+struct disk_dump_sub_header {
+ elf_gregset_t elf_regs;
+};
+
+#define size_of_sub_header() ((sizeof(struct disk_dump_sub_header) + PAGE_SIZE - 1) / DUMP_BLOCK_SIZE)
+
+#define write_sub_header() \
+({ \
+ int ret; \
+ \
+ ELF_CORE_COPY_REGS(dump_sub_header.elf_regs, (&myregs)); \
+ clear_page(scratch); \
+ memcpy(scratch, &dump_sub_header, sizeof(dump_sub_header)); \
+ \
+ if ((ret = write_blocks(dump_part, 2, scratch, 1)) >= 0) \
+ ret = 1; /* size of sub header in page */; \
+ ret; \
+})
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_I386_DISKDUMP_H */
--- /dev/null
+#ifndef _ASM_I386_NETDUMP_H_
+#define _ASM_I386_NETDUMP_H_
+
+/*
+ * linux/include/asm-i386/netdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/irq.h>
+#include <asm/crashdump.h>
+
+const static int platform_supports_netdump = 1;
+
+#define platform_page_is_ram(x) (page_is_ram(x))
+#define platform_machine_type() (EM_386)
+
+static inline unsigned char platform_effective_version(req_t *req)
+{
+ if (req->from == 0)
+ return NETDUMP_VERSION;
+ else
+ return min_t(unsigned char, req->from, NETDUMP_VERSION_MAX);
+}
+
+#define platform_max_pfn() (num_physpages)
+
+static inline u32 platform_next_available(unsigned long pfn)
+{
+ unsigned long pgnum = next_ram_page(pfn);
+
+ if (pgnum < platform_max_pfn()) {
+ return (u32)pgnum;
+ }
+ return 0;
+}
+
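+/*
+ * Estimate timestamp-counter cycles per jiffy by bracketing a 1ms busy
+ * delay (one jiffy, assuming HZ=1000) with two timestamps.
+ */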
+static inline void platform_jiffy_cycles(unsigned long long *jcp)
+{
+ unsigned long long t0, t1;
+
+ platform_timestamp(t0);
+ netdump_mdelay(1);
+ platform_timestamp(t1);
+ if (t1 > t0)
+ *jcp = t1 - t0;
+}
+
+static inline unsigned int platform_get_regs(char *tmp, struct pt_regs *myregs)
+{
+ elf_gregset_t elf_regs;
+ char *tmp2;
+
+ tmp2 = tmp + sprintf(tmp, "Sending register info.\n");
+ ELF_CORE_COPY_REGS(elf_regs, myregs);
+ memcpy(tmp2, &elf_regs, sizeof(elf_regs));
+
+ return(strlen(tmp) + sizeof(elf_regs));
+}
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_I386_NETDUMP_H_ */
--- /dev/null
+#ifndef _ASM_IA64_CRASHDUMP_H
+#define _ASM_IA64_CRASHDUMP_H
+
+/*
+ * linux/include/asm-ia64/crashdump.h
+ *
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <linux/elf.h>
+#include <asm/unwind.h>
+#include <asm/ptrace.h>
+
+extern void ia64_do_copy_regs(struct unw_frame_info *, void *arg);
+extern void ia64_freeze_cpu(struct unw_frame_info *, void *arg);
+extern void ia64_start_dump(struct unw_frame_info *, void *arg);
+extern int page_is_ram(unsigned long);
+extern unsigned long next_ram_page(unsigned long);
+
+#define platform_timestamp(x) ({ x = ia64_get_itc(); })
+
+#define platform_fix_regs() \
+{ \
+ struct unw_frame_info *info = platform_arg; \
+ \
+ current->thread.ksp = (__u64)info->sw - 16; \
+ myregs = *regs; \
+}
+
+#define platform_init_stack(stackptr) do { } while (0)
+#define platform_cleanup_stack(stackptr) do { } while (0)
+
+typedef asmlinkage void (*crashdump_func_t)(struct pt_regs *, void *);
+
+/* Container to hold dump handler information */
+struct dump_call_param {
+ crashdump_func_t func;
+ struct pt_regs *regs;
+};
+
+static inline void platform_start_crashdump(void *stackptr,
+ crashdump_func_t dumpfunc,
+ struct pt_regs *regs)
+{
+ struct dump_call_param param;
+
+ param.func = dumpfunc;
+ param.regs = regs;
+	unw_init_running(ia64_start_dump, &param);
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_CRASHDUMP_H */
--- /dev/null
+#ifndef _ASM_IA64_DISKDUMP_H
+#define _ASM_IA64_DISKDUMP_H
+
+/*
+ * linux/include/asm-ia64/diskdump.h
+ *
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/crashdump.h>
+
+
+const static int platform_supports_diskdump = 1;
+
+struct disk_dump_sub_header {
+ elf_gregset_t elf_regs;
+ struct switch_stack *sw[NR_CPUS];
+};
+
+#define size_of_sub_header() ((sizeof(struct disk_dump_sub_header) + PAGE_SIZE - 1) / DUMP_BLOCK_SIZE)
+
+#define write_sub_header() \
+({ \
+ int ret; \
+ struct unw_frame_info *info = platform_arg; \
+ \
+ ia64_do_copy_regs(info, &dump_sub_header.elf_regs); \
+ dump_sub_header.sw[smp_processor_id()] = info->sw; \
+ clear_page(scratch); \
+ memcpy(scratch, &dump_sub_header, sizeof(dump_sub_header));\
+ \
+ if ((ret = write_blocks(dump_part, 2, scratch, 1)) >= 0)\
+ ret = 1; /* size of sub header in page */; \
+ ret; \
+})
+
+#define platform_freeze_cpu() \
+{ \
+ unw_init_running(ia64_freeze_cpu, \
+ &dump_sub_header.sw[smp_processor_id()]); \
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_DISKDUMP_H */
--- /dev/null
+#ifndef _ASM_IA64_NETDUMP_H_
+#define _ASM_IA64_NETDUMP_H_
+
+/*
+ * linux/include/asm-ia64/netdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/crashdump.h>
+
+static const int platform_supports_netdump = 1;
+
+#define platform_machine_type() (EM_IA_64)
+
+#define platform_page_is_ram(x) page_is_ram(x)
+
+static inline unsigned char platform_effective_version(req_t *req)
+{
+ if (req->from > 0)
+ return min_t(unsigned char, req->from, NETDUMP_VERSION_MAX);
+ else
+ return 0;
+}
+
+extern void *high_memory;
+#define platform_max_pfn() ((__pa(high_memory)) / PAGE_SIZE)
+
+static inline u32 platform_next_available(unsigned long pfn)
+{
+ unsigned long pgnum = next_ram_page(pfn);
+
+ if (pgnum < platform_max_pfn()) {
+ return (u32)pgnum;
+ }
+ return 0;
+}
+
+static inline void platform_jiffy_cycles(unsigned long long *jcp)
+{
+ unsigned long long t0, t1;
+
+ platform_timestamp(t0);
+ netdump_mdelay(1);
+ platform_timestamp(t1);
+ if (t1 > t0)
+ *jcp = t1 - t0;
+}
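+/* For illustration only: on a CPU whose ITC runs at 1 GHz, the 1 ms
+ * delay above leaves roughly 1,000,000 cycles in *jcp.
+ */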
+
+#define platform_freeze_cpu() \
+{ \
+ unw_init_running(ia64_freeze_cpu, 0); \
+}
+
+static inline unsigned int platform_get_regs(char *tmp, struct pt_regs *myregs)
+{
+ char *tmp2;
+
+ tmp2 = tmp + sprintf(tmp, "Sending register info.\n");
+ memcpy(tmp2, myregs, sizeof(struct pt_regs));
+
+ return strlen(tmp) + sizeof(struct pt_regs);
+}
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_IA64_NETDUMP_H_ */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 1992-1997,2000-2004 Silicon Graphics, Inc. All Rights Reserved.
+ */
+
+#ifndef _ASM_IA64_SN_L1_H
+#define _ASM_IA64_SN_L1_H
+
+/* brick type response codes */
+#define L1_BRICKTYPE_PX 0x23 /* # */
+#define L1_BRICKTYPE_PE 0x25 /* % */
+#define L1_BRICKTYPE_N_p0 0x26 /* & */
+#define L1_BRICKTYPE_IP45 0x34 /* 4 */
+#define L1_BRICKTYPE_IP41 0x35 /* 5 */
+#define L1_BRICKTYPE_TWISTER 0x36 /* 6 */ /* IP53 & ROUTER */
+#define L1_BRICKTYPE_IX 0x3d /* = */
+#define L1_BRICKTYPE_IP34 0x61 /* a */
+#define L1_BRICKTYPE_GA 0x62 /* b */
+#define L1_BRICKTYPE_C 0x63 /* c */
+#define L1_BRICKTYPE_OPUS_TIO 0x66 /* f */
+#define L1_BRICKTYPE_I 0x69 /* i */
+#define L1_BRICKTYPE_N 0x6e /* n */
+#define L1_BRICKTYPE_OPUS 0x6f /* o */
+#define L1_BRICKTYPE_P 0x70 /* p */
+#define L1_BRICKTYPE_R 0x72 /* r */
+#define L1_BRICKTYPE_CHI_CG 0x76 /* v */
+#define L1_BRICKTYPE_X 0x78 /* x */
+#define L1_BRICKTYPE_X2 0x79 /* y */
+#define L1_BRICKTYPE_SA 0x5e /* ^ */ /* TIO bringup brick */
+#define L1_BRICKTYPE_PA 0x6a /* j */
+#define L1_BRICKTYPE_IA 0x6b /* k */
+
+#endif /* _ASM_IA64_SN_L1_H */
--- /dev/null
+/*
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2001-2004 Silicon Graphics, Inc. All rights reserved.
+ */
+
+#ifndef _ASM_IA64_SN_SHUB_MMR_H
+#define _ASM_IA64_SN_SHUB_MMR_H
+
+/* ==================================================================== */
+/* Register "SH_IPI_INT" */
+/* SHub Inter-Processor Interrupt Registers */
+/* ==================================================================== */
+#define SH_IPI_INT 0x0000000110000380UL
+#define SH_IPI_INT_MASK 0x8ff3ffffffefffffUL
+#define SH_IPI_INT_INIT 0x0000000000000000UL
+
+/* SH_IPI_INT_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_IPI_INT_TYPE_SHFT 0
+#define SH_IPI_INT_TYPE_MASK 0x0000000000000007UL
+
+/* SH_IPI_INT_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_IPI_INT_AGT_SHFT 3
+#define SH_IPI_INT_AGT_MASK 0x0000000000000008UL
+
+/* SH_IPI_INT_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_IPI_INT_PID_SHFT 4
+#define SH_IPI_INT_PID_MASK 0x00000000000ffff0UL
+
+/* SH_IPI_INT_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_IPI_INT_BASE_SHFT 21
+#define SH_IPI_INT_BASE_MASK 0x0003ffffffe00000UL
+
+/* SH_IPI_INT_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_IPI_INT_IDX_SHFT 52
+#define SH_IPI_INT_IDX_MASK 0x0ff0000000000000UL
+
+/* SH_IPI_INT_SEND */
+/* Description: Send Interrupt Message to PI; this generates a pulse */
+#define SH_IPI_INT_SEND_SHFT 63
+#define SH_IPI_INT_SEND_MASK 0x8000000000000000UL
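+
+/* A minimal usage sketch (the vector and processor ID below are
+ * illustrative, not from any spec): an IPI value is composed by
+ * shifting each field into place and setting the SEND bit, e.g.
+ *
+ *	val = (0xefUL << SH_IPI_INT_IDX_SHFT) |
+ *	      (3UL << SH_IPI_INT_PID_SHFT) |
+ *	      (1UL << SH_IPI_INT_SEND_SHFT);
+ */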
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OCCURRED" */
+/* SHub Interrupt Event Occurred */
+/* ==================================================================== */
+#define SH_EVENT_OCCURRED 0x0000000110010000UL
+#define SH_EVENT_OCCURRED_ALIAS 0x0000000110010008UL
+
+/* ==================================================================== */
+/* Register "SH_PI_CAM_CONTROL" */
+/* CRB CAM MMR Access Control */
+/* ==================================================================== */
+#ifndef __ASSEMBLY__
+#define SH_PI_CAM_CONTROL 0x0000000120050300UL
+#else
+#define SH_PI_CAM_CONTROL 0x0000000120050300
+#endif
+
+/* ==================================================================== */
+/* Register "SH_SHUB_ID" */
+/* SHub ID Number */
+/* ==================================================================== */
+#define SH_SHUB_ID 0x0000000110060580UL
+#define SH_SHUB_ID_REVISION_SHFT 28
+#define SH_SHUB_ID_REVISION_MASK 0x00000000f0000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_0" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+#define SH_PTC_0 0x00000001101a0000UL
+#define SH_PTC_1 0x00000001101a0080UL
+
+/* ==================================================================== */
+/* Register "SH_RTC" */
+/* Real-time Clock */
+/* ==================================================================== */
+#define SH_RTC 0x00000001101c0000UL
+#define SH_RTC_MASK 0x007fffffffffffffUL
+
+/* ==================================================================== */
+/* Register "SH_MEMORY_WRITE_STATUS_0|1" */
+/* Memory Write Status for CPU 0 & 1 */
+/* ==================================================================== */
+#define SH_MEMORY_WRITE_STATUS_0 0x0000000120070000UL
+#define SH_MEMORY_WRITE_STATUS_1 0x0000000120070080UL
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0|1" */
+/* PIO Write Status for CPU 0 & 1 */
+/* ==================================================================== */
+#ifndef __ASSEMBLY__
+#define SH_PIO_WRITE_STATUS_0 0x0000000120070200UL
+#define SH_PIO_WRITE_STATUS_1 0x0000000120070280UL
+
+/* SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK */
+/* Description: Deadlock response detected */
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_SHFT 1
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_MASK 0x0000000000000002
+
+/* SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT */
+/* Description: Count of currently pending PIO writes */
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_MASK 0x3f00000000000000UL
+#else
+#define SH_PIO_WRITE_STATUS_0 0x0000000120070200
+#define SH_PIO_WRITE_STATUS_0_PENDING_WRITE_COUNT_SHFT 56
+#define SH_PIO_WRITE_STATUS_0_WRITE_DEADLOCK_SHFT 1
+#endif
+
+/* ==================================================================== */
+/* Register "SH_PIO_WRITE_STATUS_0_ALIAS" */
+/* ==================================================================== */
+#ifndef __ASSEMBLY__
+#define SH_PIO_WRITE_STATUS_0_ALIAS 0x0000000120070208UL
+#else
+#define SH_PIO_WRITE_STATUS_0_ALIAS 0x0000000120070208
+#endif
+
+/* ==================================================================== */
+/* Register "SH_EVENT_OCCURRED" */
+/* SHub Interrupt Event Occurred */
+/* ==================================================================== */
+/* SH_EVENT_OCCURRED_UART_INT */
+/* Description: Pending Junk Bus UART Interrupt */
+#define SH_EVENT_OCCURRED_UART_INT_SHFT 20
+#define SH_EVENT_OCCURRED_UART_INT_MASK 0x0000000000100000
+
+/* SH_EVENT_OCCURRED_IPI_INT */
+/* Description: Pending IPI Interrupt */
+#define SH_EVENT_OCCURRED_IPI_INT_SHFT 28
+#define SH_EVENT_OCCURRED_IPI_INT_MASK 0x0000000010000000
+
+/* SH_EVENT_OCCURRED_II_INT0 */
+/* Description: Pending II 0 Interrupt */
+#define SH_EVENT_OCCURRED_II_INT0_SHFT 29
+#define SH_EVENT_OCCURRED_II_INT0_MASK 0x0000000020000000
+
+/* SH_EVENT_OCCURRED_II_INT1 */
+/* Description: Pending II 1 Interrupt */
+#define SH_EVENT_OCCURRED_II_INT1_SHFT 30
+#define SH_EVENT_OCCURRED_II_INT1_MASK 0x0000000040000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_0" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+#define SH_PTC_0 0x00000001101a0000UL
+#define SH_PTC_0_MASK 0x80000000fffffffd
+#define SH_PTC_0_INIT 0x0000000000000000
+
+/* SH_PTC_0_A */
+/* Description: Type */
+#define SH_PTC_0_A_SHFT 0
+#define SH_PTC_0_A_MASK 0x0000000000000001
+
+/* SH_PTC_0_PS */
+/* Description: Page Size */
+#define SH_PTC_0_PS_SHFT 2
+#define SH_PTC_0_PS_MASK 0x00000000000000fc
+
+/* SH_PTC_0_RID */
+/* Description: Region ID */
+#define SH_PTC_0_RID_SHFT 8
+#define SH_PTC_0_RID_MASK 0x00000000ffffff00
+
+/* SH_PTC_0_START */
+/* Description: Start */
+#define SH_PTC_0_START_SHFT 63
+#define SH_PTC_0_START_MASK 0x8000000000000000
+
+/* ==================================================================== */
+/* Register "SH_PTC_1" */
+/* Purge Translation Cache Message Configuration Information */
+/* ==================================================================== */
+#define SH_PTC_1 0x00000001101a0080UL
+#define SH_PTC_1_MASK 0x9ffffffffffff000
+#define SH_PTC_1_INIT 0x0000000000000000
+
+/* SH_PTC_1_VPN */
+/* Description: Virtual page number */
+#define SH_PTC_1_VPN_SHFT 12
+#define SH_PTC_1_VPN_MASK 0x1ffffffffffff000
+
+/* SH_PTC_1_START */
+/* Description: PTC_1 Start */
+#define SH_PTC_1_START_SHFT 63
+#define SH_PTC_1_START_MASK 0x8000000000000000
+
+/*
+ * Register definitions
+ */
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_CONFIG" */
+/* SHub RTC 1 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC1_INT_CONFIG 0x0000000110001480
+#define SH_RTC1_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC1_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC1_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC1_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC1_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC1_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC1_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC1_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC1_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC1_INT_CONFIG_PID_SHFT 4
+#define SH_RTC1_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC1_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC1_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC1_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC1_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC1_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC1_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC1_INT_ENABLE" */
+/* SHub RTC 1 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC1_INT_ENABLE 0x0000000110001500
+#define SH_RTC1_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC1_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC1_INT_ENABLE_RTC1_ENABLE */
+/* Description: Enable RTC 1 Interrupt */
+#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_SHFT 0
+#define SH_RTC1_INT_ENABLE_RTC1_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_CONFIG" */
+/* SHub RTC 2 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC2_INT_CONFIG 0x0000000110001580
+#define SH_RTC2_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC2_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC2_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC2_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC2_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC2_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC2_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC2_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC2_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC2_INT_CONFIG_PID_SHFT 4
+#define SH_RTC2_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC2_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC2_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC2_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC2_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC2_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC2_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC2_INT_ENABLE" */
+/* SHub RTC 2 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC2_INT_ENABLE 0x0000000110001600
+#define SH_RTC2_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC2_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC2_INT_ENABLE_RTC2_ENABLE */
+/* Description: Enable RTC 2 Interrupt */
+#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_SHFT 0
+#define SH_RTC2_INT_ENABLE_RTC2_ENABLE_MASK 0x0000000000000001
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_CONFIG" */
+/* SHub RTC 3 Interrupt Config Registers */
+/* ==================================================================== */
+
+#define SH_RTC3_INT_CONFIG 0x0000000110001680
+#define SH_RTC3_INT_CONFIG_MASK 0x0ff3ffffffefffff
+#define SH_RTC3_INT_CONFIG_INIT 0x0000000000000000
+
+/* SH_RTC3_INT_CONFIG_TYPE */
+/* Description: Type of Interrupt: 0=INT, 2=PMI, 4=NMI, 5=INIT */
+#define SH_RTC3_INT_CONFIG_TYPE_SHFT 0
+#define SH_RTC3_INT_CONFIG_TYPE_MASK 0x0000000000000007
+
+/* SH_RTC3_INT_CONFIG_AGT */
+/* Description: Agent, must be 0 for SHub */
+#define SH_RTC3_INT_CONFIG_AGT_SHFT 3
+#define SH_RTC3_INT_CONFIG_AGT_MASK 0x0000000000000008
+
+/* SH_RTC3_INT_CONFIG_PID */
+/* Description: Processor ID, same setting as on targeted McKinley */
+#define SH_RTC3_INT_CONFIG_PID_SHFT 4
+#define SH_RTC3_INT_CONFIG_PID_MASK 0x00000000000ffff0
+
+/* SH_RTC3_INT_CONFIG_BASE */
+/* Description: Optional interrupt vector area, 2MB aligned */
+#define SH_RTC3_INT_CONFIG_BASE_SHFT 21
+#define SH_RTC3_INT_CONFIG_BASE_MASK 0x0003ffffffe00000
+
+/* SH_RTC3_INT_CONFIG_IDX */
+/* Description: Targeted McKinley interrupt vector */
+#define SH_RTC3_INT_CONFIG_IDX_SHFT 52
+#define SH_RTC3_INT_CONFIG_IDX_MASK 0x0ff0000000000000
+
+/* ==================================================================== */
+/* Register "SH_RTC3_INT_ENABLE" */
+/* SHub RTC 3 Interrupt Enable Registers */
+/* ==================================================================== */
+
+#define SH_RTC3_INT_ENABLE 0x0000000110001700
+#define SH_RTC3_INT_ENABLE_MASK 0x0000000000000001
+#define SH_RTC3_INT_ENABLE_INIT 0x0000000000000000
+
+/* SH_RTC3_INT_ENABLE_RTC3_ENABLE */
+/* Description: Enable RTC 3 Interrupt */
+#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_SHFT 0
+#define SH_RTC3_INT_ENABLE_RTC3_ENABLE_MASK 0x0000000000000001
+
+/* SH_EVENT_OCCURRED_RTC1_INT */
+/* Description: Pending RTC 1 Interrupt */
+#define SH_EVENT_OCCURRED_RTC1_INT_SHFT 24
+#define SH_EVENT_OCCURRED_RTC1_INT_MASK 0x0000000001000000
+
+/* SH_EVENT_OCCURRED_RTC2_INT */
+/* Description: Pending RTC 2 Interrupt */
+#define SH_EVENT_OCCURRED_RTC2_INT_SHFT 25
+#define SH_EVENT_OCCURRED_RTC2_INT_MASK 0x0000000002000000
+
+/* SH_EVENT_OCCURRED_RTC3_INT */
+/* Description: Pending RTC 3 Interrupt */
+#define SH_EVENT_OCCURRED_RTC3_INT_SHFT 26
+#define SH_EVENT_OCCURRED_RTC3_INT_MASK 0x0000000004000000
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPB" */
+/* RTC Compare Value for Processor B */
+/* ==================================================================== */
+
+#define SH_INT_CMPB 0x00000001101b0080
+#define SH_INT_CMPB_MASK 0x007fffffffffffff
+#define SH_INT_CMPB_INIT 0x0000000000000000
+
+/* SH_INT_CMPB_REAL_TIME_CMPB */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPB_REAL_TIME_CMPB_SHFT 0
+#define SH_INT_CMPB_REAL_TIME_CMPB_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPC" */
+/* RTC Compare Value for Processor C */
+/* ==================================================================== */
+
+#define SH_INT_CMPC 0x00000001101b0100
+#define SH_INT_CMPC_MASK 0x007fffffffffffff
+#define SH_INT_CMPC_INIT 0x0000000000000000
+
+/* SH_INT_CMPC_REAL_TIME_CMPC */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPC_REAL_TIME_CMPC_SHFT 0
+#define SH_INT_CMPC_REAL_TIME_CMPC_MASK 0x007fffffffffffff
+
+/* ==================================================================== */
+/* Register "SH_INT_CMPD" */
+/* RTC Compare Value for Processor D */
+/* ==================================================================== */
+
+#define SH_INT_CMPD 0x00000001101b0180
+#define SH_INT_CMPD_MASK 0x007fffffffffffff
+#define SH_INT_CMPD_INIT 0x0000000000000000
+
+/* SH_INT_CMPD_REAL_TIME_CMPD */
+/* Description: Real Time Clock Compare */
+#define SH_INT_CMPD_REAL_TIME_CMPD_SHFT 0
+#define SH_INT_CMPD_REAL_TIME_CMPD_MASK 0x007fffffffffffff
+
+#endif /* _ASM_IA64_SN_SHUB_MMR_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2003, 2004 Ralf Baechle
+ */
+#ifndef __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H
+#define __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H
+
+/*
+ * Momentum Jaguar ATX always has the RM9000 processor.
+ */
+#define cpu_has_watch 1
+#define cpu_has_mips16 0
+#define cpu_has_divec 0
+#define cpu_has_vce 0
+#define cpu_has_cache_cdex_p 0
+#define cpu_has_cache_cdex_s 0
+#define cpu_has_prefetch 1
+#define cpu_has_mcheck 0
+#define cpu_has_ejtag 0
+
+#define cpu_has_llsc 1
+#define cpu_has_vtag_icache 0
+#define cpu_has_dc_aliases 0
+#define cpu_has_ic_fills_f_dc 0
+
+#define cpu_has_nofpuex 0
+#define cpu_has_64bits 1
+
+#define cpu_has_subset_pcaches 0
+
+#define cpu_dcache_line_size() 32
+#define cpu_icache_line_size() 32
+#define cpu_scache_line_size() 32
+
+/*
+ * On the RM9000 we need to ensure that I-cache lines being fetched
+ * contain only valid instructions, or funny things will happen.
+ */
+#define PLAT_TRAMPOLINE_STUFF_LINE 32UL
+
+#endif /* __ASM_MACH_YOSEMITE_CPU_FEATURE_OVERRIDES_H */
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2004 by Ralf Baechle
+ */
+#ifndef __ASM_MIPS_MARVELL_H
+#define __ASM_MIPS_MARVELL_H
+
+#include <linux/pci.h>
+
+#include <asm/byteorder.h>
+
+extern unsigned long marvell_base;
+
+/*
+ * Because of an error/peculiarity in the Galileo chip, we need to swap the
+ * bytes when running big-endian.
+ */
+#define __MV_READ(ofs) \
+ (*(volatile u32 *)(marvell_base+(ofs)))
+#define __MV_WRITE(ofs, data) \
+ do { *(volatile u32 *)(marvell_base+(ofs)) = (data); } while (0)
+
+#define MV_READ(ofs) le32_to_cpu(__MV_READ(ofs))
+#define MV_WRITE(ofs, data) __MV_WRITE(ofs, cpu_to_le32(data))
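+
+/* Usage sketch (the register offset is hypothetical): the byte swap
+ * is transparent in both directions on a big-endian host, e.g.
+ *
+ *	u32 val = MV_READ(0x0008);
+ *	MV_WRITE(0x0008, val | 0x1);
+ */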
+
+#define MV_READ_16(ofs) \
+ le16_to_cpu(*(volatile u16 *)(marvell_base+(ofs)))
+#define MV_WRITE_16(ofs, data) \
+ *(volatile u16 *)(marvell_base+(ofs)) = cpu_to_le16(data)
+
+#define MV_READ_8(ofs) \
+ *(volatile u8 *)(marvell_base+(ofs))
+#define MV_WRITE_8(ofs, data) \
+ *(volatile u8 *)(marvell_base+(ofs)) = data
+
+#define MV_SET_REG_BITS(ofs, bits) \
+ (*((volatile u32 *)(marvell_base + (ofs)))) |= ((u32)cpu_to_le32(bits))
+#define MV_RESET_REG_BITS(ofs, bits) \
+ (*((volatile u32 *)(marvell_base + (ofs)))) &= ~((u32)cpu_to_le32(bits))
+
+extern struct pci_ops mv_pci_ops;
+
+struct mv_pci_controller {
+ struct pci_controller pcic;
+
+ /*
+ * GT-64240/MV-64340 specific, per host bus information
+ */
+ unsigned long config_addr;
+ unsigned long config_vreg;
+};
+
+#endif /* __ASM_MIPS_MARVELL_H */
--- /dev/null
+#ifndef _ASM_PPC64_DISKDUMP_H_
+#define _ASM_PPC64_DISKDUMP_H_
+
+#include <asm-generic/diskdump.h>
+
+#endif /* _ASM_PPC64_DISKDUMP_H_ */
--- /dev/null
+/*
+ * include/asm-ppc/fsl_ocp.h
+ *
+ * Definitions for the on-chip peripherals on Freescale PPC processors
+ *
+ * Maintainer: Kumar Gala (kumar.gala@freescale.com)
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_FS_OCP_H__
+#define __ASM_FS_OCP_H__
+
+/* A table of information for supporting the Gianfar Ethernet Controller.
+ * This helps identify which enet controller we are dealing with,
+ * and what type of enet controller it is.
+ */
+struct ocp_gfar_data {
+ uint interruptTransmit;
+ uint interruptError;
+ uint interruptReceive;
+ uint interruptPHY;
+ uint flags;
+ uint phyid;
+ uint phyregidx;
+ unsigned char mac_addr[6];
+};
+
+/* Flags in the flags field */
+#define GFAR_HAS_COALESCE 0x20
+#define GFAR_HAS_RMON 0x10
+#define GFAR_HAS_MULTI_INTR 0x08
+#define GFAR_FIRM_SET_MACADDR 0x04
+#define GFAR_HAS_PHY_INTR 0x02 /* if not set use a timer */
+#define GFAR_HAS_GIGABIT 0x01
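+
+/* Example: a board whose controller has split TX/RX/error interrupts
+ * and a wired PHY interrupt line would set
+ * (GFAR_HAS_MULTI_INTR | GFAR_HAS_PHY_INTR) in the flags field.
+ */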
+
+/* Data structure for I2C support. Just contains a couple of flags
+ * to distinguish various I2C implementations. */
+struct ocp_fs_i2c_data {
+ uint flags;
+};
+
+/* Flags for I2C */
+#define FS_I2C_SEPARATE_DFSRR 0x02
+#define FS_I2C_CLOCK_5200 0x01
+
+#endif /* __ASM_FS_OCP_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * CPM2 Internal Memory Map
+ * Copyright (c) 1999 Dan Malek (dmalek@jlc.net)
+ *
+ * The Internal Memory Map for devices with CPM2 on them. This
+ * is the superset of all CPM2 devices (8260, 8266, 8280, 8272,
+ * 8560).
+ */
+#ifdef __KERNEL__
+#ifndef __IMMAP_CPM2__
+#define __IMMAP_CPM2__
+
+/* System configuration registers.
+*/
+typedef struct sys_82xx_conf {
+ u32 sc_siumcr;
+ u32 sc_sypcr;
+ u8 res1[6];
+ u16 sc_swsr;
+ u8 res2[20];
+ u32 sc_bcr;
+ u8 sc_ppc_acr;
+ u8 res3[3];
+ u32 sc_ppc_alrh;
+ u32 sc_ppc_alrl;
+ u8 sc_lcl_acr;
+ u8 res4[3];
+ u32 sc_lcl_alrh;
+ u32 sc_lcl_alrl;
+ u32 sc_tescr1;
+ u32 sc_tescr2;
+ u32 sc_ltescr1;
+ u32 sc_ltescr2;
+ u32 sc_pdtea;
+ u8 sc_pdtem;
+ u8 res5[3];
+ u32 sc_ldtea;
+ u8 sc_ldtem;
+ u8 res6[163];
+} sysconf_82xx_cpm2_t;
+
+typedef struct sys_85xx_conf {
+ u32 sc_cear;
+ u16 sc_ceer;
+ u16 sc_cemr;
+ u8 res1[70];
+ u32 sc_smaer;
+ u8 res2[4];
+ u32 sc_smevr;
+ u32 sc_smctr;
+ u32 sc_lmaer;
+ u8 res3[4];
+ u32 sc_lmevr;
+ u32 sc_lmctr;
+ u8 res4[144];
+} sysconf_85xx_cpm2_t;
+
+typedef union sys_conf {
+ sysconf_82xx_cpm2_t siu_82xx;
+ sysconf_85xx_cpm2_t siu_85xx;
+} sysconf_cpm2_t;
+
+
+
+/* Memory controller registers.
+*/
+typedef struct mem_ctlr {
+ u32 memc_br0;
+ u32 memc_or0;
+ u32 memc_br1;
+ u32 memc_or1;
+ u32 memc_br2;
+ u32 memc_or2;
+ u32 memc_br3;
+ u32 memc_or3;
+ u32 memc_br4;
+ u32 memc_or4;
+ u32 memc_br5;
+ u32 memc_or5;
+ u32 memc_br6;
+ u32 memc_or6;
+ u32 memc_br7;
+ u32 memc_or7;
+ u32 memc_br8;
+ u32 memc_or8;
+ u32 memc_br9;
+ u32 memc_or9;
+ u32 memc_br10;
+ u32 memc_or10;
+ u32 memc_br11;
+ u32 memc_or11;
+ u8 res1[8];
+ u32 memc_mar;
+ u8 res2[4];
+ u32 memc_mamr;
+ u32 memc_mbmr;
+ u32 memc_mcmr;
+ u8 res3[8];
+ u16 memc_mptpr;
+ u8 res4[2];
+ u32 memc_mdr;
+ u8 res5[4];
+ u32 memc_psdmr;
+ u32 memc_lsdmr;
+ u8 memc_purt;
+ u8 res6[3];
+ u8 memc_psrt;
+ u8 res7[3];
+ u8 memc_lurt;
+ u8 res8[3];
+ u8 memc_lsrt;
+ u8 res9[3];
+ u32 memc_immr;
+ u32 memc_pcibr0;
+ u32 memc_pcibr1;
+ u8 res10[16];
+ u32 memc_pcimsk0;
+ u32 memc_pcimsk1;
+ u8 res11[52];
+} memctl_cpm2_t;
+
+/* System Integration Timers.
+*/
+typedef struct sys_int_timers {
+ u8 res1[32];
+ u16 sit_tmcntsc;
+ u8 res2[2];
+ u32 sit_tmcnt;
+ u8 res3[4];
+ u32 sit_tmcntal;
+ u8 res4[16];
+ u16 sit_piscr;
+ u8 res5[2];
+ u32 sit_pitc;
+ u32 sit_pitr;
+ u8 res6[94];
+ u8 res7[390];
+} sit_cpm2_t;
+
+#define PISCR_PIRQ_MASK ((u16)0xff00)
+#define PISCR_PS ((u16)0x0080)
+#define PISCR_PIE ((u16)0x0004)
+#define PISCR_PTF ((u16)0x0002)
+#define PISCR_PTE ((u16)0x0001)
+
+/* PCI Controller.
+*/
+typedef struct pci_ctlr {
+ u32 pci_omisr;
+ u32 pci_omimr;
+ u8 res1[8];
+ u32 pci_ifqpr;
+ u32 pci_ofqpr;
+ u8 res2[8];
+ u32 pci_imr0;
+ u32 pci_imr1;
+ u32 pci_omr0;
+ u32 pci_omr1;
+ u32 pci_odr;
+ u8 res3[4];
+ u32 pci_idr;
+ u8 res4[20];
+ u32 pci_imisr;
+ u32 pci_imimr;
+ u8 res5[24];
+ u32 pci_ifhpr;
+ u8 res6[4];
+ u32 pci_iftpr;
+ u8 res7[4];
+ u32 pci_iphpr;
+ u8 res8[4];
+ u32 pci_iptpr;
+ u8 res9[4];
+ u32 pci_ofhpr;
+ u8 res10[4];
+ u32 pci_oftpr;
+ u8 res11[4];
+ u32 pci_ophpr;
+ u8 res12[4];
+ u32 pci_optpr;
+ u8 res13[8];
+ u32 pci_mucr;
+ u8 res14[8];
+ u32 pci_qbar;
+ u8 res15[12];
+ u32 pci_dmamr0;
+ u32 pci_dmasr0;
+ u32 pci_dmacdar0;
+ u8 res16[4];
+ u32 pci_dmasar0;
+ u8 res17[4];
+ u32 pci_dmadar0;
+ u8 res18[4];
+ u32 pci_dmabcr0;
+ u32 pci_dmandar0;
+ u8 res19[86];
+ u32 pci_dmamr1;
+ u32 pci_dmasr1;
+ u32 pci_dmacdar1;
+ u8 res20[4];
+ u32 pci_dmasar1;
+ u8 res21[4];
+ u32 pci_dmadar1;
+ u8 res22[4];
+ u32 pci_dmabcr1;
+ u32 pci_dmandar1;
+ u8 res23[88];
+ u32 pci_dmamr2;
+ u32 pci_dmasr2;
+ u32 pci_dmacdar2;
+ u8 res24[4];
+ u32 pci_dmasar2;
+ u8 res25[4];
+ u32 pci_dmadar2;
+ u8 res26[4];
+ u32 pci_dmabcr2;
+ u32 pci_dmandar2;
+ u8 res27[88];
+ u32 pci_dmamr3;
+ u32 pci_dmasr3;
+ u32 pci_dmacdar3;
+ u8 res28[4];
+ u32 pci_dmasar3;
+ u8 res29[4];
+ u32 pci_dmadar3;
+ u8 res30[4];
+ u32 pci_dmabcr3;
+ u32 pci_dmandar3;
+ u8 res31[344];
+ u32 pci_potar0;
+ u8 res32[4];
+ u32 pci_pobar0;
+ u8 res33[4];
+ u32 pci_pocmr0;
+ u8 res34[4];
+ u32 pci_potar1;
+ u8 res35[4];
+ u32 pci_pobar1;
+ u8 res36[4];
+ u32 pci_pocmr1;
+ u8 res37[4];
+ u32 pci_potar2;
+ u8 res38[4];
+ u32 pci_pobar2;
+ u8 res39[4];
+ u32 pci_pocmr2;
+ u8 res40[50];
+ u32 pci_ptcr;
+ u32 pci_gpcr;
+ u32 pci_gcr;
+ u32 pci_esr;
+ u32 pci_emr;
+ u32 pci_ecr;
+ u32 pci_eacr;
+ u8 res41[4];
+ u32 pci_edcr;
+ u8 res42[4];
+ u32 pci_eccr;
+ u8 res43[44];
+ u32 pci_pitar1;
+ u8 res44[4];
+ u32 pci_pibar1;
+ u8 res45[4];
+ u32 pci_picmr1;
+ u8 res46[4];
+ u32 pci_pitar0;
+ u8 res47[4];
+ u32 pci_pibar0;
+ u8 res48[4];
+ u32 pci_picmr0;
+ u8 res49[4];
+ u32 pci_cfg_addr;
+ u32 pci_cfg_data;
+ u32 pci_int_ack;
+ u8 res50[756];
+} pci_cpm2_t;
+
+/* Interrupt Controller.
+*/
+typedef struct interrupt_controller {
+ u16 ic_sicr;
+ u8 res1[2];
+ u32 ic_sivec;
+ u32 ic_sipnrh;
+ u32 ic_sipnrl;
+ u32 ic_siprr;
+ u32 ic_scprrh;
+ u32 ic_scprrl;
+ u32 ic_simrh;
+ u32 ic_simrl;
+ u32 ic_siexr;
+ u8 res2[88];
+} intctl_cpm2_t;
+
+/* Clocks and Reset.
+*/
+typedef struct clk_and_reset {
+ u32 car_sccr;
+ u8 res1[4];
+ u32 car_scmr;
+ u8 res2[4];
+ u32 car_rsr;
+ u32 car_rmr;
+ u8 res[104];
+} car_cpm2_t;
+
+/* Input/Output Port control/status registers.
+ * Names consistent with processor manual, although they are different
+ * from the original 8xx names.
+ */
+typedef struct io_port {
+ u32 iop_pdira;
+ u32 iop_ppara;
+ u32 iop_psora;
+ u32 iop_podra;
+ u32 iop_pdata;
+ u8 res1[12];
+ u32 iop_pdirb;
+ u32 iop_pparb;
+ u32 iop_psorb;
+ u32 iop_podrb;
+ u32 iop_pdatb;
+ u8 res2[12];
+ u32 iop_pdirc;
+ u32 iop_pparc;
+ u32 iop_psorc;
+ u32 iop_podrc;
+ u32 iop_pdatc;
+ u8 res3[12];
+ u32 iop_pdird;
+ u32 iop_ppard;
+ u32 iop_psord;
+ u32 iop_podrd;
+ u32 iop_pdatd;
+ u8 res4[12];
+} iop_cpm2_t;
+
+/* Communication Processor Module Timers
+*/
+typedef struct cpm_timers {
+ u8 cpmt_tgcr1;
+ u8 res1[3];
+ u8 cpmt_tgcr2;
+ u8 res2[11];
+ u16 cpmt_tmr1;
+ u16 cpmt_tmr2;
+ u16 cpmt_trr1;
+ u16 cpmt_trr2;
+ u16 cpmt_tcr1;
+ u16 cpmt_tcr2;
+ u16 cpmt_tcn1;
+ u16 cpmt_tcn2;
+ u16 cpmt_tmr3;
+ u16 cpmt_tmr4;
+ u16 cpmt_trr3;
+ u16 cpmt_trr4;
+ u16 cpmt_tcr3;
+ u16 cpmt_tcr4;
+ u16 cpmt_tcn3;
+ u16 cpmt_tcn4;
+ u16 cpmt_ter1;
+ u16 cpmt_ter2;
+ u16 cpmt_ter3;
+ u16 cpmt_ter4;
+ u8 res3[584];
+} cpmtimer_cpm2_t;
+
+/* DMA control/status registers.
+*/
+typedef struct sdma_csr {
+ u8 res0[24];
+ u8 sdma_sdsr;
+ u8 res1[3];
+ u8 sdma_sdmr;
+ u8 res2[3];
+ u8 sdma_idsr1;
+ u8 res3[3];
+ u8 sdma_idmr1;
+ u8 res4[3];
+ u8 sdma_idsr2;
+ u8 res5[3];
+ u8 sdma_idmr2;
+ u8 res6[3];
+ u8 sdma_idsr3;
+ u8 res7[3];
+ u8 sdma_idmr3;
+ u8 res8[3];
+ u8 sdma_idsr4;
+ u8 res9[3];
+ u8 sdma_idmr4;
+ u8 res10[707];
+} sdma_cpm2_t;
+
+/* Fast controllers
+*/
+typedef struct fcc {
+ u32 fcc_gfmr;
+ u32 fcc_fpsmr;
+ u16 fcc_ftodr;
+ u8 res1[2];
+ u16 fcc_fdsr;
+ u8 res2[2];
+ u16 fcc_fcce;
+ u8 res3[2];
+ u16 fcc_fccm;
+ u8 res4[2];
+ u8 fcc_fccs;
+ u8 res5[3];
+ u8 fcc_ftirr_phy[4];
+} fcc_t;
+
+/* Fast controllers continued
+ */
+typedef struct fcc_c {
+ u32 fcc_firper;
+ u32 fcc_firer;
+ u32 fcc_firsr_hi;
+ u32 fcc_firsr_lo;
+ u8 fcc_gfemr;
+ u8 res1[15];
+} fcc_c_t;
+
+/* TC Layer
+ */
+typedef struct tclayer {
+ u16 tc_tcmode;
+ u16 tc_cdsmr;
+ u16 tc_tcer;
+ u16 tc_rcc;
+ u16 tc_tcmr;
+ u16 tc_fcc;
+ u16 tc_ccc;
+ u16 tc_icc;
+ u16 tc_tcc;
+ u16 tc_ecc;
+ u8 res1[12];
+} tclayer_t;
+
+
+/* I2C
+*/
+typedef struct i2c {
+ u8 i2c_i2mod;
+ u8 res1[3];
+ u8 i2c_i2add;
+ u8 res2[3];
+ u8 i2c_i2brg;
+ u8 res3[3];
+ u8 i2c_i2com;
+ u8 res4[3];
+ u8 i2c_i2cer;
+ u8 res5[3];
+ u8 i2c_i2cmr;
+ u8 res6[331];
+} i2c_cpm2_t;
+
+typedef struct scc { /* Serial communication channels */
+ u32 scc_gsmrl;
+ u32 scc_gsmrh;
+ u16 scc_psmr;
+ u8 res1[2];
+ u16 scc_todr;
+ u16 scc_dsr;
+ u16 scc_scce;
+ u8 res2[2];
+ u16 scc_sccm;
+ u8 res3;
+ u8 scc_sccs;
+ u8 res4[8];
+} scc_t;
+
+typedef struct smc { /* Serial management channels */
+ u8 res1[2];
+ u16 smc_smcmr;
+ u8 res2[2];
+ u8 smc_smce;
+ u8 res3[3];
+ u8 smc_smcm;
+ u8 res4[5];
+} smc_t;
+
+/* Serial Peripheral Interface.
+*/
+typedef struct spi_ctrl {
+ u16 spi_spmode;
+ u8 res1[4];
+ u8 spi_spie;
+ u8 res2[3];
+ u8 spi_spim;
+ u8 res3[2];
+ u8 spi_spcom;
+ u8 res4[82];
+} spictl_cpm2_t;
+
+/* CPM Mux.
+*/
+typedef struct cpmux {
+ u8 cmx_si1cr;
+ u8 res1;
+ u8 cmx_si2cr;
+ u8 res2;
+ u32 cmx_fcr;
+ u32 cmx_scr;
+ u8 cmx_smr;
+ u8 res3;
+ u16 cmx_uar;
+ u8 res4[16];
+} cpmux_t;
+
+/* SIRAM control
+*/
+typedef struct siram {
+ u16 si_amr;
+ u16 si_bmr;
+ u16 si_cmr;
+ u16 si_dmr;
+ u8 si_gmr;
+ u8 res1;
+ u8 si_cmdr;
+ u8 res2;
+ u8 si_str;
+ u8 res3;
+ u16 si_rsr;
+} siramctl_t;
+
+typedef struct mcc {
+ u16 mcc_mcce;
+ u8 res1[2];
+ u16 mcc_mccm;
+ u8 res2[2];
+ u8 mcc_mccf;
+ u8 res3[7];
+} mcc_t;
+
+typedef struct comm_proc {
+ u32 cp_cpcr;
+ u32 cp_rccr;
+ u8 res1[14];
+ u16 cp_rter;
+ u8 res2[2];
+ u16 cp_rtmr;
+ u16 cp_rtscr;
+ u8 res3[2];
+ u32 cp_rtsr;
+ u8 res4[12];
+} cpm_cpm2_t;
+
+/* USB Controller.
+*/
+typedef struct usb_ctlr {
+ u8 usb_usmod;
+ u8 usb_usadr;
+ u8 usb_uscom;
+ u8 res1[1];
+ u16 usb_usep1;
+ u16 usb_usep2;
+ u16 usb_usep3;
+ u16 usb_usep4;
+ u8 res2[4];
+ u16 usb_usber;
+ u8 res3[2];
+ u16 usb_usbmr;
+ u8 usb_usbs;
+ u8 res4[7];
+} usb_cpm2_t;
+
+/* ...and the whole thing wrapped up....
+*/
+
+typedef struct immap {
+ /* Some references are into the unique and known dpram spaces,
+ * others are from the generic base.
+ */
+#define im_dprambase im_dpram1
+ u8 im_dpram1[16*1024];
+ u8 res1[16*1024];
+ u8 im_dpram2[4*1024];
+ u8 res2[8*1024];
+ u8 im_dpram3[4*1024];
+ u8 res3[16*1024];
+
+ sysconf_cpm2_t im_siu_conf; /* SIU Configuration */
+ memctl_cpm2_t im_memctl; /* Memory Controller */
+ sit_cpm2_t im_sit; /* System Integration Timers */
+ pci_cpm2_t im_pci; /* PCI Controller */
+ intctl_cpm2_t im_intctl; /* Interrupt Controller */
+ car_cpm2_t im_clkrst; /* Clocks and reset */
+ iop_cpm2_t im_ioport; /* IO Port control/status */
+ cpmtimer_cpm2_t im_cpmtimer; /* CPM timers */
+ sdma_cpm2_t im_sdma; /* SDMA control/status */
+
+ fcc_t im_fcc[3]; /* Three FCCs */
+ u8 res4z[32];
+ fcc_c_t im_fcc_c[3]; /* Continued FCCs */
+
+ u8 res4[32];
+
+ tclayer_t im_tclayer[8]; /* Eight TCLayers */
+ u16 tc_tcgsr;
+ u16 tc_tcger;
+
+ /* First set of baud rate generators.
+ */
+ u8 res[236];
+ u32 im_brgc5;
+ u32 im_brgc6;
+ u32 im_brgc7;
+ u32 im_brgc8;
+
+ u8 res5[608];
+
+ i2c_cpm2_t im_i2c; /* I2C control/status */
+ cpm_cpm2_t im_cpm; /* Communication processor */
+
+ /* Second set of baud rate generators.
+ */
+ u32 im_brgc1;
+ u32 im_brgc2;
+ u32 im_brgc3;
+ u32 im_brgc4;
+
+ scc_t im_scc[4]; /* Four SCCs */
+ smc_t im_smc[2]; /* Couple of SMCs */
+ spictl_cpm2_t im_spi; /* A SPI */
+ cpmux_t im_cpmux; /* CPM clock route mux */
+ siramctl_t im_siramctl1; /* First SI RAM Control */
+ mcc_t im_mcc1; /* First MCC */
+ siramctl_t im_siramctl2; /* Second SI RAM Control */
+ mcc_t im_mcc2; /* Second MCC */
+ usb_cpm2_t im_usb; /* USB Controller */
+
+ u8 res6[1153];
+
+ u16 im_si1txram[256];
+ u8 res7[512];
+ u16 im_si1rxram[256];
+ u8 res8[512];
+ u16 im_si2txram[256];
+ u8 res9[512];
+ u16 im_si2rxram[256];
+ u8 res10[512];
+ u8 res11[4096];
+} cpm2_map_t;
+
+extern cpm2_map_t *cpm2_immr;
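+
+/* Access sketch (assumes board code has pointed cpm2_immr at the
+ * internal memory map): drivers reach an on-chip block through the
+ * struct, e.g. the parallel I/O ports:
+ *
+ *	volatile iop_cpm2_t *io = &cpm2_immr->im_ioport;
+ *	u32 pins = io->iop_pdata;
+ */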
+
+#endif /* __IMMAP_CPM2__ */
+#endif /* __KERNEL__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc52xx.h
+ *
+ * Prototypes, etc. for the Freescale MPC52xx embedded cpu chips
+ * May need to be cleaned up as the port goes on ...
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Originally written by Dale Farnsworth <dfarnsworth@mvista.com>
+ * for the 2.4 kernel.
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 MontaVista Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#ifndef __ASM_MPC52xx_H__
+#define __ASM_MPC52xx_H__
+
+#ifndef __ASSEMBLY__
+#include <asm/ppcboot.h>
+#include <asm/types.h>
+
+struct pt_regs;
+struct ocp_def;
+#endif /* __ASSEMBLY__ */
+
+
+/* ======================================================================== */
+/* Main registers/struct addresses */
+/* ======================================================================== */
+/* These are PHYSICAL addresses! */
+/* TODO : There should be no static mapping, but it's not yet the case, so */
+/* we require a 1:1 mapping */
+
+#define MPC52xx_MBAR 0xf0000000 /* Phys address */
+#define MPC52xx_MBAR_SIZE 0x00010000
+#define MPC52xx_MBAR_VIRT 0xf0000000 /* Virt address */
+
+#define MPC52xx_MMAP_CTL (MPC52xx_MBAR + 0x0000)
+#define MPC52xx_SDRAM (MPC52xx_MBAR + 0x0100)
+#define MPC52xx_CDM (MPC52xx_MBAR + 0x0200)
+#define MPC52xx_SFTRST (MPC52xx_MBAR + 0x0220)
+#define MPC52xx_SFTRST_BIT 0x01000000
+#define MPC52xx_INTR (MPC52xx_MBAR + 0x0500)
+#define MPC52xx_GPTx(x) (MPC52xx_MBAR + 0x0600 + ((x)<<4))
+#define MPC52xx_RTC (MPC52xx_MBAR + 0x0800)
+#define MPC52xx_MSCAN1 (MPC52xx_MBAR + 0x0900)
+#define MPC52xx_MSCAN2 (MPC52xx_MBAR + 0x0980)
+#define MPC52xx_GPIO (MPC52xx_MBAR + 0x0b00)
+#define MPC52xx_GPIO_WKUP (MPC52xx_MBAR + 0x0c00)
+#define MPC52xx_PCI (MPC52xx_MBAR + 0x0d00)
+#define MPC52xx_USB_OHCI (MPC52xx_MBAR + 0x1000)
+#define MPC52xx_SDMA (MPC52xx_MBAR + 0x1200)
+#define MPC52xx_XLB (MPC52xx_MBAR + 0x1f00)
+#define MPC52xx_PSCx(x) (MPC52xx_MBAR + 0x2000 + ((x)<<9))
+#define MPC52xx_PSC1 (MPC52xx_MBAR + 0x2000)
+#define MPC52xx_PSC2 (MPC52xx_MBAR + 0x2200)
+#define MPC52xx_PSC3 (MPC52xx_MBAR + 0x2400)
+#define MPC52xx_PSC4 (MPC52xx_MBAR + 0x2600)
+#define MPC52xx_PSC5 (MPC52xx_MBAR + 0x2800)
+#define MPC52xx_PSC6 (MPC52xx_MBAR + 0x2C00)
+#define MPC52xx_FEC (MPC52xx_MBAR + 0x3000)
+#define MPC52xx_ATA (MPC52xx_MBAR + 0x3a00)
+#define MPC52xx_I2C1 (MPC52xx_MBAR + 0x3d00)
+#define MPC52xx_I2C_MICR (MPC52xx_MBAR + 0x3d20)
+#define MPC52xx_I2C2 (MPC52xx_MBAR + 0x3d40)
+
+/* SRAM used for SDMA */
+#define MPC52xx_SRAM (MPC52xx_MBAR + 0x8000)
+#define MPC52xx_SRAM_SIZE (16*1024)
+
+
+/* ======================================================================== */
+/* IRQ mapping */
+/* ======================================================================== */
+/* Be sure to look at mpc52xx_pic.h if, for whatever reason, you wish
+ * to change this.
+ */
+
+#define MPC52xx_CRIT_IRQ_NUM 4
+#define MPC52xx_MAIN_IRQ_NUM 17
+#define MPC52xx_SDMA_IRQ_NUM 17
+#define MPC52xx_PERP_IRQ_NUM 23
+
+#define MPC52xx_CRIT_IRQ_BASE 0
+#define MPC52xx_MAIN_IRQ_BASE (MPC52xx_CRIT_IRQ_BASE + MPC52xx_CRIT_IRQ_NUM)
+#define MPC52xx_SDMA_IRQ_BASE (MPC52xx_MAIN_IRQ_BASE + MPC52xx_MAIN_IRQ_NUM)
+#define MPC52xx_PERP_IRQ_BASE (MPC52xx_SDMA_IRQ_BASE + MPC52xx_SDMA_IRQ_NUM)
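+
+/* With the counts above, the bases work out to CRIT=0, MAIN=4,
+ * SDMA=21 and PERP=38, so e.g. MPC52xx_FEC_IRQ below is 38 + 5 = 43.
+ */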
+
+#define MPC52xx_IRQ0 (MPC52xx_CRIT_IRQ_BASE + 0)
+#define MPC52xx_SLICE_TIMER_0_IRQ (MPC52xx_CRIT_IRQ_BASE + 1)
+#define MPC52xx_HI_INT_IRQ (MPC52xx_CRIT_IRQ_BASE + 2)
+#define MPC52xx_CCS_IRQ (MPC52xx_CRIT_IRQ_BASE + 3)
+
+#define MPC52xx_IRQ1 (MPC52xx_MAIN_IRQ_BASE + 1)
+#define MPC52xx_IRQ2 (MPC52xx_MAIN_IRQ_BASE + 2)
+#define MPC52xx_IRQ3 (MPC52xx_MAIN_IRQ_BASE + 3)
+
+#define MPC52xx_SDMA_IRQ (MPC52xx_PERP_IRQ_BASE + 0)
+#define MPC52xx_PSC1_IRQ (MPC52xx_PERP_IRQ_BASE + 1)
+#define MPC52xx_PSC2_IRQ (MPC52xx_PERP_IRQ_BASE + 2)
+#define MPC52xx_PSC3_IRQ (MPC52xx_PERP_IRQ_BASE + 3)
+#define MPC52xx_PSC6_IRQ (MPC52xx_PERP_IRQ_BASE + 4)
+#define MPC52xx_IRDA_IRQ (MPC52xx_PERP_IRQ_BASE + 4)
+#define MPC52xx_FEC_IRQ (MPC52xx_PERP_IRQ_BASE + 5)
+#define MPC52xx_USB_IRQ (MPC52xx_PERP_IRQ_BASE + 6)
+#define MPC52xx_ATA_IRQ (MPC52xx_PERP_IRQ_BASE + 7)
+#define MPC52xx_PCI_CNTRL_IRQ (MPC52xx_PERP_IRQ_BASE + 8)
+#define MPC52xx_PCI_SCIRX_IRQ (MPC52xx_PERP_IRQ_BASE + 9)
+#define MPC52xx_PCI_SCITX_IRQ (MPC52xx_PERP_IRQ_BASE + 10)
+#define MPC52xx_PSC4_IRQ (MPC52xx_PERP_IRQ_BASE + 11)
+#define MPC52xx_PSC5_IRQ (MPC52xx_PERP_IRQ_BASE + 12)
+#define MPC52xx_SPI_MODF_IRQ (MPC52xx_PERP_IRQ_BASE + 13)
+#define MPC52xx_SPI_SPIF_IRQ (MPC52xx_PERP_IRQ_BASE + 14)
+#define MPC52xx_I2C1_IRQ (MPC52xx_PERP_IRQ_BASE + 15)
+#define MPC52xx_I2C2_IRQ (MPC52xx_PERP_IRQ_BASE + 16)
+#define MPC52xx_CAN1_IRQ (MPC52xx_PERP_IRQ_BASE + 17)
+#define MPC52xx_CAN2_IRQ (MPC52xx_PERP_IRQ_BASE + 18)
+#define MPC52xx_IR_RX_IRQ (MPC52xx_PERP_IRQ_BASE + 19)
+#define MPC52xx_IR_TX_IRQ (MPC52xx_PERP_IRQ_BASE + 20)
+#define MPC52xx_XLB_ARB_IRQ (MPC52xx_PERP_IRQ_BASE + 21)
+
+
+
+/* ======================================================================== */
+/* Structures mapping of some unit register set */
+/* ======================================================================== */
+
+#ifndef __ASSEMBLY__
+
+/* Memory Mapping Control */
+struct mpc52xx_mmap_ctl {
+ u32 mbar; /* MMAP_CTRL + 0x00 */
+
+ u32 cs0_start; /* MMAP_CTRL + 0x04 */
+ u32 cs0_stop; /* MMAP_CTRL + 0x08 */
+ u32 cs1_start; /* MMAP_CTRL + 0x0c */
+ u32 cs1_stop; /* MMAP_CTRL + 0x10 */
+ u32 cs2_start; /* MMAP_CTRL + 0x14 */
+ u32 cs2_stop; /* MMAP_CTRL + 0x18 */
+ u32 cs3_start; /* MMAP_CTRL + 0x1c */
+ u32 cs3_stop; /* MMAP_CTRL + 0x20 */
+ u32 cs4_start; /* MMAP_CTRL + 0x24 */
+ u32 cs4_stop; /* MMAP_CTRL + 0x28 */
+ u32 cs5_start; /* MMAP_CTRL + 0x2c */
+ u32 cs5_stop; /* MMAP_CTRL + 0x30 */
+
+ u32 sdram0; /* MMAP_CTRL + 0x34 */
+ u32 sdram1; /* MMAP_CTRL + 0x38 */
+
+ u32 reserved[4]; /* MMAP_CTRL + 0x3c .. 0x48 */
+
+ u32 boot_start; /* MMAP_CTRL + 0x4c */
+ u32 boot_stop; /* MMAP_CTRL + 0x50 */
+
+ u32 ipbi_ws_ctrl; /* MMAP_CTRL + 0x54 */
+
+ u32 cs6_start; /* MMAP_CTRL + 0x58 */
+ u32 cs6_stop; /* MMAP_CTRL + 0x5c */
+ u32 cs7_start; /* MMAP_CTRL + 0x60 */
+ u32 cs7_stop; /* MMAP_CTRL + 0x64 */
+};
+
+/* SDRAM control */
+struct mpc52xx_sdram {
+ u32 mode; /* SDRAM + 0x00 */
+ u32 ctrl; /* SDRAM + 0x04 */
+ u32 config1; /* SDRAM + 0x08 */
+ u32 config2; /* SDRAM + 0x0c */
+};
+
+/* Interrupt controller */
+struct mpc52xx_intr {
+ u32 per_mask; /* INTR + 0x00 */
+ u32 per_pri1; /* INTR + 0x04 */
+ u32 per_pri2; /* INTR + 0x08 */
+ u32 per_pri3; /* INTR + 0x0c */
+ u32 ctrl; /* INTR + 0x10 */
+ u32 main_mask; /* INTR + 0x14 */
+ u32 main_pri1; /* INTR + 0x18 */
+ u32 main_pri2; /* INTR + 0x1c */
+ u32 reserved1; /* INTR + 0x20 */
+ u32 enc_status; /* INTR + 0x24 */
+ u32 crit_status; /* INTR + 0x28 */
+ u32 main_status; /* INTR + 0x2c */
+ u32 per_status; /* INTR + 0x30 */
+ u32 reserved2; /* INTR + 0x34 */
+ u32 per_error; /* INTR + 0x38 */
+};
+
+/* SDMA */
+struct mpc52xx_sdma {
+ u32 taskBar; /* SDMA + 0x00 */
+ u32 currentPointer; /* SDMA + 0x04 */
+ u32 endPointer; /* SDMA + 0x08 */
+ u32 variablePointer;/* SDMA + 0x0c */
+
+ u8 IntVect1; /* SDMA + 0x10 */
+ u8 IntVect2; /* SDMA + 0x11 */
+ u16 PtdCntrl; /* SDMA + 0x12 */
+
+ u32 IntPend; /* SDMA + 0x14 */
+ u32 IntMask; /* SDMA + 0x18 */
+
+ u16 tcr[16]; /* SDMA + 0x1c .. 0x3a */
+
+ u8 ipr[32]; /* SDMA + 0x3c .. 5b */
+
+ u32 cReqSelect; /* SDMA + 0x5c */
+ u32 task_size0; /* SDMA + 0x60 */
+ u32 task_size1; /* SDMA + 0x64 */
+ u32 MDEDebug; /* SDMA + 0x68 */
+ u32 ADSDebug; /* SDMA + 0x6c */
+ u32 Value1; /* SDMA + 0x70 */
+ u32 Value2; /* SDMA + 0x74 */
+ u32 Control; /* SDMA + 0x78 */
+ u32 Status; /* SDMA + 0x7c */
+ u32 PTDDebug; /* SDMA + 0x80 */
+};
+
+/* GPT */
+struct mpc52xx_gpt {
+ u32 mode; /* GPTx + 0x00 */
+ u32 count; /* GPTx + 0x04 */
+ u32 pwm; /* GPTx + 0x08 */
+ u32 status; /* GPTx + 0x0c */
+};
+
+/* RTC */
+struct mpc52xx_rtc {
+ u32 time_set; /* RTC + 0x00 */
+ u32 date_set; /* RTC + 0x04 */
+ u32 stopwatch; /* RTC + 0x08 */
+ u32 int_enable; /* RTC + 0x0c */
+ u32 time; /* RTC + 0x10 */
+ u32 date; /* RTC + 0x14 */
+ u32 stopwatch_intr; /* RTC + 0x18 */
+ u32 bus_error; /* RTC + 0x1c */
+ u32 dividers; /* RTC + 0x20 */
+};
+
+/* GPIO */
+struct mpc52xx_gpio {
+ u32 port_config; /* GPIO + 0x00 */
+ u32 simple_gpioe; /* GPIO + 0x04 */
+ u32 simple_ode; /* GPIO + 0x08 */
+ u32 simple_ddr; /* GPIO + 0x0c */
+ u32 simple_dvo; /* GPIO + 0x10 */
+ u32 simple_ival; /* GPIO + 0x14 */
+ u8 outo_gpioe; /* GPIO + 0x18 */
+ u8 reserved1[3]; /* GPIO + 0x19 */
+ u8 outo_dvo; /* GPIO + 0x1c */
+ u8 reserved2[3]; /* GPIO + 0x1d */
+ u8 sint_gpioe; /* GPIO + 0x20 */
+ u8 reserved3[3]; /* GPIO + 0x21 */
+ u8 sint_ode; /* GPIO + 0x24 */
+ u8 reserved4[3]; /* GPIO + 0x25 */
+ u8 sint_ddr; /* GPIO + 0x28 */
+ u8 reserved5[3]; /* GPIO + 0x29 */
+ u8 sint_dvo; /* GPIO + 0x2c */
+ u8 reserved6[3]; /* GPIO + 0x2d */
+ u8 sint_inten; /* GPIO + 0x30 */
+ u8 reserved7[3]; /* GPIO + 0x31 */
+ u16 sint_itype; /* GPIO + 0x34 */
+ u16 reserved8; /* GPIO + 0x36 */
+ u8 gpio_control; /* GPIO + 0x38 */
+ u8 reserved9[3]; /* GPIO + 0x39 */
+ u8 sint_istat; /* GPIO + 0x3c */
+ u8 sint_ival; /* GPIO + 0x3d */
+ u8 bus_errs; /* GPIO + 0x3e */
+ u8 reserved10; /* GPIO + 0x3f */
+};
+
+#define MPC52xx_GPIO_PSC_CONFIG_UART_WITHOUT_CD 4
+#define MPC52xx_GPIO_PSC_CONFIG_UART_WITH_CD 5
+#define MPC52xx_GPIO_PCI_DIS (1<<15)
+
+/* GPIO with WakeUp*/
+struct mpc52xx_gpio_wkup {
+ u8 wkup_gpioe; /* GPIO_WKUP + 0x00 */
+ u8 reserved1[3]; /* GPIO_WKUP + 0x03 */
+ u8 wkup_ode; /* GPIO_WKUP + 0x04 */
+ u8 reserved2[3]; /* GPIO_WKUP + 0x05 */
+ u8 wkup_ddr; /* GPIO_WKUP + 0x08 */
+ u8 reserved3[3]; /* GPIO_WKUP + 0x09 */
+ u8 wkup_dvo; /* GPIO_WKUP + 0x0C */
+ u8 reserved4[3]; /* GPIO_WKUP + 0x0D */
+ u8 wkup_inten; /* GPIO_WKUP + 0x10 */
+ u8 reserved5[3]; /* GPIO_WKUP + 0x11 */
+ u8 wkup_iinten; /* GPIO_WKUP + 0x14 */
+ u8 reserved6[3]; /* GPIO_WKUP + 0x15 */
+ u16 wkup_itype; /* GPIO_WKUP + 0x18 */
+ u8 reserved7[2]; /* GPIO_WKUP + 0x1A */
+ u8 wkup_maste; /* GPIO_WKUP + 0x1C */
+ u8 reserved8[3]; /* GPIO_WKUP + 0x1D */
+ u8 wkup_ival; /* GPIO_WKUP + 0x20 */
+ u8 reserved9[3]; /* GPIO_WKUP + 0x21 */
+ u8 wkup_istat; /* GPIO_WKUP + 0x24 */
+ u8 reserved10[3]; /* GPIO_WKUP + 0x25 */
+};
+
+/* XLB Bus control */
+struct mpc52xx_xlb {
+ u8 reserved[0x40];
+ u32 config; /* XLB + 0x40 */
+ u32 version; /* XLB + 0x44 */
+ u32 status; /* XLB + 0x48 */
+ u32 int_enable; /* XLB + 0x4c */
+ u32 addr_capture; /* XLB + 0x50 */
+ u32 bus_sig_capture; /* XLB + 0x54 */
+ u32 addr_timeout; /* XLB + 0x58 */
+ u32 data_timeout; /* XLB + 0x5c */
+ u32 bus_act_timeout; /* XLB + 0x60 */
+ u32 master_pri_enable; /* XLB + 0x64 */
+ u32 master_priority; /* XLB + 0x68 */
+ u32 base_address; /* XLB + 0x6c */
+ u32 snoop_window; /* XLB + 0x70 */
+};
+
+#define MPC52xx_XLB_CFG_SNOOP (1 << 15)
+
+/* Clock Distribution control */
+struct mpc52xx_cdm {
+ u32 jtag_id; /* CDM + 0x00 reg0 read only */
+ u32 rstcfg; /* CDM + 0x04 reg1 read only */
+ u32 breadcrumb; /* CDM + 0x08 reg2 */
+
+ u8 mem_clk_sel; /* CDM + 0x0c reg3 byte0 */
+ u8 xlb_clk_sel; /* CDM + 0x0d reg3 byte1 read only */
+ u8 ipb_clk_sel; /* CDM + 0x0e reg3 byte2 */
+ u8 pci_clk_sel; /* CDM + 0x0f reg3 byte3 */
+
+ u8 ext_48mhz_en; /* CDM + 0x10 reg4 byte0 */
+ u8 fd_enable; /* CDM + 0x11 reg4 byte1 */
+ u16 fd_counters; /* CDM + 0x12 reg4 byte2,3 */
+
+ u32 clk_enables; /* CDM + 0x14 reg5 */
+
+ u8 osc_disable; /* CDM + 0x18 reg6 byte0 */
+ u8 reserved0[3]; /* CDM + 0x19 reg6 byte1,2,3 */
+
+ u8 ccs_sleep_enable; /* CDM + 0x1c reg7 byte0 */
+ u8 osc_sleep_enable; /* CDM + 0x1d reg7 byte1 */
+ u8 reserved1; /* CDM + 0x1e reg7 byte2 */
+ u8 ccs_qreq_test; /* CDM + 0x1f reg7 byte3 */
+
+ u8 soft_reset; /* CDM + 0x20 reg8 byte0 */
+ u8 no_ckstp; /* CDM + 0x21 reg8 byte1 */
+ u8 reserved2[2]; /* CDM + 0x22 reg8 byte2,3 */
+
+ u8 pll_lock; /* CDM + 0x24 reg9 byte0 */
+ u8 pll_looselock; /* CDM + 0x25 reg9 byte1 */
+ u8 pll_sm_lockwin; /* CDM + 0x26 reg9 byte2 */
+ u8 reserved3; /* CDM + 0x27 reg9 byte3 */
+
+ u16 reserved4; /* CDM + 0x28 reg10 byte0,1 */
+ u16 mclken_div_psc1; /* CDM + 0x2a reg10 byte2,3 */
+
+ u16 reserved5; /* CDM + 0x2c reg11 byte0,1 */
+ u16 mclken_div_psc2; /* CDM + 0x2e reg11 byte2,3 */
+
+ u16 reserved6; /* CDM + 0x30 reg12 byte0,1 */
+ u16 mclken_div_psc3; /* CDM + 0x32 reg12 byte2,3 */
+
+ u16 reserved7; /* CDM + 0x34 reg13 byte0,1 */
+ u16 mclken_div_psc6; /* CDM + 0x36 reg13 byte2,3 */
+};
+
+#endif /* __ASSEMBLY__ */
+
+
+/* ========================================================================= */
+/* Prototypes for MPC52xx syslib */
+/* ========================================================================= */
+
+#ifndef __ASSEMBLY__
+
+extern void mpc52xx_init_irq(void);
+extern int mpc52xx_get_irq(struct pt_regs *regs);
+
+extern unsigned long mpc52xx_find_end_of_memory(void);
+extern void mpc52xx_set_bat(void);
+extern void mpc52xx_map_io(void);
+extern void mpc52xx_restart(char *cmd);
+extern void mpc52xx_halt(void);
+extern void mpc52xx_power_off(void);
+extern void mpc52xx_progress(char *s, unsigned short hex);
+extern void mpc52xx_calibrate_decr(void);
+extern void mpc52xx_add_board_devices(struct ocp_def board_ocp[]);
+
+#endif /* __ASSEMBLY__ */
+
+
+/* ========================================================================= */
+/* Platform configuration */
+/* ========================================================================= */
+
+/* The U-Boot platform information struct */
+extern bd_t __res;
+
+/* Platform options */
+#if defined(CONFIG_LITE5200)
+#include <platforms/lite5200.h>
+#endif
+
+
+#endif /* __ASM_MPC52xx_H__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc52xx_psc.h
+ *
+ * Definitions of consts/structs to drive the Freescale MPC52xx OnChip
+ * PSCs. Theses are shared between multiple drivers since a PSC can be
+ * UART, AC97, IR, I2S, ... So this header is in asm-ppc.
+ *
+ *
+ * Maintainer : Sylvain Munaut <tnt@246tNt.com>
+ *
+ * Based/Extracted from some header of the 2.4 originally written by
+ * Dale Farnsworth <dfarnsworth@mvista.com>
+ *
+ * Copyright (C) 2004 Sylvain Munaut <tnt@246tNt.com>
+ * Copyright (C) 2003 MontaVista Software, Inc.
+ *
+ * This file is licensed under the terms of the GNU General Public License
+ * version 2. This program is licensed "as is" without any warranty of any
+ * kind, whether express or implied.
+ */
+
+#ifndef __ASM_MPC52xx_PSC_H__
+#define __ASM_MPC52xx_PSC_H__
+
+#include <asm/types.h>
+
+/* Max number of PSCs */
+#define MPC52xx_PSC_MAXNUM 6
+
+/* Programmable Serial Controller (PSC) status register bits */
+#define MPC52xx_PSC_SR_CDE 0x0080
+#define MPC52xx_PSC_SR_RXRDY 0x0100
+#define MPC52xx_PSC_SR_RXFULL 0x0200
+#define MPC52xx_PSC_SR_TXRDY 0x0400
+#define MPC52xx_PSC_SR_TXEMP 0x0800
+#define MPC52xx_PSC_SR_OE 0x1000
+#define MPC52xx_PSC_SR_PE 0x2000
+#define MPC52xx_PSC_SR_FE 0x4000
+#define MPC52xx_PSC_SR_RB 0x8000
+
+/* PSC Command values */
+#define MPC52xx_PSC_RX_ENABLE 0x0001
+#define MPC52xx_PSC_RX_DISABLE 0x0002
+#define MPC52xx_PSC_TX_ENABLE 0x0004
+#define MPC52xx_PSC_TX_DISABLE 0x0008
+#define MPC52xx_PSC_SEL_MODE_REG_1 0x0010
+#define MPC52xx_PSC_RST_RX 0x0020
+#define MPC52xx_PSC_RST_TX 0x0030
+#define MPC52xx_PSC_RST_ERR_STAT 0x0040
+#define MPC52xx_PSC_RST_BRK_CHG_INT 0x0050
+#define MPC52xx_PSC_START_BRK 0x0060
+#define MPC52xx_PSC_STOP_BRK 0x0070
+
+/* PSC TxRx FIFO status bits */
+#define MPC52xx_PSC_RXTX_FIFO_ERR 0x0040
+#define MPC52xx_PSC_RXTX_FIFO_UF 0x0020
+#define MPC52xx_PSC_RXTX_FIFO_OF 0x0010
+#define MPC52xx_PSC_RXTX_FIFO_FR 0x0008
+#define MPC52xx_PSC_RXTX_FIFO_FULL 0x0004
+#define MPC52xx_PSC_RXTX_FIFO_ALARM 0x0002
+#define MPC52xx_PSC_RXTX_FIFO_EMPTY 0x0001
+
+/* PSC interrupt mask bits */
+#define MPC52xx_PSC_IMR_TXRDY 0x0100
+#define MPC52xx_PSC_IMR_RXRDY 0x0200
+#define MPC52xx_PSC_IMR_DB 0x0400
+#define MPC52xx_PSC_IMR_IPC 0x8000
+
+/* PSC input port change bit */
+#define MPC52xx_PSC_CTS 0x01
+#define MPC52xx_PSC_DCD 0x02
+#define MPC52xx_PSC_D_CTS 0x10
+#define MPC52xx_PSC_D_DCD 0x20
+
+/* PSC mode fields */
+#define MPC52xx_PSC_MODE_5_BITS 0x00
+#define MPC52xx_PSC_MODE_6_BITS 0x01
+#define MPC52xx_PSC_MODE_7_BITS 0x02
+#define MPC52xx_PSC_MODE_8_BITS 0x03
+#define MPC52xx_PSC_MODE_BITS_MASK 0x03
+#define MPC52xx_PSC_MODE_PAREVEN 0x00
+#define MPC52xx_PSC_MODE_PARODD 0x04
+#define MPC52xx_PSC_MODE_PARFORCE 0x08
+#define MPC52xx_PSC_MODE_PARNONE 0x10
+#define MPC52xx_PSC_MODE_ERR 0x20
+#define MPC52xx_PSC_MODE_FFULL 0x40
+#define MPC52xx_PSC_MODE_RXRTS 0x80
+
+#define MPC52xx_PSC_MODE_ONE_STOP_5_BITS 0x00
+#define MPC52xx_PSC_MODE_ONE_STOP 0x07
+#define MPC52xx_PSC_MODE_TWO_STOP 0x0f
+
+#define MPC52xx_PSC_RFNUM_MASK 0x01ff
+
+
+/* Structure of the hardware registers */
+struct mpc52xx_psc {
+ u8 mode; /* PSC + 0x00 */
+ u8 reserved0[3];
+ union { /* PSC + 0x04 */
+ u16 status;
+ u16 clock_select;
+ } sr_csr;
+#define mpc52xx_psc_status sr_csr.status
+#define mpc52xx_psc_clock_select sr_csr.clock_select
+ u16 reserved1;
+ u8 command; /* PSC + 0x08 */
+ u8 reserved2[3];
+ union { /* PSC + 0x0c */
+ u8 buffer_8;
+ u16 buffer_16;
+ u32 buffer_32;
+ } buffer;
+#define mpc52xx_psc_buffer_8 buffer.buffer_8
+#define mpc52xx_psc_buffer_16 buffer.buffer_16
+#define mpc52xx_psc_buffer_32 buffer.buffer_32
+ union { /* PSC + 0x10 */
+ u8 ipcr;
+ u8 acr;
+ } ipcr_acr;
+#define mpc52xx_psc_ipcr ipcr_acr.ipcr
+#define mpc52xx_psc_acr ipcr_acr.acr
+ u8 reserved3[3];
+ union { /* PSC + 0x14 */
+ u16 isr;
+ u16 imr;
+ } isr_imr;
+#define mpc52xx_psc_isr isr_imr.isr
+#define mpc52xx_psc_imr isr_imr.imr
+ u16 reserved4;
+ u8 ctur; /* PSC + 0x18 */
+ u8 reserved5[3];
+ u8 ctlr; /* PSC + 0x1c */
+ u8 reserved6[3];
+ u16 ccr; /* PSC + 0x20 */
+ u8 reserved7[14];
+ u8 ivr; /* PSC + 0x30 */
+ u8 reserved8[3];
+ u8 ip; /* PSC + 0x34 */
+ u8 reserved9[3];
+ u8 op1; /* PSC + 0x38 */
+ u8 reserved10[3];
+ u8 op0; /* PSC + 0x3c */
+ u8 reserved11[3];
+ u32 sicr; /* PSC + 0x40 */
+ u8 ircr1; /* PSC + 0x44 */
+ u8 reserved13[3];
+ u8 ircr2; /* PSC + 0x48 */
+ u8 reserved14[3];
+ u8 irsdr; /* PSC + 0x4c */
+ u8 reserved15[3];
+ u8 irmdr; /* PSC + 0x50 */
+ u8 reserved16[3];
+ u8 irfdr; /* PSC + 0x54 */
+ u8 reserved17[3];
+ u16 rfnum; /* PSC + 0x58 */
+ u16 reserved18;
+ u16 tfnum; /* PSC + 0x5c */
+ u16 reserved19;
+ u32 rfdata; /* PSC + 0x60 */
+ u16 rfstat; /* PSC + 0x64 */
+ u16 reserved20;
+ u8 rfcntl; /* PSC + 0x68 */
+ u8 reserved21[5];
+ u16 rfalarm; /* PSC + 0x6e */
+ u16 reserved22;
+ u16 rfrptr; /* PSC + 0x72 */
+ u16 reserved23;
+ u16 rfwptr; /* PSC + 0x76 */
+ u16 reserved24;
+ u16 rflrfptr; /* PSC + 0x7a */
+ u16 reserved25;
+ u16 rflwfptr; /* PSC + 0x7e */
+ u32 tfdata; /* PSC + 0x80 */
+ u16 tfstat; /* PSC + 0x84 */
+ u16 reserved26;
+ u8 tfcntl; /* PSC + 0x88 */
+ u8 reserved27[5];
+ u16 tfalarm; /* PSC + 0x8e */
+ u16 reserved28;
+ u16 tfrptr; /* PSC + 0x92 */
+ u16 reserved29;
+ u16 tfwptr; /* PSC + 0x96 */
+ u16 reserved30;
+ u16 tflrfptr; /* PSC + 0x9a */
+ u16 reserved31;
+ u16 tflwfptr; /* PSC + 0x9e */
+};
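+
+/* Usage sketch (MPC52xx_PSC1 comes from mpc52xx.h and in_be16() from
+ * asm/io.h; shown for illustration only): the #define aliases above
+ * select the proper union member, e.g.
+ *
+ *	struct mpc52xx_psc *psc = ioremap(MPC52xx_PSC1, sizeof(*psc));
+ *	u16 status = in_be16(&psc->mpc52xx_psc_status);
+ */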
+
+
+#endif /* __ASM_MPC52xx_PSC_H__ */
--- /dev/null
+/*
+ * include/asm-ppc/mpc85xx.h
+ *
+ * MPC85xx definitions
+ *
+ * Maintainer: Kumar Gala <kumar.gala@freescale.com>
+ *
+ * Copyright 2004 Freescale Semiconductor, Inc
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifdef __KERNEL__
+#ifndef __ASM_MPC85xx_H__
+#define __ASM_MPC85xx_H__
+
+#include <linux/config.h>
+#include <asm/mmu.h>
+
+#ifdef CONFIG_85xx
+
+#ifdef CONFIG_MPC8540_ADS
+#include <platforms/85xx/mpc8540_ads.h>
+#endif
+#ifdef CONFIG_MPC8555_CDS
+#include <platforms/85xx/mpc8555_cds.h>
+#endif
+#ifdef CONFIG_MPC8560_ADS
+#include <platforms/85xx/mpc8560_ads.h>
+#endif
+#ifdef CONFIG_SBC8560
+#include <platforms/85xx/sbc8560.h>
+#endif
+
+#define _IO_BASE isa_io_base
+#define _ISA_MEM_BASE isa_mem_base
+#ifdef CONFIG_PCI
+#define PCI_DRAM_OFFSET pci_dram_offset
+#else
+#define PCI_DRAM_OFFSET 0
+#endif
+
+/*
+ * The "residual" board information structure the boot loader passes
+ * into the kernel.
+ */
+extern unsigned char __res[];
+
+/* Internal IRQs on MPC85xx OpenPIC */
+/* Not all of these exist on all MPC85xx implementations */
+
+#ifndef MPC85xx_OPENPIC_IRQ_OFFSET
+#define MPC85xx_OPENPIC_IRQ_OFFSET 64
+#endif
+
+/* The 32 internal sources */
+#define MPC85xx_IRQ_L2CACHE ( 0 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_ECM ( 1 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DDR ( 2 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_LBIU ( 3 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA0 ( 4 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA1 ( 5 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA2 ( 6 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DMA3 ( 7 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PCI1 ( 8 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PCI2 ( 9 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_ERROR ( 9 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_BELL (10 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_TX (11 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_RIO_RX (12 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_TX (13 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_RX (14 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC1_ERROR (18 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_TX (19 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_RX (20 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_TSEC2_ERROR (24 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_FEC (25 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_DUART (26 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_IIC1 (27 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_PERFMON (28 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_SEC2 (29 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_CPM (30 + MPC85xx_OPENPIC_IRQ_OFFSET)
+
+/* The 12 external interrupt lines */
+#define MPC85xx_IRQ_EXT0 (32 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT1 (33 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT2 (34 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT3 (35 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT4 (36 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT5 (37 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT6 (38 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT7 (39 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT8 (40 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT9 (41 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT10 (42 + MPC85xx_OPENPIC_IRQ_OFFSET)
+#define MPC85xx_IRQ_EXT11 (43 + MPC85xx_OPENPIC_IRQ_OFFSET)
+
+/* Offset from CCSRBAR */
+#define MPC85xx_CPM_OFFSET (0x80000)
+#define MPC85xx_CPM_SIZE (0x40000)
+#define MPC85xx_DMA_OFFSET (0x21000)
+#define MPC85xx_DMA_SIZE (0x01000)
+#define MPC85xx_ENET1_OFFSET (0x24000)
+#define MPC85xx_ENET1_SIZE (0x01000)
+#define MPC85xx_ENET2_OFFSET (0x25000)
+#define MPC85xx_ENET2_SIZE (0x01000)
+#define MPC85xx_ENET3_OFFSET (0x26000)
+#define MPC85xx_ENET3_SIZE (0x01000)
+#define MPC85xx_GUTS_OFFSET (0xe0000)
+#define MPC85xx_GUTS_SIZE (0x01000)
+#define MPC85xx_IIC1_OFFSET (0x03000)
+#define MPC85xx_IIC1_SIZE (0x01000)
+#define MPC85xx_OPENPIC_OFFSET (0x40000)
+#define MPC85xx_OPENPIC_SIZE (0x40000)
+#define MPC85xx_PCI1_OFFSET (0x08000)
+#define MPC85xx_PCI1_SIZE (0x01000)
+#define MPC85xx_PCI2_OFFSET (0x09000)
+#define MPC85xx_PCI2_SIZE (0x01000)
+#define MPC85xx_PERFMON_OFFSET (0xe1000)
+#define MPC85xx_PERFMON_SIZE (0x01000)
+#define MPC85xx_SEC2_OFFSET (0x30000)
+#define MPC85xx_SEC2_SIZE (0x10000)
+#define MPC85xx_UART0_OFFSET (0x04500)
+#define MPC85xx_UART0_SIZE (0x00100)
+#define MPC85xx_UART1_OFFSET (0x04600)
+#define MPC85xx_UART1_SIZE (0x00100)
+
+#define MPC85xx_CCSRBAR_SIZE (1024*1024)
+
+/* Let modules/drivers get at CCSRBAR */
+extern phys_addr_t get_ccsrbar(void);
+
+#ifdef MODULE
+#define CCSRBAR get_ccsrbar()
+#else
+#define CCSRBAR BOARD_CCSRBAR
+#endif
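+
+/*
+ * Illustrative sketch (not part of the original interface): a driver
+ * would typically combine the CCSRBAR accessor and the offset constants
+ * above to locate a block of memory-mapped registers, e.g.:
+ *
+ *	void __iomem *duart = ioremap(CCSRBAR + MPC85xx_UART0_OFFSET,
+ *				      MPC85xx_UART0_SIZE);
+ */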
+
+#endif /* CONFIG_85xx */
+#endif /* __ASM_MPC85xx_H__ */
+#endif /* __KERNEL__ */
--- /dev/null
+#ifndef _ASM_PPC64_CRASHDUMP_H
+#define _ASM_PPC64_CRASHDUMP_H
+
+/*
+ * linux/include/asm-ppc64/crashdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2004 IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/time.h>
+
+extern int page_is_ram (unsigned long);
+extern unsigned long next_ram_page (unsigned long);
+
+#define platform_timestamp(x) (x = get_tb())
+
+#define platform_fix_regs() \
+{ \
+ memcpy(&myregs, regs, sizeof(struct pt_regs)); \
+};
+
+#define platform_init_stack(stackptr) do { } while (0)
+#define platform_cleanup_stack(stackptr) do { } while (0)
+
+typedef asmlinkage void (*crashdump_func_t)(struct pt_regs *, void *);
+
+static inline void platform_start_crashdump(void *stackptr,
+ crashdump_func_t dumpfunc,
+ struct pt_regs *regs)
+{
+ dumpfunc(regs, NULL);
+}
+
+#define platform_freeze_cpu() \
+{ \
+ current->thread.ksp = __get_SP(); \
+ for (;;) local_irq_disable(); \
+}
+
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_PPC64_CRASHDUMP_H */
--- /dev/null
+#ifndef _ASM_PPC64_DISKDUMP_H_
+#define _ASM_PPC64_DISKDUMP_H_
+
+/*
+ * linux/include/asm-ppc64/diskdump.h
+ *
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ */
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <linux/elf.h>
+#include <asm/crashdump.h>
+
+static const int platform_supports_diskdump = 1;
+
+struct disk_dump_sub_header {
+ elf_gregset_t elf_regs;
+};
+
+#define size_of_sub_header() ((sizeof(struct disk_dump_sub_header) + PAGE_SIZE - 1) / DUMP_BLOCK_SIZE)
+
+#define write_sub_header() \
+({ \
+ int ret; \
+ struct disk_dump_sub_header *header; \
+ \
+ header = (struct disk_dump_sub_header *)scratch; \
+ ELF_CORE_COPY_REGS(header->elf_regs, (&myregs)); \
+ clear_page(scratch); \
+ if ((ret = write_blocks(dump_part, 2, scratch, 1)) >= 0)\
+ ret = 1; /* size of sub header in page */; \
+ ret; \
+})
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_PPC64_DISKDUMP_H_ */
--- /dev/null
+/*
+ * hvcserver.h
+ * Copyright (C) 2004 Ryan S Arnold, IBM Corporation
+ *
+ * PPC64 virtual I/O console server support.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef _PPC64_HVCSERVER_H
+#define _PPC64_HVCSERVER_H
+
+#include <linux/list.h>
+
+/* Converged Location Code length */
+#define HVCS_CLC_LENGTH 79
+
+/**
+ * hvcs_partner_info - an element in a list of partner info
+ * @node: list_head denoting this partner_info struct's position in the list of
+ * partner info.
+ * @unit_address: The partner unit address of this entry.
+ * @partition_ID: The partner partition ID of this entry.
+ * @location_code: The converged location code of this entry + 1 char for the
+ * null-term.
+ *
+ * This structure outlines the format that partner info is presented to a caller
+ * of the hvcs partner info fetching functions. These are strung together into
+ * a list using linux kernel lists.
+ */
+struct hvcs_partner_info {
+ struct list_head node;
+ uint32_t unit_address;
+ uint32_t partition_ID;
+ char location_code[HVCS_CLC_LENGTH + 1]; /* CLC + 1 null-term char */
+};
+
+extern int hvcs_free_partner_info(struct list_head *head);
+extern int hvcs_get_partner_info(uint32_t unit_address,
+ struct list_head *head, unsigned long *pi_buff);
+extern int hvcs_register_connection(uint32_t unit_address,
+ uint32_t p_partition_ID, uint32_t p_unit_address);
+extern int hvcs_free_connection(uint32_t unit_address);
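+
+/*
+ * Illustrative sketch of driving the interface above (pi_buff is a
+ * hypothetical page-sized buffer handed to the fetch routine; error
+ * handling trimmed):
+ *
+ *	struct hvcs_partner_info *pi;
+ *	struct list_head head;
+ *
+ *	INIT_LIST_HEAD(&head);
+ *	if (!hvcs_get_partner_info(unit_address, &head, pi_buff))
+ *		list_for_each_entry(pi, &head, node)
+ *			printk("%s\n", pi->location_code);
+ *	hvcs_free_partner_info(&head);
+ */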
+
+#endif /* _PPC64_HVCSERVER_H */
--- /dev/null
+#ifndef _ASM_PPC64_NETDUMP_H_
+#define _ASM_PPC64_NETDUMP_H_
+
+/*
+ * linux/include/asm-ppc64/netdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ * Copyright (C) 2004 IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/crashdump.h>
+
+static const int platform_supports_netdump = 1;
+
+#define platform_page_is_ram(x) (page_is_ram(x))
+#define platform_machine_type() (EM_PPC64)
+
+static inline unsigned char platform_effective_version(req_t *req)
+{
+ if (req->from > 0)
+ return min_t(unsigned char, req->from, NETDUMP_VERSION_MAX);
+ else
+ return 0;
+}
+
+#define platform_max_pfn() (num_physpages)
+
+static inline u32 platform_next_available(unsigned long pfn)
+{
+ unsigned long pgnum = next_ram_page(pfn);
+
+ if (pgnum < platform_max_pfn()) {
+ return (u32)pgnum;
+ }
+ return 0;
+}
+
+static inline void platform_jiffy_cycles(unsigned long long *jcp)
+{
+ unsigned long long t0, t1;
+
+ platform_timestamp(t0);
+ netdump_mdelay(1);
+ platform_timestamp(t1);
+ if (t1 > t0)
+ *jcp = t1 - t0;
+}
+
+static inline unsigned int platform_get_regs(char *tmp, struct pt_regs *myregs)
+{
+ elf_gregset_t elf_regs;
+ char *tmp2;
+
+ tmp2 = tmp + sprintf(tmp, "Sending register info.\n");
+ ELF_CORE_COPY_REGS(elf_regs, myregs);
+ memcpy(tmp2, &elf_regs, sizeof(elf_regs));
+
+ return(strlen(tmp) + sizeof(elf_regs));
+}
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_PPC64_NETDUMP_H_ */
--- /dev/null
+#ifndef _ASM_S390_DISKDUMP_H_
+#define _ASM_S390_DISKDUMP_H_
+
+#include <asm-generic/diskdump.h>
+
+#endif /* _ASM_S390_DISKDUMP_H_ */
--- /dev/null
+#ifndef __ASM_ADC_H
+#define __ASM_ADC_H
+#ifdef __KERNEL__
+/*
+ * Copyright (C) 2004 Andriy Skulysh
+ */
+
+#include <asm/cpu/adc.h>
+
+int adc_single(unsigned int channel);
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_ADC_H */
--- /dev/null
+#ifndef __ASM_SH64_CACHEFLUSH_H
+#define __ASM_SH64_CACHEFLUSH_H
+
+#ifndef __ASSEMBLY__
+
+#include <asm/page.h>
+
+struct vm_area_struct;
+struct page;
+struct mm_struct;
+
+extern void flush_cache_all(void);
+extern void flush_cache_mm(struct mm_struct *mm);
+extern void flush_cache_sigtramp(unsigned long start, unsigned long end);
+extern void flush_cache_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end);
+extern void flush_cache_page(struct vm_area_struct *vma, unsigned long addr);
+extern void flush_dcache_page(struct page *pg);
+extern void flush_icache_range(unsigned long start, unsigned long end);
+extern void flush_icache_user_range(struct vm_area_struct *vma,
+ struct page *page, unsigned long addr,
+ int len);
+
+#define flush_dcache_mmap_lock(mapping) do { } while (0)
+#define flush_dcache_mmap_unlock(mapping) do { } while (0)
+
+#define flush_cache_vmap(start, end) flush_cache_all()
+#define flush_cache_vunmap(start, end) flush_cache_all()
+
+#define flush_icache_page(vma, page) do { } while (0)
+
+#define copy_to_user_page(vma, page, vaddr, dst, src, len) \
+ do { \
+ flush_cache_page(vma, vaddr); \
+ memcpy(dst, src, len); \
+ flush_icache_user_range(vma, page, vaddr, len); \
+ } while (0)
+
+#define copy_from_user_page(vma, page, vaddr, dst, src, len) \
+ do { \
+ flush_cache_page(vma, vaddr); \
+ memcpy(dst, src, len); \
+ } while (0)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_SH64_CACHEFLUSH_H */
+
--- /dev/null
+#ifndef __ASM_SH_DMA_MAPPING_H
+#define __ASM_SH_DMA_MAPPING_H
+
+#include <linux/config.h>
+#include <linux/mm.h>
+#include <asm/scatterlist.h>
+#include <asm/io.h>
+
+struct pci_dev;
+extern void *consistent_alloc(struct pci_dev *hwdev, size_t size,
+ dma_addr_t *dma_handle);
+extern void consistent_free(struct pci_dev *hwdev, size_t size,
+ void *vaddr, dma_addr_t dma_handle);
+
+#define dma_supported(dev, mask) (1)
+
+static inline int dma_set_mask(struct device *dev, u64 mask)
+{
+ if (!dev->dma_mask || !dma_supported(dev, mask))
+ return -EIO;
+
+ *dev->dma_mask = mask;
+
+ return 0;
+}
+
+static inline void *dma_alloc_coherent(struct device *dev, size_t size,
+ dma_addr_t *dma_handle, int flag)
+{
+ return consistent_alloc(NULL, size, dma_handle);
+}
+
+static inline void dma_free_coherent(struct device *dev, size_t size,
+ void *vaddr, dma_addr_t dma_handle)
+{
+ consistent_free(NULL, size, vaddr, dma_handle);
+}
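+
+/*
+ * Illustrative use of the coherent pair above (the device and size are
+ * hypothetical; error handling trimmed):
+ *
+ *	dma_addr_t bus;
+ *	void *cpu = dma_alloc_coherent(dev, PAGE_SIZE, &bus, GFP_KERNEL);
+ *
+ *	if (cpu) {
+ *		... hand 'bus' to the device, touch the buffer via 'cpu' ...
+ *		dma_free_coherent(dev, PAGE_SIZE, cpu, bus);
+ *	}
+ */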
+
+static inline void dma_cache_sync(void *vaddr, size_t size,
+ enum dma_data_direction dir)
+{
+ dma_cache_wback_inv((unsigned long)vaddr, size);
+}
+
+static inline dma_addr_t dma_map_single(struct device *dev,
+ void *ptr, size_t size,
+ enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return virt_to_bus(ptr);
+#endif
+ dma_cache_sync(ptr, size, dir);
+
+ return virt_to_bus(ptr);
+}
+
+#define dma_unmap_single(dev, addr, size, dir) do { } while (0)
+
+static inline int dma_map_sg(struct device *dev, struct scatterlist *sg,
+ int nents, enum dma_data_direction dir)
+{
+ int i;
+
+ for (i = 0; i < nents; i++) {
+#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ dma_cache_sync(page_address(sg[i].page) + sg[i].offset,
+ sg[i].length, dir);
+#endif
+ sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+ }
+
+ return nents;
+}
+
+#define dma_unmap_sg(dev, sg, nents, dir) do { } while (0)
+
+static inline dma_addr_t dma_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir)
+{
+ return dma_map_single(dev, page_address(page) + offset, size, dir);
+}
+
+static inline void dma_unmap_page(struct device *dev, dma_addr_t dma_address,
+ size_t size, enum dma_data_direction dir)
+{
+ dma_unmap_single(dev, dma_address, size, dir);
+}
+
+static inline void dma_sync_single(struct device *dev, dma_addr_t dma_handle,
+ size_t size, enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return;
+#endif
+ dma_cache_sync(bus_to_virt(dma_handle), size, dir);
+}
+
+static inline void dma_sync_single_range(struct device *dev,
+ dma_addr_t dma_handle,
+ unsigned long offset, size_t size,
+ enum dma_data_direction dir)
+{
+#if defined(CONFIG_PCI) && !defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ if (dev->bus == &pci_bus_type)
+ return;
+#endif
+ dma_cache_sync(bus_to_virt(dma_handle) + offset, size, dir);
+}
+
+static inline void dma_sync_sg(struct device *dev, struct scatterlist *sg,
+ int nelems, enum dma_data_direction dir)
+{
+ int i;
+
+ for (i = 0; i < nelems; i++) {
+#if !defined(CONFIG_PCI) || defined(CONFIG_SH_PCIDMA_NONCOHERENT)
+ dma_cache_sync(page_address(sg[i].page) + sg[i].offset,
+ sg[i].length, dir);
+#endif
+ sg[i].dma_address = page_to_phys(sg[i].page) + sg[i].offset;
+ }
+}
+
+static inline void dma_sync_single_for_cpu(struct device *dev,
+ dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_single")));
+
+static inline void dma_sync_single_for_device(struct device *dev,
+ dma_addr_t dma_handle, size_t size,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_single")));
+
+static inline void dma_sync_sg_for_cpu(struct device *dev,
+ struct scatterlist *sg, int nelems,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_sg")));
+
+static inline void dma_sync_sg_for_device(struct device *dev,
+ struct scatterlist *sg, int nelems,
+ enum dma_data_direction dir)
+ __attribute__ ((alias("dma_sync_sg")));
+
+static inline int dma_get_cache_alignment(void)
+{
+ /*

+	 * Each processor family defines its own L1_CACHE_SHIFT, and
+	 * L1_CACHE_BYTES is derived from it, so this is always safe.
+ */
+ return L1_CACHE_BYTES;
+}
+
+static inline int dma_mapping_error(dma_addr_t dma_addr)
+{
+ return dma_addr == 0;
+}
+
+#endif /* __ASM_SH_DMA_MAPPING_H */
+
--- /dev/null
+#ifndef __ASM_SH64_IO_H
+#define __ASM_SH64_IO_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/io.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003 Paul Mundt
+ *
+ */
+
+/*
+ * Convention:
+ * read{b,w,l}/write{b,w,l} are for PCI,
+ * while in{b,w,l}/out{b,w,l} are for ISA
+ * These may (will) be platform specific functions.
+ *
+ * In addition, we have
+ *   ctrl_in{b,w,l}/ctrl_out{b,w,l} for SuperH specific I/O,
+ *   which are processor specific. The address should be the result of
+ *   onchip_remap().
+ */
+
+#include <asm/cache.h>
+#include <asm/system.h>
+#include <asm/page.h>
+
+#define virt_to_bus virt_to_phys
+#define bus_to_virt phys_to_virt
+#define page_to_bus page_to_phys
+
+/*
+ * Nothing overly special here; instead of doing the same thing
+ * over and over again, we just define a set of sh64_in/out functions
+ * with an implicit size. The traditional read{b,w,l}/write{b,w,l}
+ * mess is wrapped to this, as are the SH-specific ctrl_in/out routines.
+ */
+static inline unsigned char sh64_in8(unsigned long addr)
+{
+ return *(volatile unsigned char *)addr;
+}
+
+static inline unsigned short sh64_in16(unsigned long addr)
+{
+ return *(volatile unsigned short *)addr;
+}
+
+static inline unsigned long sh64_in32(unsigned long addr)
+{
+ return *(volatile unsigned long *)addr;
+}
+
+static inline unsigned long long sh64_in64(unsigned long addr)
+{
+ return *(volatile unsigned long long *)addr;
+}
+
+static inline void sh64_out8(unsigned char b, unsigned long addr)
+{
+ *(volatile unsigned char *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out16(unsigned short b, unsigned long addr)
+{
+ *(volatile unsigned short *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out32(unsigned long b, unsigned long addr)
+{
+ *(volatile unsigned long *)addr = b;
+ wmb();
+}
+
+static inline void sh64_out64(unsigned long long b, unsigned long addr)
+{
+ *(volatile unsigned long long *)addr = b;
+ wmb();
+}
+
+#define readb(addr) sh64_in8(addr)
+#define readw(addr) sh64_in16(addr)
+#define readl(addr) sh64_in32(addr)
+#define readb_relaxed(addr) sh64_in8(addr)
+#define readw_relaxed(addr) sh64_in16(addr)
+#define readl_relaxed(addr) sh64_in32(addr)
+
+#define writeb(b, addr) sh64_out8(b, addr)
+#define writew(b, addr) sh64_out16(b, addr)
+#define writel(b, addr) sh64_out32(b, addr)
+
+#define ctrl_inb(addr) sh64_in8(addr)
+#define ctrl_inw(addr) sh64_in16(addr)
+#define ctrl_inl(addr) sh64_in32(addr)
+
+#define ctrl_outb(b, addr) sh64_out8(b, addr)
+#define ctrl_outw(b, addr) sh64_out16(b, addr)
+#define ctrl_outl(b, addr) sh64_out32(b, addr)
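+
+/*
+ * Illustrative sketch (the register names are invented for the example):
+ * a ctrl_ access is normally performed on an onchip_remap()ed block:
+ *
+ *	unsigned long base = onchip_remap(DEMO_PHYS_BASE, 0x100, "demo");
+ *	unsigned long v = ctrl_inl(base + DEMO_CTRL_REG);
+ *
+ *	ctrl_outl(v | 1, base + DEMO_CTRL_REG);
+ */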
+
+unsigned long inb(unsigned long port);
+unsigned long inw(unsigned long port);
+unsigned long inl(unsigned long port);
+void outb(unsigned long value, unsigned long port);
+void outw(unsigned long value, unsigned long port);
+void outl(unsigned long value, unsigned long port);
+
+#define mmiowb()
+
+#ifdef __KERNEL__
+
+#ifdef CONFIG_SH_CAYMAN
+extern unsigned long smsc_superio_virt;
+#endif
+#ifdef CONFIG_PCI
+extern unsigned long pciio_virt;
+#endif
+
+#define IO_SPACE_LIMIT 0xffffffff
+
+/*
+ * Change virtual addresses to physical addresses and vv.
+ * These are trivial on the 1:1 Linux/SuperH mapping
+ */
+extern __inline__ unsigned long virt_to_phys(volatile void * address)
+{
+ return __pa(address);
+}
+
+extern __inline__ void * phys_to_virt(unsigned long address)
+{
+ return __va(address);
+}
+
+extern void * __ioremap(unsigned long phys_addr, unsigned long size,
+ unsigned long flags);
+
+extern __inline__ void * ioremap(unsigned long phys_addr, unsigned long size)
+{
+ return __ioremap(phys_addr, size, 1);
+}
+
+extern __inline__ void * ioremap_nocache (unsigned long phys_addr, unsigned long size)
+{
+ return __ioremap(phys_addr, size, 0);
+}
+
+extern void iounmap(void *addr);
+
+unsigned long onchip_remap(unsigned long addr, unsigned long size, const char* name);
+extern void onchip_unmap(unsigned long vaddr);
+
+static __inline__ int check_signature(unsigned long io_addr,
+ const unsigned char *signature, int length)
+{
+ int retval = 0;
+ do {
+ if (readb(io_addr) != *signature)
+ goto out;
+ io_addr++;
+ signature++;
+ length--;
+ } while (length);
+ retval = 1;
+out:
+ return retval;
+}
+
+/*
+ * The caches on some architectures aren't dma-coherent and have need to
+ * handle this in software. There are three types of operations that
+ * can be applied to dma buffers.
+ *
+ * - dma_cache_wback_inv(start, size) makes caches and RAM coherent by
+ * writing the content of the caches back to memory, if necessary.
+ * The function also invalidates the affected part of the caches as
+ * necessary before DMA transfers from outside to memory.
+ * - dma_cache_inv(start, size) invalidates the affected parts of the
+ * caches. Dirty lines of the caches may be written back or simply
+ * be discarded. This operation is necessary before dma operations
+ * to the memory.
+ * - dma_cache_wback(start, size) writes back any dirty lines but does
+ * not invalidate the cache. This can be used before DMA reads from
+ *   memory.
+ */
+
+static __inline__ void dma_cache_wback_inv (unsigned long start, unsigned long size)
+{
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbp %0, 0" : : "r" (s));
+}
+
+static __inline__ void dma_cache_inv (unsigned long start, unsigned long size)
+{
+	/*
+	 * Note that the caller has to be careful with overzealous
+	 * invalidation should there be partial cache lines at the
+	 * extremities of the specified range.
+	 */
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbi %0, 0" : : "r" (s));
+}
+
+static __inline__ void dma_cache_wback (unsigned long start, unsigned long size)
+{
+ unsigned long s = start & L1_CACHE_ALIGN_MASK;
+ unsigned long e = (start + size) & L1_CACHE_ALIGN_MASK;
+
+ for (; s <= e; s += L1_CACHE_BYTES)
+ asm volatile ("ocbwb %0, 0" : : "r" (s));
+}
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SH64_IO_H */
--- /dev/null
+/*
+ * linux/include/asm-shmedia/keyboard.h
+ *
+ * Copied from i386 version:
+ * Created 3 Nov 1996 by Geert Uytterhoeven
+ */
+
+/*
+ * This file contains the sh64 architecture specific keyboard definitions
+ */
+
+#ifndef __ASM_SH64_KEYBOARD_H
+#define __ASM_SH64_KEYBOARD_H
+
+#ifdef __KERNEL__
+
+#include <linux/kernel.h>
+#include <linux/ioport.h>
+#include <asm/io.h>
+
+#ifdef CONFIG_SH_CAYMAN
+#define KEYBOARD_IRQ (START_EXT_IRQS + 2) /* SMSC SuperIO IRQ 1 */
+#endif
+#define DISABLE_KBD_DURING_INTERRUPTS 0
+
+extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
+extern int pckbd_getkeycode(unsigned int scancode);
+extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
+ char raw_mode);
+extern char pckbd_unexpected_up(unsigned char keycode);
+extern void pckbd_leds(unsigned char leds);
+extern void pckbd_init_hw(void);
+extern unsigned char pckbd_sysrq_xlate[128];
+
+#define kbd_setkeycode pckbd_setkeycode
+#define kbd_getkeycode pckbd_getkeycode
+#define kbd_translate pckbd_translate
+#define kbd_unexpected_up pckbd_unexpected_up
+#define kbd_leds pckbd_leds
+#define kbd_init_hw pckbd_init_hw
+#define kbd_sysrq_xlate pckbd_sysrq_xlate
+
+#define SYSRQ_KEY 0x54
+
+/* resource allocation */
+#define kbd_request_region()
+#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, \
+ "keyboard", NULL)
+
+/* How to access the keyboard macros on this platform. */
+#define kbd_read_input() inb(KBD_DATA_REG)
+#define kbd_read_status() inb(KBD_STATUS_REG)
+#define kbd_write_output(val) outb(val, KBD_DATA_REG)
+#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
+
+/* Some stoneage hardware needs delays after some operations. */
+#define kbd_pause() do { } while(0)
+
+/*
+ * Machine specific bits for the PS/2 driver
+ */
+
+#ifdef CONFIG_SH_CAYMAN
+#define AUX_IRQ (START_EXT_IRQS + 6) /* SMSC SuperIO IRQ12 */
+#endif
+
+#define aux_request_irq(hand, dev_id) \
+ request_irq(AUX_IRQ, hand, SA_SHIRQ, "PS2 Mouse", dev_id)
+
+#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SH64_KEYBOARD_H */
+
--- /dev/null
+#ifndef __ASM_SH64_PGTABLE_H
+#define __ASM_SH64_PGTABLE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/pgtable.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ * Copyright (C) 2003, 2004 Paul Mundt
+ * Copyright (C) 2003, 2004 Richard Curnow
+ *
+ * This file contains the functions and defines necessary to modify and use
+ * the SuperH page table tree.
+ */
+
+#ifndef __ASSEMBLY__
+#include <asm/processor.h>
+#include <asm/page.h>
+#include <linux/threads.h>
+#include <linux/config.h>
+
+extern void paging_init(void);
+
+/* We provide our own get_unmapped_area to avoid cache synonym issues */
+#define HAVE_ARCH_UNMAPPED_AREA
+
+/*
+ * Basically we have the same two-level page tables as the i386 (the
+ * logical three-level Linux page table layout, folded).
+ */
+
+/*
+ * ZERO_PAGE is a global shared page that is always zero: used
+ * for zero-mapped memory areas etc..
+ */
+extern unsigned char empty_zero_page[PAGE_SIZE];
+#define ZERO_PAGE(vaddr) (mem_map + MAP_NR(empty_zero_page))
+
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * NEFF and NPHYS related defines.
+ * FIXME: These need to be model-dependent. For now this is OK: SH5-101 and
+ * SH5-103 implement 32 effective and 32 physical address bits. But future
+ * implementations may extend beyond this.
+ */
+#define NEFF 32
+#define NEFF_SIGN (1LL << (NEFF - 1))
+#define NEFF_MASK (-1LL << NEFF)
+
+#define NPHYS 32
+#define NPHYS_SIGN (1LL << (NPHYS - 1))
+#define NPHYS_MASK (-1LL << NPHYS)
+
+/* Typically 2-level is sufficient for up to 32 bits of virtual address
+   space; beyond that, 3-level would be appropriate. */
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+/* For 4k pages, this contains 512 entries, i.e. 9 bits worth of address. */
+#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+#define PTE_MAGNITUDE	3	/* log2(sizeof(unsigned long long)) */
+#define PTE_SHIFT PAGE_SHIFT
+#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+
+/* top level: PGD. */
+#define PGDIR_SHIFT (PTE_SHIFT + PTE_BITS)
+#define PGD_BITS (NEFF - PGDIR_SHIFT)
+#define PTRS_PER_PGD (1<<PGD_BITS)
+
+/* middle level: PMD. This doesn't do anything for the 2-level case. */
+#define PTRS_PER_PMD (1)
+
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+#define PMD_SHIFT PGDIR_SHIFT
+#define PMD_SIZE PGDIR_SIZE
+#define PMD_MASK PGDIR_MASK
+
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+/*
+ * three-level asymmetric paging structure: PGD is top level.
+ * The asymmetry comes from 32-bit pointers and 64-bit PTEs.
+ */
+/* bottom level: PTE. It's 9 bits = 512 pointers */
+#define PTRS_PER_PTE ((1<<PAGE_SHIFT)/sizeof(unsigned long long))
+#define PTE_MAGNITUDE	3	/* log2(sizeof(unsigned long long)) */
+#define PTE_SHIFT PAGE_SHIFT
+#define PTE_BITS (PAGE_SHIFT - PTE_MAGNITUDE)
+
+/* middle level: PMD. It's 10 bits = 1024 pointers */
+#define PTRS_PER_PMD ((1<<PAGE_SHIFT)/sizeof(unsigned long long *))
+#define PMD_MAGNITUDE	2	/* log2(sizeof(unsigned long long *)) */
+#define PMD_SHIFT (PTE_SHIFT + PTE_BITS)
+#define PMD_BITS (PAGE_SHIFT - PMD_MAGNITUDE)
+
+/* top level: PGD. It's 1 bit = 2 pointers */
+#define PGDIR_SHIFT (PMD_SHIFT + PMD_BITS)
+#define PGD_BITS (NEFF - PGDIR_SHIFT)
+#define PTRS_PER_PGD (1<<PGD_BITS)
+
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+#else
+#error "No defined number of page table levels"
+#endif
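+
+/*
+ * Worked example (illustrative): with 4KB pages, 2-level tables and
+ * NEFF = 32, PTE_BITS = 12 - 3 = 9, so a PTE page holds 512 entries;
+ * PGDIR_SHIFT = 12 + 9 = 21 and PTRS_PER_PGD = 1 << (32 - 21) = 2048
+ * top-level pointers.
+ */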
+
+/*
+ * Error outputs.
+ */
+#define pte_ERROR(e) \
+ printk("%s:%d: bad pte %016Lx.\n", __FILE__, __LINE__, pte_val(e))
+#define pmd_ERROR(e) \
+ printk("%s:%d: bad pmd %08lx.\n", __FILE__, __LINE__, pmd_val(e))
+#define pgd_ERROR(e) \
+ printk("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
+
+/*
+ * Table setting routines. Used within arch/mm only.
+ */
+#define set_pgd(pgdptr, pgdval) (*(pgdptr) = pgdval)
+#define set_pmd(pmdptr, pmdval) (*(pmdptr) = pmdval)
+
+static __inline__ void set_pte(pte_t *pteptr, pte_t pteval)
+{
+ unsigned long long x = ((unsigned long long) pteval.pte);
+ unsigned long long *xp = (unsigned long long *) pteptr;
+ /*
+ * Sign-extend based on NPHYS.
+ */
+ *(xp) = (x & NPHYS_SIGN) ? (x | NPHYS_MASK) : x;
+}
+
+static __inline__ void pmd_set(pmd_t *pmdp,pte_t *ptep)
+{
+ pmd_val(*pmdp) = (unsigned long) ptep;
+}
+
+/*
+ * PGD defines. Top level.
+ */
+
+/* To find an entry in a generic PGD. */
+#define pgd_index(address) (((address) >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
+#define __pgd_offset(address) pgd_index(address)
+#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))
+
+/* To find an entry in a kernel PGD. */
+#define pgd_offset_k(address) pgd_offset(&init_mm, address)
+
+/*
+ * PGD level access routines.
+ *
+ * Note1:
+ * There's no need to use physical addresses since the tree walk is
+ * performed entirely in software, up to the final PTE translation.
+ *
+ * Note 2:
+ * A PGD entry can be uninitialized (_PGD_UNUSED), generically bad,
+ * clear (_PGD_EMPTY), or present. When present, the lower 3 nibbles
+ * contain _KERNPG_TABLE, and since the entry is a kernel virtual
+ * pointer, bit 31 must also be 1. Assuming a clear value with bit 31
+ * set to 0 and the lower 3 nibbles set to 0xFFF (_PGD_EMPTY), any
+ * other value is a bad pgd that must be reported via printk().
+ *
+ */
+#define _PGD_EMPTY 0x0
+
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+static inline int pgd_none(pgd_t pgd) { return 0; }
+static inline int pgd_bad(pgd_t pgd) { return 0; }
+#define pgd_present(pgd) ((pgd_val(pgd) & _PAGE_PRESENT) ? 1 : 0)
+#define pgd_clear(xx) do { } while(0)
+
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+#define pgd_present(pgd_entry) (1)
+#define pgd_none(pgd_entry) (pgd_val((pgd_entry)) == _PGD_EMPTY)
+/* TODO: Think later about what a useful definition of 'bad' would be now. */
+#define pgd_bad(pgd_entry) (0)
+#define pgd_clear(pgd_entry_p) (set_pgd((pgd_entry_p), __pgd(_PGD_EMPTY)))
+
+#endif
+
+
+#define pgd_page(pgd_entry) ((unsigned long) (pgd_val(pgd_entry) & PAGE_MASK))
+
+/*
+ * PMD defines. Middle level.
+ */
+
+/* PGD to PMD dereferencing */
+#if defined(CONFIG_SH64_PGTABLE_2_LEVEL)
+static inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
+{
+ return (pmd_t *) dir;
+}
+#elif defined(CONFIG_SH64_PGTABLE_3_LEVEL)
+#define __pmd_offset(address) \
+ (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
+#define pmd_offset(dir, addr) \
+ ((pmd_t *) ((pgd_val(*(dir))) & PAGE_MASK) + __pmd_offset((addr)))
+#endif
+
+/*
+ * PMD level access routines. Same notes as above.
+ */
+#define _PMD_EMPTY 0x0
+/* Either the PMD is empty or present, it's not paged out */
+#define pmd_present(pmd_entry) (pmd_val(pmd_entry) & _PAGE_PRESENT)
+#define pmd_clear(pmd_entry_p) (set_pmd((pmd_entry_p), __pmd(_PMD_EMPTY)))
+#define pmd_none(pmd_entry) (pmd_val((pmd_entry)) == _PMD_EMPTY)
+#define pmd_bad(pmd_entry) ((pmd_val(pmd_entry) & (~PAGE_MASK & ~_PAGE_USER)) != _KERNPG_TABLE)
+
+#define pmd_page_kernel(pmd_entry) \
+ ((unsigned long) __va(pmd_val(pmd_entry) & PAGE_MASK))
+
+#define pmd_page(pmd) \
+ (virt_to_page(pmd_val(pmd)))
+
+/* PMD to PTE dereferencing */
+#define pte_index(address) \
+ ((address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
+
+#define pte_offset_kernel(dir, addr) \
+ ((pte_t *) ((pmd_val(*(dir))) & PAGE_MASK) + pte_index((addr)))
+
+#define pte_offset_map(dir,addr) pte_offset_kernel(dir, addr)
+#define pte_offset_map_nested(dir,addr) pte_offset_kernel(dir, addr)
+#define pte_unmap(pte) do { } while (0)
+#define pte_unmap_nested(pte) do { } while (0)
+
+/* Round it up! */
+#define USER_PTRS_PER_PGD ((TASK_SIZE+PGDIR_SIZE-1)/PGDIR_SIZE)
+#define FIRST_USER_PGD_NR 0
+
+#ifndef __ASSEMBLY__
+#define VMALLOC_END 0xff000000
+#define VMALLOC_START 0xf0000000
+#define VMALLOC_VMADDR(x) ((unsigned long)(x))
+
+#define IOBASE_VADDR 0xff000000
+#define IOBASE_END 0xffffffff
+
+/*
+ * PTEL coherent flags.
+ * See Chapter 17 ST50 CPU Core Volume 1, Architecture.
+ */
+/* The bits that are required in the SH-5 TLB are placed in the h/w-defined
+ positions, to avoid expensive bit shuffling on every refill. The remaining
+ bits are used for s/w purposes and masked out on each refill.
+
+ Note, the PTE slots are used to hold data of type swp_entry_t when a page is
+ swapped out. Only the _PAGE_PRESENT flag is significant when the page is
+ swapped out, and it must be placed so that it doesn't overlap either the
+ type or offset fields of swp_entry_t. For x86, offset is at [31:8] and type
+ at [6:1], with _PAGE_PRESENT at bit 0 for both pte_t and swp_entry_t. This
+ scheme doesn't map to SH-5 because bit [0] controls cacheability. So bit
+ [2] is used for _PAGE_PRESENT and the type field of swp_entry_t is split
+   into 2 pieces. That is handled by __swp_entry() and __swp_type() below. */
+#define _PAGE_WT 0x001 /* CB0: if cacheable, 1->write-thru, 0->write-back */
+#define _PAGE_DEVICE 0x001 /* CB0: if uncacheable, 1->device (i.e. no write-combining or reordering at bus level) */
+#define _PAGE_CACHABLE 0x002 /* CB1: uncachable/cachable */
+#define _PAGE_PRESENT 0x004 /* software: page referenced */
+#define _PAGE_FILE 0x004 /* software: only when !present */
+#define _PAGE_SIZE0 0x008 /* SZ0-bit : size of page */
+#define _PAGE_SIZE1 0x010 /* SZ1-bit : size of page */
+#define _PAGE_SHARED 0x020 /* software: reflects PTEH's SH */
+#define _PAGE_READ 0x040 /* PR0-bit : read access allowed */
+#define _PAGE_EXECUTE 0x080 /* PR1-bit : execute access allowed */
+#define _PAGE_WRITE 0x100 /* PR2-bit : write access allowed */
+#define _PAGE_USER 0x200 /* PR3-bit : user space access allowed */
+#define _PAGE_DIRTY 0x400 /* software: page accessed in write */
+#define _PAGE_ACCESSED 0x800 /* software: page referenced */
+
+/* Mask which drops software flags */
+#define _PAGE_FLAGS_HARDWARE_MASK 0xfffffffffffff3dbLL
+/* Flags default: 4KB, Read, Not write, Not execute, Not user */
+#define _PAGE_FLAGS_HARDWARE_DEFAULT 0x0000000000000040LL
+
+/*
+ * HugeTLB support
+ */
+#if defined(CONFIG_HUGETLB_PAGE_SIZE_64K)
+#define _PAGE_SZHUGE (_PAGE_SIZE0)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_1MB)
+#define _PAGE_SZHUGE (_PAGE_SIZE1)
+#elif defined(CONFIG_HUGETLB_PAGE_SIZE_512MB)
+#define _PAGE_SZHUGE (_PAGE_SIZE0 | _PAGE_SIZE1)
+#endif
+
+/*
+ * Default flags for a Kernel page.
+ * This is fundamentally also SHARED because the main use of this define
+ * (other than for PGD/PMD entries) is for the VMALLOC pool which is
+ * contextless.
+ *
+ * _PAGE_EXECUTE is required for modules
+ *
+ */
+#define _KERNPG_TABLE (_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ _PAGE_EXECUTE | \
+ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_DIRTY | \
+ _PAGE_SHARED)
+
+/* Default flags for a User page */
+#define _PAGE_TABLE (_KERNPG_TABLE | _PAGE_USER)
+
+#define _PAGE_CHG_MASK (PTE_MASK | _PAGE_ACCESSED | _PAGE_DIRTY)
+
+#define PAGE_NONE __pgprot(_PAGE_CACHABLE | _PAGE_ACCESSED)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+ _PAGE_CACHABLE | _PAGE_ACCESSED | _PAGE_USER | \
+ _PAGE_SHARED)
+/* We need to include _PAGE_EXECUTE in PAGE_COPY because it is the default
+ * protection mode for the stack. */
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+ _PAGE_ACCESSED | _PAGE_USER | _PAGE_EXECUTE)
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_CACHABLE | \
+ _PAGE_ACCESSED | _PAGE_USER)
+#define PAGE_KERNEL __pgprot(_KERNPG_TABLE)
+
+
+/*
+ * In ST50 we have full permissions (Read/Write/Execute/Shared).
+ * Just match'em all. These are for mmap(), therefore all at least
+ * User/Cachable/Present/Accessed. No point in making Fault on Write.
+ */
+#define __MMAP_COMMON (_PAGE_PRESENT | _PAGE_USER | _PAGE_CACHABLE | _PAGE_ACCESSED)
+ /* sxwr */
+#define __P000 __pgprot(__MMAP_COMMON)
+#define __P001 __pgprot(__MMAP_COMMON | _PAGE_READ)
+#define __P010 __pgprot(__MMAP_COMMON)
+#define __P011 __pgprot(__MMAP_COMMON | _PAGE_READ)
+#define __P100 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+#define __P101 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+#define __P110 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE)
+#define __P111 __pgprot(__MMAP_COMMON | _PAGE_EXECUTE | _PAGE_READ)
+
+#define __S000 __pgprot(__MMAP_COMMON | _PAGE_SHARED)
+#define __S001 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ)
+#define __S010 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_WRITE)
+#define __S011 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_READ | _PAGE_WRITE)
+#define __S100 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE)
+#define __S101 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ)
+#define __S110 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_WRITE)
+#define __S111 __pgprot(__MMAP_COMMON | _PAGE_SHARED | _PAGE_EXECUTE | _PAGE_READ | _PAGE_WRITE)
+
+/* Make it a device mapping for maximum safety (e.g. for mapping device
+ registers into user-space via /dev/map). */
+#define pgprot_noncached(x) __pgprot(((x).pgprot & ~(_PAGE_CACHABLE)) | _PAGE_DEVICE)
+#define pgprot_writecombine(prot) __pgprot(pgprot_val(prot) & ~_PAGE_CACHABLE)
+
+/*
+ * Handling allocation failures during page table setup.
+ */
+extern void __handle_bad_pmd_kernel(pmd_t * pmd);
+#define __handle_bad_pmd(x) __handle_bad_pmd_kernel(x)
+
+/*
+ * PTE level access routines.
+ *
+ * Note1:
+ * It's the tree walk leaf. This is physical address to be stored.
+ *
+ * Note 2:
+ * Regarding the choice of _PTE_EMPTY:
+
+ We must choose a bit pattern that cannot be valid, whether or not the page
+ is present. bit[2]==1 => present, bit[2]==0 => swapped out. If swapped
+ out, bits [31:8], [6:3], [1:0] are under swapper control, so only bit[7] is
+ left for us to select. If we force bit[7]==0 when swapped out, we could use
+ the combination bit[7,2]=2'b10 to indicate an empty PTE. Alternatively, if
+ we force bit[7]==1 when swapped out, we can use all zeroes to indicate
+ empty. This is convenient, because the page tables get cleared to zero
+ when they are allocated.
+
+ */
+#define _PTE_EMPTY 0x0
+#define pte_present(x) (pte_val(x) & _PAGE_PRESENT)
+#define pte_clear(xp) (set_pte(xp, __pte(_PTE_EMPTY)))
+#define pte_none(x) (pte_val(x) == _PTE_EMPTY)
+
+/*
+ * Some definitions to translate between mem_map, PTEs, and page
+ * addresses:
+ */
+
+/*
+ * Given a PTE, return the index of the mem_map[] entry corresponding
+ * to the page frame the PTE refers to. Get the absolute physical address, make
+ * a relative physical address and translate it to an index.
+ */
+#define pte_pagenr(x) (((unsigned long) (pte_val(x)) - \
+ __MEMORY_START) >> PAGE_SHIFT)
+
+/*
+ * Given a PTE, return the "struct page *".
+ */
+#define pte_page(x) (mem_map + pte_pagenr(x))
+
+/*
+ * Return number of (down rounded) MB corresponding to x pages.
+ */
+#define pages_to_mb(x) ((x) >> (20-PAGE_SHIFT))
+
+
+/*
+ * The following only have defined behavior if pte_present() is true.
+ */
+static inline int pte_read(pte_t pte) { return pte_val(pte) & _PAGE_READ; }
+static inline int pte_exec(pte_t pte) { return pte_val(pte) & _PAGE_EXECUTE; }
+static inline int pte_dirty(pte_t pte){ return pte_val(pte) & _PAGE_DIRTY; }
+static inline int pte_young(pte_t pte){ return pte_val(pte) & _PAGE_ACCESSED; }
+static inline int pte_file(pte_t pte) { return pte_val(pte) & _PAGE_FILE; }
+static inline int pte_write(pte_t pte){ return pte_val(pte) & _PAGE_WRITE; }
+
+extern inline pte_t pte_rdprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_READ)); return pte; }
+extern inline pte_t pte_wrprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_WRITE)); return pte; }
+extern inline pte_t pte_exprotect(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_EXECUTE)); return pte; }
+extern inline pte_t pte_mkclean(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_DIRTY)); return pte; }
+extern inline pte_t pte_mkold(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) & ~_PAGE_ACCESSED)); return pte; }
+
+extern inline pte_t pte_mkread(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_READ)); return pte; }
+extern inline pte_t pte_mkwrite(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_WRITE)); return pte; }
+extern inline pte_t pte_mkexec(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_EXECUTE)); return pte; }
+extern inline pte_t pte_mkdirty(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_DIRTY)); return pte; }
+extern inline pte_t pte_mkyoung(pte_t pte) { set_pte(&pte, __pte(pte_val(pte) | _PAGE_ACCESSED)); return pte; }
+
+/*
+ * Conversion functions: convert a page and protection to a page entry.
+ *
+ * extern pte_t mk_pte(struct page *page, pgprot_t pgprot)
+ */
+#define mk_pte(page,pgprot) \
+({ \
+ pte_t __pte; \
+ \
+ set_pte(&__pte, __pte((((page)-mem_map) << PAGE_SHIFT) | \
+ __MEMORY_START | pgprot_val((pgprot)))); \
+ __pte; \
+})
+
+/*
+ * This takes a (absolute) physical page address that is used
+ * by the remapping functions
+ */
+#define mk_pte_phys(physpage, pgprot) \
+({ pte_t __pte; set_pte(&__pte, __pte(physpage | pgprot_val(pgprot))); __pte; })
+
+extern inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{ set_pte(&pte, __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot))); return pte; }
+
+#define page_pte_prot(page, prot) mk_pte(page, prot)
+#define page_pte(page) page_pte_prot(page, __pgprot(0))
+
+typedef pte_t *pte_addr_t;
+
+extern void update_mmu_cache(struct vm_area_struct * vma,
+ unsigned long address, pte_t pte);
+
+/* Encode and decode a swap entry */
+#define __swp_type(x) (((x).val & 3) + (((x).val >> 1) & 0x3c))
+#define __swp_offset(x) ((x).val >> 8)
+#define __swp_entry(type, offset) ((swp_entry_t) { ((offset << 8) + ((type & 0x3c) << 1) + (type & 3)) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val(pte) })
+#define __swp_entry_to_pte(x) ((pte_t) { (x).val })
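+
+/*
+ * Worked example of the split-type encoding above (illustrative):
+ * for type 0x1b and offset 5, __swp_entry() yields
+ * (5 << 8) | ((0x1b & 0x3c) << 1) | (0x1b & 3) = 0x500 | 0x30 | 0x3 = 0x533;
+ * __swp_type() then recovers (0x533 & 3) + ((0x533 >> 1) & 0x3c)
+ * = 3 + 0x18 = 0x1b, and __swp_offset() gives 0x533 >> 8 = 5.
+ */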
+
+/* Encode and decode a nonlinear file mapping entry */
+#define PTE_FILE_MAX_BITS 29
+#define pte_to_pgoff(pte) (pte_val(pte))
+#define pgoff_to_pte(off) ((pte_t) { (off) | _PAGE_FILE })
+
+/* Needs to be defined here and not in linux/mm.h, as it is arch dependent */
+#define PageSkip(page) (0)
+#define kern_addr_valid(addr) (1)
+
+#define io_remap_page_range(vma, vaddr, paddr, size, prot) \
+ remap_pfn_range(vma, vaddr, (paddr) >> PAGE_SHIFT, size, prot)
+#endif /* !__ASSEMBLY__ */
+
+/*
+ * No page table caches to initialise
+ */
+#define pgtable_cache_init() do { } while (0)
+
+#define pte_pfn(x) (((unsigned long)((x).pte)) >> PAGE_SHIFT)
+#define pfn_pte(pfn, prot) __pte(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+#define pfn_pmd(pfn, prot) __pmd(((pfn) << PAGE_SHIFT) | pgprot_val(prot))
+
+extern pgd_t swapper_pg_dir[PTRS_PER_PGD];
+
+#include <asm-generic/pgtable.h>
+
+#endif /* __ASM_SH64_PGTABLE_H */
--- /dev/null
+#ifndef __ASM_SH64_PTRACE_H
+#define __ASM_SH64_PTRACE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/ptrace.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ */
+
+/*
+ * This struct defines the way the registers are stored on the
+ * kernel stack during a system call or other kernel entry.
+ */
+struct pt_regs {
+ unsigned long long pc;
+ unsigned long long sr;
+ unsigned long long syscall_nr;
+ unsigned long long regs[63];
+ unsigned long long tregs[8];
+ unsigned long long pad[2];
+};
+
+#ifdef __KERNEL__
+#define user_mode(regs) (((regs)->sr & 0x40000000)==0)
+#define instruction_pointer(regs) ((regs)->pc)
+#define profile_pc(regs) instruction_pointer(regs)
+extern void show_regs(struct pt_regs *);
+#endif
+
+#define PTRACE_O_TRACESYSGOOD 0x00000001
+
+#endif /* __ASM_SH64_PTRACE_H */
--- /dev/null
+#ifndef __ASM_SH64_SEMAPHORE_H
+#define __ASM_SH64_SEMAPHORE_H
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * include/asm-sh64/semaphore.h
+ *
+ * Copyright (C) 2000, 2001 Paolo Alberelli
+ *
+ * SMP- and interrupt-safe semaphores.
+ *
+ * (C) Copyright 1996 Linus Torvalds
+ *
+ * SuperH version by Niibe Yutaka
+ * (Currently no asm implementation but generic C code...)
+ *
+ */
+
+#include <linux/linkage.h>
+#include <linux/spinlock.h>
+#include <linux/wait.h>
+#include <linux/rwsem.h>
+
+#include <asm/system.h>
+#include <asm/atomic.h>
+
+struct semaphore {
+ atomic_t count;
+ int sleepers;
+ wait_queue_head_t wait;
+};
+
+#define __SEMAPHORE_INITIALIZER(name, n) \
+{ \
+ .count = ATOMIC_INIT(n), \
+ .sleepers = 0, \
+ .wait = __WAIT_QUEUE_HEAD_INITIALIZER((name).wait) \
+}
+
+#define __MUTEX_INITIALIZER(name) \
+ __SEMAPHORE_INITIALIZER(name,1)
+
+#define __DECLARE_SEMAPHORE_GENERIC(name,count) \
+ struct semaphore name = __SEMAPHORE_INITIALIZER(name,count)
+
+#define DECLARE_MUTEX(name) __DECLARE_SEMAPHORE_GENERIC(name,1)
+#define DECLARE_MUTEX_LOCKED(name) __DECLARE_SEMAPHORE_GENERIC(name,0)
+
+static inline void sema_init (struct semaphore *sem, int val)
+{
+/*
+ * *sem = (struct semaphore)__SEMAPHORE_INITIALIZER((*sem),val);
+ *
+ * I'd rather use the more flexible initialization above, but sadly
+ * GCC 2.7.2.3 emits a bogus warning. EGCS doesn't. Oh well.
+ */
+ atomic_set(&sem->count, val);
+ sem->sleepers = 0;
+ init_waitqueue_head(&sem->wait);
+}
+
+static inline void init_MUTEX (struct semaphore *sem)
+{
+ sema_init(sem, 1);
+}
+
+static inline void init_MUTEX_LOCKED (struct semaphore *sem)
+{
+ sema_init(sem, 0);
+}
+
+#if 0
+asmlinkage void __down_failed(void /* special register calling convention */);
+asmlinkage int __down_failed_interruptible(void /* params in registers */);
+asmlinkage int __down_failed_trylock(void /* params in registers */);
+asmlinkage void __up_wakeup(void /* special register calling convention */);
+#endif
+
+asmlinkage void __down(struct semaphore * sem);
+asmlinkage int __down_interruptible(struct semaphore * sem);
+asmlinkage int __down_trylock(struct semaphore * sem);
+asmlinkage void __up(struct semaphore * sem);
+
+extern spinlock_t semaphore_wake_lock;
+
+static inline void down(struct semaphore * sem)
+{
+ if (atomic_dec_return(&sem->count) < 0)
+ __down(sem);
+}
+
+static inline int down_interruptible(struct semaphore * sem)
+{
+ int ret = 0;
+
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_interruptible(sem);
+ return ret;
+}
+
+static inline int down_trylock(struct semaphore * sem)
+{
+ int ret = 0;
+
+ if (atomic_dec_return(&sem->count) < 0)
+ ret = __down_trylock(sem);
+ return ret;
+}
+
+/*
+ * Note! This is subtle. We jump to wake people up only if
+ * the semaphore was negative (== somebody was waiting on it).
+ */
+static inline void up(struct semaphore * sem)
+{
+ if (atomic_inc_return(&sem->count) <= 0)
+ __up(sem);
+}
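+
+/*
+ * Illustrative usage of the API above (demo_sem is hypothetical):
+ *
+ *	static DECLARE_MUTEX(demo_sem);
+ *
+ *	down(&demo_sem);	... may sleep ...
+ *	... critical section ...
+ *	up(&demo_sem);
+ */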
+
+#endif /* __ASM_SH64_SEMAPHORE_H */
--- /dev/null
+#ifndef _ASM_X86_64_CRASH_H
+#define _ASM_X86_64_CRASH_H
+
+/*
+ * linux/include/asm-x86_64/crash.h
+ *
+ * Copyright (c) 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifdef __KERNEL__
+
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <asm/mmzone.h>
+
+extern int page_is_ram(unsigned long);
+
+static inline void *
+map_virtual(u64 offset, struct page **pp)
+{
+ struct page *page;
+ unsigned long pfn;
+ void *vaddr;
+
+ pfn = (unsigned long)(offset >> PAGE_SHIFT);
+
+ if (!page_is_ram(pfn)) {
+ printk(KERN_INFO
+ "crash memory driver: !page_is_ram(pfn: %lx)\n", pfn);
+ return NULL;
+ }
+
+ if (!pfn_valid(pfn)) {
+ printk(KERN_INFO
+ "crash memory driver: invalid pfn: %lx )\n", pfn);
+ return NULL;
+ }
+
+ page = pfn_to_page(pfn);
+
+ vaddr = kmap(page);
+ if (!vaddr) {
+ printk(KERN_INFO
+ "crash memory driver: pfn: %lx kmap(page: %lx) failed\n",
+ pfn, (unsigned long)page);
+ return NULL;
+ }
+
+ *pp = page;
+ return (vaddr + (offset & (PAGE_SIZE-1)));
+}
+
+static inline void unmap_virtual(struct page *page)
+{
+ kunmap(page);
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_X86_64_CRASH_H */
--- /dev/null
+/*
+ * include/asm-x86_64/crashdump.h
+ *
+ * Copyright (C) Hitachi, Ltd. 2004
+ * Written by Satoshi Oshima (oshima@sdl.hitachi.co.jp)
+ *
+ * Derived from include/asm-i386/diskdump.h
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ *
+ */
+
+#ifndef _ASM_X86_64_CRASHDUMP_H
+#define _ASM_X86_64_CRASHDUMP_H
+
+#ifdef __KERNEL__
+
+#include <linux/elf.h>
+
+extern int page_is_ram(unsigned long);
+extern unsigned long next_ram_page(unsigned long);
+
+#define platform_fix_regs() \
+{ \
+ unsigned long rsp; \
+ unsigned short ss; \
+ rsp = (unsigned long) ((char *)regs + sizeof (struct pt_regs)); \
+ ss = __KERNEL_DS; \
+ if (regs->cs & 3) { \
+ rsp = regs->rsp; \
+ ss = regs->ss & 0xffff; \
+ } \
+ myregs = *regs; \
+ myregs.rsp = rsp; \
+ myregs.ss = (myregs.ss & (~0xffff)) | ss; \
+}
+
+#define platform_timestamp(x) rdtscll(x)
+
+#define platform_freeze_cpu() \
+{ \
+ for (;;) local_irq_disable(); \
+}
+
+static inline void platform_init_stack(void **stackptr)
+{
+	struct page *page;
+
+	page = alloc_page(GFP_KERNEL);
+	if (page) {
+		*stackptr = (void *)page_address(page);
+		memset(*stackptr, 0, PAGE_SIZE);
+	} else {
+		*stackptr = NULL;
+		printk(KERN_WARNING
+			"crashdump: unable to allocate separate stack\n");
+	}
+}
+
+#define platform_cleanup_stack(stackptr) \
+do { \
+ if (stackptr) \
+ free_page((unsigned long)stackptr); \
+} while (0)
+
+typedef asmlinkage void (*crashdump_func_t)(struct pt_regs *, void *);
+
+static inline void platform_start_crashdump(void *stackptr,
+ crashdump_func_t dumpfunc,
+ struct pt_regs *regs)
+{
+ static unsigned long old_rsp;
+ unsigned long new_rsp;
+
+ if (stackptr) {
+ asm volatile("movq %%rsp,%0" : "=r" (old_rsp));
+ new_rsp = (unsigned long)stackptr + PAGE_SIZE;
+ asm volatile("movq %0,%%rsp" :: "r" (new_rsp));
+ dumpfunc(regs, NULL);
+ asm volatile("movq %0,%%rsp" :: "r" (old_rsp));
+ } else
+ dumpfunc(regs, NULL);
+}
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_X86_64_CRASHDUMP_H */
--- /dev/null
+/*
+ * include/asm-x86_64/diskdump.h
+ *
+ * Copyright (C) Hitachi, Ltd. 2004
+ * Written by Satoshi Oshima (oshima@sdl.hitachi.co.jp)
+ *
+ * Derived from include/asm-i386/diskdump.h
+ * Copyright (c) 2004 FUJITSU LIMITED
+ * Copyright (c) 2003 Red Hat, Inc. All rights reserved.
+ *
+ */
+
+#ifndef _ASM_X86_64_DISKDUMP_H
+#define _ASM_X86_64_DISKDUMP_H
+
+#ifdef __KERNEL__
+
+#include <linux/elf.h>
+#include <asm/crashdump.h>
+
+static const int platform_supports_diskdump = 1;
+
+struct disk_dump_sub_header {
+ elf_gregset_t elf_regs;
+};
+
+#define size_of_sub_header() ((sizeof(struct disk_dump_sub_header) + PAGE_SIZE - 1) / DUMP_BLOCK_SIZE)
+
+#define write_sub_header() \
+({ \
+ int ret; \
+ \
+ ELF_CORE_COPY_REGS(dump_sub_header.elf_regs, (&myregs)); \
+ clear_page(scratch); \
+ memcpy(scratch, &dump_sub_header, sizeof(dump_sub_header)); \
+ \
+ if ((ret = write_blocks(dump_part, 2, scratch, 1)) >= 0) \
+ ret = 1; /* size of sub header in page */; \
+ ret; \
+})
+
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_X86_64_DISKDUMP_H */
--- /dev/null
+#ifndef _ASM_X86_64_NETDUMP_H_
+#define _ASM_X86_64_NETDUMP_H_
+
+/*
+ * linux/include/asm-x86_64/netdump.h
+ *
+ * Copyright (c) 2003, 2004 Red Hat, Inc. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifdef __KERNEL__
+
+#include <asm/crashdump.h>
+
+static const int platform_supports_netdump = 1;
+
+#define platform_machine_type() (EM_X86_64)
+
+#define platform_page_is_ram(x) (page_is_ram(x) && \
+ kern_addr_valid((unsigned long)pfn_to_kaddr(x)))
+
+static inline unsigned char platform_effective_version(req_t *req)
+{
+ if (req->from > 0)
+ return min_t(unsigned char, req->from, NETDUMP_VERSION_MAX);
+ else
+ return 0;
+}
+
+#define platform_max_pfn() (num_physpages)
+
+static inline u32 platform_next_available(unsigned long pfn)
+{
+ unsigned long pgnum = next_ram_page(pfn);
+
+ if (pgnum < platform_max_pfn()) {
+ return (u32)pgnum;
+ }
+ return 0;
+}
+
+static inline void platform_jiffy_cycles(unsigned long long *jcp)
+{
+ unsigned long long t0, t1;
+
+ platform_timestamp(t0);
+ netdump_mdelay(1);
+ platform_timestamp(t1);
+ if (t1 > t0)
+ *jcp = t1 - t0;
+}
+
+static inline unsigned int platform_get_regs(char *tmp, struct pt_regs *myregs)
+{
+ elf_gregset_t elf_regs;
+ char *tmp2;
+
+ tmp2 = tmp + sprintf(tmp, "Sending register info.\n");
+ ELF_CORE_COPY_REGS(elf_regs, myregs);
+ memcpy(tmp2, &elf_regs, sizeof(elf_regs));
+
+ return(strlen(tmp) + sizeof(elf_regs));
+}
+#endif /* __KERNEL__ */
+
+#endif /* _ASM_X86_64_NETDUMP_H */
--- /dev/null
+#ifndef _LINUX_CRC_CCITT_H
+#define _LINUX_CRC_CCITT_H
+#ifdef __KERNEL__
+
+#include <linux/types.h>
+
+extern u16 const crc_ccitt_table[256];
+
+extern u16 crc_ccitt(u16 crc, const u8 *buffer, size_t len);
+
+static inline u16 crc_ccitt_byte(u16 crc, const u8 c)
+{
+ return (crc >> 8) ^ crc_ccitt_table[(crc ^ c) & 0xff];
+}
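+
+/*
+ * Illustrative use: checksum a whole buffer with the customary 0xffff
+ * preset (check the protocol spec for the preset it actually requires):
+ *
+ *	u16 fcs = crc_ccitt(0xffff, data, len);
+ */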
+
+#endif /* __KERNEL__ */
+#endif /* _LINUX_CRC_CCITT_H */
--- /dev/null
+#ifndef _LINUX_DISKDUMP_H
+#define _LINUX_DISKDUMP_H
+
+/*
+ * linux/include/linux/diskdump.h
+ *
+ * Copyright (c) 2004 FUJITSU LIMITED
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/list.h>
+#include <linux/blkdev.h>
+#include <linux/utsname.h>
+#include <linux/device.h>
+
+/* The minimum dump I/O unit. Must be the same as PAGE_SIZE */
+#define DUMP_BLOCK_SIZE PAGE_SIZE
+#define DUMP_BLOCK_SHIFT PAGE_SHIFT
+
+int diskdump_register_hook(void (*dump_func)(struct pt_regs *));
+void diskdump_unregister_hook(void);
+
+/*
+ * The handler of diskdump module
+ */
+struct disk_dump_ops {
+ int (*add_dump)(struct device *, struct block_device *);
+ int (*remove_dump)(struct block_device *);
+ int (*find_dump)(struct block_device *);
+};
+
+int diskdump_register_ops(struct disk_dump_ops* op);
+void diskdump_unregister_ops(void);
+
+
+/*
+ * The handler that adapter driver provides for the common module of
+ * dump
+ */
+struct disk_dump_partition;
+struct disk_dump_device;
+
+struct disk_dump_type {
+ void *(*probe)(struct device *);
+ int (*add_device)(struct disk_dump_device *);
+ void (*remove_device)(struct disk_dump_device *);
+ struct module *owner;
+ struct list_head list;
+};
+
+struct disk_dump_device_ops {
+ int (*sanity_check)(struct disk_dump_device *);
+ int (*quiesce)(struct disk_dump_device *);
+ int (*shutdown)(struct disk_dump_device *);
+ int (*rw_block)(struct disk_dump_partition *, int rw, unsigned long block_nr, void *buf, int len);
+};
+
+/* The data structure for a dump device */
+struct disk_dump_device {
+ struct list_head list;
+ struct disk_dump_device_ops ops;
+ struct disk_dump_type *dump_type;
+ void *device;
+ unsigned int max_blocks;
+ struct list_head partitions;
+};
+
+/* The data structure for a dump partition */
+struct disk_dump_partition {
+ struct list_head list;
+ struct disk_dump_device *device;
+ struct block_device *bdev;
+ unsigned long start_sect;
+ unsigned long nr_sects;
+};
+
+int register_disk_dump_type(struct disk_dump_type *);
+int unregister_disk_dump_type(struct disk_dump_type *);
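+
+/*
+ * An adapter driver typically fills in a disk_dump_type and registers
+ * it at module init time. A minimal sketch (my_probe/my_add_device/
+ * my_remove_device are hypothetical driver callbacks, not part of
+ * this interface):
+ *
+ *	static struct disk_dump_type my_dump_type = {
+ *		.probe		= my_probe,
+ *		.add_device	= my_add_device,
+ *		.remove_device	= my_remove_device,
+ *		.owner		= THIS_MODULE,
+ *	};
+ *
+ *	if (register_disk_dump_type(&my_dump_type) < 0)
+ *		return -ENODEV;
+ */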
+
+
+/*
+ * sysfs interface
+ */
+ssize_t diskdump_sysfs_store(struct device *dev, const char *buf, size_t count);
+ssize_t diskdump_sysfs_show(struct device *dev, char *buf);
+
+
+void diskdump_update(void);
+void diskdump_setup_timestamp(void);
+
+/* mdelay() is trapped by WARN_ON if we are in interrupt context. */
+#define diskdump_mdelay(n) \
+({unsigned long __ms=(n); while (__ms--) udelay(1000);})
+
+
+/*
+ * Architecture-independent dump header
+ */
+
+/* The signature which is written in each block in the dump partition */
+#define DISK_DUMP_SIGNATURE "DISKDUMP"
+#define DISK_DUMP_HEADER_VERSION 1
+
+#define DUMP_PARTITION_SIGNATURE "diskdump"
+
+#define DUMP_HEADER_COMPLETED 0
+#define DUMP_HEADER_INCOMPLETED 1
+
+struct disk_dump_header {
+ char signature[8]; /* = "DISKDUMP" */
+ int header_version; /* Dump header version */
+ struct new_utsname utsname; /* copy of system_utsname */
+ struct timespec timestamp; /* Time stamp */
+ unsigned int status; /* Above flags */
+ int block_size; /* Size of a block in byte */
+ int sub_hdr_size; /* Size of arch dependent
+ header in blocks */
+ unsigned int bitmap_blocks; /* Size of Memory bitmap in
+ block */
+ unsigned int max_mapnr; /* = max_mapnr */
+ unsigned int total_ram_blocks;/* Size of Memory in block */
+ unsigned int device_blocks; /* Number of total blocks in
+ * the dump device */
+ unsigned int written_blocks; /* Number of written blocks */
+ unsigned int current_cpu; /* CPU# which handles dump */
+ int nr_cpus; /* Number of CPUs */
+ struct task_struct *tasks[NR_CPUS];
+};
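+
+/*
+ * A dump reader might validate a header along these lines (an
+ * illustrative sketch only; how the first block is read into hdr is
+ * up to the caller):
+ *
+ *	struct disk_dump_header hdr;
+ *
+ *	if (memcmp(hdr.signature, DISK_DUMP_SIGNATURE, 8) != 0)
+ *		return -EINVAL;		(not a diskdump partition)
+ *	if (hdr.header_version != DISK_DUMP_HEADER_VERSION)
+ *		return -EINVAL;		(unknown header format)
+ *	if (hdr.status == DUMP_HEADER_INCOMPLETED)
+ *		...			(dump was interrupted, data partial)
+ */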
+
+/* Diskdump state */
+extern enum disk_dump_states {
+ DISK_DUMP_INITIAL,
+ DISK_DUMP_RUNNING,
+ DISK_DUMP_SUCCESS,
+ DISK_DUMP_FAILURE,
+} disk_dump_state;
+
+/*
+ * Calculate the check sum of the whole module
+ */
+#define get_crc_module() \
+({ \
+ struct module *module = &__this_module; \
+ crc32_le(0, (char *)(module->module_core), \
+ ((unsigned long)module - (unsigned long)(module->module_core))); \
+})
+
+/* Calculate the checksum of the whole module */
+#define set_crc_modules() \
+({ \
+ module_crc = 0; \
+ module_crc = get_crc_module(); \
+})
+
+/*
+ * Compare the checksum value stored in module_crc with the checksum of
+ * the current whole module. Must be called while holding disk_dump_lock.
+ * Returns TRUE if they are the same, else FALSE.
+ */
+#define check_crc_module() \
+({ \
+ uint32_t orig_crc, cur_crc; \
+ \
+ orig_crc = module_crc; module_crc = 0; \
+ cur_crc = get_crc_module(); \
+ module_crc = orig_crc; \
+ orig_crc == cur_crc; \
+})
+
+
+#endif /* _LINUX_DISKDUMP_H */
--- /dev/null
+#ifndef __HPET__
+#define __HPET__ 1
+
+#include <linux/compiler.h>
+
+/*
+ * Offsets into HPET Registers
+ */
+
+struct hpet {
+ u64 hpet_cap; /* capabilities */
+ u64 res0; /* reserved */
+ u64 hpet_config; /* configuration */
+ u64 res1; /* reserved */
+ u64 hpet_isr; /* interrupt status reg */
+ u64 res2[25]; /* reserved */
+ union { /* main counter */
+ u64 _hpet_mc64;
+ u32 _hpet_mc32;
+ unsigned long _hpet_mc;
+ } _u0;
+ u64 res3; /* reserved */
+ struct hpet_timer {
+ u64 hpet_config; /* configuration/cap */
+ union { /* timer compare register */
+ u64 _hpet_hc64;
+ u32 _hpet_hc32;
+ unsigned long _hpet_compare;
+ } _u1;
+ u64 hpet_fsb[2]; /* FSB route */
+ } hpet_timers[1];
+};
+
+#define hpet_mc _u0._hpet_mc
+#define hpet_compare _u1._hpet_compare
+
+#define HPET_MAX_TIMERS (32)
+
+/*
+ * HPET general capabilities register
+ */
+
+#define HPET_COUNTER_CLK_PERIOD_MASK (0xffffffff00000000ULL)
+#define HPET_COUNTER_CLK_PERIOD_SHIFT (32UL)
+#define HPET_VENDOR_ID_MASK (0x00000000ffff0000ULL)
+#define HPET_VENDOR_ID_SHIFT (16ULL)
+#define HPET_LEG_RT_CAP_MASK (0x8000)
+#define HPET_COUNTER_SIZE_MASK (0x2000)
+#define HPET_NUM_TIM_CAP_MASK (0x1f00)
+#define HPET_NUM_TIM_CAP_SHIFT (8ULL)
+
+/*
+ * HPET general configuration register
+ */
+
+#define HPET_LEG_RT_CNF_MASK (2UL)
+#define HPET_ENABLE_CNF_MASK (1UL)
+
+
+/*
+ * Timer configuration register
+ */
+
+#define Tn_INT_ROUTE_CAP_MASK (0xffffffff00000000ULL)
+#define Tn_INT_ROUTE_CAP_SHIFT (32UL)
+#define Tn_FSB_INT_DELCAP_MASK (0x8000UL)
+#define Tn_FSB_INT_DELCAP_SHIFT (15)
+#define Tn_FSB_EN_CNF_MASK (0x4000UL)
+#define Tn_FSB_EN_CNF_SHIFT (14)
+#define Tn_INT_ROUTE_CNF_MASK (0x3e00UL)
+#define Tn_INT_ROUTE_CNF_SHIFT (9)
+#define Tn_32MODE_CNF_MASK (0x0100UL)
+#define Tn_VAL_SET_CNF_MASK (0x0040UL)
+#define Tn_SIZE_CAP_MASK (0x0020UL)
+#define Tn_PER_INT_CAP_MASK (0x0010UL)
+#define Tn_TYPE_CNF_MASK (0x0008UL)
+#define Tn_INT_ENB_CNF_MASK (0x0004UL)
+#define Tn_INT_TYPE_CNF_MASK (0x0002UL)
+
+/*
+ * Timer FSB Interrupt Route Register
+ */
+
+#define Tn_FSB_INT_ADDR_MASK (0xffffffff00000000ULL)
+#define Tn_FSB_INT_ADDR_SHIFT (32UL)
+#define Tn_FSB_INT_VAL_MASK (0x00000000ffffffffULL)
+
+struct hpet_info {
+ unsigned long hi_ireqfreq; /* Hz */
+ unsigned long hi_flags; /* information */
+ unsigned short hi_hpet;
+ unsigned short hi_timer;
+};
+
+#define HPET_INFO_PERIODIC 0x0001 /* timer is periodic */
+
+#define HPET_IE_ON _IO('h', 0x01) /* interrupt on */
+#define HPET_IE_OFF _IO('h', 0x02) /* interrupt off */
+#define HPET_INFO _IOR('h', 0x03, struct hpet_info)
+#define HPET_EPI _IO('h', 0x04) /* enable periodic */
+#define HPET_DPI _IO('h', 0x05) /* disable periodic */
+#define HPET_IRQFREQ _IOW('h', 0x6, unsigned long) /* IRQFREQ usec */
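+
+/*
+ * Illustrative user-space usage (a sketch; the /dev/hpet node name
+ * and the lack of error handling are assumptions, not defined here):
+ *
+ *	struct hpet_info info;
+ *	unsigned long data;
+ *	int fd = open("/dev/hpet", O_RDONLY);
+ *
+ *	ioctl(fd, HPET_IRQFREQ, 32);	(request 32 interrupts/sec)
+ *	ioctl(fd, HPET_INFO, &info);
+ *	if (info.hi_flags & HPET_INFO_PERIODIC)
+ *		ioctl(fd, HPET_EPI, 0);	(enable periodic interrupts)
+ *	ioctl(fd, HPET_IE_ON, 0);	(unmask the timer interrupt)
+ *	read(fd, &data, sizeof(data));	(block until an interrupt)
+ */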
+
+/*
+ * exported interfaces
+ */
+
+struct hpet_task {
+ void (*ht_func) (void *);
+ void *ht_data;
+ void *ht_opaque;
+};
+
+struct hpet_data {
+ unsigned long hd_phys_address;
+ void __iomem *hd_address;
+ unsigned short hd_nirqs;
+ unsigned short hd_flags;
+ unsigned int hd_state; /* timer allocated */
+ unsigned int hd_irq[HPET_MAX_TIMERS];
+};
+
+#define HPET_DATA_PLATFORM 0x0001 /* platform call to hpet_alloc */
+
+static inline void hpet_reserve_timer(struct hpet_data *hd, int timer)
+{
+ hd->hd_state |= (1 << timer);
+ return;
+}
+
+int hpet_alloc(struct hpet_data *);
+int hpet_register(struct hpet_task *, int);
+int hpet_unregister(struct hpet_task *);
+int hpet_control(struct hpet_task *, unsigned int, unsigned long);
+
+#endif /* !__HPET__ */
--- /dev/null
+#ifndef _LINUX_MEMPOLICY_H
+#define _LINUX_MEMPOLICY_H 1
+
+#include <linux/errno.h>
+
+/*
+ * NUMA memory policies for Linux.
+ * Copyright 2003,2004 Andi Kleen SuSE Labs
+ */
+
+/* Policies */
+#define MPOL_DEFAULT 0
+#define MPOL_PREFERRED 1
+#define MPOL_BIND 2
+#define MPOL_INTERLEAVE 3
+
+#define MPOL_MAX MPOL_INTERLEAVE
+
+/* Flags for get_mem_policy */
+#define MPOL_F_NODE (1<<0) /* return next IL mode instead of node mask */
+#define MPOL_F_ADDR (1<<1) /* look up vma using address */
+
+/* Flags for mbind */
+#define MPOL_MF_STRICT (1<<0) /* Verify existing pages in the mapping */
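+
+/*
+ * Illustrative user-space usage (a sketch; at the time of writing
+ * glibc has no wrappers, so mbind()/set_mempolicy() here stand for raw
+ * syscall(2) invocations or libnuma equivalents -- an assumption, not
+ * part of this header):
+ *
+ *	unsigned long mask = (1UL << 0) | (1UL << 1);
+ *
+ *	(interleave all future allocations over nodes 0 and 1)
+ *	set_mempolicy(MPOL_INTERLEAVE, &mask, sizeof(mask) * 8);
+ *
+ *	(bind an existing mapping, failing if pages are elsewhere)
+ *	mbind(addr, len, MPOL_BIND, &mask, sizeof(mask) * 8,
+ *	      MPOL_MF_STRICT);
+ */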
+
+#ifdef __KERNEL__
+
+#include <linux/config.h>
+#include <linux/mmzone.h>
+#include <linux/bitmap.h>
+#include <linux/slab.h>
+#include <linux/rbtree.h>
+#include <linux/spinlock.h>
+
+struct vm_area_struct;
+
+#ifdef CONFIG_NUMA
+
+/*
+ * Describe a memory policy.
+ *
+ * A mempolicy can be either associated with a process or with a VMA.
+ * For VMA related allocations the VMA policy is preferred, otherwise
+ * the process policy is used. Interrupts ignore the memory policy
+ * of the current process.
+ *
+ * Locking policy for interleave:
+ * In process context there is no locking because only the process accesses
+ * its own state. All vma manipulation is somewhat protected by a down_read on
+ * mmap_sem. For allocating in the interleave policy the page_table_lock
+ * must also be acquired to protect il_next.
+ *
+ * Freeing policy:
+ * When policy is MPOL_BIND v.zonelist is kmalloc'ed and must be kfree'd.
+ * All other policies don't have any external state. mpol_free() handles this.
+ *
+ * Copying policy objects:
+ * For MPOL_BIND the zonelist must be always duplicated. mpol_clone() does this.
+ */
+struct mempolicy {
+ atomic_t refcnt;
+ short policy; /* See MPOL_* above */
+ union {
+ struct zonelist *zonelist; /* bind */
+ short preferred_node; /* preferred */
+ DECLARE_BITMAP(nodes, MAX_NUMNODES); /* interleave */
+ /* undefined for default */
+ } v;
+};
+
+/*
+ * Support for managing mempolicy data objects (clone, copy, destroy)
+ * The default fast path of a NULL MPOL_DEFAULT policy is always inlined.
+ */
+
+extern void __mpol_free(struct mempolicy *pol);
+static inline void mpol_free(struct mempolicy *pol)
+{
+ if (pol)
+ __mpol_free(pol);
+}
+
+extern struct mempolicy *__mpol_copy(struct mempolicy *pol);
+static inline struct mempolicy *mpol_copy(struct mempolicy *pol)
+{
+ if (pol)
+ pol = __mpol_copy(pol);
+ return pol;
+}
+
+#define vma_policy(vma) ((vma)->vm_policy)
+#define vma_set_policy(vma, pol) ((vma)->vm_policy = (pol))
+
+static inline void mpol_get(struct mempolicy *pol)
+{
+ if (pol)
+ atomic_inc(&pol->refcnt);
+}
+
+extern int __mpol_equal(struct mempolicy *a, struct mempolicy *b);
+static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b)
+{
+ if (a == b)
+ return 1;
+ return __mpol_equal(a, b);
+}
+#define vma_mpol_equal(a,b) mpol_equal(vma_policy(a), vma_policy(b))
+
+/* Could later add inheritance of the process policy here. */
+
+#define mpol_set_vma_default(vma) ((vma)->vm_policy = NULL)
+
+/*
+ * Hugetlb policy. i386 hugetlb so far works with node numbers
+ * instead of zone lists, so give it special interfaces for now.
+ */
+extern int mpol_first_node(struct vm_area_struct *vma, unsigned long addr);
+extern int mpol_node_valid(int nid, struct vm_area_struct *vma,
+ unsigned long addr);
+
+/*
+ * Tree of shared policies for a shared memory region.
+ * Maintain the policies in a pseudo mm that contains vmas. The vmas
+ * carry the policy. As a special twist the pseudo mm is indexed in pages, not
+ * bytes, so that we can work with shared memory segments bigger than
+ * unsigned long.
+ */
+
+struct sp_node {
+ struct rb_node nd;
+ unsigned long start, end;
+ struct mempolicy *policy;
+};
+
+struct shared_policy {
+ struct rb_root root;
+ spinlock_t lock;
+};
+
+static inline void mpol_shared_policy_init(struct shared_policy *info)
+{
+ info->root = RB_ROOT;
+ spin_lock_init(&info->lock);
+}
+
+int mpol_set_shared_policy(struct shared_policy *info,
+ struct vm_area_struct *vma,
+ struct mempolicy *new);
+void mpol_free_shared_policy(struct shared_policy *p);
+struct mempolicy *mpol_shared_policy_lookup(struct shared_policy *sp,
+ unsigned long idx);
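+
+/*
+ * Typical use by a shared memory filesystem (an illustrative sketch;
+ * the surrounding inode bookkeeping is omitted):
+ *
+ *	struct shared_policy sp;
+ *
+ *	mpol_shared_policy_init(&sp);			(at inode creation)
+ *	mpol_set_shared_policy(&sp, vma, newpol);	(on mbind())
+ *	pol = mpol_shared_policy_lookup(&sp, pgoff);	(at fault time)
+ *	mpol_free_shared_policy(&sp);			(when the inode dies)
+ */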
+
+extern void numa_default_policy(void);
+extern void numa_policy_init(void);
+
+#else
+
+struct mempolicy {};
+
+static inline int mpol_equal(struct mempolicy *a, struct mempolicy *b)
+{
+ return 1;
+}
+#define vma_mpol_equal(a,b) 1
+
+#define mpol_set_vma_default(vma) do {} while(0)
+
+static inline void mpol_free(struct mempolicy *p)
+{
+}
+
+static inline void mpol_get(struct mempolicy *pol)
+{
+}
+
+static inline struct mempolicy *mpol_copy(struct mempolicy *old)
+{
+ return NULL;
+}
+
+static inline int mpol_first_node(struct vm_area_struct *vma, unsigned long a)
+{
+ return numa_node_id();
+}
+
+static inline int
+mpol_node_valid(int nid, struct vm_area_struct *vma, unsigned long a)
+{
+ return 1;
+}
+
+struct shared_policy {};
+
+static inline int mpol_set_shared_policy(struct shared_policy *info,
+ struct vm_area_struct *vma,
+ struct mempolicy *new)
+{
+ return -EINVAL;
+}
+
+static inline void mpol_shared_policy_init(struct shared_policy *info)
+{
+}
+
+static inline void mpol_free_shared_policy(struct shared_policy *p)
+{
+}
+
+static inline struct mempolicy *
+mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+{
+ return NULL;
+}
+
+#define vma_policy(vma) NULL
+#define vma_set_policy(vma, pol) do {} while(0)
+
+static inline void numa_policy_init(void)
+{
+}
+
+static inline void numa_default_policy(void)
+{
+}
+
+#endif /* CONFIG_NUMA */
+#endif /* __KERNEL__ */
+
+#endif
--- /dev/null
+/*
+ * File pci-acpi.h
+ *
+ * Copyright (C) 2004 Intel
+ * Copyright (C) Tom Long Nguyen (tom.l.nguyen@intel.com)
+ */
+
+#ifndef _PCI_ACPI_H_
+#define _PCI_ACPI_H_
+
+#define OSC_QUERY_TYPE 0
+#define OSC_SUPPORT_TYPE 1
+#define OSC_CONTROL_TYPE 2
+#define OSC_SUPPORT_MASKS 0x1f
+
+/*
+ * _OSC DW0 Definition
+ */
+#define OSC_QUERY_ENABLE 1
+#define OSC_REQUEST_ERROR 2
+#define OSC_INVALID_UUID_ERROR 4
+#define OSC_INVALID_REVISION_ERROR 8
+#define OSC_CAPABILITIES_MASK_ERROR 16
+
+/*
+ * _OSC DW1 Definition (OS Support Fields)
+ */
+#define OSC_EXT_PCI_CONFIG_SUPPORT 1
+#define OSC_ACTIVE_STATE_PWR_SUPPORT 2
+#define OSC_CLOCK_PWR_CAPABILITY_SUPPORT 4
+#define OSC_PCI_SEGMENT_GROUPS_SUPPORT 8
+#define OSC_MSI_SUPPORT 16
+
+/*
+ * _OSC DW1 Definition (OS Control Fields)
+ */
+#define OSC_PCI_EXPRESS_NATIVE_HP_CONTROL 1
+#define OSC_SHPC_NATIVE_HP_CONTROL 2
+#define OSC_PCI_EXPRESS_PME_CONTROL 4
+#define OSC_PCI_EXPRESS_AER_CONTROL 8
+#define OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL 16
+
+#define OSC_CONTROL_MASKS (OSC_PCI_EXPRESS_NATIVE_HP_CONTROL | \
+ OSC_SHPC_NATIVE_HP_CONTROL | \
+ OSC_PCI_EXPRESS_PME_CONTROL | \
+ OSC_PCI_EXPRESS_AER_CONTROL | \
+ OSC_PCI_EXPRESS_CAP_STRUCTURE_CONTROL)
+
+#ifdef CONFIG_ACPI
+extern acpi_status pci_osc_control_set(u32 flags);
+extern acpi_status pci_osc_support_set(u32 flags);
+#else
+#if !defined(acpi_status)
+typedef u32 acpi_status;
+#define AE_ERROR (acpi_status) (0x0001)
+#endif
+static inline acpi_status pci_osc_control_set(u32 flags) {return AE_ERROR;}
+static inline acpi_status pci_osc_support_set(u32 flags) {return AE_ERROR;}
+#endif
+
+#endif /* _PCI_ACPI_H_ */
--- /dev/null
+#ifndef _LINUX_PRIO_TREE_H
+#define _LINUX_PRIO_TREE_H
+
+struct prio_tree_node {
+ struct prio_tree_node *left;
+ struct prio_tree_node *right;
+ struct prio_tree_node *parent;
+};
+
+struct prio_tree_root {
+ struct prio_tree_node *prio_tree_node;
+ unsigned int index_bits;
+};
+
+struct prio_tree_iter {
+ struct prio_tree_node *cur;
+ unsigned long mask;
+ unsigned long value;
+ int size_level;
+
+ struct prio_tree_root *root;
+ pgoff_t r_index;
+ pgoff_t h_index;
+};
+
+static inline void prio_tree_iter_init(struct prio_tree_iter *iter,
+ struct prio_tree_root *root, pgoff_t r_index, pgoff_t h_index)
+{
+ iter->root = root;
+ iter->r_index = r_index;
+ iter->h_index = h_index;
+}
+
+#define INIT_PRIO_TREE_ROOT(ptr) \
+do { \
+ (ptr)->prio_tree_node = NULL; \
+ (ptr)->index_bits = 1; \
+} while (0)
+
+#define INIT_PRIO_TREE_NODE(ptr) \
+do { \
+ (ptr)->left = (ptr)->right = (ptr)->parent = (ptr); \
+} while (0)
+
+#define INIT_PRIO_TREE_ITER(ptr) \
+do { \
+ (ptr)->cur = NULL; \
+ (ptr)->mask = 0UL; \
+ (ptr)->value = 0UL; \
+ (ptr)->size_level = 0; \
+} while (0)
+
+#define prio_tree_entry(ptr, type, member) \
+ ((type *)((char *)(ptr)-(unsigned long)(&((type *)0)->member)))
+
+static inline int prio_tree_empty(const struct prio_tree_root *root)
+{
+ return root->prio_tree_node == NULL;
+}
+
+static inline int prio_tree_root(const struct prio_tree_node *node)
+{
+ return node->parent == node;
+}
+
+static inline int prio_tree_left_empty(const struct prio_tree_node *node)
+{
+ return node->left == node;
+}
+
+static inline int prio_tree_right_empty(const struct prio_tree_node *node)
+{
+ return node->right == node;
+}
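+
+/*
+ * Minimal usage sketch (illustrative; struct my_range is
+ * hypothetical): embed a prio_tree_node in your own structure and
+ * recover the container with prio_tree_entry():
+ *
+ *	struct my_range {
+ *		struct prio_tree_node node;
+ *		...
+ *	};
+ *
+ *	struct prio_tree_root root;
+ *
+ *	INIT_PRIO_TREE_ROOT(&root);
+ *	if (!prio_tree_empty(&root)) {
+ *		struct my_range *r =
+ *			prio_tree_entry(root.prio_tree_node,
+ *					struct my_range, node);
+ *		...
+ *	}
+ */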
+
+#endif /* _LINUX_PRIO_TREE_H */
--- /dev/null
+/*
+ * $Id: mtd-abi.h,v 1.6 2004/08/09 13:38:30 dwmw2 Exp $
+ *
+ * Portions of MTD ABI definition which are shared by kernel and user space
+ */
+
+#ifndef __MTD_ABI_H__
+#define __MTD_ABI_H__
+
+#ifndef __KERNEL__ /* Urgh. The whole point of splitting this out into
+ separate files was to avoid #ifdef __KERNEL__ */
+#define __user
+#endif
+
+struct erase_info_user {
+ uint32_t start;
+ uint32_t length;
+};
+
+struct mtd_oob_buf {
+ uint32_t start;
+ uint32_t length;
+ unsigned char __user *ptr;
+};
+
+#define MTD_ABSENT 0
+#define MTD_RAM 1
+#define MTD_ROM 2
+#define MTD_NORFLASH 3
+#define MTD_NANDFLASH 4
+#define MTD_PEROM 5
+#define MTD_OTHER 14
+#define MTD_UNKNOWN 15
+
+#define MTD_CLEAR_BITS 1 // Bits can be cleared (flash)
+#define MTD_SET_BITS 2 // Bits can be set
+#define MTD_ERASEABLE 4 // Has an erase function
+#define MTD_WRITEB_WRITEABLE 8 // Direct IO is possible
+#define MTD_VOLATILE 16 // Set for RAMs
+#define MTD_XIP 32 // eXecute-In-Place possible
+#define MTD_OOB 64 // Out-of-band data (NAND flash)
+#define MTD_ECC 128 // Device capable of automatic ECC
+
+// Some common devices / combinations of capabilities
+#define MTD_CAP_ROM 0
+#define MTD_CAP_RAM (MTD_CLEAR_BITS|MTD_SET_BITS|MTD_WRITEB_WRITEABLE)
+#define MTD_CAP_NORFLASH (MTD_CLEAR_BITS|MTD_ERASEABLE)
+#define MTD_CAP_NANDFLASH (MTD_CLEAR_BITS|MTD_ERASEABLE|MTD_OOB)
+#define MTD_WRITEABLE (MTD_CLEAR_BITS|MTD_SET_BITS)
+
+
+// Types of automatic ECC/Checksum available
+#define MTD_ECC_NONE 0 // No automatic ECC available
+#define MTD_ECC_RS_DiskOnChip 1 // Automatic ECC on DiskOnChip
+#define MTD_ECC_SW 2 // SW ECC for Toshiba & Samsung devices
+
+/* ECC byte placement */
+#define MTD_NANDECC_OFF 0 // Switch off ECC (Not recommended)
+#define MTD_NANDECC_PLACE 1 // Use the given placement in the structure (YAFFS1 legacy mode)
+#define MTD_NANDECC_AUTOPLACE 2 // Use the default placement scheme
+#define MTD_NANDECC_PLACEONLY 3 // Use the given placement in the structure (Do not store ecc result on read)
+
+struct mtd_info_user {
+ uint8_t type;
+ uint32_t flags;
+ uint32_t size; // Total size of the MTD
+ uint32_t erasesize;
+ uint32_t oobblock; // Size of OOB blocks (e.g. 512)
+ uint32_t oobsize; // Amount of OOB data per block (e.g. 16)
+ uint32_t ecctype;
+ uint32_t eccsize;
+};
+
+struct region_info_user {
+ uint32_t offset; /* At which this region starts,
+ * from the beginning of the MTD */
+ uint32_t erasesize; /* For this region */
+ uint32_t numblocks; /* Number of blocks in this region */
+ uint32_t regionindex;
+};
+
+#define MEMGETINFO _IOR('M', 1, struct mtd_info_user)
+#define MEMERASE _IOW('M', 2, struct erase_info_user)
+#define MEMWRITEOOB _IOWR('M', 3, struct mtd_oob_buf)
+#define MEMREADOOB _IOWR('M', 4, struct mtd_oob_buf)
+#define MEMLOCK _IOW('M', 5, struct erase_info_user)
+#define MEMUNLOCK _IOW('M', 6, struct erase_info_user)
+#define MEMGETREGIONCOUNT _IOR('M', 7, int)
+#define MEMGETREGIONINFO _IOWR('M', 8, struct region_info_user)
+#define MEMSETOOBSEL _IOW('M', 9, struct nand_oobinfo)
+#define MEMGETOOBSEL _IOR('M', 10, struct nand_oobinfo)
+#define MEMGETBADBLOCK _IOW('M', 11, loff_t)
+#define MEMSETBADBLOCK _IOW('M', 12, loff_t)
+
+struct nand_oobinfo {
+ uint32_t useecc;
+ uint32_t eccbytes;
+ uint32_t oobfree[8][2];
+ uint32_t eccpos[32];
+};
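+
+/*
+ * Illustrative user-space usage (a sketch; the device node name is an
+ * assumption):
+ *
+ *	struct mtd_info_user info;
+ *	struct erase_info_user erase;
+ *	int fd = open("/dev/mtd0", O_RDWR);
+ *
+ *	ioctl(fd, MEMGETINFO, &info);
+ *	erase.start = 0;
+ *	erase.length = info.erasesize;	(erase the first erase block)
+ *	ioctl(fd, MEMERASE, &erase);
+ */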
+
+#endif /* __MTD_ABI_H__ */
--- /dev/null
+#ifndef __NET_PKT_ACT_H
+#define __NET_PKT_ACT_H
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <linux/bitops.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/socket.h>
+#include <linux/sockios.h>
+#include <linux/in.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/proc_fs.h>
+#include <net/sock.h>
+#include <net/pkt_sched.h>
+
+#define tca_st(val) (struct tcf_##val *)
+#define PRIV(a,name) ( tca_st(name) (a)->priv)
+
+#if 0 /* control */
+#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define DPRINTK(format,args...)
+#endif
+
+#if 0 /* data */
+#define D2PRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define D2PRINTK(format,args...)
+#endif
+
+static __inline__ unsigned
+tcf_hash(u32 index)
+{
+ return index & MY_TAB_MASK;
+}
+
+/* probably move this from being inline
+ * and put it into act_generic
+ */
+static inline void
+tcf_hash_destroy(struct tcf_st *p)
+{
+ unsigned h = tcf_hash(p->index);
+ struct tcf_st **p1p;
+
+ for (p1p = &tcf_ht[h]; *p1p; p1p = &(*p1p)->next) {
+ if (*p1p == p) {
+ write_lock_bh(&tcf_t_lock);
+ *p1p = p->next;
+ write_unlock_bh(&tcf_t_lock);
+#ifdef CONFIG_NET_ESTIMATOR
+ gen_kill_estimator(&p->bstats, &p->rate_est);
+#endif
+ kfree(p);
+ return;
+ }
+ }
+ BUG_TRAP(0);
+}
+
+static inline int
+tcf_hash_release(struct tcf_st *p, int bind)
+{
+ int ret = 0;
+ if (p) {
+ if (bind) {
+ p->bindcnt--;
+ }
+ p->refcnt--;
+ if (p->bindcnt <= 0 && p->refcnt <= 0) {
+ tcf_hash_destroy(p);
+ ret = 1;
+ }
+ }
+ return ret;
+}
+
+static __inline__ int
+tcf_dump_walker(struct sk_buff *skb, struct netlink_callback *cb,
+ struct tc_action *a)
+{
+ struct tcf_st *p;
+ int err = 0, index = -1, i = 0, s_i = 0, n_i = 0;
+ struct rtattr *r;
+
+ read_lock(&tcf_t_lock);
+
+ s_i = cb->args[0];
+
+ for (i = 0; i < MY_TAB_SIZE; i++) {
+ p = tcf_ht[tcf_hash(i)];
+
+ for (; p; p = p->next) {
+ index++;
+ if (index < s_i)
+ continue;
+ a->priv = p;
+ a->order = n_i;
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, a->order, 0, NULL);
+ err = tcf_action_dump_1(skb, a, 0, 0);
+ if (0 > err) {
+ index--;
+ skb_trim(skb, (u8*)r - skb->data);
+ goto done;
+ }
+ r->rta_len = skb->tail - (u8*)r;
+ n_i++;
+ if (n_i >= TCA_ACT_MAX_PRIO) {
+ goto done;
+ }
+ }
+ }
+done:
+ read_unlock(&tcf_t_lock);
+ if (n_i)
+ cb->args[0] += n_i;
+ return n_i;
+
+rtattr_failure:
+ skb_trim(skb, (u8*)r - skb->data);
+ goto done;
+}
+
+static __inline__ int
+tcf_del_walker(struct sk_buff *skb, struct tc_action *a)
+{
+ struct tcf_st *p, *s_p;
+ struct rtattr *r;
+ int i = 0, n_i = 0;
+
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, a->order, 0, NULL);
+ RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind);
+ for (i = 0; i < MY_TAB_SIZE; i++) {
+ p = tcf_ht[tcf_hash(i)];
+
+ while (p != NULL) {
+ s_p = p->next;
+ if (ACT_P_DELETED == tcf_hash_release(p, 0)) {
+ module_put(a->ops->owner);
+ }
+ n_i++;
+ p = s_p;
+ }
+ }
+ RTA_PUT(skb, TCA_FCNT, 4, &n_i);
+ r->rta_len = skb->tail - (u8*)r;
+
+ return n_i;
+rtattr_failure:
+ skb_trim(skb, (u8*)r - skb->data);
+ return -EINVAL;
+}
+
+static __inline__ int
+tcf_generic_walker(struct sk_buff *skb, struct netlink_callback *cb, int type,
+ struct tc_action *a)
+{
+ if (type == RTM_DELACTION) {
+ return tcf_del_walker(skb, a);
+ } else if (type == RTM_GETACTION) {
+ return tcf_dump_walker(skb, cb, a);
+ } else {
+ printk("tcf_generic_walker: unknown action %d\n", type);
+ return -EINVAL;
+ }
+}
+
+static __inline__ struct tcf_st *
+tcf_hash_lookup(u32 index)
+{
+ struct tcf_st *p;
+
+ read_lock(&tcf_t_lock);
+ for (p = tcf_ht[tcf_hash(index)]; p; p = p->next) {
+ if (p->index == index)
+ break;
+ }
+ read_unlock(&tcf_t_lock);
+ return p;
+}
+
+static __inline__ u32
+tcf_hash_new_index(void)
+{
+ do {
+ if (++idx_gen == 0)
+ idx_gen = 1;
+ } while (tcf_hash_lookup(idx_gen));
+
+ return idx_gen;
+}
+
+
+static inline int
+tcf_hash_search(struct tc_action *a, u32 index)
+{
+ struct tcf_st *p = tcf_hash_lookup(index);
+
+ if (p != NULL) {
+ a->priv = p;
+ return 1;
+ }
+ return 0;
+}
+
+#ifdef CONFIG_NET_ACT_INIT
+static inline struct tcf_st *
+tcf_hash_check(struct tc_st *parm, struct tc_action *a, int ovr, int bind)
+{
+ struct tcf_st *p = NULL;
+ if (parm->index && (p = tcf_hash_lookup(parm->index)) != NULL) {
+ spin_lock(&p->lock);
+ if (bind) {
+ p->bindcnt++;
+ p->refcnt++;
+ }
+ spin_unlock(&p->lock);
+ a->priv = (void *) p;
+ }
+ return p;
+}
+
+static inline struct tcf_st *
+tcf_hash_create(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+{
+ unsigned h;
+ struct tcf_st *p = NULL;
+
+ p = kmalloc(size, GFP_KERNEL);
+ if (p == NULL)
+ return p;
+
+ memset(p, 0, size);
+ p->refcnt = 1;
+
+ if (bind) {
+ p->bindcnt = 1;
+ }
+
+ spin_lock_init(&p->lock);
+ p->stats_lock = &p->lock;
+ p->index = parm->index ? : tcf_hash_new_index();
+ p->tm.install = jiffies;
+ p->tm.lastuse = jiffies;
+#ifdef CONFIG_NET_ESTIMATOR
+ if (est)
+ gen_new_estimator(&p->bstats, &p->rate_est, p->stats_lock, est);
+#endif
+ h = tcf_hash(p->index);
+ write_lock_bh(&tcf_t_lock);
+ p->next = tcf_ht[h];
+ tcf_ht[h] = p;
+ write_unlock_bh(&tcf_t_lock);
+
+ a->priv = (void *) p;
+ return p;
+}
+
+static inline struct tcf_st *
+tcf_hash_init(struct tc_st *parm, struct rtattr *est, struct tc_action *a, int size, int ovr, int bind)
+{
+ struct tcf_st *p = tcf_hash_check(parm, a, ovr, bind);
+
+ if (!p)
+ p = tcf_hash_create(parm, est, a, size, ovr, bind);
+ return p;
+}
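+
+/*
+ * An action's init function would typically do something like the
+ * following (an illustrative sketch; struct tcf_mything is a
+ * hypothetical per-action private structure):
+ *
+ *	struct tcf_st *p;
+ *
+ *	p = tcf_hash_init(parm, est, a, sizeof(struct tcf_mything),
+ *			  ovr, bind);
+ *	if (p == NULL)
+ *		return -ENOMEM;
+ *	(fill in action-private fields under p->lock)
+ */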
+
+#endif
+
+#endif
--- /dev/null
+/*
+ * linux/kernel/dump.c
+ *
+ * Copyright (C) 2004 FUJITSU LIMITED
+ * Written by Nobuhiro Tachino (ntachino@jp.fujitsu.com)
+ *
+ */
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/delay.h>
+#include <linux/nmi.h>
+#include <linux/timer.h>
+#include <linux/interrupt.h>
+#include <linux/workqueue.h>
+#include <linux/genhd.h>
+#include <linux/diskdump.h>
+#include <asm/diskdump.h>
+
+static DECLARE_MUTEX(dump_ops_mutex);
+struct disk_dump_ops* dump_ops = NULL;
+
+int diskdump_mode = 0;
+EXPORT_SYMBOL_GPL(diskdump_mode);
+
+void (*diskdump_func) (struct pt_regs *regs) = NULL;
+EXPORT_SYMBOL_GPL(diskdump_func);
+
+static unsigned long long timestamp_base;
+static unsigned long timestamp_hz;
+
+
+/*
+ * register/unregister diskdump operations
+ */
+int diskdump_register_ops(struct disk_dump_ops* op)
+{
+ down(&dump_ops_mutex);
+ if (dump_ops) {
+ up(&dump_ops_mutex);
+ return -EEXIST;
+ }
+ dump_ops = op;
+ up(&dump_ops_mutex);
+
+ return 0;
+}
+
+EXPORT_SYMBOL_GPL(diskdump_register_ops);
+
+void diskdump_unregister_ops(void)
+{
+ down(&dump_ops_mutex);
+ dump_ops = NULL;
+ up(&dump_ops_mutex);
+}
+
+EXPORT_SYMBOL_GPL(diskdump_unregister_ops);
+
+
+/*
+ * sysfs interface
+ */
+static struct gendisk *device_to_gendisk(struct device *dev)
+{
+ struct dentry *d;
+ struct qstr qstr;
+
+ /* trace symlink to "block" */
+ qstr.name = "block";
+ qstr.len = strlen(qstr.name);
+ qstr.hash = full_name_hash(qstr.name, qstr.len);
+ d = d_lookup(dev->kobj.dentry, &qstr);
+ if (!d || !d->d_fsdata)
+ return NULL;
+ else
+ return container_of(d->d_fsdata, struct gendisk, kobj);
+}
+
+ssize_t diskdump_sysfs_store(struct device *dev, const char *buf, size_t count)
+{
+ struct gendisk *disk;
+ struct block_device *bdev;
+ int part, remove = 0;
+
+ if (!dump_ops || !dump_ops->add_dump || !dump_ops->remove_dump)
+ return count;
+
+ /* get partition number */
+ sscanf(buf, "%d\n", &part);
+ if (part < 0) {
+ part = -part;
+ remove = 1;
+ }
+
+ /* get block device */
+ if (!(disk = device_to_gendisk(dev)) ||
+ !(bdev = bdget_disk(disk, part)))
+ return count;
+
+ /* add/remove device */
+ down(&dump_ops_mutex);
+ if (!remove)
+ dump_ops->add_dump(dev, bdev);
+ else
+ dump_ops->remove_dump(bdev);
+ up(&dump_ops_mutex);
+
+ return count;
+}
+
+EXPORT_SYMBOL_GPL(diskdump_sysfs_store);
+
+ssize_t diskdump_sysfs_show(struct device *dev, char *buf)
+{
+ struct gendisk *disk;
+ struct block_device *bdev;
+ int part, tmp, len = 0, maxlen = 1024;
+ char* p = buf;
+ char name[BDEVNAME_SIZE];
+
+ if (!dump_ops || !dump_ops->find_dump)
+ return 0;
+
+ /* get gendisk */
+ disk = device_to_gendisk(dev);
+ if (!disk || !disk->part)
+ return 0;
+
+ /* print device */
+ down(&dump_ops_mutex);
+ for (part = 0; part < disk->minors - 1; part++) {
+ bdev = bdget_disk(disk, part);
+ if (dump_ops->find_dump(bdev)) {
+ tmp = sprintf(p, "%s\n", bdevname(bdev, name));
+ len += tmp;
+ p += tmp;
+ }
+ bdput(bdev);
+ if(len >= maxlen)
+ break;
+ }
+ up(&dump_ops_mutex);
+
+ return len;
+}
+
+EXPORT_SYMBOL_GPL(diskdump_sysfs_show);
+
+/*
+ * run timer/tasklet/workqueue during dump
+ */
+void diskdump_setup_timestamp(void)
+{
+ unsigned long long t;
+
+ platform_timestamp(timestamp_base);
+ udelay(1000000/HZ);
+ platform_timestamp(t);
+ timestamp_hz = (unsigned long)(t - timestamp_base);
+ diskdump_update();
+}
+
+EXPORT_SYMBOL_GPL(diskdump_setup_timestamp);
+
+void diskdump_update(void)
+{
+ unsigned long long t;
+
+ touch_nmi_watchdog();
+
+ /* update jiffies */
+ platform_timestamp(t);
+ while (t > timestamp_base + timestamp_hz) {
+ timestamp_base += timestamp_hz;
+ jiffies++;
+ platform_timestamp(t);
+ }
+
+ dump_run_timers();
+ dump_run_tasklet();
+ dump_run_workqueue();
+}
+
+EXPORT_SYMBOL_GPL(diskdump_update);
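+
+/*
+ * A dump driver that polls for I/O completion is expected to call
+ * diskdump_update() in its wait loop so that jiffies keep advancing
+ * and pending timers/tasklets/work run. A sketch (io_done() is a
+ * hypothetical driver-side completion check):
+ *
+ *	diskdump_setup_timestamp();
+ *	while (!io_done(dev)) {
+ *		diskdump_update();
+ *		diskdump_mdelay(1);
+ *	}
+ */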
+
+
+/*
+ * register/unregister hook
+ */
+int diskdump_register_hook(void (*dump_func) (struct pt_regs *))
+{
+ if (diskdump_func)
+ return -EEXIST;
+
+ diskdump_func = dump_func;
+
+ return 0;
+}
+
+EXPORT_SYMBOL_GPL(diskdump_register_hook);
+
+void diskdump_unregister_hook(void)
+{
+ diskdump_func = NULL;
+}
+
+EXPORT_SYMBOL_GPL(diskdump_unregister_hook);
+
+void (*netdump_func) (struct pt_regs *regs) = NULL;
+int netdump_mode = 0;
+EXPORT_SYMBOL_GPL(netdump_mode);
+
+/*
+ * Try crashdump. Diskdump is tried first, netdump second.
+ * We clear diskdump_func before calling it, so that if a double
+ * panic occurs in diskdump, netdump can still handle it.
+ */
+void try_crashdump(struct pt_regs *regs)
+{
+ void (*func)(struct pt_regs *);
+
+ if (diskdump_func) {
+ func = diskdump_func;
+ diskdump_func = NULL;
+ func(regs);
+ }
+ if (netdump_func)
+ netdump_func(regs);
+}
--- /dev/null
+/* module-verify-sig.c: module signature checker
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ * - Derived from GregKH's RSA module signer
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/elf.h>
+#include <linux/crypto.h>
+#include <linux/crypto/ksign.h>
+#include "module-verify.h"
+
+#undef MODSIGN_DEBUG
+
+#ifdef MODSIGN_DEBUG
+#define _debug(FMT, ...) printk(FMT, ##__VA_ARGS__)
+#else
+#define _debug(FMT, ...) do {} while (0)
+#endif
+
+#ifdef MODSIGN_DEBUG
+#define count_and_csum(C, __p,__n) \
+do { \
+ int __loop; \
+ for (__loop = 0; __loop < __n; __loop++) { \
+ (C)->csum += __p[__loop]; \
+ (C)->xcsum += __p[__loop]; \
+ } \
+ (C)->signed_size += __n; \
+} while(0)
+#else
+#define count_and_csum(C, __p,__n) \
+do { \
+ (C)->signed_size += __n; \
+} while(0)
+#endif
+
+#define crypto_digest_update_data(C,PTR,N) \
+do { \
+ size_t __n = (N); \
+ uint8_t *__p = (uint8_t *)(PTR); \
+ count_and_csum((C), __p, __n); \
+ crypto_digest_update_kernel((C)->digest, __p, __n); \
+} while(0)
+
+#define crypto_digest_update_val(C,VAL) \
+do { \
+ size_t __n = sizeof(VAL); \
+ uint8_t *__p = (uint8_t *)&(VAL); \
+ count_and_csum((C), __p, __n); \
+ crypto_digest_update_kernel((C)->digest, __p, __n); \
+} while(0)
+
+static int module_verify_canonicalise(struct module_verify_data *mvdata);
+
+static int extract_elf_rela(struct module_verify_data *mvdata,
+ int secix,
+ const Elf_Rela *relatab, size_t nrels,
+ const char *sh_name);
+
+static int extract_elf_rel(struct module_verify_data *mvdata,
+ int secix,
+ const Elf_Rel *reltab, size_t nrels,
+ const char *sh_name);
+
+static int signedonly;
+
+/*****************************************************************************/
+/*
+ * verify a module's signature
+ */
+int module_verify_signature(struct module_verify_data *mvdata)
+{
+ const Elf_Shdr *sechdrs = mvdata->sections;
+ const char *secstrings = mvdata->secstrings;
+ const char *sig;
+ unsigned sig_size;
+ int i, ret;
+
+ for (i = 1; i < mvdata->nsects; i++) {
+ switch (sechdrs[i].sh_type) {
+ case SHT_PROGBITS:
+ if (strcmp(mvdata->secstrings + sechdrs[i].sh_name,
+ ".module_sig") == 0) {
+ mvdata->sig_index = i;
+ }
+ break;
+ }
+ }
+
+ if (mvdata->sig_index <= 0)
+ goto no_signature;
+
+ sig = mvdata->buffer + sechdrs[mvdata->sig_index].sh_offset;
+ sig_size = sechdrs[mvdata->sig_index].sh_size;
+
+ _debug("sig in section %d (size %d)\n",
+ mvdata->sig_index, sig_size);
+
+ /* produce a canonicalisation map for the sections */
+ ret = module_verify_canonicalise(mvdata);
+ if (ret < 0)
+ return ret;
+
+ /* grab an SHA1 transformation context
+ * - !!! if this tries to load the sha1.ko module, we will deadlock!!!
+ */
+ mvdata->digest = crypto_alloc_tfm2("sha1", 0, 1);
+ if (!mvdata->digest) {
+ printk("Couldn't load module - SHA1 transform unavailable\n");
+ return -EPERM;
+ }
+
+ crypto_digest_init(mvdata->digest);
+
+#ifdef MODSIGN_DEBUG
+ mvdata->xcsum = 0;
+#endif
+
+ /* load data from each relevant section into the digest */
+ for (i = 1; i < mvdata->nsects; i++) {
+ unsigned long sh_type = sechdrs[i].sh_type;
+ unsigned long sh_info = sechdrs[i].sh_info;
+ unsigned long sh_size = sechdrs[i].sh_size;
+ unsigned long sh_flags = sechdrs[i].sh_flags;
+ const char *sh_name = secstrings + sechdrs[i].sh_name;
+ const void *data = mvdata->buffer + sechdrs[i].sh_offset;
+
+ if (i == mvdata->sig_index)
+ continue;
+
+#ifdef MODSIGN_DEBUG
+ mvdata->csum = 0;
+#endif
+
+ /* it would be nice to include relocation sections, but the act
+ * of adding a signature to the module seems to change their
+ * contents, because the symtab gets changed when sections are
+ * added or removed */
+ if (sh_type == SHT_REL || sh_type == SHT_RELA) {
+ if (mvdata->canonlist[sh_info]) {
+ uint32_t xsh_info = mvdata->canonmap[sh_info];
+
+ crypto_digest_update_data(mvdata, sh_name, strlen(sh_name));
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_type);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_flags);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_size);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_addralign);
+ crypto_digest_update_val(mvdata, xsh_info);
+
+ if (sh_type == SHT_RELA)
+ ret = extract_elf_rela(
+ mvdata, i,
+ data,
+ sh_size / sizeof(Elf_Rela),
+ sh_name);
+ else
+ ret = extract_elf_rel(
+ mvdata, i,
+ data,
+ sh_size / sizeof(Elf_Rel),
+ sh_name);
+
+ if (ret < 0)
+ goto format_error;
+ }
+
+ continue;
+ }
+
+ /* include allocatable loadable sections */
+ if (sh_type != SHT_NOBITS && sh_flags & SHF_ALLOC)
+ goto include_section;
+
+ continue;
+
+ include_section:
+ crypto_digest_update_data(mvdata, sh_name, strlen(sh_name));
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_type);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_flags);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_size);
+ crypto_digest_update_val(mvdata, sechdrs[i].sh_addralign);
+ crypto_digest_update_data(mvdata, data, sh_size);
+
+ _debug("%08zx %02x digested the %s section, size %ld\n",
+ mvdata->signed_size, mvdata->csum, sh_name, sh_size);
+
+ mvdata->canonlist[i] = 1;
+ }
+
+ _debug("Contributed %zu bytes to the digest (csum 0x%02x)\n",
+ mvdata->signed_size, mvdata->xcsum);
+
+ /* do the actual signature verification */
+ i = ksign_verify_signature(sig, sig_size, mvdata->digest);
+
+ _debug("verify-sig : %d\n", i);
+
+ if (i == 0)
+ i = 1;
+ return i;
+
+ format_error:
+ crypto_free_tfm(mvdata->digest);
+ return -ELIBBAD;
+
+ /* deal with the case of an unsigned module */
+ no_signature:
+ if (!signedonly)
+ return 0;
+ printk("An attempt to load an unsigned module was rejected\n");
+ return -EPERM;
+
+} /* end module_verify_signature() */
+
+/*****************************************************************************/
+/*
+ * canonicalise the section table index numbers
+ */
+static int module_verify_canonicalise(struct module_verify_data *mvdata)
+{
+ int canon, loop, changed, tmp;
+
+ /* produce a list of index numbers of sections that contribute
+ * to the kernel's module image
+ */
+ mvdata->canonlist =
+ kmalloc(sizeof(int) * mvdata->nsects * 2, GFP_KERNEL);
+ if (!mvdata->canonlist)
+ return -ENOMEM;
+
+ mvdata->canonmap = mvdata->canonlist + mvdata->nsects;
+ canon = 0;
+
+ for (loop = 1; loop < mvdata->nsects; loop++) {
+ const Elf_Shdr *section = mvdata->sections + loop;
+
+ if (loop != mvdata->sig_index) {
+ /* we only need to canonicalise allocatable sections */
+ if (section->sh_flags & SHF_ALLOC)
+ mvdata->canonlist[canon++] = loop;
+ }
+ }
+
+ /* canonicalise the index numbers of the contributing section */
+ do {
+ changed = 0;
+
+ for (loop = 0; loop < canon - 1; loop++) {
+ const char *x, *y;
+
+ x = mvdata->secstrings +
+ mvdata->sections[mvdata->canonlist[loop + 0]].sh_name;
+ y = mvdata->secstrings +
+ mvdata->sections[mvdata->canonlist[loop + 1]].sh_name;
+
+ if (strcmp(x, y) > 0) {
+ tmp = mvdata->canonlist[loop + 0];
+ mvdata->canonlist[loop + 0] =
+ mvdata->canonlist[loop + 1];
+ mvdata->canonlist[loop + 1] = tmp;
+ changed = 1;
+ }
+ }
+
+ } while(changed);
+
+ for (loop = 0; loop < canon; loop++)
+ mvdata->canonmap[mvdata->canonlist[loop]] = loop + 1;
+
+ return 0;
+
+} /* end module_verify_canonicalise() */
+
+/*****************************************************************************/
+/*
+ * extract a RELA table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
+static int extract_elf_rela(struct module_verify_data *mvdata,
+ int secix,
+ const Elf_Rela *relatab, size_t nrels,
+ const char *sh_name)
+{
+ struct {
+#if defined(MODULES_ARE_ELF32)
+ uint32_t r_offset;
+ uint32_t r_addend;
+ uint32_t st_value;
+ uint32_t st_size;
+ uint16_t st_shndx;
+ uint8_t r_type;
+ uint8_t st_info;
+ uint8_t st_other;
+#elif defined(MODULES_ARE_ELF64)
+ uint64_t r_offset;
+ uint64_t r_addend;
+ uint64_t st_value;
+ uint64_t st_size;
+ uint32_t r_type;
+ uint16_t st_shndx;
+ uint8_t st_info;
+ uint8_t st_other;
+#else
+#error unsupported module type
+#endif
+ } __attribute__((packed)) relocation;
+
+ const Elf_Rela *reloc;
+ const Elf_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { RELA, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ int st_shndx;
+
+ reloc = &relatab[loop];
+
+ /* decode the relocation */
+ relocation.r_offset = reloc->r_offset;
+ relocation.r_addend = reloc->r_addend;
+ relocation.r_type = ELF_R_TYPE(reloc->r_info);
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &mvdata->symbols[ELF_R_SYM(reloc->r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = symbol->st_shndx;
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < mvdata->nsects)
+ relocation.st_shndx = mvdata->canonmap[st_shndx];
+
+ crypto_digest_update_val(mvdata, relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = mvdata->strings + symbol->st_name;
+ crypto_digest_update_data(mvdata,
+ name, strlen(name) + 1);
+ }
+ }
+
+ _debug("%08zx %02x digested the %s section, nrels %zu\n",
+ mvdata->signed_size, mvdata->csum, sh_name, nrels);
+
+ return 0;
+} /* end extract_elf_rela() */
+
+/*****************************************************************************/
+/*
+ * extract a REL table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
+static int extract_elf_rel(struct module_verify_data *mvdata,
+ int secix,
+ const Elf_Rel *reltab, size_t nrels,
+ const char *sh_name)
+{
+ struct {
+#if defined(MODULES_ARE_ELF32)
+ uint32_t r_offset;
+ uint32_t st_value;
+ uint32_t st_size;
+ uint16_t st_shndx;
+ uint8_t r_type;
+ uint8_t st_info;
+ uint8_t st_other;
+#elif defined(MODULES_ARE_ELF64)
+ uint64_t r_offset;
+ uint64_t st_value;
+ uint64_t st_size;
+ uint32_t r_type;
+ uint16_t st_shndx;
+ uint8_t st_info;
+ uint8_t st_other;
+#else
+#error unsupported module type
+#endif
+ } __attribute__((packed)) relocation;
+
+ const Elf_Rel *reloc;
+ const Elf_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { REL, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ int st_shndx;
+
+ reloc = &reltab[loop];
+
+ /* decode the relocation */
+ relocation.r_offset = reloc->r_offset;
+ relocation.r_type = ELF_R_TYPE(reloc->r_info);
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &mvdata->symbols[ELF_R_SYM(reloc->r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = symbol->st_shndx;
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < mvdata->nsects)
+ relocation.st_shndx = mvdata->canonmap[st_shndx];
+
+ crypto_digest_update_val(mvdata, relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = mvdata->strings + symbol->st_name;
+ crypto_digest_update_data(mvdata,
+ name, strlen(name) + 1);
+ }
+ }
+
+ _debug("%08zx %02x digested the %s section, nrels %zu\n",
+ mvdata->signed_size, mvdata->csum, sh_name, nrels);
+
+ return 0;
+} /* end extract_elf_rel() */
+
+static int __init sign_setup(char *str)
+{
+ signedonly = 1;
+ return 0;
+}
+__setup("enforcemodulesig", sign_setup);
--- /dev/null
+/* module-verify.c: module verifier
+ *
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/config.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/elf.h>
+#include <linux/crypto.h>
+#include <linux/crypto/ksign.h>
+#include "module-verify.h"
+
+#if 0
+#define _debug(FMT, ...) printk(FMT, ##__VA_ARGS__)
+#else
+#define _debug(FMT, ...) do {} while (0)
+#endif
+
+static int module_verify_elf(struct module_verify_data *mvdata);
+
+/*****************************************************************************/
+/*
+ * verify a module's integrity
+ * - check the ELF is viable
+ * - check the module's signature if it has one
+ */
+int module_verify(const Elf_Ehdr *hdr, size_t size)
+{
+ struct module_verify_data mvdata;
+ int ret;
+
+ memset(&mvdata, 0, sizeof(mvdata));
+ mvdata.buffer = hdr;
+ mvdata.hdr = hdr;
+ mvdata.size = size;
+
+ ret = module_verify_elf(&mvdata);
+ if (ret < 0) {
+ if (ret == -ELIBBAD)
+ printk("Module failed ELF checks\n");
+ goto error;
+ }
+
+#ifdef CONFIG_MODULE_SIG
+ ret = module_verify_signature(&mvdata);
+#endif
+
+ error:
+ kfree(mvdata.secsizes);
+ kfree(mvdata.canonlist);
+ return ret;
+
+} /* end module_verify() */
+
+/*****************************************************************************/
+/*
+ * verify the ELF structure of a module
+ */
+static int module_verify_elf(struct module_verify_data *mvdata)
+{
+ const Elf_Ehdr *hdr = mvdata->hdr;
+ const Elf_Shdr *section, *section2, *secstop;
+ const Elf_Rela *relas, *rela, *relastop;
+ const Elf_Rel *rels, *rel, *relstop;
+ const Elf_Sym *symbol, *symstop;
+ size_t size, sssize, *secsize, tmp, tmp2;
+ long last;
+ int line;
+
+ size = mvdata->size;
+ mvdata->nsects = hdr->e_shnum;
+
+#define elfcheck(X) \
+do { if (unlikely(!(X))) { line = __LINE__; goto elfcheck_error; } } while(0)
+
+#define seccheck(X) \
+do { if (unlikely(!(X))) { line = __LINE__; goto seccheck_error; } } while(0)
+
+#define symcheck(X) \
+do { if (unlikely(!(X))) { line = __LINE__; goto symcheck_error; } } while(0)
+
+#define relcheck(X) \
+do { if (unlikely(!(X))) { line = __LINE__; goto relcheck_error; } } while(0)
+
+#define relacheck(X) \
+do { if (unlikely(!(X))) { line = __LINE__; goto relacheck_error; } } while(0)
+
+ /* validate the ELF header */
+ elfcheck(hdr->e_ehsize < size);
+ elfcheck(hdr->e_entry == 0);
+ elfcheck(hdr->e_phoff == 0);
+ elfcheck(hdr->e_phnum == 0);
+
+ elfcheck(hdr->e_shnum < SHN_LORESERVE);
+ elfcheck(hdr->e_shoff < size);
+ elfcheck(hdr->e_shoff >= hdr->e_ehsize);
+ elfcheck((hdr->e_shoff & (sizeof(long) - 1)) == 0);
+ elfcheck(hdr->e_shstrndx > 0);
+ elfcheck(hdr->e_shstrndx < hdr->e_shnum);
+ elfcheck(hdr->e_shentsize == sizeof(Elf_Shdr));
+
+ tmp = (size_t) hdr->e_shentsize * (size_t) hdr->e_shnum;
+ elfcheck(tmp < size - hdr->e_shoff);
+
+ /* allocate a table to hold in-file section sizes */
+ mvdata->secsizes = kmalloc(hdr->e_shnum * sizeof(size_t), GFP_KERNEL);
+ if (!mvdata->secsizes)
+ return -ENOMEM;
+
+ memset(mvdata->secsizes, 0, hdr->e_shnum * sizeof(size_t));
+
+ /* validate the ELF section headers */
+ mvdata->sections = mvdata->buffer + hdr->e_shoff;
+ secstop = mvdata->sections + mvdata->nsects;
+
+ sssize = mvdata->sections[hdr->e_shstrndx].sh_size;
+ elfcheck(sssize > 0);
+
+ section = mvdata->sections;
+ seccheck(section->sh_type == SHT_NULL);
+ seccheck(section->sh_size == 0);
+ seccheck(section->sh_offset == 0);
+
+ secsize = mvdata->secsizes + 1;
+ for (section++; section < secstop; secsize++, section++) {
+ seccheck(section->sh_name < sssize);
+ seccheck(section->sh_link < hdr->e_shnum);
+
+ if (section->sh_entsize > 0)
+ seccheck(section->sh_size % section->sh_entsize == 0);
+
+ seccheck(section->sh_offset >= hdr->e_ehsize);
+ seccheck(section->sh_offset < size);
+
+ /* determine the section's in-file size */
+ tmp = size - section->sh_offset;
+ if (section->sh_offset < hdr->e_shoff)
+ tmp = hdr->e_shoff - section->sh_offset;
+
+ for (section2 = mvdata->sections + 1; section2 < secstop; section2++) {
+ if (section->sh_offset < section2->sh_offset) {
+ tmp2 = section2->sh_offset - section->sh_offset;
+ if (tmp2 < tmp)
+ tmp = tmp2;
+ }
+ }
+ *secsize = tmp;
+
+ _debug("Section %ld: %zx bytes at %lx\n",
+ section - mvdata->sections,
+ *secsize,
+ section->sh_offset);
+
+ /* perform section type specific checks */
+ switch (section->sh_type) {
+ case SHT_NOBITS:
+ break;
+
+ case SHT_REL:
+ seccheck(section->sh_entsize == sizeof(Elf_Rel));
+ goto more_rel_checks;
+
+ case SHT_RELA:
+ seccheck(section->sh_entsize == sizeof(Elf_Rela));
+ more_rel_checks:
+ seccheck(section->sh_info > 0);
+ seccheck(section->sh_info < hdr->e_shnum);
+ goto more_sec_checks;
+
+ case SHT_SYMTAB:
+ seccheck(section->sh_entsize == sizeof(Elf_Sym));
+ goto more_sec_checks;
+
+ default:
+ more_sec_checks:
+ /* most types of section must be contained entirely
+ * within the file */
+ seccheck(section->sh_size <= *secsize);
+ break;
+ }
+ }
+
+ /* validate the ELF section names */
+ section = &mvdata->sections[hdr->e_shstrndx];
+
+ seccheck(section->sh_offset != hdr->e_shoff);
+
+ mvdata->secstrings = mvdata->buffer + section->sh_offset;
+
+ last = -1;
+ for (section = mvdata->sections + 1; section < secstop; section++) {
+ const char *secname;
+ tmp = sssize - section->sh_name;
+ secname = mvdata->secstrings + section->sh_name;
+ seccheck(secname[0] != 0);
+ if (section->sh_name > last)
+ last = section->sh_name;
+ }
+
+ if (last > -1) {
+ tmp = sssize - last;
+ elfcheck(memchr(mvdata->secstrings + last, 0, tmp) != NULL);
+ }
+
+ /* look for various sections in the module */
+ for (section = mvdata->sections + 1; section < secstop; section++) {
+ switch (section->sh_type) {
+ case SHT_SYMTAB:
+ if (strcmp(mvdata->secstrings + section->sh_name,
+ ".symtab") == 0
+ ) {
+ seccheck(mvdata->symbols == NULL);
+ mvdata->symbols =
+ mvdata->buffer + section->sh_offset;
+ mvdata->nsyms =
+ section->sh_size / sizeof(Elf_Sym);
+ seccheck(section->sh_size > 0);
+ }
+ break;
+
+ case SHT_STRTAB:
+ if (strcmp(mvdata->secstrings + section->sh_name,
+ ".strtab") == 0
+ ) {
+ seccheck(mvdata->strings == NULL);
+ mvdata->strings =
+ mvdata->buffer + section->sh_offset;
+ sssize = mvdata->nstrings = section->sh_size;
+ seccheck(section->sh_size > 0);
+ }
+ break;
+ }
+ }
+
+ if (!mvdata->symbols) {
+ printk("Couldn't locate module symbol table\n");
+ goto format_error;
+ }
+
+ if (!mvdata->strings) {
+ printk("Couldn't locate module strings table\n");
+ goto format_error;
+ }
+
+ /* validate the symbol table */
+ symstop = mvdata->symbols + mvdata->nsyms;
+
+ symbol = mvdata->symbols;
+ symcheck(ELF_ST_TYPE(symbol[0].st_info) == STT_NOTYPE);
+ symcheck(symbol[0].st_shndx == SHN_UNDEF);
+ symcheck(symbol[0].st_value == 0);
+ symcheck(symbol[0].st_size == 0);
+
+ last = -1;
+ for (symbol++; symbol < symstop; symbol++) {
+ symcheck(symbol->st_name < sssize);
+ if (symbol->st_name > last)
+ last = symbol->st_name;
+ symcheck(symbol->st_shndx < mvdata->nsects ||
+ symbol->st_shndx >= SHN_LORESERVE);
+ }
+
+ if (last > -1) {
+ tmp = sssize - last;
+ elfcheck(memchr(mvdata->strings + last, 0, tmp) != NULL);
+ }
+
+ /* validate each relocation table as best we can */
+ for (section = mvdata->sections + 1; section < secstop; section++) {
+ section2 = mvdata->sections + section->sh_info;
+
+ switch (section->sh_type) {
+ case SHT_REL:
+ rels = mvdata->buffer + section->sh_offset;
+ relstop = mvdata->buffer + section->sh_offset + section->sh_size;
+
+ for (rel = rels; rel < relstop; rel++) {
+ relcheck(rel->r_offset < section2->sh_size);
+ relcheck(ELF_R_SYM(rel->r_info) < mvdata->nsyms);
+ }
+
+ break;
+
+ case SHT_RELA:
+ relas = mvdata->buffer + section->sh_offset;
+ relastop = mvdata->buffer + section->sh_offset + section->sh_size;
+
+ for (rela = relas; rela < relastop; rela++) {
+ relacheck(rela->r_offset < section2->sh_size);
+ relacheck(ELF_R_SYM(rela->r_info) < mvdata->nsyms);
+ }
+
+ break;
+
+ default:
+ break;
+ }
+ }
+
+
+ _debug("ELF okay\n");
+ return 0;
+
+ elfcheck_error:
+ printk("Verify ELF error (assertion %d)\n", line);
+ goto format_error;
+
+ seccheck_error:
+ printk("Verify ELF error [sec %ld] (assertion %d)\n",
+ (long)(section - mvdata->sections), line);
+ goto format_error;
+
+ symcheck_error:
+ printk("Verify ELF error [sym %ld] (assertion %d)\n",
+ (long)(symbol - mvdata->symbols), line);
+ goto format_error;
+
+ relcheck_error:
+ printk("Verify ELF error [sec %ld rel %ld] (assertion %d)\n",
+ (long)(section - mvdata->sections),
+ (long)(rel - rels), line);
+ goto format_error;
+
+ relacheck_error:
+ printk("Verify ELF error [sec %ld rela %ld] (assertion %d)\n",
+ (long)(section - mvdata->sections),
+ (long)(rela - relas), line);
+ goto format_error;
+
+ format_error:
+ return -ELIBBAD;
+
+} /* end module_verify_elf() */
--- /dev/null
+/* module-verify.h: module verification definitions
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/types.h>
+#include <asm/module.h>
+
+struct module_verify_data {
+ struct crypto_tfm *digest; /* module signature digest */
+ const void *buffer; /* module buffer */
+ const Elf_Ehdr *hdr; /* ELF header */
+ const Elf_Shdr *sections; /* ELF section table */
+ const Elf_Sym *symbols; /* ELF symbol table */
+ const char *secstrings; /* ELF section string table */
+ const char *strings; /* ELF string table */
+ size_t *secsizes; /* section size list */
+ size_t size; /* module object size */
+ size_t nsects; /* number of sections */
+ size_t nsyms; /* number of symbols */
+ size_t nstrings; /* size of strings section */
+ size_t signed_size; /* count of bytes contributed to digest */
+ int *canonlist; /* list of canonicalised sections */
+ int *canonmap; /* section canonicalisation map */
+ int sig_index; /* module signature section index */
+ uint8_t xcsum; /* checksum of bytes contributed to digest */
+ uint8_t csum; /* checksum of bytes representing a section */
+};
+
+extern int module_verify(const Elf_Ehdr *hdr, size_t size);
+extern int module_verify_signature(struct module_verify_data *mvdata);
--- /dev/null
+/*
+ * Simple NUMA memory policy for the Linux kernel.
+ *
+ * Copyright 2003,2004 Andi Kleen, SuSE Labs.
+ * Subject to the GNU Public License, version 2.
+ *
+ * NUMA policy allows the user to give hints in which node(s) memory should
+ * be allocated.
+ *
+ * Support four policies per VMA and per process:
+ *
+ * The VMA policy has priority over the process policy for a page fault.
+ *
+ * interleave Allocate memory interleaved over a set of nodes,
+ * with normal fallback if it fails.
+ * For VMA based allocations this interleaves based on the
+ * offset into the backing object or offset into the mapping
+ * for anonymous memory. For process policy a per-process counter
+ * is used.
+ * bind Only allocate memory on a specific set of nodes,
+ * no fallback.
+ * preferred Try a specific node first before normal fallback.
+ * As a special case node -1 here means do the allocation
+ * on the local CPU. This is normally identical to default,
+ * but useful to set in a VMA when you have a non default
+ * process policy.
+ * default Allocate on the local node first, or when on a VMA
+ * use the process policy. This is what Linux always did
+ * in a NUMA aware kernel and still does by, ahem, default.
+ *
+ * The process policy is applied for most non interrupt memory allocations
+ * in that process' context. Interrupts ignore the policies and always
+ * try to allocate on the local CPU. The VMA policy is only applied for memory
+ * allocations for a VMA in the VM.
+ *
+ * Currently there are a few corner cases in swapping where the policy
+ * is not applied, but the majority should be handled. When process policy
+ * is used it is not remembered over swap outs/swap ins.
+ *
+ * Only the highest zone in the zone hierarchy gets policied. Allocations
+ * requesting a lower zone just use default policy. This implies that
+ * on systems with highmem, kernel lowmem allocations don't get policied.
+ * Same with GFP_DMA allocations.
+ *
+ * For shmfs/tmpfs/hugetlbfs shared memory the policy is shared between
+ * all users and remembered even when nobody has memory mapped.
+ */
+
+/* Notebook:
+ fix mmap readahead to honour policy and enable policy for any page cache
+ object
+ statistics for bigpages
+ global policy for page cache? currently it uses process policy. Requires
+ first item above.
+ handle mremap for shared memory (currently ignored for the policy)
+ grows down?
+ make bind policy root only? It can trigger oom much faster and the
+ kernel is not always grateful with that.
+ could replace all the switch()es with a mempolicy_ops structure.
+*/
+
+#include <linux/mempolicy.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/hugetlb.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/nodemask.h>
+#include <linux/gfp.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/module.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/compat.h>
+#include <asm/tlbflush.h>
+#include <asm/uaccess.h>
+
+static kmem_cache_t *policy_cache;
+static kmem_cache_t *sn_cache;
+
+#define PDprintk(fmt...)
+
+/* Highest zone. A specific allocation for a zone below that is not
+ policied. */
+static int policy_zone;
+
+static struct mempolicy default_policy = {
+ .refcnt = ATOMIC_INIT(1), /* never free it */
+ .policy = MPOL_DEFAULT,
+};
+
+/* Check if all specified nodes are online */
+static int nodes_online(unsigned long *nodes)
+{
+ DECLARE_BITMAP(online2, MAX_NUMNODES);
+
+ bitmap_copy(online2, nodes_addr(node_online_map), MAX_NUMNODES);
+ if (bitmap_empty(online2, MAX_NUMNODES))
+ set_bit(0, online2);
+ if (!bitmap_subset(nodes, online2, MAX_NUMNODES))
+ return -EINVAL;
+ return 0;
+}
+
+/* Do sanity checking on a policy */
+static int mpol_check_policy(int mode, unsigned long *nodes)
+{
+ int empty = bitmap_empty(nodes, MAX_NUMNODES);
+
+ switch (mode) {
+ case MPOL_DEFAULT:
+ if (!empty)
+ return -EINVAL;
+ break;
+ case MPOL_BIND:
+ case MPOL_INTERLEAVE:
+ /* Preferred will only use the first bit, but allow
+ more for now. */
+ if (empty)
+ return -EINVAL;
+ break;
+ }
+ return nodes_online(nodes);
+}
+
+/* Copy a node mask from user space. */
+static int get_nodes(unsigned long *nodes, unsigned long __user *nmask,
+ unsigned long maxnode, int mode)
+{
+ unsigned long k;
+ unsigned long nlongs;
+ unsigned long endmask;
+
+ --maxnode;
+ bitmap_zero(nodes, MAX_NUMNODES);
+ if (maxnode == 0 || !nmask)
+ return 0;
+
+ nlongs = BITS_TO_LONGS(maxnode);
+ if ((maxnode % BITS_PER_LONG) == 0)
+ endmask = ~0UL;
+ else
+ endmask = (1UL << (maxnode % BITS_PER_LONG)) - 1;
+
+ /* When the user specified more nodes than supported, just check
+ if the unsupported part is all zero. */
+ if (nlongs > BITS_TO_LONGS(MAX_NUMNODES)) {
+ if (nlongs > PAGE_SIZE/sizeof(long))
+ return -EINVAL;
+ for (k = BITS_TO_LONGS(MAX_NUMNODES); k < nlongs; k++) {
+ unsigned long t;
+ if (get_user(t, nmask + k))
+ return -EFAULT;
+ if (k == nlongs - 1) {
+ if (t & endmask)
+ return -EINVAL;
+ } else if (t)
+ return -EINVAL;
+ }
+ nlongs = BITS_TO_LONGS(MAX_NUMNODES);
+ endmask = ~0UL;
+ }
+
+ if (copy_from_user(nodes, nmask, nlongs*sizeof(unsigned long)))
+ return -EFAULT;
+ nodes[nlongs-1] &= endmask;
+ return mpol_check_policy(mode, nodes);
+}
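+
+/*
+ * Worked example for get_nodes() (illustrative): a caller passing
+ * maxnode = 9 covers node bits 0..7 after the --maxnode adjustment, so
+ * nlongs = 1 and, on a 64-bit host, endmask = (1UL << 8) - 1 = 0xff;
+ * any user-supplied bits above bit 7 are masked off before the policy
+ * check runs.
+ */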
+
+/* Generate a custom zonelist for the BIND policy. */
+static struct zonelist *bind_zonelist(unsigned long *nodes)
+{
+ struct zonelist *zl;
+ int num, max, nd;
+
+ max = 1 + MAX_NR_ZONES * bitmap_weight(nodes, MAX_NUMNODES);
+ zl = kmalloc(sizeof(void *) * max, GFP_KERNEL);
+ if (!zl)
+ return NULL;
+ num = 0;
+ for (nd = find_first_bit(nodes, MAX_NUMNODES);
+ nd < MAX_NUMNODES;
+ nd = find_next_bit(nodes, MAX_NUMNODES, 1+nd)) {
+ int k;
+ for (k = MAX_NR_ZONES-1; k >= 0; k--) {
+ struct zone *z = &NODE_DATA(nd)->node_zones[k];
+ if (!z->present_pages)
+ continue;
+ zl->zones[num++] = z;
+ if (k > policy_zone)
+ policy_zone = k;
+ }
+ }
+ BUG_ON(num >= max);
+ zl->zones[num] = NULL;
+ return zl;
+}
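+
+/*
+ * Sizing note (illustrative): with two nodes set and MAX_NR_ZONES = 3
+ * (DMA, NORMAL, HIGHMEM), bind_zonelist() reserves 1 + 3 * 2 = 7
+ * pointer slots: room for up to six present zones plus the NULL
+ * terminator.
+ */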
+
+/* Create a new policy */
+static struct mempolicy *mpol_new(int mode, unsigned long *nodes)
+{
+ struct mempolicy *policy;
+
+ PDprintk("setting mode %d nodes[0] %lx\n", mode, nodes[0]);
+ if (mode == MPOL_DEFAULT)
+ return NULL;
+ policy = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+ if (!policy)
+ return ERR_PTR(-ENOMEM);
+ atomic_set(&policy->refcnt, 1);
+ switch (mode) {
+ case MPOL_INTERLEAVE:
+ bitmap_copy(policy->v.nodes, nodes, MAX_NUMNODES);
+ break;
+ case MPOL_PREFERRED:
+ policy->v.preferred_node = find_first_bit(nodes, MAX_NUMNODES);
+ if (policy->v.preferred_node >= MAX_NUMNODES)
+ policy->v.preferred_node = -1;
+ break;
+ case MPOL_BIND:
+ policy->v.zonelist = bind_zonelist(nodes);
+ if (policy->v.zonelist == NULL) {
+ kmem_cache_free(policy_cache, policy);
+ return ERR_PTR(-ENOMEM);
+ }
+ break;
+ }
+ policy->policy = mode;
+ return policy;
+}
+
+/* Ensure all existing pages follow the policy. */
+static int
+verify_pages(unsigned long addr, unsigned long end, unsigned long *nodes)
+{
+ while (addr < end) {
+ struct page *p;
+ pte_t *pte;
+ pmd_t *pmd;
+ pgd_t *pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd)) {
+ addr = (addr + PGDIR_SIZE) & PGDIR_MASK;
+ continue;
+ }
+ pmd = pmd_offset(pgd, addr);
+ if (pmd_none(*pmd)) {
+ addr = (addr + PMD_SIZE) & PMD_MASK;
+ continue;
+ }
+ p = NULL;
+ pte = pte_offset_map(pmd, addr);
+ if (pte_present(*pte))
+ p = pte_page(*pte);
+ pte_unmap(pte);
+ if (p) {
+ unsigned nid = page_to_nid(p);
+ if (!test_bit(nid, nodes))
+ return -EIO;
+ }
+ addr += PAGE_SIZE;
+ }
+ return 0;
+}
+
+/* Step 1: check the range */
+static struct vm_area_struct *
+check_range(struct mm_struct *mm, unsigned long start, unsigned long end,
+ unsigned long *nodes, unsigned long flags)
+{
+ int err;
+ struct vm_area_struct *first, *vma, *prev;
+
+ first = find_vma(mm, start);
+ if (!first)
+ return ERR_PTR(-EFAULT);
+ prev = NULL;
+ for (vma = first; vma && vma->vm_start < end; vma = vma->vm_next) {
+ if (!vma->vm_next && vma->vm_end < end)
+ return ERR_PTR(-EFAULT);
+ if (prev && prev->vm_end < vma->vm_start)
+ return ERR_PTR(-EFAULT);
+ if ((flags & MPOL_MF_STRICT) && !is_vm_hugetlb_page(vma)) {
+ err = verify_pages(vma->vm_start, vma->vm_end, nodes);
+ if (err) {
+ first = ERR_PTR(err);
+ break;
+ }
+ }
+ prev = vma;
+ }
+ return first;
+}
+
+/* Apply policy to a single VMA */
+static int policy_vma(struct vm_area_struct *vma, struct mempolicy *new)
+{
+ int err = 0;
+ struct mempolicy *old = vma->vm_policy;
+
+ PDprintk("vma %lx-%lx/%lx vm_ops %p vm_file %p set_policy %p\n",
+ vma->vm_start, vma->vm_end, vma->vm_pgoff,
+ vma->vm_ops, vma->vm_file,
+ vma->vm_ops ? vma->vm_ops->set_policy : NULL);
+
+ if (vma->vm_ops && vma->vm_ops->set_policy)
+ err = vma->vm_ops->set_policy(vma, new);
+ if (!err) {
+ mpol_get(new);
+ vma->vm_policy = new;
+ mpol_free(old);
+ }
+ return err;
+}
+
+/* Step 2: apply policy to a range and do splits. */
+static int mbind_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end, struct mempolicy *new)
+{
+ struct vm_area_struct *next;
+ int err;
+
+ err = 0;
+ for (; vma && vma->vm_start < end; vma = next) {
+ next = vma->vm_next;
+ if (vma->vm_start < start)
+ err = split_vma(vma->vm_mm, vma, start, 1);
+ if (!err && vma->vm_end > end)
+ err = split_vma(vma->vm_mm, vma, end, 0);
+ if (!err)
+ err = policy_vma(vma, new);
+ if (err)
+ break;
+ }
+ return err;
+}
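+
+/*
+ * Split example (illustrative): applying a policy to [start, end) that
+ * covers only the middle of a VMA [A, B) first splits at start, giving
+ * [A, start) and [start, B), then splits at end, giving [start, end)
+ * and [end, B); the new policy is applied to [start, end) only.
+ */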
+
+/* Change policy for a memory range */
+asmlinkage long sys_mbind(unsigned long start, unsigned long len,
+ unsigned long mode,
+ unsigned long __user *nmask, unsigned long maxnode,
+ unsigned flags)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ struct mempolicy *new;
+ unsigned long end;
+ DECLARE_BITMAP(nodes, MAX_NUMNODES);
+ int err;
+
+ if ((flags & ~(unsigned long)(MPOL_MF_STRICT)) || mode > MPOL_MAX)
+ return -EINVAL;
+ if (start & ~PAGE_MASK)
+ return -EINVAL;
+ if (mode == MPOL_DEFAULT)
+ flags &= ~MPOL_MF_STRICT;
+ len = (len + PAGE_SIZE - 1) & PAGE_MASK;
+ end = start + len;
+ if (end < start)
+ return -EINVAL;
+ if (end == start)
+ return 0;
+
+ err = get_nodes(nodes, nmask, maxnode, mode);
+ if (err)
+ return err;
+
+ new = mpol_new(mode, nodes);
+ if (IS_ERR(new))
+ return PTR_ERR(new);
+
+ PDprintk("mbind %lx-%lx mode:%ld nodes:%lx\n",start,start+len,
+ mode,nodes[0]);
+
+ down_write(&mm->mmap_sem);
+ vma = check_range(mm, start, end, nodes, flags);
+ err = PTR_ERR(vma);
+ if (!IS_ERR(vma))
+ err = mbind_range(vma, start, end, new);
+ up_write(&mm->mmap_sem);
+ mpol_free(new);
+ return err;
+}
+
+/* Set the process memory policy */
+asmlinkage long sys_set_mempolicy(int mode, unsigned long __user *nmask,
+ unsigned long maxnode)
+{
+ int err;
+ struct mempolicy *new;
+ DECLARE_BITMAP(nodes, MAX_NUMNODES);
+
+ if (mode > MPOL_MAX)
+ return -EINVAL;
+ err = get_nodes(nodes, nmask, maxnode, mode);
+ if (err)
+ return err;
+ new = mpol_new(mode, nodes);
+ if (IS_ERR(new))
+ return PTR_ERR(new);
+ mpol_free(current->mempolicy);
+ current->mempolicy = new;
+ if (new && new->policy == MPOL_INTERLEAVE)
+ current->il_next = find_first_bit(new->v.nodes, MAX_NUMNODES);
+ return 0;
+}
+
+/* Fill a zone bitmap for a policy */
+static void get_zonemask(struct mempolicy *p, unsigned long *nodes)
+{
+ int i;
+
+ bitmap_zero(nodes, MAX_NUMNODES);
+ switch (p->policy) {
+ case MPOL_BIND:
+ for (i = 0; p->v.zonelist->zones[i]; i++)
+ __set_bit(p->v.zonelist->zones[i]->zone_pgdat->node_id, nodes);
+ break;
+ case MPOL_DEFAULT:
+ break;
+ case MPOL_INTERLEAVE:
+ bitmap_copy(nodes, p->v.nodes, MAX_NUMNODES);
+ break;
+ case MPOL_PREFERRED:
+ /* or use current node instead of online map? */
+ if (p->v.preferred_node < 0)
+ bitmap_copy(nodes, nodes_addr(node_online_map), MAX_NUMNODES);
+ else
+ __set_bit(p->v.preferred_node, nodes);
+ break;
+ default:
+ BUG();
+ }
+}
+
+static int lookup_node(struct mm_struct *mm, unsigned long addr)
+{
+ struct page *p;
+ int err;
+
+ err = get_user_pages(current, mm, addr & PAGE_MASK, 1, 0, 0, &p, NULL);
+ if (err >= 0) {
+ err = page_to_nid(p);
+ put_page(p);
+ }
+ return err;
+}
+
+/* Copy a kernel node mask to user space */
+static int copy_nodes_to_user(unsigned long __user *mask, unsigned long maxnode,
+ void *nodes, unsigned nbytes)
+{
+ unsigned long copy = ALIGN(maxnode-1, 64) / 8;
+
+ if (copy > nbytes) {
+ if (copy > PAGE_SIZE)
+ return -EINVAL;
+ if (clear_user((char __user *)mask + nbytes, copy - nbytes))
+ return -EFAULT;
+ copy = nbytes;
+ }
+ return copy_to_user(mask, nodes, copy) ? -EFAULT : 0;
+}
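+
+/*
+ * Worked example for copy_nodes_to_user() (illustrative): for
+ * maxnode = 100, copy = ALIGN(99, 64) / 8 = 16 bytes; if the kernel
+ * bitmap is only nbytes = 8 bytes, the trailing 8 user bytes are
+ * cleared first and then the 8 real bytes are copied out.
+ */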
+
+/* Retrieve NUMA policy */
+asmlinkage long sys_get_mempolicy(int __user *policy,
+ unsigned long __user *nmask,
+ unsigned long maxnode,
+ unsigned long addr, unsigned long flags)
+{
+ int err, pval;
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma = NULL;
+ struct mempolicy *pol = current->mempolicy;
+
+ if (flags & ~(unsigned long)(MPOL_F_NODE|MPOL_F_ADDR))
+ return -EINVAL;
+ if (nmask != NULL && maxnode < numnodes)
+ return -EINVAL;
+ if (flags & MPOL_F_ADDR) {
+ down_read(&mm->mmap_sem);
+ vma = find_vma_intersection(mm, addr, addr+1);
+ if (!vma) {
+ up_read(&mm->mmap_sem);
+ return -EFAULT;
+ }
+ if (vma->vm_ops && vma->vm_ops->get_policy)
+ pol = vma->vm_ops->get_policy(vma, addr);
+ else
+ pol = vma->vm_policy;
+ } else if (addr)
+ return -EINVAL;
+
+ if (!pol)
+ pol = &default_policy;
+
+ if (flags & MPOL_F_NODE) {
+ if (flags & MPOL_F_ADDR) {
+ err = lookup_node(mm, addr);
+ if (err < 0)
+ goto out;
+ pval = err;
+ } else if (pol == current->mempolicy &&
+ pol->policy == MPOL_INTERLEAVE) {
+ pval = current->il_next;
+ } else {
+ err = -EINVAL;
+ goto out;
+ }
+ } else
+ pval = pol->policy;
+
+ err = -EFAULT;
+ if (policy && put_user(pval, policy))
+ goto out;
+
+ err = 0;
+ if (nmask) {
+ DECLARE_BITMAP(nodes, MAX_NUMNODES);
+ get_zonemask(pol, nodes);
+ err = copy_nodes_to_user(nmask, maxnode, nodes, sizeof(nodes));
+ }
+
+ out:
+ if (vma)
+ up_read(&current->mm->mmap_sem);
+ return err;
+}
+
+#ifdef CONFIG_COMPAT
+
+asmlinkage long compat_sys_get_mempolicy(int __user *policy,
+ compat_ulong_t __user *nmask,
+ compat_ulong_t maxnode,
+ compat_ulong_t addr, compat_ulong_t flags)
+{
+ long err;
+ unsigned long __user *nm = NULL;
+ unsigned long nr_bits, alloc_size;
+ DECLARE_BITMAP(bm, MAX_NUMNODES);
+
+ nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
+ alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
+
+ if (nmask)
+ nm = compat_alloc_user_space(alloc_size);
+
+ err = sys_get_mempolicy(policy, nm, nr_bits+1, addr, flags);
+
+ if (!err && nmask) {
+ err = copy_from_user(bm, nm, alloc_size);
+ /* ensure entire bitmap is zeroed */
+ err |= clear_user(nmask, ALIGN(maxnode-1, 8) / 8);
+ err |= compat_put_bitmap(nmask, bm, nr_bits);
+ }
+
+ return err;
+}
+
+asmlinkage long compat_sys_set_mempolicy(int mode, compat_ulong_t __user *nmask,
+ compat_ulong_t maxnode)
+{
+ long err = 0;
+ unsigned long __user *nm = NULL;
+ unsigned long nr_bits, alloc_size;
+ DECLARE_BITMAP(bm, MAX_NUMNODES);
+
+ nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
+ alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
+
+ if (nmask) {
+ err = compat_get_bitmap(bm, nmask, nr_bits);
+ nm = compat_alloc_user_space(alloc_size);
+ err |= copy_to_user(nm, bm, alloc_size);
+ }
+
+ if (err)
+ return -EFAULT;
+
+ return sys_set_mempolicy(mode, nm, nr_bits+1);
+}
+
+asmlinkage long compat_sys_mbind(compat_ulong_t start, compat_ulong_t len,
+ compat_ulong_t mode, compat_ulong_t __user *nmask,
+ compat_ulong_t maxnode, compat_ulong_t flags)
+{
+ long err = 0;
+ unsigned long __user *nm = NULL;
+ unsigned long nr_bits, alloc_size;
+ DECLARE_BITMAP(bm, MAX_NUMNODES);
+
+ nr_bits = min_t(unsigned long, maxnode-1, MAX_NUMNODES);
+ alloc_size = ALIGN(nr_bits, BITS_PER_LONG) / 8;
+
+ if (nmask) {
+ err = compat_get_bitmap(bm, nmask, nr_bits);
+ nm = compat_alloc_user_space(alloc_size);
+ err |= copy_to_user(nm, bm, alloc_size);
+ }
+
+ if (err)
+ return -EFAULT;
+
+ return sys_mbind(start, len, mode, nm, nr_bits+1, flags);
+}
+
+#endif
+
+/* Return effective policy for a VMA */
+static struct mempolicy *
+get_vma_policy(struct vm_area_struct *vma, unsigned long addr)
+{
+ struct mempolicy *pol = current->mempolicy;
+
+ if (vma) {
+ if (vma->vm_ops && vma->vm_ops->get_policy)
+ pol = vma->vm_ops->get_policy(vma, addr);
+ else if (vma->vm_policy &&
+ vma->vm_policy->policy != MPOL_DEFAULT)
+ pol = vma->vm_policy;
+ }
+ if (!pol)
+ pol = &default_policy;
+ return pol;
+}
+
+/* Return a zonelist representing a mempolicy */
+static struct zonelist *zonelist_policy(unsigned gfp, struct mempolicy *policy)
+{
+ int nd;
+
+ switch (policy->policy) {
+ case MPOL_PREFERRED:
+ nd = policy->v.preferred_node;
+ if (nd < 0)
+ nd = numa_node_id();
+ break;
+ case MPOL_BIND:
+ /* Lower zones don't get a policy applied */
+ if (gfp >= policy_zone)
+ return policy->v.zonelist;
+ /*FALL THROUGH*/
+ case MPOL_INTERLEAVE: /* should not happen */
+ case MPOL_DEFAULT:
+ nd = numa_node_id();
+ break;
+ default:
+ nd = 0;
+ BUG();
+ }
+ return NODE_DATA(nd)->node_zonelists + (gfp & GFP_ZONEMASK);
+}
+
+/* Do dynamic interleaving for a process */
+static unsigned interleave_nodes(struct mempolicy *policy)
+{
+ unsigned nid, next;
+ struct task_struct *me = current;
+
+ nid = me->il_next;
+ BUG_ON(nid >= MAX_NUMNODES);
+ next = find_next_bit(policy->v.nodes, MAX_NUMNODES, 1+nid);
+ if (next >= MAX_NUMNODES)
+ next = find_first_bit(policy->v.nodes, MAX_NUMNODES);
+ me->il_next = next;
+ return nid;
+}
+
+/* Do static interleaving for a VMA with known offset. */
+static unsigned offset_il_node(struct mempolicy *pol,
+ struct vm_area_struct *vma, unsigned long off)
+{
+ unsigned nnodes = bitmap_weight(pol->v.nodes, MAX_NUMNODES);
+ unsigned target = (unsigned)off % nnodes;
+ int c;
+ int nid = -1;
+
+ c = 0;
+ do {
+ nid = find_next_bit(pol->v.nodes, MAX_NUMNODES, nid+1);
+ c++;
+ } while (c <= target);
+ BUG_ON(nid >= MAX_NUMNODES);
+ BUG_ON(!test_bit(nid, pol->v.nodes));
+ return nid;
+}
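+
+/*
+ * Worked example for offset_il_node() (illustrative): with nodes
+ * {0, 2, 5} set in pol->v.nodes and off = 7, nnodes = 3 and
+ * target = 7 % 3 = 1, so the loop stops at the second set bit and
+ * returns nid = 2.
+ */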
+
+/* Allocate a page in interleaved policy.
+ Own path because it needs to do special accounting. */
+static struct page *alloc_page_interleave(unsigned gfp, unsigned order, unsigned nid)
+{
+ struct zonelist *zl;
+ struct page *page;
+
+ BUG_ON(!node_online(nid));
+ zl = NODE_DATA(nid)->node_zonelists + (gfp & GFP_ZONEMASK);
+ page = __alloc_pages(gfp, order, zl);
+ if (page && page_zone(page) == zl->zones[0]) {
+ zl->zones[0]->pageset[get_cpu()].interleave_hit++;
+ put_cpu();
+ }
+ return page;
+}
+
+/**
+ * alloc_page_vma - Allocate a page for a VMA.
+ *
+ * @gfp:
+ * %GFP_USER user allocation.
+ * %GFP_KERNEL kernel allocations,
+ * %GFP_HIGHMEM highmem/user allocations,
+ * %GFP_FS allocation should not call back into a file system.
+ * %GFP_ATOMIC don't sleep.
+ *
+ * @vma: Pointer to VMA or NULL if not available.
+ * @addr: Virtual Address of the allocation. Must be inside the VMA.
+ *
+ * This function allocates a page from the kernel page pool and applies
+ * a NUMA policy associated with the VMA or the current process.
+ * When the VMA is not NULL, the caller must hold down_read on the
+ * mmap_sem of the VMA's mm_struct to prevent it from going away.
+ * Should be used for all allocations of pages that will be mapped
+ * into user space. Returns NULL when no page can be allocated.
+ */
+struct page *
+alloc_page_vma(unsigned gfp, struct vm_area_struct *vma, unsigned long addr)
+{
+ struct mempolicy *pol = get_vma_policy(vma, addr);
+
+ if (unlikely(pol->policy == MPOL_INTERLEAVE)) {
+ unsigned nid;
+ if (vma) {
+ unsigned long off;
+ BUG_ON(addr >= vma->vm_end);
+ BUG_ON(addr < vma->vm_start);
+ off = vma->vm_pgoff;
+ off += (addr - vma->vm_start) >> PAGE_SHIFT;
+ nid = offset_il_node(pol, vma, off);
+ } else {
+ /* fall back to process interleaving */
+ nid = interleave_nodes(pol);
+ }
+ return alloc_page_interleave(gfp, 0, nid);
+ }
+ return __alloc_pages(gfp, 0, zonelist_policy(gfp, pol));
+}
+
+/**
+ * alloc_pages_current - Allocate pages.
+ *
+ * @gfp:
+ * %GFP_USER user allocation,
+ * %GFP_KERNEL kernel allocation,
+ * %GFP_HIGHMEM highmem allocation,
+ * %GFP_FS don't call back into a file system.
+ * %GFP_ATOMIC don't sleep.
+ * @order: Power of two of allocation size in pages. 0 is a single page.
+ *
+ * Allocate a page from the kernel page pool. When not in
+ * interrupt context, apply the current process' NUMA policy.
+ * Returns NULL when no page can be allocated.
+ */
+struct page *alloc_pages_current(unsigned gfp, unsigned order)
+{
+ struct mempolicy *pol = current->mempolicy;
+
+ if (!pol || in_interrupt())
+ pol = &default_policy;
+ if (pol->policy == MPOL_INTERLEAVE)
+ return alloc_page_interleave(gfp, order, interleave_nodes(pol));
+ return __alloc_pages(gfp, order, zonelist_policy(gfp, pol));
+}
+EXPORT_SYMBOL(alloc_pages_current);
+
+/* Slow path of a mempolicy copy */
+struct mempolicy *__mpol_copy(struct mempolicy *old)
+{
+ struct mempolicy *new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
+
+ if (!new)
+ return ERR_PTR(-ENOMEM);
+ *new = *old;
+ atomic_set(&new->refcnt, 1);
+ if (new->policy == MPOL_BIND) {
+ int sz = ksize(old->v.zonelist);
+ new->v.zonelist = kmalloc(sz, SLAB_KERNEL);
+ if (!new->v.zonelist) {
+ kmem_cache_free(policy_cache, new);
+ return ERR_PTR(-ENOMEM);
+ }
+ memcpy(new->v.zonelist, old->v.zonelist, sz);
+ }
+ return new;
+}
+
+/* Slow path of a mempolicy comparison */
+int __mpol_equal(struct mempolicy *a, struct mempolicy *b)
+{
+ if (!a || !b)
+ return 0;
+ if (a->policy != b->policy)
+ return 0;
+ switch (a->policy) {
+ case MPOL_DEFAULT:
+ return 1;
+ case MPOL_INTERLEAVE:
+ return bitmap_equal(a->v.nodes, b->v.nodes, MAX_NUMNODES);
+ case MPOL_PREFERRED:
+ return a->v.preferred_node == b->v.preferred_node;
+ case MPOL_BIND: {
+ int i;
+ for (i = 0; a->v.zonelist->zones[i]; i++)
+ if (a->v.zonelist->zones[i] != b->v.zonelist->zones[i])
+ return 0;
+ return b->v.zonelist->zones[i] == NULL;
+ }
+ default:
+ BUG();
+ return 0;
+ }
+}
+
+/* Slow path of a mpol destructor. */
+void __mpol_free(struct mempolicy *p)
+{
+ if (!atomic_dec_and_test(&p->refcnt))
+ return;
+ if (p->policy == MPOL_BIND)
+ kfree(p->v.zonelist);
+ p->policy = MPOL_DEFAULT;
+ kmem_cache_free(policy_cache, p);
+}
+
+/*
+ * Hugetlb policy. Same as above, just works with node numbers instead of
+ * zonelists.
+ */
+
+/* Find first node suitable for an allocation */
+int mpol_first_node(struct vm_area_struct *vma, unsigned long addr)
+{
+ struct mempolicy *pol = get_vma_policy(vma, addr);
+
+ switch (pol->policy) {
+ case MPOL_DEFAULT:
+ return numa_node_id();
+ case MPOL_BIND:
+ return pol->v.zonelist->zones[0]->zone_pgdat->node_id;
+ case MPOL_INTERLEAVE:
+ return interleave_nodes(pol);
+ case MPOL_PREFERRED:
+ return pol->v.preferred_node >= 0 ?
+ pol->v.preferred_node : numa_node_id();
+ }
+ BUG();
+ return 0;
+}
+
+/* Find secondary valid nodes for an allocation */
+int mpol_node_valid(int nid, struct vm_area_struct *vma, unsigned long addr)
+{
+ struct mempolicy *pol = get_vma_policy(vma, addr);
+
+ switch (pol->policy) {
+ case MPOL_PREFERRED:
+ case MPOL_DEFAULT:
+ case MPOL_INTERLEAVE:
+ return 1;
+ case MPOL_BIND: {
+ struct zone **z;
+ for (z = pol->v.zonelist->zones; *z; z++)
+ if ((*z)->zone_pgdat->node_id == nid)
+ return 1;
+ return 0;
+ }
+ default:
+ BUG();
+ return 0;
+ }
+}
+
+/*
+ * Shared memory backing store policy support.
+ *
+ * Remember policies even when nobody has shared memory mapped.
+ * The policies are kept in a red-black tree linked from the inode.
+ * They are protected by the sp->lock spinlock, which should be held
+ * for any accesses to the tree.
+ */
+
+/* lookup first element intersecting start-end */
+/* Caller holds sp->lock */
+static struct sp_node *
+sp_lookup(struct shared_policy *sp, unsigned long start, unsigned long end)
+{
+ struct rb_node *n = sp->root.rb_node;
+
+ while (n) {
+ struct sp_node *p = rb_entry(n, struct sp_node, nd);
+
+ if (start >= p->end)
+ n = n->rb_right;
+ else if (end <= p->start)
+ n = n->rb_left;
+ else
+ break;
+ }
+ if (!n)
+ return NULL;
+ for (;;) {
+ struct sp_node *w = NULL;
+ struct rb_node *prev = rb_prev(n);
+ if (!prev)
+ break;
+ w = rb_entry(prev, struct sp_node, nd);
+ if (w->end <= start)
+ break;
+ n = prev;
+ }
+ return rb_entry(n, struct sp_node, nd);
+}
+
+/* Insert a new shared policy into the list. */
+/* Caller holds sp->lock */
+static void sp_insert(struct shared_policy *sp, struct sp_node *new)
+{
+ struct rb_node **p = &sp->root.rb_node;
+ struct rb_node *parent = NULL;
+ struct sp_node *nd;
+
+ while (*p) {
+ parent = *p;
+ nd = rb_entry(parent, struct sp_node, nd);
+ if (new->start < nd->start)
+ p = &(*p)->rb_left;
+ else if (new->end > nd->end)
+ p = &(*p)->rb_right;
+ else
+ BUG();
+ }
+ rb_link_node(&new->nd, parent, p);
+ rb_insert_color(&new->nd, &sp->root);
+ PDprintk("inserting %lx-%lx: %d\n", new->start, new->end,
+ new->policy ? new->policy->policy : 0);
+}
+
+/* Find shared policy intersecting idx */
+struct mempolicy *
+mpol_shared_policy_lookup(struct shared_policy *sp, unsigned long idx)
+{
+ struct mempolicy *pol = NULL;
+ struct sp_node *sn;
+
+ if (!sp->root.rb_node)
+ return NULL;
+ spin_lock(&sp->lock);
+ sn = sp_lookup(sp, idx, idx+1);
+ if (sn) {
+ mpol_get(sn->policy);
+ pol = sn->policy;
+ }
+ spin_unlock(&sp->lock);
+ return pol;
+}
+
+static void sp_delete(struct shared_policy *sp, struct sp_node *n)
+{
+ PDprintk("deleting %lx-l%x\n", n->start, n->end);
+ rb_erase(&n->nd, &sp->root);
+ mpol_free(n->policy);
+ kmem_cache_free(sn_cache, n);
+}
+
+struct sp_node *
+sp_alloc(unsigned long start, unsigned long end, struct mempolicy *pol)
+{
+ struct sp_node *n = kmem_cache_alloc(sn_cache, GFP_KERNEL);
+
+ if (!n)
+ return NULL;
+ n->start = start;
+ n->end = end;
+ mpol_get(pol);
+ n->policy = pol;
+ return n;
+}
+
+/* Replace a policy range. */
+static int shared_policy_replace(struct shared_policy *sp, unsigned long start,
+ unsigned long end, struct sp_node *new)
+{
+ struct sp_node *n, *new2 = NULL;
+
+restart:
+ spin_lock(&sp->lock);
+ n = sp_lookup(sp, start, end);
+ /* Take care of old policies in the same range. */
+ while (n && n->start < end) {
+ struct rb_node *next = rb_next(&n->nd);
+ if (n->start >= start) {
+ if (n->end <= end)
+ sp_delete(sp, n);
+ else
+ n->start = end;
+ } else {
+ /* Old policy spanning whole new range. */
+ if (n->end > end) {
+ if (!new2) {
+ spin_unlock(&sp->lock);
+ new2 = sp_alloc(end, n->end, n->policy);
+ if (!new2)
+ return -ENOMEM;
+ goto restart;
+ }
+ n->end = end;
+ sp_insert(sp, new2);
+ new2 = NULL;
+ }
+ /* Old crossing beginning, but not end (easy) */
+ if (n->start < start && n->end > start)
+ n->end = start;
+ }
+ if (!next)
+ break;
+ n = rb_entry(next, struct sp_node, nd);
+ }
+ if (new)
+ sp_insert(sp, new);
+ spin_unlock(&sp->lock);
+ if (new2) {
+ mpol_free(new2->policy);
+ kmem_cache_free(sn_cache, new2);
+ }
+ return 0;
+}
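+
+/*
+ * Replacement example (illustrative): inserting [start, end) into a
+ * tree holding a single node [a, b) with a < start and b > end first
+ * allocates new2 = [end, b) with the lock dropped (hence the restart),
+ * then truncates the old node to [a, start), inserts new2, and finally
+ * inserts the new node, leaving [a, start), [start, end), [end, b).
+ */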
+
+int mpol_set_shared_policy(struct shared_policy *info,
+ struct vm_area_struct *vma, struct mempolicy *npol)
+{
+ int err;
+ struct sp_node *new = NULL;
+ unsigned long sz = vma_pages(vma);
+
+ PDprintk("set_shared_policy %lx sz %lu %d %lx\n",
+ vma->vm_pgoff,
+ sz, npol? npol->policy : -1,
+ npol ? npol->v.nodes[0] : -1);
+
+ if (npol) {
+ new = sp_alloc(vma->vm_pgoff, vma->vm_pgoff + sz, npol);
+ if (!new)
+ return -ENOMEM;
+ }
+ err = shared_policy_replace(info, vma->vm_pgoff, vma->vm_pgoff+sz, new);
+ if (err && new)
+ kmem_cache_free(sn_cache, new);
+ return err;
+}
+
+/* Free a backing policy store on inode delete. */
+void mpol_free_shared_policy(struct shared_policy *p)
+{
+ struct sp_node *n;
+ struct rb_node *next;
+
+ if (!p->root.rb_node)
+ return;
+ spin_lock(&p->lock);
+ next = rb_first(&p->root);
+ while (next) {
+ n = rb_entry(next, struct sp_node, nd);
+ next = rb_next(&n->nd);
+ rb_erase(&n->nd, &p->root);
+ mpol_free(n->policy);
+ kmem_cache_free(sn_cache, n);
+ }
+ spin_unlock(&p->lock);
+}
+
+/* assumes fs == KERNEL_DS */
+void __init numa_policy_init(void)
+{
+ policy_cache = kmem_cache_create("numa_policy",
+ sizeof(struct mempolicy),
+ 0, SLAB_PANIC, NULL, NULL);
+
+ sn_cache = kmem_cache_create("shared_policy_node",
+ sizeof(struct sp_node),
+ 0, SLAB_PANIC, NULL, NULL);
+
+ /* Set interleaving policy for system init. This way not all
+ the data structures allocated at system boot end up in node zero. */
+
+ if (sys_set_mempolicy(MPOL_INTERLEAVE, nodes_addr(node_online_map),
+ MAX_NUMNODES) < 0)
+ printk("numa_policy_init: interleaving failed\n");
+}
+
+/* Reset policy of current process to default.
+ * Assumes fs == KERNEL_DS */
+void numa_default_policy(void)
+{
+ sys_set_mempolicy(MPOL_DEFAULT, NULL, 0);
+}
--- /dev/null
+/*
+ * mm/prio_tree.c - priority search tree for mapping->i_mmap
+ *
+ * Copyright (C) 2004, Rajesh Venkatasubramanian <vrajesh@umich.edu>
+ *
+ * This file is released under the GPL v2.
+ *
+ * Based on the radix priority search tree proposed by Edward M. McCreight
+ * SIAM Journal of Computing, vol. 14, no.2, pages 257-276, May 1985
+ *
+ * 02Feb2004 Initial version
+ */
+
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/mm.h>
+#include <linux/prio_tree.h>
+
+/*
+ * A clever mix of heap and radix trees forms a radix priority search tree (PST)
+ * which is useful for storing intervals, e.g., we can consider a vma as a closed
+ * interval of file pages [offset_begin, offset_end], and store all vmas that
+ * map a file in a PST. Then, using the PST, we can answer a stabbing query,
+ * i.e., selecting a set of stored intervals (vmas) that overlap with (map) a
+ * given input interval X (a set of consecutive file pages), in "O(log n + m)"
+ * time where 'log n' is the height of the PST, and 'm' is the number of stored
+ * intervals (vmas) that overlap (map) with the input interval X (the set of
+ * consecutive file pages).
+ *
+ * In our implementation, we store closed intervals of the form [radix_index,
+ * heap_index]. We assume that always radix_index <= heap_index. McCreight's PST
+ * is designed for storing intervals with unique radix indices, i.e., each
+ * interval have different radix_index. However, this limitation can be easily
+ * overcome by using the size, i.e., heap_index - radix_index, as part of the
+ * index, so we index the tree using [(radix_index,size), heap_index].
+ *
+ * When the above-mentioned indexing scheme is used, theoretically, on a 32-bit
+ * machine, the maximum height of a PST can be 64. We can use a balanced version
+ * of the priority search tree to optimize the tree height, but the balanced
+ * tree proposed by McCreight is too complex and memory-hungry for our purpose.
+ */
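+
+/*
+ * Indexing example (illustrative): a vma mapping file pages 4..7 has
+ * radix_index = 4, size = 4 and heap_index = 4 + (4 - 1) = 7, so it is
+ * stored under [(4, 4), 7]; a vma mapping pages 4..9 shares the radix
+ * index but differs in size, keeping the combined index unique.
+ */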
+
+/*
+ * The following macros are used for implementing prio_tree for i_mmap
+ */
+
+#define RADIX_INDEX(vma) ((vma)->vm_pgoff)
+#define VMA_SIZE(vma) (((vma)->vm_end - (vma)->vm_start) >> PAGE_SHIFT)
+/* avoid overflow */
+#define HEAP_INDEX(vma) ((vma)->vm_pgoff + (VMA_SIZE(vma) - 1))
+
+#define GET_INDEX_VMA(vma, radix, heap) \
+do { \
+ radix = RADIX_INDEX(vma); \
+ heap = HEAP_INDEX(vma); \
+} while (0)
+
+#define GET_INDEX(node, radix, heap) \
+do { \
+ struct vm_area_struct *__tmp = \
+ prio_tree_entry(node, struct vm_area_struct, shared.prio_tree_node);\
+ GET_INDEX_VMA(__tmp, radix, heap); \
+} while (0)
+
+static unsigned long index_bits_to_maxindex[BITS_PER_LONG];
+
+void __init prio_tree_init(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(index_bits_to_maxindex) - 1; i++)
+ index_bits_to_maxindex[i] = (1UL << (i + 1)) - 1;
+ index_bits_to_maxindex[ARRAY_SIZE(index_bits_to_maxindex) - 1] = ~0UL;
+}
+
+/*
+ * Maximum heap_index that can be stored in a PST with index_bits bits
+ */
+static inline unsigned long prio_tree_maxindex(unsigned int bits)
+{
+ return index_bits_to_maxindex[bits - 1];
+}
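+
+/*
+ * Example (illustrative): prio_tree_maxindex(3) returns
+ * index_bits_to_maxindex[2] = (1UL << 3) - 1 = 7, i.e. a tree indexed
+ * with 3 bits can hold heap indices up to 7.
+ */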
+
+static void prio_tree_remove(struct prio_tree_root *, struct prio_tree_node *);
+
+/*
+ * Extend a priority search tree so that it can store a node with heap_index
+ * max_heap_index. In the worst case, this algorithm takes O((log n)^2).
+ * However, this function is rarely used and the common-case performance is
+ * not bad.
+ */
+static struct prio_tree_node *prio_tree_expand(struct prio_tree_root *root,
+ struct prio_tree_node *node, unsigned long max_heap_index)
+{
+ struct prio_tree_node *first = NULL, *prev, *last = NULL;
+
+ if (max_heap_index > prio_tree_maxindex(root->index_bits))
+ root->index_bits++;
+
+ while (max_heap_index > prio_tree_maxindex(root->index_bits)) {
+ root->index_bits++;
+
+ if (prio_tree_empty(root))
+ continue;
+
+ if (first == NULL) {
+ first = root->prio_tree_node;
+ prio_tree_remove(root, root->prio_tree_node);
+ INIT_PRIO_TREE_NODE(first);
+ last = first;
+ } else {
+ prev = last;
+ last = root->prio_tree_node;
+ prio_tree_remove(root, root->prio_tree_node);
+ INIT_PRIO_TREE_NODE(last);
+ prev->left = last;
+ last->parent = prev;
+ }
+ }
+
+ INIT_PRIO_TREE_NODE(node);
+
+ if (first) {
+ node->left = first;
+ first->parent = node;
+ } else
+ last = node;
+
+ if (!prio_tree_empty(root)) {
+ last->left = root->prio_tree_node;
+ last->left->parent = last;
+ }
+
+ root->prio_tree_node = node;
+ return node;
+}
+
+/*
+ * Replace a prio_tree_node with a new node and return the old node
+ */
+static struct prio_tree_node *prio_tree_replace(struct prio_tree_root *root,
+ struct prio_tree_node *old, struct prio_tree_node *node)
+{
+ INIT_PRIO_TREE_NODE(node);
+
+ if (prio_tree_root(old)) {
+ BUG_ON(root->prio_tree_node != old);
+ /*
+ * We can reduce root->index_bits here. However, it is complex
+ * and does not help much to improve performance (IMO).
+ */
+ node->parent = node;
+ root->prio_tree_node = node;
+ } else {
+ node->parent = old->parent;
+ if (old->parent->left == old)
+ old->parent->left = node;
+ else
+ old->parent->right = node;
+ }
+
+ if (!prio_tree_left_empty(old)) {
+ node->left = old->left;
+ old->left->parent = node;
+ }
+
+ if (!prio_tree_right_empty(old)) {
+ node->right = old->right;
+ old->right->parent = node;
+ }
+
+ return old;
+}
+
+/*
+ * Insert a prio_tree_node @node into a radix priority search tree @root. The
+ * algorithm typically takes O(log n) time where 'log n' is the number of bits
+ * required to represent the maximum heap_index. In the worst case, the
+ * algorithm can take O((log n)^2) - check prio_tree_expand.
+ *
+ * If a prior node with the same radix_index and heap_index is already in
+ * the tree, the address of that prior node is returned. Otherwise, @node
+ * is inserted into the tree and returned.
+ */
+static struct prio_tree_node *prio_tree_insert(struct prio_tree_root *root,
+ struct prio_tree_node *node)
+{
+ struct prio_tree_node *cur, *res = node;
+ unsigned long radix_index, heap_index;
+ unsigned long r_index, h_index, index, mask;
+ int size_flag = 0;
+
+ GET_INDEX(node, radix_index, heap_index);
+
+ if (prio_tree_empty(root) ||
+ heap_index > prio_tree_maxindex(root->index_bits))
+ return prio_tree_expand(root, node, heap_index);
+
+ cur = root->prio_tree_node;
+ mask = 1UL << (root->index_bits - 1);
+
+ while (mask) {
+ GET_INDEX(cur, r_index, h_index);
+
+ if (r_index == radix_index && h_index == heap_index)
+ return cur;
+
+ if (h_index < heap_index ||
+ (h_index == heap_index && r_index > radix_index)) {
+ struct prio_tree_node *tmp = node;
+ node = prio_tree_replace(root, cur, node);
+ cur = tmp;
+ /* swap indices */
+ index = r_index;
+ r_index = radix_index;
+ radix_index = index;
+ index = h_index;
+ h_index = heap_index;
+ heap_index = index;
+ }
+
+ if (size_flag)
+ index = heap_index - radix_index;
+ else
+ index = radix_index;
+
+ if (index & mask) {
+ if (prio_tree_right_empty(cur)) {
+ INIT_PRIO_TREE_NODE(node);
+ cur->right = node;
+ node->parent = cur;
+ return res;
+ } else
+ cur = cur->right;
+ } else {
+ if (prio_tree_left_empty(cur)) {
+ INIT_PRIO_TREE_NODE(node);
+ cur->left = node;
+ node->parent = cur;
+ return res;
+ } else
+ cur = cur->left;
+ }
+
+ mask >>= 1;
+
+ if (!mask) {
+ mask = 1UL << (BITS_PER_LONG - 1);
+ size_flag = 1;
+ }
+ }
+ /* Should not reach here */
+ BUG();
+ return NULL;
+}
+
+/*
+ * Remove a prio_tree_node @node from a radix priority search tree @root. The
+ * algorithm takes O(log n) time where 'log n' is the number of bits required
+ * to represent the maximum heap_index.
+ */
+static void prio_tree_remove(struct prio_tree_root *root,
+ struct prio_tree_node *node)
+{
+ struct prio_tree_node *cur;
+ unsigned long r_index, h_index_right, h_index_left;
+
+ cur = node;
+
+ while (!prio_tree_left_empty(cur) || !prio_tree_right_empty(cur)) {
+ if (!prio_tree_left_empty(cur))
+ GET_INDEX(cur->left, r_index, h_index_left);
+ else {
+ cur = cur->right;
+ continue;
+ }
+
+ if (!prio_tree_right_empty(cur))
+ GET_INDEX(cur->right, r_index, h_index_right);
+ else {
+ cur = cur->left;
+ continue;
+ }
+
+ /* both h_index_left and h_index_right cannot be 0 */
+ if (h_index_left >= h_index_right)
+ cur = cur->left;
+ else
+ cur = cur->right;
+ }
+
+ if (prio_tree_root(cur)) {
+ BUG_ON(root->prio_tree_node != cur);
+ INIT_PRIO_TREE_ROOT(root);
+ return;
+ }
+
+ if (cur->parent->right == cur)
+ cur->parent->right = cur->parent;
+ else
+ cur->parent->left = cur->parent;
+
+ while (cur != node)
+ cur = prio_tree_replace(root, cur->parent, cur);
+}
+
+/*
+ * The following functions help to enumerate all prio_tree_nodes in the tree that
+ * overlap with the input interval X [radix_index, heap_index]. The enumeration
+ * takes O(log n + m) time where 'log n' is the height of the tree (which is
+ * proportional to # of bits required to represent the maximum heap_index) and
+ * 'm' is the number of prio_tree_nodes that overlap the interval X.
+ */
+
+static struct prio_tree_node *prio_tree_left(struct prio_tree_iter *iter,
+ unsigned long *r_index, unsigned long *h_index)
+{
+ if (prio_tree_left_empty(iter->cur))
+ return NULL;
+
+ GET_INDEX(iter->cur->left, *r_index, *h_index);
+
+ if (iter->r_index <= *h_index) {
+ iter->cur = iter->cur->left;
+ iter->mask >>= 1;
+ if (iter->mask) {
+ if (iter->size_level)
+ iter->size_level++;
+ } else {
+ if (iter->size_level) {
+ BUG_ON(!prio_tree_left_empty(iter->cur));
+ BUG_ON(!prio_tree_right_empty(iter->cur));
+ iter->size_level++;
+ iter->mask = ULONG_MAX;
+ } else {
+ iter->size_level = 1;
+ iter->mask = 1UL << (BITS_PER_LONG - 1);
+ }
+ }
+ return iter->cur;
+ }
+
+ return NULL;
+}
+
+static struct prio_tree_node *prio_tree_right(struct prio_tree_iter *iter,
+ unsigned long *r_index, unsigned long *h_index)
+{
+ unsigned long value;
+
+ if (prio_tree_right_empty(iter->cur))
+ return NULL;
+
+ if (iter->size_level)
+ value = iter->value;
+ else
+ value = iter->value | iter->mask;
+
+ if (iter->h_index < value)
+ return NULL;
+
+ GET_INDEX(iter->cur->right, *r_index, *h_index);
+
+ if (iter->r_index <= *h_index) {
+ iter->cur = iter->cur->right;
+ iter->mask >>= 1;
+ iter->value = value;
+ if (iter->mask) {
+ if (iter->size_level)
+ iter->size_level++;
+ } else {
+ if (iter->size_level) {
+ BUG_ON(!prio_tree_left_empty(iter->cur));
+ BUG_ON(!prio_tree_right_empty(iter->cur));
+ iter->size_level++;
+ iter->mask = ULONG_MAX;
+ } else {
+ iter->size_level = 1;
+ iter->mask = 1UL << (BITS_PER_LONG - 1);
+ }
+ }
+ return iter->cur;
+ }
+
+ return NULL;
+}
+
+static struct prio_tree_node *prio_tree_parent(struct prio_tree_iter *iter)
+{
+ iter->cur = iter->cur->parent;
+ if (iter->mask == ULONG_MAX)
+ iter->mask = 1UL;
+ else if (iter->size_level == 1)
+ iter->mask = 1UL;
+ else
+ iter->mask <<= 1;
+ if (iter->size_level)
+ iter->size_level--;
+ if (!iter->size_level && (iter->value & iter->mask))
+ iter->value ^= iter->mask;
+ return iter->cur;
+}
+
+static inline int overlap(struct prio_tree_iter *iter,
+ unsigned long r_index, unsigned long h_index)
+{
+ return iter->h_index >= r_index && iter->r_index <= h_index;
+}
+
+/*
+ * prio_tree_first:
+ *
+ * Get the first prio_tree_node that overlaps with the interval [radix_index,
+ * heap_index]. Note that always radix_index <= heap_index. We do a pre-order
+ * traversal of the tree.
+ */
+static struct prio_tree_node *prio_tree_first(struct prio_tree_iter *iter)
+{
+ struct prio_tree_root *root;
+ unsigned long r_index, h_index;
+
+ INIT_PRIO_TREE_ITER(iter);
+
+ root = iter->root;
+ if (prio_tree_empty(root))
+ return NULL;
+
+ GET_INDEX(root->prio_tree_node, r_index, h_index);
+
+ if (iter->r_index > h_index)
+ return NULL;
+
+ iter->mask = 1UL << (root->index_bits - 1);
+ iter->cur = root->prio_tree_node;
+
+ while (1) {
+ if (overlap(iter, r_index, h_index))
+ return iter->cur;
+
+ if (prio_tree_left(iter, &r_index, &h_index))
+ continue;
+
+ if (prio_tree_right(iter, &r_index, &h_index))
+ continue;
+
+ break;
+ }
+ return NULL;
+}
+
+/*
+ * prio_tree_next:
+ *
+ * Get the next prio_tree_node that overlaps with the input interval in iter
+ */
+static struct prio_tree_node *prio_tree_next(struct prio_tree_iter *iter)
+{
+ unsigned long r_index, h_index;
+
+repeat:
+ while (prio_tree_left(iter, &r_index, &h_index))
+ if (overlap(iter, r_index, h_index))
+ return iter->cur;
+
+ while (!prio_tree_right(iter, &r_index, &h_index)) {
+ while (!prio_tree_root(iter->cur) &&
+ iter->cur->parent->right == iter->cur)
+ prio_tree_parent(iter);
+
+ if (prio_tree_root(iter->cur))
+ return NULL;
+
+ prio_tree_parent(iter);
+ }
+
+ if (overlap(iter, r_index, h_index))
+ return iter->cur;
+
+ goto repeat;
+}
+
+/*
+ * Radix priority search tree for address_space->i_mmap
+ *
+ * For each vma that maps a unique set of file pages, i.e., a unique [radix_index,
+ * heap_index] value, we have a corresponding priority search tree node. If
+ * multiple vmas have identical [radix_index, heap_index] value, then one of
+ * them is used as a tree node and others are stored in a vm_set list. The tree
+ * node points to the first vma (head) of the list using vm_set.head.
+ *
+ * prio_tree_root
+ *      |
+ *      A       vm_set.head
+ *     / \      /
+ *    L   R -> H-I-J-K-M-N-O-P-Q-S
+ *    ^   ^    <-- vm_set.list -->
+ *  tree nodes
+ *
+ * We need some way to identify whether a vma is a tree node, head of a vm_set
+ * list, or just a member of a vm_set list. We cannot use vm_flags to store
+ * such information. The reason is, in the above figure, it is possible that
+ * vm_flags of R and H are protected by different mmap_sems. When R is
+ * removed under R->mmap_sem, H replaces R as a tree node. Since we do not hold
+ * H->mmap_sem, we cannot use H->vm_flags for marking that H is a tree node now.
+ * That's why some trick involving shared.vm_set.parent is used for identifying
+ * tree nodes and list head nodes.
+ *
+ * vma radix priority search tree node rules:
+ *
+ * vma->shared.vm_set.parent != NULL    ==> a tree node
+ *      vma->shared.vm_set.head != NULL ==> list of others mapping same range
+ *      vma->shared.vm_set.head == NULL ==> no others map the same range
+ *
+ * vma->shared.vm_set.parent == NULL
+ *      vma->shared.vm_set.head != NULL ==> list head of vmas mapping same range
+ *      vma->shared.vm_set.head == NULL ==> a list node
+ */
+
+/*
+ * Add a new vma known to map the same set of pages as the old vma:
+ * useful for fork's dup_mmap as well as vma_prio_tree_insert below.
+ * Note that it just happens to work correctly on i_mmap_nonlinear too.
+ */
+void vma_prio_tree_add(struct vm_area_struct *vma, struct vm_area_struct *old)
+{
+ /* Leave these BUG_ONs till prio_tree patch stabilizes */
+ BUG_ON(RADIX_INDEX(vma) != RADIX_INDEX(old));
+ BUG_ON(HEAP_INDEX(vma) != HEAP_INDEX(old));
+
+ vma->shared.vm_set.head = NULL;
+ vma->shared.vm_set.parent = NULL;
+
+ if (!old->shared.vm_set.parent)
+ list_add(&vma->shared.vm_set.list,
+ &old->shared.vm_set.list);
+ else if (old->shared.vm_set.head)
+ list_add_tail(&vma->shared.vm_set.list,
+ &old->shared.vm_set.head->shared.vm_set.list);
+ else {
+ INIT_LIST_HEAD(&vma->shared.vm_set.list);
+ vma->shared.vm_set.head = old;
+ old->shared.vm_set.head = vma;
+ }
+}
+
+void vma_prio_tree_insert(struct vm_area_struct *vma,
+ struct prio_tree_root *root)
+{
+ struct prio_tree_node *ptr;
+ struct vm_area_struct *old;
+
+ vma->shared.vm_set.head = NULL;
+
+ ptr = prio_tree_insert(root, &vma->shared.prio_tree_node);
+ if (ptr != &vma->shared.prio_tree_node) {
+ old = prio_tree_entry(ptr, struct vm_area_struct,
+ shared.prio_tree_node);
+ vma_prio_tree_add(vma, old);
+ }
+}
+
+void vma_prio_tree_remove(struct vm_area_struct *vma,
+ struct prio_tree_root *root)
+{
+ struct vm_area_struct *node, *head, *new_head;
+
+ if (!vma->shared.vm_set.head) {
+ if (!vma->shared.vm_set.parent)
+ list_del_init(&vma->shared.vm_set.list);
+ else
+ prio_tree_remove(root, &vma->shared.prio_tree_node);
+ } else {
+ /* Leave this BUG_ON till prio_tree patch stabilizes */
+ BUG_ON(vma->shared.vm_set.head->shared.vm_set.head != vma);
+ if (vma->shared.vm_set.parent) {
+ head = vma->shared.vm_set.head;
+ if (!list_empty(&head->shared.vm_set.list)) {
+ new_head = list_entry(
+ head->shared.vm_set.list.next,
+ struct vm_area_struct,
+ shared.vm_set.list);
+ list_del_init(&head->shared.vm_set.list);
+ } else
+ new_head = NULL;
+
+ prio_tree_replace(root, &vma->shared.prio_tree_node,
+ &head->shared.prio_tree_node);
+ head->shared.vm_set.head = new_head;
+ if (new_head)
+ new_head->shared.vm_set.head = head;
+
+ } else {
+ node = vma->shared.vm_set.head;
+ if (!list_empty(&vma->shared.vm_set.list)) {
+ new_head = list_entry(
+ vma->shared.vm_set.list.next,
+ struct vm_area_struct,
+ shared.vm_set.list);
+ list_del_init(&vma->shared.vm_set.list);
+ node->shared.vm_set.head = new_head;
+ new_head->shared.vm_set.head = node;
+ } else
+ node->shared.vm_set.head = NULL;
+ }
+ }
+}
+
+/*
+ * Helper function to enumerate vmas that map a given file page or a set of
+ * contiguous file pages. The function returns vmas that map at least a single
+ * page in the given range of contiguous file pages.
+ */
+struct vm_area_struct *vma_prio_tree_next(struct vm_area_struct *vma,
+ struct prio_tree_iter *iter)
+{
+ struct prio_tree_node *ptr;
+ struct vm_area_struct *next;
+
+ if (!vma) {
+ /*
+ * First call is with NULL vma
+ */
+ ptr = prio_tree_first(iter);
+ if (ptr) {
+ next = prio_tree_entry(ptr, struct vm_area_struct,
+ shared.prio_tree_node);
+ prefetch(next->shared.vm_set.head);
+ return next;
+ } else
+ return NULL;
+ }
+
+ if (vma->shared.vm_set.parent) {
+ if (vma->shared.vm_set.head) {
+ next = vma->shared.vm_set.head;
+ prefetch(next->shared.vm_set.list.next);
+ return next;
+ }
+ } else {
+ next = list_entry(vma->shared.vm_set.list.next,
+ struct vm_area_struct, shared.vm_set.list);
+ if (!next->shared.vm_set.head) {
+ prefetch(next->shared.vm_set.list.next);
+ return next;
+ }
+ }
+
+ ptr = prio_tree_next(iter);
+ if (ptr) {
+ next = prio_tree_entry(ptr, struct vm_area_struct,
+ shared.prio_tree_node);
+ prefetch(next->shared.vm_set.head);
+ return next;
+ } else
+ return NULL;
+}
--- /dev/null
+/*
+ * mm/thrash.c
+ *
+ * Copyright (C) 2004, Red Hat, Inc.
+ * Copyright (C) 2004, Rik van Riel <riel@redhat.com>
+ * Released under the GPL, see the file COPYING for details.
+ *
+ * Simple token based thrashing protection, using the algorithm
+ * described in: http://www.cs.wm.edu/~sjiang/token.pdf
+ */
+#include <linux/jiffies.h>
+#include <linux/mm.h>
+#include <linux/sched.h>
+#include <linux/swap.h>
+
+static spinlock_t swap_token_lock = SPIN_LOCK_UNLOCKED;
+static unsigned long swap_token_timeout;
+unsigned long swap_token_check;
+struct mm_struct * swap_token_mm = &init_mm;
+
+#define SWAP_TOKEN_CHECK_INTERVAL (HZ * 2)
+#define SWAP_TOKEN_TIMEOUT 0
+/*
+ * Currently disabled; needs further code to work at HZ * 300.
+ */
+unsigned long swap_token_default_timeout = SWAP_TOKEN_TIMEOUT;
+
+/*
+ * Take the token away if the process had no page faults
+ * in the last interval, or if it has held the token for
+ * too long.
+ */
+#define SWAP_TOKEN_ENOUGH_RSS 1
+#define SWAP_TOKEN_TIMED_OUT 2
+static int should_release_swap_token(struct mm_struct *mm)
+{
+ int ret = 0;
+ if (!mm->recent_pagein)
+ ret = SWAP_TOKEN_ENOUGH_RSS;
+ else if (time_after(jiffies, swap_token_timeout))
+ ret = SWAP_TOKEN_TIMED_OUT;
+ mm->recent_pagein = 0;
+ return ret;
+}
+
+/*
+ * Try to grab the swapout protection token. We only try to
+ * grab it once every SWAP_TOKEN_CHECK_INTERVAL, both to prevent
+ * SMP lock contention and to check that the process that held
+ * the token before is no longer thrashing.
+ */
+void grab_swap_token(void)
+{
+ struct mm_struct *mm;
+ int reason;
+
+ /* We have the token. Let others know we still need it. */
+ if (has_swap_token(current->mm)) {
+ current->mm->recent_pagein = 1;
+ return;
+ }
+
+ if (time_after(jiffies, swap_token_check)) {
+
+ /* Can't get swapout protection if we exceed our RSS limit. */
+ // if (current->mm->rss > current->mm->rlimit_rss)
+ // return;
+
+ /* ... or if we recently held the token. */
+ if (time_before(jiffies, current->mm->swap_token_time))
+ return;
+
+ if (!spin_trylock(&swap_token_lock))
+ return;
+
+ swap_token_check = jiffies + SWAP_TOKEN_CHECK_INTERVAL;
+
+ mm = swap_token_mm;
+ if ((reason = should_release_swap_token(mm))) {
+ unsigned long eligible = jiffies;
+ if (reason == SWAP_TOKEN_TIMED_OUT) {
+ eligible += swap_token_default_timeout;
+ }
+ mm->swap_token_time = eligible;
+ swap_token_timeout = jiffies + swap_token_default_timeout;
+ swap_token_mm = current->mm;
+ }
+ spin_unlock(&swap_token_lock);
+ }
+ return;
+}
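+
+/*
+ * Timing example (illustrative, with swap_token_default_timeout set to
+ * a real value such as HZ * 300): a holder that faulted recently keeps
+ * the token until swap_token_timeout expires; a holder with
+ * recent_pagein == 0 loses it at the next check and may re-acquire it
+ * immediately, while a timed-out holder must wait
+ * swap_token_default_timeout jiffies before becoming eligible again.
+ */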
+
+/* Called on process exit. */
+void __put_swap_token(struct mm_struct *mm)
+{
+ spin_lock(&swap_token_lock);
+ if (likely(mm == swap_token_mm)) {
+ swap_token_mm = &init_mm;
+ swap_token_check = jiffies;
+ }
+ spin_unlock(&swap_token_lock);
+}
--- /dev/null
+/*
+ * xfrm4_output.c - Common IPsec encapsulation code for IPv4.
+ * Copyright (c) 2004 Herbert Xu <herbert@gondor.apana.org.au>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <net/inet_ecn.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/icmp.h>
+
+/* Add encapsulation header.
+ *
+ * In transport mode, the IP header will be moved forward to make space
+ * for the encapsulation header.
+ *
+ * In tunnel mode, the top IP header will be constructed per RFC 2401.
+ * The following fields in it shall be filled in by x->type->output:
+ * tot_len
+ * check
+ *
+ * On exit, skb->h will be set to the start of the payload to be processed
+ * by x->type->output and skb->nh will be set to the top IP header.
+ */
+static void xfrm4_encap(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct iphdr *iph, *top_iph;
+
+ iph = skb->nh.iph;
+ skb->h.ipiph = iph;
+
+ skb->nh.raw = skb_push(skb, x->props.header_len);
+ top_iph = skb->nh.iph;
+
+ if (!x->props.mode) {
+ skb->h.raw += iph->ihl*4;
+ memmove(top_iph, iph, iph->ihl*4);
+ return;
+ }
+
+ top_iph->ihl = 5;
+ top_iph->version = 4;
+
+ /* DS disclosed */
+ top_iph->tos = INET_ECN_encapsulate(iph->tos, iph->tos);
+ if (x->props.flags & XFRM_STATE_NOECN)
+ IP_ECN_clear(top_iph);
+
+ top_iph->frag_off = iph->frag_off & htons(IP_DF);
+ if (!top_iph->frag_off)
+ __ip_select_ident(top_iph, dst, 0);
+
+ top_iph->ttl = dst_path_metric(dst, RTAX_HOPLIMIT);
+
+ top_iph->saddr = x->props.saddr.a4;
+ top_iph->daddr = x->id.daddr.a4;
+ top_iph->protocol = IPPROTO_IPIP;
+
+ memset(&(IPCB(skb)->opt), 0, sizeof(struct ip_options));
+}
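+
+/*
+ * Resulting layouts (illustrative, per RFC 2401), where "xfrm header"
+ * is the x->props.header_len gap filled in later by x->type->output()
+ * (e.g. an ESP or AH header):
+ *
+ *     transport mode: [ orig IP | xfrm header | payload ]
+ *     tunnel mode:    [ new IP | xfrm header | orig IP | payload ]
+ */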
+
+static int xfrm4_tunnel_check_size(struct sk_buff *skb)
+{
+ int mtu, ret = 0;
+ struct dst_entry *dst;
+ struct iphdr *iph = skb->nh.iph;
+
+ if (IPCB(skb)->flags & IPSKB_XFRM_TUNNEL_SIZE)
+ goto out;
+
+ IPCB(skb)->flags |= IPSKB_XFRM_TUNNEL_SIZE;
+
+ if (!(iph->frag_off & htons(IP_DF)))
+ goto out;
+
+ dst = skb->dst;
+ mtu = dst_pmtu(dst) - dst->header_len - dst->trailer_len;
+ if (skb->len > mtu) {
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED, htonl(mtu));
+ ret = -EMSGSIZE;
+ }
+out:
+ return ret;
+}
+
+int xfrm4_output(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ int err;
+
+ if (skb->ip_summed == CHECKSUM_HW) {
+ err = skb_checksum_help(skb, 0);
+ if (err)
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
+ if (x->props.mode) {
+ err = xfrm4_tunnel_check_size(skb);
+ if (err)
+ goto error;
+ }
+
+ xfrm4_encap(skb);
+
+ err = x->type->output(skb);
+ if (err)
+ goto error;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock_bh(&x->lock);
+
+ if (!(skb->dst = dst_pop(dst))) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ err = NET_XMIT_BYPASS;
+
+out_exit:
+ return err;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ goto out_exit;
+}
--- /dev/null
+/*
+ * xfrm6_output.c - Common IPsec encapsulation code for IPv6.
+ * Copyright (C) 2002 USAGI/WIDE Project
+ * Copyright (c) 2004 Herbert Xu <herbert@gondor.apana.org.au>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/skbuff.h>
+#include <linux/spinlock.h>
+#include <linux/icmpv6.h>
+#include <net/dsfield.h>
+#include <net/inet_ecn.h>
+#include <net/ipv6.h>
+#include <net/xfrm.h>
+
+/* Add encapsulation header.
+ *
+ * In transport mode, the IP header and mutable extension headers will be moved
+ * forward to make space for the encapsulation header.
+ *
+ * In tunnel mode, the top IP header will be constructed per RFC 2401.
+ * The following fields in it shall be filled in by x->type->output:
+ * payload_len
+ *
+ * On exit, skb->h will be set to the start of the encapsulation header to be
+ * filled in by x->type->output and skb->nh will be set to the nextheader field
+ * of the extension header directly preceding the encapsulation header, or in
+ * its absence, that of the top IP header. The value of skb->data will always
+ * point to the top IP header.
+ */
+static void xfrm6_encap(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ struct ipv6hdr *iph, *top_iph;
+ int dsfield;
+
+ skb_push(skb, x->props.header_len);
+ iph = skb->nh.ipv6h;
+
+ if (!x->props.mode) {
+ u8 *prevhdr;
+ int hdr_len;
+
+ hdr_len = ip6_find_1stfragopt(skb, &prevhdr);
+ skb->nh.raw = prevhdr - x->props.header_len;
+ skb->h.raw = skb->data + hdr_len;
+ memmove(skb->data, iph, hdr_len);
+ return;
+ }
+
+ skb->nh.raw = skb->data;
+ top_iph = skb->nh.ipv6h;
+ skb->nh.raw = &top_iph->nexthdr;
+ skb->h.ipv6h = top_iph + 1;
+
+ top_iph->version = 6;
+ top_iph->priority = iph->priority;
+ top_iph->flow_lbl[0] = iph->flow_lbl[0];
+ top_iph->flow_lbl[1] = iph->flow_lbl[1];
+ top_iph->flow_lbl[2] = iph->flow_lbl[2];
+ dsfield = ipv6_get_dsfield(top_iph);
+ dsfield = INET_ECN_encapsulate(dsfield, dsfield);
+ if (x->props.flags & XFRM_STATE_NOECN)
+ dsfield &= ~INET_ECN_MASK;
+ ipv6_change_dsfield(top_iph, 0, dsfield);
+ top_iph->nexthdr = IPPROTO_IPV6;
+ top_iph->hop_limit = dst_path_metric(dst, RTAX_HOPLIMIT);
+ ipv6_addr_copy(&top_iph->saddr, (struct in6_addr *)&x->props.saddr);
+ ipv6_addr_copy(&top_iph->daddr, (struct in6_addr *)&x->id.daddr);
+}
+
+static int xfrm6_tunnel_check_size(struct sk_buff *skb)
+{
+ int mtu, ret = 0;
+ struct dst_entry *dst = skb->dst;
+
+ mtu = dst_pmtu(dst) - dst->header_len - dst->trailer_len;
+ if (mtu < IPV6_MIN_MTU)
+ mtu = IPV6_MIN_MTU;
+
+ if (skb->len > mtu) {
+ icmpv6_send(skb, ICMPV6_PKT_TOOBIG, 0, mtu, skb->dev);
+ ret = -EMSGSIZE;
+ }
+
+ return ret;
+}
+
+int xfrm6_output(struct sk_buff *skb)
+{
+ struct dst_entry *dst = skb->dst;
+ struct xfrm_state *x = dst->xfrm;
+ int err;
+
+ if (skb->ip_summed == CHECKSUM_HW) {
+ err = skb_checksum_help(skb, 0);
+ if (err)
+ goto error_nolock;
+ }
+
+ spin_lock_bh(&x->lock);
+ err = xfrm_state_check(x, skb);
+ if (err)
+ goto error;
+
+ if (x->props.mode) {
+ err = xfrm6_tunnel_check_size(skb);
+ if (err)
+ goto error;
+ }
+
+ xfrm6_encap(skb);
+
+ err = x->type->output(skb);
+ if (err)
+ goto error;
+
+ x->curlft.bytes += skb->len;
+ x->curlft.packets++;
+
+ spin_unlock_bh(&x->lock);
+
+ skb->nh.raw = skb->data;
+
+ if (!(skb->dst = dst_pop(dst))) {
+ err = -EHOSTUNREACH;
+ goto error_nolock;
+ }
+ err = NET_XMIT_BYPASS;
+
+out_exit:
+ return err;
+error:
+ spin_unlock_bh(&x->lock);
+error_nolock:
+ kfree_skb(skb);
+ goto out_exit;
+}
--- /dev/null
+/*
+ * Copyright (C)2003,2004 USAGI/WIDE Project
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ * Authors Mitsuru KANDA <mk@linux-ipv6.org>
+ * YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
+ *
+ * Based on net/ipv4/xfrm4_tunnel.c
+ *
+ */
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/xfrm.h>
+#include <linux/list.h>
+#include <net/ip.h>
+#include <net/xfrm.h>
+#include <net/ipv6.h>
+#include <net/protocol.h>
+#include <linux/ipv6.h>
+#include <linux/icmpv6.h>
+
+#ifdef CONFIG_IPV6_XFRM6_TUNNEL_DEBUG
+# define X6TDEBUG 3
+#else
+# define X6TDEBUG 1
+#endif
+
+#define X6TPRINTK(fmt, args...) printk(fmt, ## args)
+#define X6TNOPRINTK(fmt, args...) do { ; } while(0)
+
+#if X6TDEBUG >= 1
+# define X6TPRINTK1 X6TPRINTK
+#else
+# define X6TPRINTK1 X6TNOPRINTK
+#endif
+
+#if X6TDEBUG >= 3
+# define X6TPRINTK3 X6TPRINTK
+#else
+# define X6TPRINTK3 X6TNOPRINTK
+#endif
+
+/*
+ * xfrm6_tunnel_spi things are for allocating a unique id ("spi")
+ * per xfrm_address_t.
+ */
+struct xfrm6_tunnel_spi {
+ struct hlist_node list_byaddr;
+ struct hlist_node list_byspi;
+ xfrm_address_t addr;
+ u32 spi;
+ atomic_t refcnt;
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+ u32 magic;
+#endif
+};
+
+#ifdef CONFIG_IPV6_XFRM6_TUNNEL_DEBUG
+# define XFRM6_TUNNEL_SPI_MAGIC 0xdeadbeef
+#endif
+
+static rwlock_t xfrm6_tunnel_spi_lock = RW_LOCK_UNLOCKED;
+
+static u32 xfrm6_tunnel_spi;
+
+#define XFRM6_TUNNEL_SPI_MIN 1
+#define XFRM6_TUNNEL_SPI_MAX 0xffffffff
+
+static kmem_cache_t *xfrm6_tunnel_spi_kmem;
+
+#define XFRM6_TUNNEL_SPI_BYADDR_HSIZE 256
+#define XFRM6_TUNNEL_SPI_BYSPI_HSIZE 256
+
+static struct hlist_head xfrm6_tunnel_spi_byaddr[XFRM6_TUNNEL_SPI_BYADDR_HSIZE];
+static struct hlist_head xfrm6_tunnel_spi_byspi[XFRM6_TUNNEL_SPI_BYSPI_HSIZE];
+
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+static int x6spi_check_magic(const struct xfrm6_tunnel_spi *x6spi,
+ const char *name)
+{
+ if (unlikely(x6spi->magic != XFRM6_TUNNEL_SPI_MAGIC)) {
+ X6TPRINTK3(KERN_DEBUG "%s(): x6spi object "
+ "at %p has corrupted magic %08x "
+ "(should be %08x)\n",
+ name, x6spi, x6spi->magic, XFRM6_TUNNEL_SPI_MAGIC);
+ return -1;
+ }
+ return 0;
+}
+#else
+static inline int x6spi_check_magic(const struct xfrm6_tunnel_spi *x6spi,
+ const char *name)
+{
+ return 0;
+}
+#endif
+
+#define X6SPI_CHECK_MAGIC(x6spi) x6spi_check_magic((x6spi), __FUNCTION__)
+
+
+static inline unsigned xfrm6_tunnel_spi_hash_byaddr(xfrm_address_t *addr)
+{
+ unsigned h;
+
+ X6TPRINTK3(KERN_DEBUG "%s(addr=%p)\n", __FUNCTION__, addr);
+
+ h = addr->a6[0] ^ addr->a6[1] ^ addr->a6[2] ^ addr->a6[3];
+ h ^= h >> 16;
+ h ^= h >> 8;
+ h &= XFRM6_TUNNEL_SPI_BYADDR_HSIZE - 1;
+
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, h);
+
+ return h;
+}
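+
+/*
+ * Hash example (illustrative): the four 32-bit words of the address
+ * are XORed together and folded twice before masking to the 256-entry
+ * table; e.g. the loopback address ::1 ends up in bucket 1 regardless
+ * of host endianness.
+ */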
+
+static inline unsigned xfrm6_tunnel_spi_hash_byspi(u32 spi)
+{
+ return spi % XFRM6_TUNNEL_SPI_BYSPI_HSIZE;
+}
+
+
+static int xfrm6_tunnel_spi_init(void)
+{
+ int i;
+
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ xfrm6_tunnel_spi = 0;
+ xfrm6_tunnel_spi_kmem = kmem_cache_create("xfrm6_tunnel_spi",
+ sizeof(struct xfrm6_tunnel_spi),
+ 0, SLAB_HWCACHE_ALIGN,
+ NULL, NULL);
+ if (!xfrm6_tunnel_spi_kmem) {
+ X6TPRINTK1(KERN_ERR
+ "%s(): failed to allocate xfrm6_tunnel_spi_kmem\n",
+ __FUNCTION__);
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++)
+ INIT_HLIST_HEAD(&xfrm6_tunnel_spi_byaddr[i]);
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYSPI_HSIZE; i++)
+ INIT_HLIST_HEAD(&xfrm6_tunnel_spi_byspi[i]);
+ return 0;
+}
+
+static void xfrm6_tunnel_spi_fini(void)
+{
+ int i;
+
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYADDR_HSIZE; i++) {
+ if (!hlist_empty(&xfrm6_tunnel_spi_byaddr[i]))
+ goto err;
+ }
+ for (i = 0; i < XFRM6_TUNNEL_SPI_BYSPI_HSIZE; i++) {
+ if (!hlist_empty(&xfrm6_tunnel_spi_byspi[i]))
+ goto err;
+ }
+ kmem_cache_destroy(xfrm6_tunnel_spi_kmem);
+ xfrm6_tunnel_spi_kmem = NULL;
+ return;
+err:
+ X6TPRINTK1(KERN_ERR "%s(): table is not empty\n", __FUNCTION__);
+ return;
+}
+
+static struct xfrm6_tunnel_spi *__xfrm6_tunnel_spi_lookup(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byaddr[xfrm6_tunnel_spi_hash_byaddr(saddr)],
+ list_byaddr) {
+ if (memcmp(&x6spi->addr, saddr, sizeof(x6spi->addr)) == 0) {
+ X6SPI_CHECK_MAGIC(x6spi);
+ X6TPRINTK3(KERN_DEBUG "%s() = %p(%u)\n", __FUNCTION__, x6spi, x6spi->spi);
+ return x6spi;
+ }
+ }
+
+ X6TPRINTK3(KERN_DEBUG "%s() = NULL(0)\n", __FUNCTION__);
+ return NULL;
+}
+
+u32 xfrm6_tunnel_spi_lookup(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ u32 spi;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ read_lock_bh(&xfrm6_tunnel_spi_lock);
+ x6spi = __xfrm6_tunnel_spi_lookup(saddr);
+ spi = x6spi ? x6spi->spi : 0;
+ read_unlock_bh(&xfrm6_tunnel_spi_lock);
+ return spi;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_spi_lookup);
+
+static u32 __xfrm6_tunnel_alloc_spi(xfrm_address_t *saddr)
+{
+ u32 spi;
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos;
+ unsigned index;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ if (xfrm6_tunnel_spi < XFRM6_TUNNEL_SPI_MIN ||
+ xfrm6_tunnel_spi >= XFRM6_TUNNEL_SPI_MAX)
+ xfrm6_tunnel_spi = XFRM6_TUNNEL_SPI_MIN;
+ else
+ xfrm6_tunnel_spi++;
+
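+ /* Note: spi is a u32, so the "spi <= XFRM6_TUNNEL_SPI_MAX" test below
+ * is always true; the first loop terminates only by finding a free
+ * SPI and would wrap around past 0xffffffff if the whole space were
+ * ever exhausted. */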
+ for (spi = xfrm6_tunnel_spi; spi <= XFRM6_TUNNEL_SPI_MAX; spi++) {
+ index = xfrm6_tunnel_spi_hash_byspi(spi);
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byspi[index],
+ list_byspi) {
+ if (x6spi->spi == spi)
+ goto try_next_1;
+ }
+ xfrm6_tunnel_spi = spi;
+ goto alloc_spi;
+try_next_1:;
+ }
+ for (spi = XFRM6_TUNNEL_SPI_MIN; spi < xfrm6_tunnel_spi; spi++) {
+ index = xfrm6_tunnel_spi_hash_byspi(spi);
+ hlist_for_each_entry(x6spi, pos,
+ &xfrm6_tunnel_spi_byspi[index],
+ list_byspi) {
+ if (x6spi->spi == spi)
+ goto try_next_2;
+ }
+ xfrm6_tunnel_spi = spi;
+ goto alloc_spi;
+try_next_2:;
+ }
+ spi = 0;
+ goto out;
+alloc_spi:
+ X6TPRINTK3(KERN_DEBUG "%s(): allocate new spi for "
+ "%04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x\n",
+ __FUNCTION__,
+ NIP6(*(struct in6_addr *)saddr));
+ x6spi = kmem_cache_alloc(xfrm6_tunnel_spi_kmem, SLAB_ATOMIC);
+ if (!x6spi) {
+ X6TPRINTK1(KERN_ERR "%s(): kmem_cache_alloc() failed\n",
+ __FUNCTION__);
+ goto out;
+ }
+#ifdef XFRM6_TUNNEL_SPI_MAGIC
+ x6spi->magic = XFRM6_TUNNEL_SPI_MAGIC;
+#endif
+ memcpy(&x6spi->addr, saddr, sizeof(x6spi->addr));
+ x6spi->spi = spi;
+ atomic_set(&x6spi->refcnt, 1);
+
+ hlist_add_head(&x6spi->list_byspi, &xfrm6_tunnel_spi_byspi[index]);
+
+ index = xfrm6_tunnel_spi_hash_byaddr(saddr);
+ hlist_add_head(&x6spi->list_byaddr, &xfrm6_tunnel_spi_byaddr[index]);
+ X6SPI_CHECK_MAGIC(x6spi);
+out:
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, spi);
+ return spi;
+}
+
+u32 xfrm6_tunnel_alloc_spi(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ u32 spi;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ write_lock_bh(&xfrm6_tunnel_spi_lock);
+ x6spi = __xfrm6_tunnel_spi_lookup(saddr);
+ if (x6spi) {
+ atomic_inc(&x6spi->refcnt);
+ spi = x6spi->spi;
+ } else
+ spi = __xfrm6_tunnel_alloc_spi(saddr);
+ write_unlock_bh(&xfrm6_tunnel_spi_lock);
+
+ X6TPRINTK3(KERN_DEBUG "%s() = %u\n", __FUNCTION__, spi);
+
+ return spi;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_alloc_spi);
+
+void xfrm6_tunnel_free_spi(xfrm_address_t *saddr)
+{
+ struct xfrm6_tunnel_spi *x6spi;
+ struct hlist_node *pos, *n;
+
+ X6TPRINTK3(KERN_DEBUG "%s(saddr=%p)\n", __FUNCTION__, saddr);
+
+ write_lock_bh(&xfrm6_tunnel_spi_lock);
+
+ hlist_for_each_entry_safe(x6spi, pos, n,
+ &xfrm6_tunnel_spi_byaddr[xfrm6_tunnel_spi_hash_byaddr(saddr)],
+ list_byaddr)
+ {
+ if (memcmp(&x6spi->addr, saddr, sizeof(x6spi->addr)) == 0) {
+ X6TPRINTK3(KERN_DEBUG "%s(): x6spi object "
+ "for %04x:%04x:%04x:%04x:%04x:%04x:%04x:%04x "
+ "found at %p\n",
+ __FUNCTION__,
+ NIP6(*(struct in6_addr *)saddr),
+ x6spi);
+ X6SPI_CHECK_MAGIC(x6spi);
+ if (atomic_dec_and_test(&x6spi->refcnt)) {
+ hlist_del(&x6spi->list_byaddr);
+ hlist_del(&x6spi->list_byspi);
+ kmem_cache_free(xfrm6_tunnel_spi_kmem, x6spi);
+ break;
+ }
+ }
+ }
+ write_unlock_bh(&xfrm6_tunnel_spi_lock);
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_free_spi);
+
+static int xfrm6_tunnel_output(struct sk_buff *skb)
+{
+ struct ipv6hdr *top_iph;
+
+ top_iph = (struct ipv6hdr *)skb->data;
+ top_iph->payload_len = htons(skb->len - sizeof(struct ipv6hdr));
+
+ return 0;
+}
+
+static int xfrm6_tunnel_input(struct xfrm_state *x, struct xfrm_decap_state *decap, struct sk_buff *skb)
+{
+ return 0;
+}
+
+static struct xfrm6_tunnel *xfrm6_tunnel_handler;
+static DECLARE_MUTEX(xfrm6_tunnel_sem);
+
+int xfrm6_tunnel_register(struct xfrm6_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm6_tunnel_sem);
+ ret = 0;
+ if (xfrm6_tunnel_handler != NULL)
+ ret = -EINVAL;
+ if (!ret)
+ xfrm6_tunnel_handler = handler;
+ up(&xfrm6_tunnel_sem);
+
+ return ret;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_register);
+
+int xfrm6_tunnel_deregister(struct xfrm6_tunnel *handler)
+{
+ int ret;
+
+ down(&xfrm6_tunnel_sem);
+ ret = 0;
+ if (xfrm6_tunnel_handler != handler)
+ ret = -EINVAL;
+ if (!ret)
+ xfrm6_tunnel_handler = NULL;
+ up(&xfrm6_tunnel_sem);
+
+ synchronize_net();
+
+ return ret;
+}
+
+EXPORT_SYMBOL(xfrm6_tunnel_deregister);
+
+static int xfrm6_tunnel_rcv(struct sk_buff **pskb, unsigned int *nhoffp)
+{
+ struct sk_buff *skb = *pskb;
+ struct xfrm6_tunnel *handler = xfrm6_tunnel_handler;
+ struct ipv6hdr *iph = skb->nh.ipv6h;
+ u32 spi;
+
+ /* give the device-like ip6ip6 handler first crack at the packet */
+ if (handler && handler->handler(pskb, nhoffp) == 0)
+ return 0;
+
+ spi = xfrm6_tunnel_spi_lookup((xfrm_address_t *)&iph->saddr);
+ return xfrm6_rcv_spi(pskb, nhoffp, spi);
+}
+
+static void xfrm6_tunnel_err(struct sk_buff *skb, struct inet6_skb_parm *opt,
+ int type, int code, int offset, __u32 info)
+{
+ struct xfrm6_tunnel *handler = xfrm6_tunnel_handler;
+
+ /* call here first for device-like ip6ip6 err handling */
+ if (handler) {
+ handler->err_handler(skb, opt, type, code, offset, info);
+ return;
+ }
+
+ /* xfrm6_tunnel native err handling */
+ switch (type) {
+ case ICMPV6_DEST_UNREACH:
+ switch (code) {
+ case ICMPV6_NOROUTE:
+ case ICMPV6_ADM_PROHIBITED:
+ case ICMPV6_NOT_NEIGHBOUR:
+ case ICMPV6_ADDR_UNREACH:
+ case ICMPV6_PORT_UNREACH:
+ default:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Destination Unreach.\n");
+ break;
+ }
+ break;
+ case ICMPV6_PKT_TOOBIG:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Packet Too Big.\n");
+ break;
+ case ICMPV6_TIME_EXCEED:
+ switch (code) {
+ case ICMPV6_EXC_HOPLIMIT:
+ X6TPRINTK3(KERN_DEBUG
+ "xfrm6_tunnel: Too small Hoplimit.\n");
+ break;
+ case ICMPV6_EXC_FRAGTIME:
+ default:
+ break;
+ }
+ break;
+ case ICMPV6_PARAMPROB:
+ switch (code) {
+ case ICMPV6_HDR_FIELD: break;
+ case ICMPV6_UNK_NEXTHDR: break;
+ case ICMPV6_UNK_OPTION: break;
+ }
+ break;
+ default:
+ break;
+ }
+ return;
+}
+
+static int xfrm6_tunnel_init_state(struct xfrm_state *x, void *args)
+{
+ if (!x->props.mode)
+ return -EINVAL;
+
+ if (x->encap)
+ return -EINVAL;
+
+ x->props.header_len = sizeof(struct ipv6hdr);
+
+ return 0;
+}
+
+static void xfrm6_tunnel_destroy(struct xfrm_state *x)
+{
+ xfrm6_tunnel_free_spi((xfrm_address_t *)&x->props.saddr);
+}
+
+static struct xfrm_type xfrm6_tunnel_type = {
+ .description = "IP6IP6",
+ .owner = THIS_MODULE,
+ .proto = IPPROTO_IPV6,
+ .init_state = xfrm6_tunnel_init_state,
+ .destructor = xfrm6_tunnel_destroy,
+ .input = xfrm6_tunnel_input,
+ .output = xfrm6_tunnel_output,
+};
+
+static struct inet6_protocol xfrm6_tunnel_protocol = {
+ .handler = xfrm6_tunnel_rcv,
+ .err_handler = xfrm6_tunnel_err,
+ .flags = INET6_PROTO_NOPOLICY|INET6_PROTO_FINAL,
+};
+
+static int __init xfrm6_tunnel_init(void)
+{
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ if (xfrm_register_type(&xfrm6_tunnel_type, AF_INET6) < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init: can't add xfrm type\n");
+ return -EAGAIN;
+ }
+ if (inet6_add_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6) < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init(): can't add protocol\n");
+ xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
+ return -EAGAIN;
+ }
+ if (xfrm6_tunnel_spi_init() < 0) {
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel init: failed to initialize spi\n");
+ inet6_del_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6);
+ xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6);
+ return -EAGAIN;
+ }
+ return 0;
+}
+
+static void __exit xfrm6_tunnel_fini(void)
+{
+ X6TPRINTK3(KERN_DEBUG "%s()\n", __FUNCTION__);
+
+ xfrm6_tunnel_spi_fini();
+ if (inet6_del_protocol(&xfrm6_tunnel_protocol, IPPROTO_IPV6) < 0)
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel close: can't remove protocol\n");
+ if (xfrm_unregister_type(&xfrm6_tunnel_type, AF_INET6) < 0)
+ X6TPRINTK1(KERN_ERR
+ "xfrm6_tunnel close: can't remove xfrm type\n");
+}
+
+module_init(xfrm6_tunnel_init);
+module_exit(xfrm6_tunnel_fini);
+MODULE_LICENSE("GPL");
--- /dev/null
+/*
+ * net/sched/act_api.c Packet action API.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Author: Jamal Hadi Salim
+ *
+ *
+ */
+
+#include <asm/uaccess.h>
+#include <asm/system.h>
+#include <linux/bitops.h>
+#include <linux/config.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/socket.h>
+#include <linux/sockios.h>
+#include <linux/in.h>
+#include <linux/errno.h>
+#include <linux/interrupt.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+#include <linux/init.h>
+#include <linux/kmod.h>
+#include <net/sock.h>
+#include <net/sch_generic.h>
+#include <net/act_api.h>
+
+#if 1 /* control */
+#define DPRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define DPRINTK(format,args...)
+#endif
+#if 0 /* data */
+#define D2PRINTK(format,args...) printk(KERN_DEBUG format,##args)
+#else
+#define D2PRINTK(format,args...)
+#endif
+
+static struct tc_action_ops *act_base = NULL;
+static rwlock_t act_mod_lock = RW_LOCK_UNLOCKED;
+
+int tcf_register_action(struct tc_action_ops *act)
+{
+ struct tc_action_ops *a, **ap;
+
+ write_lock(&act_mod_lock);
+ for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next) {
+ if (act->type == a->type || (strcmp(act->kind, a->kind) == 0)) {
+ write_unlock(&act_mod_lock);
+ return -EEXIST;
+ }
+ }
+
+ act->next = NULL;
+ *ap = act;
+
+ write_unlock(&act_mod_lock);
+
+ return 0;
+}
+
+int tcf_unregister_action(struct tc_action_ops *act)
+{
+ struct tc_action_ops *a, **ap;
+ int err = -ENOENT;
+
+ write_lock(&act_mod_lock);
+ for (ap = &act_base; (a=*ap)!=NULL; ap = &a->next)
+ if(a == act)
+ break;
+
+ if (a) {
+ *ap = a->next;
+ a->next = NULL;
+ err = 0;
+ }
+ write_unlock(&act_mod_lock);
+ return err;
+}
+
+/* lookup by name */
+static struct tc_action_ops *tc_lookup_action_n(char *kind)
+{
+
+ struct tc_action_ops *a = NULL;
+
+ if (kind) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+ if (strcmp(kind,a->kind) == 0) {
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+
+/* lookup by rtattr */
+static struct tc_action_ops *tc_lookup_action(struct rtattr *kind)
+{
+
+ struct tc_action_ops *a = NULL;
+
+ if (kind) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+
+ if (strcmp((char*)RTA_DATA(kind),a->kind) == 0){
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+
+#if 0
+/* lookup by id */
+static struct tc_action_ops *tc_lookup_action_id(u32 type)
+{
+ struct tc_action_ops *a = NULL;
+
+ if (type) {
+ read_lock(&act_mod_lock);
+ for (a = act_base; a; a = a->next) {
+ if (a->type == type) {
+ if (!try_module_get(a->owner)) {
+ read_unlock(&act_mod_lock);
+ return NULL;
+ }
+ break;
+ }
+ }
+ read_unlock(&act_mod_lock);
+ }
+
+ return a;
+}
+#endif
+
+int tcf_action_exec(struct sk_buff *skb,struct tc_action *act, struct tcf_result *res)
+{
+
+ struct tc_action *a;
+ int ret = -1;
+
+ if (skb->tc_verd & TC_NCLS) {
+ skb->tc_verd = CLR_TC_NCLS(skb->tc_verd);
+ D2PRINTK("(%p)tcf_action_exec: cleared TC_NCLS in %s out %s\n",skb,skb->input_dev?skb->input_dev->name:"xxx",skb->dev->name);
+ ret = TC_ACT_OK;
+ goto exec_done;
+ }
+ while ((a = act) != NULL) {
+repeat:
+ if (a->ops && a->ops->act) {
+ ret = a->ops->act(&skb,a);
+ if (TC_MUNGED & skb->tc_verd) {
+ /* copied already, allow trampling */
+ skb->tc_verd = SET_TC_OK2MUNGE(skb->tc_verd);
+ skb->tc_verd = CLR_TC_MUNGED(skb->tc_verd);
+ }
+
+ if (ret == TC_ACT_REPEAT)
+ goto repeat; /* we need a ttl - JHS */
+ if (ret != TC_ACT_PIPE)
+ goto exec_done;
+
+ }
+ act = a->next;
+ }
+
+exec_done:
+ if (skb->tc_classid > 0) {
+ res->classid = skb->tc_classid;
+ res->class = 0;
+ skb->tc_classid = 0;
+ }
+
+ return ret;
+}
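+
+/*
+ * Illustration (hypothetical chain, not enforced here): for a chain
+ * built from "police" followed by "mirred", a TC_ACT_PIPE verdict from
+ * police falls through to mirred, while any other verdict (e.g.
+ * TC_ACT_SHOT) ends the walk above and is returned as the result.
+ */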
+
+void tcf_action_destroy(struct tc_action *act, int bind)
+{
+ struct tc_action *a;
+
+ for (a = act; act; a = act) {
+ if (a && a->ops && a->ops->cleanup) {
+ DPRINTK("tcf_action_destroy destroying %p next %p\n", a,a->next?a->next:NULL);
+ act = act->next;
+ if (ACT_P_DELETED == a->ops->cleanup(a, bind)) {
+ module_put(a->ops->owner);
+ }
+
+ a->ops = NULL;
+ kfree(a);
+ } else { /*FIXME: Remove later - catch insertion bugs*/
+ printk("tcf_action_destroy: BUG? destroying NULL ops \n");
+ if (a) {
+ act = act->next;
+ kfree(a);
+ } else {
+ printk("tcf_action_destroy: BUG? destroying NULL action! \n");
+ break;
+ }
+ }
+ }
+}
+
+int tcf_action_dump_old(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+{
+ int err = -EINVAL;
+
+
+ if ((NULL == a) || (NULL == a->ops) || (NULL == a->ops->dump))
+ return err;
+ return a->ops->dump(skb, a, bind, ref);
+
+}
+
+
+int tcf_action_dump_1(struct sk_buff *skb, struct tc_action *a, int bind, int ref)
+{
+ int err = -EINVAL;
+ unsigned char *b = skb->tail;
+ struct rtattr *r;
+
+
+ if ((NULL == a) || (NULL == a->ops) ||
+     (NULL == a->ops->dump) || (NULL == a->ops->kind))
+ return err;
+
+
+ RTA_PUT(skb, TCA_KIND, IFNAMSIZ, a->ops->kind);
+ if (tcf_action_copy_stats(skb,a))
+ goto rtattr_failure;
+ r = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_OPTIONS, 0, NULL);
+ if ((err = tcf_action_dump_old(skb, a, bind, ref)) > 0) {
+ r->rta_len = skb->tail - (u8*)r;
+ return err;
+ }
+
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+
+}
+
+int tcf_action_dump(struct sk_buff *skb, struct tc_action *act, int bind, int ref)
+{
+ struct tc_action *a;
+ int err = -EINVAL;
+ unsigned char *b = skb->tail;
+ struct rtattr *r ;
+
+ while ((a = act) != NULL) {
+ r = (struct rtattr*) skb->tail;
+ act = a->next;
+ RTA_PUT(skb, a->order, 0, NULL);
+ err = tcf_action_dump_1(skb, a, bind, ref);
+ if (0 > err)
+ goto rtattr_failure;
+
+ r->rta_len = skb->tail - (u8*)r;
+ }
+
+ return 0;
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1; /* keep the same failure convention as tcf_action_dump_1() */
+
+}
+
+int tcf_action_init_1(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr, int bind )
+{
+ struct tc_action_ops *a_o;
+ char act_name[4 + IFNAMSIZ + 1];
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+
+ int err = -EINVAL;
+
+ if (NULL == name) {
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ goto err_out;
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL != kind) {
+ /* validate the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("Action %s bad\n", (char*)RTA_DATA(kind));
+ goto err_out;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+
+ } else {
+ printk("Action bad kind\n");
+ goto err_out;
+ }
+ a_o = tc_lookup_action(kind);
+ } else {
+ sprintf(act_name, "%s", name);
+ DPRINTK("tcf_action_init_1: finding %s\n",act_name);
+ a_o = tc_lookup_action_n(name);
+ }
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ DPRINTK("tcf_action_init_1: trying to load module %s\n",act_name);
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+
+#endif
+ if (NULL == a_o) {
+ printk("failed to find %s\n",act_name);
+ goto err_out;
+ }
+
+ if (NULL == a) {
+ goto err_mod;
+ }
+
+ /* backward compatibility for policer */
+ if (NULL == name) {
+ err = a_o->init(tb[TCA_ACT_OPTIONS-1], est, a, ovr, bind);
+ if (0 > err ) {
+ err = -EINVAL;
+ goto err_mod;
+ }
+ } else {
+ err = a_o->init(rta, est, a, ovr, bind);
+ if (0 > err ) {
+ err = -EINVAL;
+ goto err_mod;
+ }
+ }
+
+ /* The module count goes up only when a brand new policy is created;
+ * if it already exists and is only bound to in a_o->init(), then
+ * ACT_P_CREATED is not returned (zero is).
+ */
+ if (ACT_P_CREATED != err) {
+ module_put(a_o->owner);
+ }
+ a->ops = a_o;
+ DPRINTK("tcf_action_init_1: successfull %s \n",act_name);
+
+ return 0;
+err_mod:
+ module_put(a_o->owner);
+err_out:
+ return err;
+}
+
+int tcf_action_init(struct rtattr *rta, struct rtattr *est, struct tc_action *a, char *name, int ovr , int bind)
+{
+ struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
+ int i;
+ struct tc_action *act = a, *a_s = a;
+
+ int err = -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ return err;
+
+ for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
+ if (tb[i]) {
+ if (NULL == act) {
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act) {
+ err = -ENOMEM;
+ goto bad_ret;
+ }
+ memset(act, 0,sizeof(*act));
+ }
+ act->next = NULL;
+ if (0 > tcf_action_init_1(tb[i],est,act,name,ovr,bind)) {
+ printk("Error processing action order %d\n",i);
+ return err;
+ }
+
+ act->order = i+1;
+ if (a_s != act) {
+ a_s->next = act;
+ a_s = act;
+ }
+ act = NULL;
+ }
+
+ }
+
+ return 0;
+bad_ret:
+ tcf_action_destroy(a, bind);
+ return err;
+}
+
+int tcf_action_copy_stats (struct sk_buff *skb,struct tc_action *a)
+{
+ int err;
+ struct gnet_dump d;
+ struct tcf_act_hdr *h = a->priv;
+
+#ifdef CONFIG_KMOD
+ /* place holder */
+#endif
+
+ if (NULL == h)
+ goto errout;
+
+ if (a->type == TCA_OLD_COMPAT)
+ err = gnet_stats_start_copy_compat(skb, TCA_ACT_STATS,
+ TCA_STATS, TCA_XSTATS, h->stats_lock, &d);
+ else
+ err = gnet_stats_start_copy(skb, TCA_ACT_STATS,
+ h->stats_lock, &d);
+
+ if (err < 0)
+ goto errout;
+
+ if (NULL != a->ops && NULL != a->ops->get_stats)
+ if (a->ops->get_stats(skb, a) < 0)
+ goto errout;
+
+ if (gnet_stats_copy_basic(&d, &h->bstats) < 0 ||
+#ifdef CONFIG_NET_ESTIMATOR
+ gnet_stats_copy_rate_est(&d, &h->rate_est) < 0 ||
+#endif
+ gnet_stats_copy_queue(&d, &h->qstats) < 0)
+ goto errout;
+
+ if (gnet_stats_finish_copy(&d) < 0)
+ goto errout;
+
+ return 0;
+
+errout:
+ return -1;
+}
+
+
+static int
+tca_get_fill(struct sk_buff *skb, struct tc_action *a,
+ u32 pid, u32 seq, unsigned flags, int event, int bind, int ref)
+{
+ struct tcamsg *t;
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+ struct rtattr *x;
+
+ nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*t));
+ nlh->nlmsg_flags = flags;
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ if (0 > tcf_action_dump(skb, a, bind, ref)) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8*)x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ return skb->len;
+
+rtattr_failure:
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int act_get_notify(u32 pid, struct nlmsghdr *n,
+ struct tc_action *a, int event)
+{
+ struct sk_buff *skb;
+
+ int err = 0;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb)
+ return -ENOBUFS;
+
+ if (tca_get_fill(skb, a, pid, n->nlmsg_seq, 0, event, 0, 0) <= 0) {
+ kfree_skb(skb);
+ return -EINVAL;
+ }
+
+ err = netlink_unicast(rtnl,skb, pid, MSG_DONTWAIT);
+ if (err > 0)
+ err = 0;
+ return err;
+}
+
+static int tcf_action_get_1(struct rtattr *rta, struct tc_action *a, struct nlmsghdr *n, u32 pid)
+{
+ struct tc_action_ops *a_o;
+ char act_name[4 + IFNAMSIZ + 1];
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+ int index;
+
+ int err = -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0)
+ goto err_out;
+
+
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL != kind) {
+ /* validate the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("tcf_action_get_1: action %s bad\n", (char*)RTA_DATA(kind));
+ goto err_out;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+
+ } else {
+ printk("tcf_action_get_1: action bad kind\n");
+ goto err_out;
+ }
+
+ if (tb[TCA_ACT_INDEX - 1]) {
+ index = *(int *)RTA_DATA(tb[TCA_ACT_INDEX - 1]);
+ } else {
+ printk("tcf_action_get_1: index not received\n");
+ goto err_out;
+ }
+
+ a_o = tc_lookup_action(kind);
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+
+#endif
+ if (NULL == a_o) {
+ printk("failed to find %s\n",act_name);
+ goto err_out;
+ }
+
+ if (NULL == a) {
+ goto err_mod;
+ }
+
+ a->ops = a_o;
+
+ if (NULL == a_o->lookup || 0 == a_o->lookup(a, index)) {
+ a->ops = NULL;
+ err = -EINVAL;
+ goto err_mod;
+ }
+
+ module_put(a_o->owner);
+ return 0;
+err_mod:
+ module_put(a_o->owner);
+err_out:
+ return err;
+}
+
+static void cleanup_a (struct tc_action *act)
+{
+ struct tc_action *a;
+
+ for (a = act; act; a = act) {
+ if (a) {
+ act = act->next;
+ a->ops = NULL;
+ a->priv = NULL;
+ kfree(a);
+ } else {
+ printk("cleanup_a: BUG? empty action\n");
+ }
+ }
+}
+
+static struct tc_action_ops *get_ao(struct rtattr *kind, struct tc_action *a)
+{
+ char act_name[4 + IFNAMSIZ + 1];
+ struct tc_action_ops *a_o = NULL;
+
+ if (NULL != kind) {
+ /* validate the length before copying into act_name */
+ if (RTA_PAYLOAD(kind) >= IFNAMSIZ) {
+ printk("get_ao: action %s bad\n", (char*)RTA_DATA(kind));
+ return NULL;
+ }
+ sprintf(act_name, "%s", (char*)RTA_DATA(kind));
+
+ } else {
+ printk("get_ao: action bad kind\n");
+ return NULL;
+ }
+
+ a_o = tc_lookup_action(kind);
+#ifdef CONFIG_KMOD
+ if (NULL == a_o) {
+ DPRINTK("get_ao: trying to load module %s\n",act_name);
+ request_module (act_name);
+ a_o = tc_lookup_action_n(act_name);
+ }
+#endif
+
+ if (NULL == a_o) {
+ printk("get_ao: failed to find %s\n",act_name);
+ return NULL;
+ }
+
+ a->ops = a_o;
+ return a_o;
+}
+
+static struct tc_action *create_a(int i)
+{
+ struct tc_action *act = NULL;
+
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act) { /* grrr .. */
+ printk("create_a: failed to alloc! \n");
+ return NULL;
+ }
+
+ memset(act, 0,sizeof(*act));
+
+ act->order = i;
+
+ return act;
+}
+
+static int tca_action_flush(struct rtattr *rta, struct nlmsghdr *n, u32 pid)
+{
+ struct sk_buff *skb;
+ unsigned char *b;
+ struct nlmsghdr *nlh;
+ struct tcamsg *t;
+ struct netlink_callback dcb;
+ struct rtattr *x;
+ struct rtattr *tb[TCA_ACT_MAX+1];
+ struct rtattr *kind = NULL;
+ struct tc_action *a = create_a(0);
+ int err = -EINVAL;
+
+ if (NULL == a) {
+ printk("tca_action_flush: couldnt create tc_action\n");
+ return err;
+ }
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb) {
+ printk("tca_action_flush: failed skb alloc\n");
+ kfree(a);
+ return -ENOBUFS;
+ }
+
+ b = (unsigned char *)skb->tail;
+
+ memset(&dcb, 0, sizeof(dcb)); /* a->ops->walk() expects a clean callback */
+
+ if (rtattr_parse(tb, TCA_ACT_MAX, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
+ goto err_out;
+ }
+
+ kind = tb[TCA_ACT_KIND-1];
+ if (NULL == get_ao(kind, a)) {
+ goto err_out;
+ }
+
+ nlh = NLMSG_PUT(skb, pid, n->nlmsg_seq, RTM_DELACTION, sizeof (*t));
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr *) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ err = a->ops->walk(skb, &dcb, RTM_DELACTION, a);
+ if (0 > err ) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8 *) x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ nlh->nlmsg_flags |= NLM_F_ROOT;
+ module_put(a->ops->owner);
+ kfree(a);
+ err = rtnetlink_send(skb, pid, RTMGRP_TC, n->nlmsg_flags&NLM_F_ECHO);
+ if (err > 0)
+ return 0;
+
+ return err;
+
+
+rtattr_failure:
+ module_put(a->ops->owner);
+nlmsg_failure:
+err_out:
+ kfree_skb(skb);
+ kfree(a);
+ return err;
+}
+
+static int tca_action_gd(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int event )
+{
+
+ int s = 0;
+ int i, ret = 0;
+ struct tc_action *act = NULL;
+ struct rtattr *tb[TCA_ACT_MAX_PRIO+1];
+ struct tc_action *a = NULL, *a_s = NULL;
+
+ if (event != RTM_GETACTION && event != RTM_DELACTION)
+ return -EINVAL;
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(rta), RTA_PAYLOAD(rta))<0) {
+ ret = -EINVAL;
+ goto nlmsg_failure;
+ }
+
+ if (event == RTM_DELACTION && n->nlmsg_flags&NLM_F_ROOT) {
+ if (NULL != tb[0] && NULL == tb[1]) {
+ return tca_action_flush(tb[0],n,pid);
+ }
+ }
+
+ for (i=0; i < TCA_ACT_MAX_PRIO ; i++) {
+
+ if (NULL == tb[i])
+ break;
+
+ act = create_a(i+1);
+ if (NULL != a && a != act) {
+ a->next = act;
+ a = act;
+ } else {
+ a = act;
+ }
+
+ if (!s) {
+ s = 1;
+ a_s = a;
+ }
+
+ ret = tcf_action_get_1(tb[i],act,n,pid);
+ if (ret < 0) {
+ printk("tcf_action_get: failed to get! \n");
+ ret = -EINVAL;
+ goto rtattr_failure;
+ }
+
+ }
+
+
+ if (RTM_GETACTION == event) {
+ ret = act_get_notify(pid, n, a_s, event);
+ } else { /* delete */
+
+ struct sk_buff *skb;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb) {
+ ret = -ENOBUFS;
+ goto nlmsg_failure;
+ }
+
+ if (tca_get_fill(skb, a_s, pid, n->nlmsg_seq, 0, event, 0 , 1) <= 0) {
+ kfree_skb(skb);
+ ret = -EINVAL;
+ goto nlmsg_failure;
+ }
+
+ /* now do the delete */
+ tcf_action_destroy(a_s, 0);
+
+ ret = rtnetlink_send(skb, pid, RTMGRP_TC, n->nlmsg_flags&NLM_F_ECHO);
+ if (ret > 0)
+ return 0;
+ return ret;
+ }
+rtattr_failure:
+nlmsg_failure:
+ cleanup_a(a_s);
+ return ret;
+}
+
+
+static int tcf_add_notify(struct tc_action *a, u32 pid, u32 seq, int event, unsigned flags)
+{
+ struct tcamsg *t;
+ struct nlmsghdr *nlh;
+ struct sk_buff *skb;
+ struct rtattr *x;
+ unsigned char *b;
+
+
+ int err = 0;
+
+ skb = alloc_skb(NLMSG_GOODSIZE, GFP_KERNEL);
+ if (!skb)
+ return -ENOBUFS;
+
+ b = (unsigned char *)skb->tail;
+
+ nlh = NLMSG_PUT(skb, pid, seq, event, sizeof(*t));
+ nlh->nlmsg_flags = flags;
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr*) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ if (0 > tcf_action_dump(skb, a, 0, 0)) {
+ goto rtattr_failure;
+ }
+
+ x->rta_len = skb->tail - (u8*)x;
+
+ nlh->nlmsg_len = skb->tail - b;
+ NETLINK_CB(skb).dst_groups = RTMGRP_TC;
+
+ err = rtnetlink_send(skb, pid, RTMGRP_TC, flags&NLM_F_ECHO);
+ if (err > 0)
+ err = 0;
+
+ return err;
+
+rtattr_failure:
+nlmsg_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+
+static int tcf_action_add(struct rtattr *rta, struct nlmsghdr *n, u32 pid, int ovr )
+{
+ int ret = 0;
+ struct tc_action *act = NULL;
+ struct tc_action *a = NULL;
+ u32 seq = n->nlmsg_seq;
+
+ act = kmalloc(sizeof(*act),GFP_KERNEL);
+ if (NULL == act)
+ return -ENOMEM;
+
+ memset(act, 0, sizeof(*act));
+
+ ret = tcf_action_init(rta, NULL,act,NULL,ovr,0);
+ /* NOTE: We have an all-or-none model:
+ * if any of the actions fail to update, all of them are undone.
+ */
+ if (0 > ret) {
+ tcf_action_destroy(act, 0);
+ goto done;
+ }
+
+ /* dump then free all the actions after update; inserted policy
+ * stays intact
+ * */
+ ret = tcf_add_notify(act, pid, seq, RTM_NEWACTION, n->nlmsg_flags);
+ for (a = act; act; a = act) {
+ if (a) {
+ act = act->next;
+ a->ops = NULL;
+ a->priv = NULL;
+ kfree(a);
+ } else {
+ printk("tcf_action_add: BUG? empty action\n");
+ }
+ }
+done:
+
+ return ret;
+}
+
+static int tc_ctl_action(struct sk_buff *skb, struct nlmsghdr *n, void *arg)
+{
+ struct rtattr **tca = arg;
+ u32 pid = skb ? NETLINK_CB(skb).pid : 0;
+
+ int ret = 0, ovr = 0;
+
+ if (NULL == tca[TCA_ACT_TAB-1]) {
+ printk("tc_ctl_action: received NO action attribs\n");
+ return -EINVAL;
+ }
+
+ /* n->nlmsg_flags&NLM_F_CREATE
+ * */
+ switch (n->nlmsg_type) {
+ case RTM_NEWACTION:
+ /* We assume all other flags imply "create only if it doesn't
+ * exist".  CREATE | EXCL implies that as well, but to avoid
+ * ambiguity (e.g. when flags is zero) we just assume it here.
+ */
+ if (n->nlmsg_flags&NLM_F_REPLACE) {
+ ovr = 1;
+ }
+ ret = tcf_action_add(tca[TCA_ACT_TAB-1], n, pid, ovr);
+ break;
+ case RTM_DELACTION:
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_DELACTION);
+ break;
+ case RTM_GETACTION:
+ ret = tca_action_gd(tca[TCA_ACT_TAB-1], n, pid,RTM_GETACTION);
+ break;
+ default:
+ printk(" Unknown cmd was detected\n");
+ break;
+ }
+
+ return ret;
+}
+
+static char *
+find_dump_kind(struct nlmsghdr *n)
+{
+ struct rtattr *tb1, *tb2[TCA_ACT_MAX+1];
+ struct rtattr *tb[TCA_ACT_MAX_PRIO + 1];
+ struct rtattr *rta[TCAA_MAX + 1];
+ struct rtattr *kind = NULL;
+ int min_len = NLMSG_LENGTH(sizeof (struct tcamsg));
+
+ int attrlen = n->nlmsg_len - NLMSG_ALIGN(min_len);
+ struct rtattr *attr = (void *) n + NLMSG_ALIGN(min_len);
+
+ if (rtattr_parse(rta, TCAA_MAX, attr, attrlen) < 0)
+ return NULL;
+ tb1 = rta[TCA_ACT_TAB - 1];
+ if (NULL == tb1) {
+ return NULL;
+ }
+
+ if (rtattr_parse(tb, TCA_ACT_MAX_PRIO, RTA_DATA(tb1), NLMSG_ALIGN(RTA_PAYLOAD(tb1))) < 0)
+ return NULL;
+ if (NULL == tb[0])
+ return NULL;
+
+ if (rtattr_parse(tb2, TCA_ACT_MAX, RTA_DATA(tb[0]), RTA_PAYLOAD(tb[0]))<0)
+ return NULL;
+ kind = tb2[TCA_ACT_KIND-1];
+ if (NULL == kind) /* guard RTA_DATA() against a missing attribute */
+ return NULL;
+
+ return (char *) RTA_DATA(kind);
+}
+
+static int
+tc_dump_action(struct sk_buff *skb, struct netlink_callback *cb)
+{
+ struct nlmsghdr *nlh;
+ unsigned char *b = skb->tail;
+ struct rtattr *x;
+ struct tc_action_ops *a_o;
+ struct tc_action a;
+ int ret = 0;
+
+ struct tcamsg *t = (struct tcamsg *) NLMSG_DATA(cb->nlh);
+ char *kind = find_dump_kind(cb->nlh);
+ if (NULL == kind) {
+ printk("tc_dump_action: action bad kind\n");
+ return 0;
+ }
+
+ a_o = tc_lookup_action_n(kind);
+
+ if (NULL == a_o) {
+ printk("failed to find %s\n", kind);
+ return 0;
+ }
+
+ memset(&a,0,sizeof(struct tc_action));
+ a.ops = a_o;
+
+ if (NULL == a_o->walk) {
+ printk("tc_dump_action: %s !capable of dumping table\n",kind);
+ goto rtattr_failure;
+ }
+
+ nlh = NLMSG_PUT(skb, NETLINK_CB(cb->skb).pid, cb->nlh->nlmsg_seq, cb->nlh->nlmsg_type, sizeof (*t));
+ t = NLMSG_DATA(nlh);
+ t->tca_family = AF_UNSPEC;
+
+ x = (struct rtattr *) skb->tail;
+ RTA_PUT(skb, TCA_ACT_TAB, 0, NULL);
+
+ ret = a_o->walk(skb, cb, RTM_GETACTION, &a);
+ if (0 > ret ) {
+ goto rtattr_failure;
+ }
+
+ if (ret > 0) {
+ x->rta_len = skb->tail - (u8 *) x;
+ ret = skb->len;
+ } else {
+ skb_trim(skb, (u8*)x - skb->data);
+ }
+
+ nlh->nlmsg_len = skb->tail - b;
+ if (NETLINK_CB(cb->skb).pid && ret)
+ nlh->nlmsg_flags |= NLM_F_MULTI;
+ module_put(a_o->owner);
+ return skb->len;
+
+rtattr_failure:
+nlmsg_failure:
+ module_put(a_o->owner);
+ skb_trim(skb, b - skb->data);
+ return skb->len;
+}
+
+static int __init tc_action_init(void)
+{
+ struct rtnetlink_link *link_p = rtnetlink_links[PF_UNSPEC];
+
+ if (link_p) {
+ link_p[RTM_NEWACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_DELACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_GETACTION-RTM_BASE].doit = tc_ctl_action;
+ link_p[RTM_GETACTION-RTM_BASE].dumpit = tc_dump_action;
+ }
+
+ printk("TC classifier action (bugs to netdev@oss.sgi.com cc hadi@cyberus.ca)\n");
+ return 0;
+}
+
+subsys_initcall(tc_action_init);
+
+EXPORT_SYMBOL(tcf_register_action);
+EXPORT_SYMBOL(tcf_unregister_action);
+EXPORT_SYMBOL(tcf_action_init_1);
+EXPORT_SYMBOL(tcf_action_init);
+EXPORT_SYMBOL(tcf_action_destroy);
+EXPORT_SYMBOL(tcf_action_exec);
+EXPORT_SYMBOL(tcf_action_copy_stats);
+EXPORT_SYMBOL(tcf_action_dump);
+EXPORT_SYMBOL(tcf_action_dump_1);
+EXPORT_SYMBOL(tcf_action_dump_old);
--- /dev/null
+/*
+ * net/sched/sch_netem.c Network emulator
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ *
+ * Many of the algorithms and ideas for this came from
+ * NIST Net which is not copyrighted.
+ *
+ * Authors: Stephen Hemminger <shemminger@osdl.org>
+ * Catalin(ux aka Dino) BOIE <catab at umbrella dot ro>
+ */
+
+#include <linux/config.h>
+#include <linux/module.h>
+#include <linux/bitops.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/netdevice.h>
+#include <linux/skbuff.h>
+#include <linux/rtnetlink.h>
+
+#include <net/pkt_sched.h>
+
+/* Network Emulation Queuing algorithm.
+ ====================================
+
+ Sources: [1] Mark Carson, Darrin Santay, "NIST Net - A Linux-based
+ Network Emulation Tool"
+ [2] Luigi Rizzo, DummyNet for FreeBSD
+
+ ----------------------------------------------------------------
+
+ This started out as a simple way to delay outgoing packets to
+ test TCP but has grown to include most of the functionality
+ of a full blown network emulator like NISTnet. It can delay
+ packets and add random jitter (and correlation). The random
+ distribution can be loaded from a table as well to provide
+ normal, Pareto, or experimental curves. Packet loss,
+ duplication, and reordering can also be emulated.
+
+ This qdisc does not do classification; that can be handled by
+ layering other disciplines on top of it.  Nor does it need to do
+ bandwidth control, since that can be handled by a token bucket or
+ another rate-control discipline.
+
+ The simulator is limited by the Linux timer resolution
+ and will create packet bursts on the HZ boundary (1ms).
+*/
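+
+/* Example configuration (userspace, via a netem-aware iproute2 "tc";
+ * shown for orientation only):
+ *
+ *   tc qdisc add dev eth0 root netem delay 100ms 10ms 25%
+ *
+ * adds 100ms of delay with +/-10ms of jitter, 25% correlated with the
+ * delay of the previous packet.
+ */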
+
+struct netem_sched_data {
+ struct Qdisc *qdisc;
+ struct sk_buff_head delayed;
+ struct timer_list timer;
+
+ u32 latency;
+ u32 loss;
+ u32 limit;
+ u32 counter;
+ u32 gap;
+ u32 jitter;
+ u32 duplicate;
+
+ struct crndstate {
+ unsigned long last;
+ unsigned long rho;
+ } delay_cor, loss_cor, dup_cor;
+
+ struct disttable {
+ u32 size;
+ s16 table[0];
+ } *delay_dist;
+};
+
+/* Time stamp put into socket buffer control block */
+struct netem_skb_cb {
+ psched_time_t time_to_send;
+};
+
+/* init_crandom - initialize correlated random number generator
+ * Use entropy source for initial seed.
+ */
+static void init_crandom(struct crndstate *state, unsigned long rho)
+{
+ state->rho = rho;
+ state->last = net_random();
+}
+
+/* get_crandom - correlated random number generator
+ * Next number depends on last value.
+ * rho is scaled to avoid floating point.
+ */
+static unsigned long get_crandom(struct crndstate *state)
+{
+ u64 value, rho;
+ unsigned long answer;
+
+ if (state->rho == 0) /* no correlation */
+ return net_random();
+
+ value = net_random();
+ rho = (u64)state->rho + 1;
+ answer = (value * ((1ull<<32) - rho) + state->last * rho) >> 32;
+ state->last = answer;
+ return answer;
+}
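+
+/*
+ * Worked example: with rho == 0x80000000 (a correlation of roughly
+ * 0.5) the expression reduces to about (value + last) / 2, i.e. each
+ * output is an even blend of fresh entropy and the previous output;
+ * rho == 0 degenerates to plain net_random() as handled above.
+ */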
+
+/* tabledist - return a pseudo-randomly distributed value with mean mu and
+ * std deviation sigma. Uses table lookup to approximate the desired
+ * distribution, and a uniformly-distributed pseudo-random source.
+ */
+static long tabledist(unsigned long mu, long sigma,
+ struct crndstate *state, const struct disttable *dist)
+{
+ long t, x;
+ unsigned long rnd;
+
+ if (sigma == 0)
+ return mu;
+
+ rnd = get_crandom(state);
+
+ /* default uniform distribution */
+ if (dist == NULL)
+ return (rnd % (2*sigma)) - sigma + mu;
+
+ t = dist->table[rnd % dist->size];
+ x = (sigma % NETEM_DIST_SCALE) * t;
+ if (x >= 0)
+ x += NETEM_DIST_SCALE/2;
+ else
+ x -= NETEM_DIST_SCALE/2;
+
+ return x / NETEM_DIST_SCALE + (sigma / NETEM_DIST_SCALE) * t + mu;
+}
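+
+/*
+ * Worked example: with dist == NULL, mu == 100 and sigma == 10 the
+ * result is uniform over [90, 110); with a loaded table each entry t
+ * scales sigma around mu, which is how normal or pareto curves from a
+ * distribution table are approximated.
+ */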
+
+/* Put skb in the private delayed queue. */
+static int delay_skb(struct Qdisc *sch, struct sk_buff *skb)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ struct netem_skb_cb *cb = (struct netem_skb_cb *)skb->cb;
+ psched_tdiff_t td;
+ psched_time_t now;
+
+ PSCHED_GET_TIME(now);
+ td = tabledist(q->latency, q->jitter, &q->delay_cor, q->delay_dist);
+ PSCHED_TADD2(now, td, cb->time_to_send);
+
+ /* Always queue at tail to keep packets in order */
+ if (likely(q->delayed.qlen < q->limit)) {
+ __skb_queue_tail(&q->delayed, skb);
+ sch->bstats.bytes += skb->len;
+ sch->bstats.packets++;
+
+ if (!timer_pending(&q->timer)) {
+ q->timer.expires = jiffies + PSCHED_US2JIFFIE(td);
+ add_timer(&q->timer);
+ }
+ return NET_XMIT_SUCCESS;
+ }
+
+ sch->qstats.drops++;
+ kfree_skb(skb);
+ return NET_XMIT_DROP;
+}
+
+static int netem_enqueue(struct sk_buff *skb, struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+
+ pr_debug("netem_enqueue skb=%p @%lu\n", skb, jiffies);
+
+ /* Random packet drop 0 => none, ~0 => all */
+ if (q->loss && q->loss >= get_crandom(&q->loss_cor)) {
+ pr_debug("netem_enqueue: random loss\n");
+ sch->qstats.drops++;
+ return 0; /* lie about loss so TCP doesn't know */
+ }
+
+ /* Random duplication */
+ if (q->duplicate && q->duplicate >= get_crandom(&q->dup_cor)) {
+ struct sk_buff *skb2 = skb_clone(skb, GFP_ATOMIC);
+
+ pr_debug("netem_enqueue: dup %p\n", skb2);
+ if (skb2)
+ delay_skb(sch, skb2);
+ }
+
+ /* If doing simple delay then gap == 0, so all packets
+ * go into the delayed holding queue.  Otherwise, when
+ * emulating out-of-order delivery, only "1 out of gap"
+ * packets will be delayed.
+ */
+ if (q->counter < q->gap) {
+ int ret;
+
+ ++q->counter;
+ ret = q->qdisc->enqueue(skb, q->qdisc);
+ if (likely(ret == NET_XMIT_SUCCESS)) {
+ sch->q.qlen++;
+ sch->bstats.bytes += skb->len;
+ sch->bstats.packets++;
+ } else
+ sch->qstats.drops++;
+ return ret;
+ }
+
+ q->counter = 0;
+
+ return delay_skb(sch, skb);
+}
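+
+/*
+ * Example of the gap/counter interaction above: with gap == 5, four
+ * packets in a row go straight to the inner qdisc and every fifth one
+ * takes the delayed path, which produces the emulated reordering;
+ * with gap == 0 every packet is delayed.
+ */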
+
+/* Requeue packets but don't change time stamp */
+static int netem_requeue(struct sk_buff *skb, struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ int ret;
+
+ if ((ret = q->qdisc->ops->requeue(skb, q->qdisc)) == 0) {
+ sch->q.qlen++;
+ sch->qstats.requeues++;
+ }
+
+ return ret;
+}
+
+static unsigned int netem_drop(struct Qdisc* sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ unsigned int len;
+
+ if ((len = q->qdisc->ops->drop(q->qdisc)) != 0) {
+ sch->q.qlen--;
+ sch->qstats.drops++;
+ }
+ return len;
+}
+
+/* Dequeue a packet.
+ * Packets that are ready to send are moved from the delay holding
+ * list to the underlying qdisc by the watchdog timer; here we just
+ * dequeue from the underlying qdisc.
+ */
+static struct sk_buff *netem_dequeue(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ struct sk_buff *skb;
+
+ skb = q->qdisc->dequeue(q->qdisc);
+ if (skb)
+ sch->q.qlen--;
+ return skb;
+}
+
+static void netem_watchdog(unsigned long arg)
+{
+ struct Qdisc *sch = (struct Qdisc *)arg;
+ struct netem_sched_data *q = qdisc_priv(sch);
+ struct net_device *dev = sch->dev;
+ struct sk_buff *skb;
+ psched_time_t now;
+
+ pr_debug("netem_watchdog: fired @%lu\n", jiffies);
+
+ spin_lock_bh(&dev->queue_lock);
+ PSCHED_GET_TIME(now);
+
+ while ((skb = skb_peek(&q->delayed)) != NULL) {
+ const struct netem_skb_cb *cb
+ = (const struct netem_skb_cb *)skb->cb;
+ long delay
+ = PSCHED_US2JIFFIE(PSCHED_TDIFF(cb->time_to_send, now));
+ pr_debug("netem_watchdog: skb %p@%lu %ld\n",
+ skb, jiffies, delay);
+
+ /* is there still time remaining? */
+ if (delay > 0) {
+ mod_timer(&q->timer, jiffies + delay);
+ break;
+ }
+ __skb_unlink(skb, &q->delayed);
+
+ if (q->qdisc->enqueue(skb, q->qdisc))
+ sch->qstats.drops++;
+ else
+ sch->q.qlen++;
+ }
+ qdisc_run(dev);
+ spin_unlock_bh(&dev->queue_lock);
+}
+
+static void netem_reset(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+
+ qdisc_reset(q->qdisc);
+ skb_queue_purge(&q->delayed);
+
+ sch->q.qlen = 0;
+ del_timer_sync(&q->timer);
+}
+
+static int set_fifo_limit(struct Qdisc *q, int limit)
+{
+ struct rtattr *rta;
+ int ret = -ENOMEM;
+
+ rta = kmalloc(RTA_LENGTH(sizeof(struct tc_fifo_qopt)), GFP_KERNEL);
+ if (rta) {
+ rta->rta_type = RTM_NEWQDISC;
+ rta->rta_len = RTA_LENGTH(sizeof(struct tc_fifo_qopt));
+ ((struct tc_fifo_qopt *)RTA_DATA(rta))->limit = limit;
+
+ ret = q->ops->change(q, rta);
+ kfree(rta);
+ }
+ return ret;
+}
+
+/*
+ * Distribution data is a variable size payload containing
+ * signed 16 bit values.
+ */
+static int get_dist_table(struct Qdisc *sch, const struct rtattr *attr)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ unsigned long n = RTA_PAYLOAD(attr)/sizeof(__s16);
+ const __s16 *data = RTA_DATA(attr);
+ struct disttable *d;
+ int i;
+
+ if (n > 65536)
+ return -EINVAL;
+
+ d = kmalloc(sizeof(*d) + n*sizeof(d->table[0]), GFP_KERNEL);
+ if (!d)
+ return -ENOMEM;
+
+ d->size = n;
+ for (i = 0; i < n; i++)
+ d->table[i] = data[i];
+
+ spin_lock_bh(&sch->dev->queue_lock);
+ d = xchg(&q->delay_dist, d);
+ spin_unlock_bh(&sch->dev->queue_lock);
+
+ kfree(d);
+ return 0;
+}
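+
+/*
+ * Note: the distribution tables shipped with iproute2 (normal, pareto,
+ * etc.) are typically a few thousand __s16 entries, well under the
+ * 65536 cap enforced above.
+ */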
+
+static int get_correlation(struct Qdisc *sch, const struct rtattr *attr)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ const struct tc_netem_corr *c = RTA_DATA(attr);
+
+ if (RTA_PAYLOAD(attr) != sizeof(*c))
+ return -EINVAL;
+
+ init_crandom(&q->delay_cor, c->delay_corr);
+ init_crandom(&q->loss_cor, c->loss_corr);
+ init_crandom(&q->dup_cor, c->dup_corr);
+ return 0;
+}
+
+static int netem_change(struct Qdisc *sch, struct rtattr *opt)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ struct tc_netem_qopt *qopt;
+ int ret;
+
+ if (opt == NULL || RTA_PAYLOAD(opt) < sizeof(*qopt))
+ return -EINVAL;
+
+ qopt = RTA_DATA(opt);
+ ret = set_fifo_limit(q->qdisc, qopt->limit);
+ if (ret) {
+ pr_debug("netem: can't set fifo limit\n");
+ return ret;
+ }
+
+ q->latency = qopt->latency;
+ q->jitter = qopt->jitter;
+ q->limit = qopt->limit;
+ q->gap = qopt->gap;
+ q->loss = qopt->loss;
+ q->duplicate = qopt->duplicate;
+
+ /* Handle nested options after initial queue options.
+ * Should have put all options in nested format but too late now.
+ */
+ if (RTA_PAYLOAD(opt) > sizeof(*qopt)) {
+ struct rtattr *tb[TCA_NETEM_MAX];
+ if (rtattr_parse(tb, TCA_NETEM_MAX,
+ RTA_DATA(opt) + sizeof(*qopt),
+ RTA_PAYLOAD(opt) - sizeof(*qopt)))
+ return -EINVAL;
+
+ if (tb[TCA_NETEM_CORR-1]) {
+ ret = get_correlation(sch, tb[TCA_NETEM_CORR-1]);
+ if (ret)
+ return ret;
+ }
+
+ if (tb[TCA_NETEM_DELAY_DIST-1]) {
+ ret = get_dist_table(sch, tb[TCA_NETEM_DELAY_DIST-1]);
+ if (ret)
+ return ret;
+ }
+ }
+
+
+ return 0;
+}
+
+static int netem_init(struct Qdisc *sch, struct rtattr *opt)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ int ret;
+
+ if (!opt)
+ return -EINVAL;
+
+ skb_queue_head_init(&q->delayed);
+ init_timer(&q->timer);
+ q->timer.function = netem_watchdog;
+ q->timer.data = (unsigned long) sch;
+ q->counter = 0;
+
+ q->qdisc = qdisc_create_dflt(sch->dev, &pfifo_qdisc_ops);
+ if (!q->qdisc) {
+ pr_debug("netem: qdisc create failed\n");
+ return -ENOMEM;
+ }
+
+ ret = netem_change(sch, opt);
+ if (ret) {
+ pr_debug("netem: change failed\n");
+ qdisc_destroy(q->qdisc);
+ }
+ return ret;
+}
+
+static void netem_destroy(struct Qdisc *sch)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+
+ del_timer_sync(&q->timer);
+ qdisc_destroy(q->qdisc);
+ kfree(q->delay_dist);
+}
+
+static int netem_dump(struct Qdisc *sch, struct sk_buff *skb)
+{
+ const struct netem_sched_data *q = qdisc_priv(sch);
+ unsigned char *b = skb->tail;
+ struct rtattr *rta = (struct rtattr *) b;
+ struct tc_netem_qopt qopt;
+ struct tc_netem_corr cor;
+
+ qopt.latency = q->latency;
+ qopt.jitter = q->jitter;
+ qopt.limit = q->limit;
+ qopt.loss = q->loss;
+ qopt.gap = q->gap;
+ qopt.duplicate = q->duplicate;
+ RTA_PUT(skb, TCA_OPTIONS, sizeof(qopt), &qopt);
+
+ cor.delay_corr = q->delay_cor.rho;
+ cor.loss_corr = q->loss_cor.rho;
+ cor.dup_corr = q->dup_cor.rho;
+ RTA_PUT(skb, TCA_NETEM_CORR, sizeof(cor), &cor);
+ rta->rta_len = skb->tail - b;
+
+ return skb->len;
+
+rtattr_failure:
+ skb_trim(skb, b - skb->data);
+ return -1;
+}
+
+static int netem_dump_class(struct Qdisc *sch, unsigned long cl,
+ struct sk_buff *skb, struct tcmsg *tcm)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+
+ if (cl != 1) /* only one class */
+ return -ENOENT;
+
+ tcm->tcm_handle |= TC_H_MIN(1);
+ tcm->tcm_info = q->qdisc->handle;
+
+ return 0;
+}
+
+static int netem_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new,
+ struct Qdisc **old)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+
+ if (new == NULL)
+ new = &noop_qdisc;
+
+ sch_tree_lock(sch);
+ *old = xchg(&q->qdisc, new);
+ qdisc_reset(*old);
+ sch->q.qlen = 0;
+ sch_tree_unlock(sch);
+
+ return 0;
+}
+
+static struct Qdisc *netem_leaf(struct Qdisc *sch, unsigned long arg)
+{
+ struct netem_sched_data *q = qdisc_priv(sch);
+ return q->qdisc;
+}
+
+static unsigned long netem_get(struct Qdisc *sch, u32 classid)
+{
+ return 1;
+}
+
+static void netem_put(struct Qdisc *sch, unsigned long arg)
+{
+}
+
+static int netem_change_class(struct Qdisc *sch, u32 classid, u32 parentid,
+ struct rtattr **tca, unsigned long *arg)
+{
+ return -ENOSYS;
+}
+
+static int netem_delete(struct Qdisc *sch, unsigned long arg)
+{
+ return -ENOSYS;
+}
+
+static void netem_walk(struct Qdisc *sch, struct qdisc_walker *walker)
+{
+ if (!walker->stop) {
+ if (walker->count >= walker->skip)
+ if (walker->fn(sch, 1, walker) < 0) {
+ walker->stop = 1;
+ return;
+ }
+ walker->count++;
+ }
+}
+
+static struct tcf_proto **netem_find_tcf(struct Qdisc *sch, unsigned long cl)
+{
+ return NULL;
+}
+
+static struct Qdisc_class_ops netem_class_ops = {
+ .graft = netem_graft,
+ .leaf = netem_leaf,
+ .get = netem_get,
+ .put = netem_put,
+ .change = netem_change_class,
+ .delete = netem_delete,
+ .walk = netem_walk,
+ .tcf_chain = netem_find_tcf,
+ .dump = netem_dump_class,
+};
+
+static struct Qdisc_ops netem_qdisc_ops = {
+ .id = "netem",
+ .cl_ops = &netem_class_ops,
+ .priv_size = sizeof(struct netem_sched_data),
+ .enqueue = netem_enqueue,
+ .dequeue = netem_dequeue,
+ .requeue = netem_requeue,
+ .drop = netem_drop,
+ .init = netem_init,
+ .reset = netem_reset,
+ .destroy = netem_destroy,
+ .change = netem_change,
+ .dump = netem_dump,
+ .owner = THIS_MODULE,
+};
+
+
+static int __init netem_module_init(void)
+{
+ return register_qdisc(&netem_qdisc_ops);
+}
+static void __exit netem_module_exit(void)
+{
+ unregister_qdisc(&netem_qdisc_ops);
+}
+module_init(netem_module_init)
+module_exit(netem_module_exit)
+MODULE_LICENSE("GPL");
--- /dev/null
+#!/usr/bin/perl
+
+# Check the stack usage of functions
+#
+# Copyright Joern Engel <joern@wh.fh-wedel.de>
+# Inspired by Linus Torvalds
+# Original idea maybe from Keith Owens
+# s390 port and big speedup by Arnd Bergmann <arnd@bergmann-dalldorf.de>
+# Mips port by Juan Quintela <quintela@mandrakesoft.com>
+# IA64 port via Andreas Dilger
+# Arm port by Holger Schurig
+# Random bits by Matt Mackall <mpm@selenic.com>
+# M68k port by Geert Uytterhoeven and Andreas Schwab
+#
+# Usage:
+# objdump -d vmlinux | stackcheck.pl [arch]
+#
+# TODO : Port to all architectures (one regex per arch)
+
+# check for arch
+#
+# $re is used for two matches:
+# $& (whole re) matches the complete objdump line with the stack growth
+# $1 (first bracket) matches the size of the stack growth
+#
+# use anything else and feel the pain ;)
+my (@stack, $re, $x, $xs);
+{
+ my $arch = shift;
+ if ($arch eq "") {
+ $arch = `uname -m`;
+ chomp($arch); # strip the newline so the string comparisons match
+ }
+
+ $x = "[0-9a-f]"; # hex character
+ $xs = "[0-9a-f ]"; # hex character or space
+ if ($arch eq 'arm') {
+ #c0008ffc: e24dd064 sub sp, sp, #100 ; 0x64
+ $re = qr/.*sub.*sp, sp, #(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch =~ /^i[3456]86$/) {
+ #c0105234: 81 ec ac 05 00 00 sub $0x5ac,%esp
+ $re = qr/^.*[as][du][db] \$(0x$x{1,8}),\%esp$/o;
+ } elsif ($arch eq 'x86_64') {
+ # 2f60: 48 81 ec e8 05 00 00 sub $0x5e8,%rsp
+ $re = qr/^.*[as][du][db] \$(0x$x{1,8}),\%rsp$/o;
+ } elsif ($arch eq 'ia64') {
+ #e0000000044011fc: 01 0f fc 8c adds r12=-384,r12
+ $re = qr/.*adds.*r12=-(([0-9]{2}|[3-9])[0-9]{2}),r12/o;
+ } elsif ($arch eq 'm68k') {
+ # 2b6c: 4e56 fb70 linkw %fp,#-1168
+ # 1df770: defc ffe4 addaw #-28,%sp
+ $re = qr/.*(?:linkw %fp,|addaw )#-([0-9]{1,4})(?:,%sp)?$/o;
+ } elsif ($arch eq 'mips64') {
+ #8800402c: 67bdfff0 daddiu sp,sp,-16
+ $re = qr/.*daddiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch eq 'mips') {
+ #88003254: 27bdffe0 addiu sp,sp,-32
+ $re = qr/.*addiu.*sp,sp,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } elsif ($arch eq 'ppc') {
+ #c00029f4: 94 21 ff 30 stwu r1,-208(r1)
+ $re = qr/.*stwu.*r1,-($x{1,8})\(r1\)/o;
+ } elsif ($arch eq 'ppc64') {
+ #XXX
+ $re = qr/.*stdu.*r1,-($x{1,8})\(r1\)/o;
+ } elsif ($arch =~ /^s390x?$/) {
+ # 11160: a7 fb ff 60 aghi %r15,-160
+ $re = qr/.*ag?hi.*\%r15,-(([0-9]{2}|[3-9])[0-9]{2})/o;
+ } else {
+ print("wrong or unknown architecture\n");
+ exit
+ }
+}
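+
+# Illustrative example: given this i386 objdump line inside
+# <some_function>:
+#   c0105234:  81 ec ac 05 00 00  sub $0x5ac,%esp
+# the script reports a line roughly like
+#   0xc0105234 some_function:                       1452
+# (0x5ac bytes of stack growth).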
+
+sub bysize($) {
+ my ($asize, $bsize);
+ ($asize = $a) =~ s/.* +(.*)$/$1/;
+ ($bsize = $b) =~ s/.* +(.*)$/$1/;
+ $bsize <=> $asize
+}
+
+#
+# main()
+#
+my $funcre = qr/^$x* <(.*)>:$/;
+my $func;
+while (my $line = <STDIN>) {
+ if ($line =~ m/$funcre/) {
+ $func = $1;
+ }
+ if ($line =~ m/$re/) {
+ my $size = $1;
+ $size = hex($size) if ($size =~ /^0x/);
+
+ if ($size > 0x80000000) {
+ $size = - $size;
+ $size += 0x80000000;
+ $size += 0x80000000;
+ }
+
+ next if $line !~ m/^($xs*)/;
+ my $addr = $1;
+ $addr =~ s/ /0/g;
+ $addr = "0x$addr";
+
+ my $intro = "$addr $func:";
+ my $padlen = 56 - length($intro);
+ while ($padlen > 0) {
+ $intro .= "\t"; # pad to the target column a tab stop at a time
+ $padlen -= 8;
+ }
+ next if ($size < 100);
+ push @stack, "$intro$size\n";
+ }
+}
+
+print sort bysize @stack;
--- /dev/null
+hostprogs-y := modpost mk_elfconfig
+always := $(hostprogs-y) empty.o
+
+modpost-objs := modpost.o file2alias.o sumversion.o
+
+# dependencies on generated files need to be listed explicitly
+
+$(obj)/modpost.o $(obj)/file2alias.o $(obj)/sumversion.o: $(obj)/elfconfig.h
+
+quiet_cmd_elfconfig = MKELF $@
+ cmd_elfconfig = $(obj)/mk_elfconfig $(ARCH) < $< > $@
+
+$(obj)/elfconfig.h: $(obj)/empty.o $(obj)/mk_elfconfig FORCE
+ $(call if_changed,elfconfig)
+
+targets += elfconfig.h
--- /dev/null
+/* Simple code to turn various tables in an ELF file into alias definitions.
+ * This deals with kernel datastructures where they should be
+ * dealt with: in the kernel source.
+ *
+ * Copyright 2002-2003 Rusty Russell, IBM Corporation
+ * 2003 Kai Germaschewski
+ *
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License, incorporated herein by reference.
+ */
+
+#include "modpost.h"
+
+/* We use the ELF typedefs for kernel_ulong_t but bite the bullet and
+ * use either stdint.h or inttypes.h for the rest. */
+#if KERNEL_ELFCLASS == ELFCLASS32
+typedef Elf32_Addr kernel_ulong_t;
+#else
+typedef Elf64_Addr kernel_ulong_t;
+#endif
+#ifdef __sun__
+#include <inttypes.h>
+#else
+#include <stdint.h>
+#endif
+
+typedef uint32_t __u32;
+typedef uint16_t __u16;
+typedef unsigned char __u8;
+
+/* Big exception to the "don't include kernel headers into userspace"
+ * rule: userspace potentially has different endianness and word sizes,
+ * but we handle those differences explicitly below. */
+#include "../../include/linux/mod_devicetable.h"
+
+#define ADD(str, sep, cond, field) \
+do { \
+ strcat(str, sep); \
+ if (cond) \
+ sprintf(str + strlen(str), \
+ sizeof(field) == 1 ? "%02X" : \
+ sizeof(field) == 2 ? "%04X" : \
+ sizeof(field) == 4 ? "%08X" : "", \
+ field); \
+ else \
+ sprintf(str + strlen(str), "*"); \
+} while(0)
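+
+/*
+ * Example expansion (hypothetical values): ADD(alias, "v", 1,
+ * id->idVendor) with a 16-bit idVendor of 0x045e appends "v045E";
+ * with cond == 0 it appends the wildcard "v*" instead.
+ */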
+
+/* Looks like "usb:vNpNdlNdhNdcNdscNdpNicNiscNipN" */
+static int do_usb_entry(const char *filename,
+ struct usb_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->idVendor = TO_NATIVE(id->idVendor);
+ id->idProduct = TO_NATIVE(id->idProduct);
+ id->bcdDevice_lo = TO_NATIVE(id->bcdDevice_lo);
+ id->bcdDevice_hi = TO_NATIVE(id->bcdDevice_hi);
+
+ /*
+ * Some modules (e.g. visor) have empty slots as placeholders for
+ * run-time specification; those would produce a catch-all alias,
+ * so skip them.
+ */
+ if (!(id->idVendor | id->bDeviceClass | id->bInterfaceClass))
+ return 1;
+
+ strcpy(alias, "usb:");
+ ADD(alias, "v", id->match_flags&USB_DEVICE_ID_MATCH_VENDOR,
+ id->idVendor);
+ ADD(alias, "p", id->match_flags&USB_DEVICE_ID_MATCH_PRODUCT,
+ id->idProduct);
+ ADD(alias, "dl", id->match_flags&USB_DEVICE_ID_MATCH_DEV_LO,
+ id->bcdDevice_lo);
+ ADD(alias, "dh", id->match_flags&USB_DEVICE_ID_MATCH_DEV_HI,
+ id->bcdDevice_hi);
+ ADD(alias, "dc", id->match_flags&USB_DEVICE_ID_MATCH_DEV_CLASS,
+ id->bDeviceClass);
+ ADD(alias, "dsc",
+ id->match_flags&USB_DEVICE_ID_MATCH_DEV_SUBCLASS,
+ id->bDeviceSubClass);
+ ADD(alias, "dp",
+ id->match_flags&USB_DEVICE_ID_MATCH_DEV_PROTOCOL,
+ id->bDeviceProtocol);
+ ADD(alias, "ic",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_CLASS,
+ id->bInterfaceClass);
+ ADD(alias, "isc",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_SUBCLASS,
+ id->bInterfaceSubClass);
+ ADD(alias, "ip",
+ id->match_flags&USB_DEVICE_ID_MATCH_INT_PROTOCOL,
+ id->bInterfaceProtocol);
+ return 1;
+}
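+
+/*
+ * Illustrative result: an entry matching only vendor 0x045e and
+ * product 0x0083 yields "usb:v045Ep0083dl*dh*dc*dsc*dp*ic*isc*ip*".
+ */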
+
+/* Looks like: ieee1394:venNmoNspNverN */
+static int do_ieee1394_entry(const char *filename,
+ struct ieee1394_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->vendor_id = TO_NATIVE(id->vendor_id);
+ id->model_id = TO_NATIVE(id->model_id);
+ id->specifier_id = TO_NATIVE(id->specifier_id);
+ id->version = TO_NATIVE(id->version);
+
+ strcpy(alias, "ieee1394:");
+ ADD(alias, "ven", id->match_flags & IEEE1394_MATCH_VENDOR_ID,
+ id->vendor_id);
+ ADD(alias, "mo", id->match_flags & IEEE1394_MATCH_MODEL_ID,
+ id->model_id);
+ ADD(alias, "sp", id->match_flags & IEEE1394_MATCH_SPECIFIER_ID,
+ id->specifier_id);
+ ADD(alias, "ver", id->match_flags & IEEE1394_MATCH_VERSION,
+ id->version);
+
+ return 1;
+}
+
+/* Looks like: pci:vNdNsvNsdNbcNscNiN. */
+static int do_pci_entry(const char *filename,
+ struct pci_device_id *id, char *alias)
+{
+ /* Class field can be divided into these three. */
+ unsigned char baseclass, subclass, interface,
+ baseclass_mask, subclass_mask, interface_mask;
+
+ id->vendor = TO_NATIVE(id->vendor);
+ id->device = TO_NATIVE(id->device);
+ id->subvendor = TO_NATIVE(id->subvendor);
+ id->subdevice = TO_NATIVE(id->subdevice);
+ id->class = TO_NATIVE(id->class);
+ id->class_mask = TO_NATIVE(id->class_mask);
+
+ strcpy(alias, "pci:");
+ ADD(alias, "v", id->vendor != PCI_ANY_ID, id->vendor);
+ ADD(alias, "d", id->device != PCI_ANY_ID, id->device);
+ ADD(alias, "sv", id->subvendor != PCI_ANY_ID, id->subvendor);
+ ADD(alias, "sd", id->subdevice != PCI_ANY_ID, id->subdevice);
+
+ baseclass = (id->class) >> 16;
+ baseclass_mask = (id->class_mask) >> 16;
+ subclass = (id->class) >> 8;
+ subclass_mask = (id->class_mask) >> 8;
+ interface = id->class;
+ interface_mask = id->class_mask;
+
+ if ((baseclass_mask != 0 && baseclass_mask != 0xFF)
+ || (subclass_mask != 0 && subclass_mask != 0xFF)
+ || (interface_mask != 0 && interface_mask != 0xFF)) {
+ fprintf(stderr,
+ "*** Warning: Can't handle masks in %s:%04X\n",
+ filename, id->class_mask);
+ return 0;
+ }
+
+ ADD(alias, "bc", baseclass_mask == 0xFF, baseclass);
+ ADD(alias, "sc", subclass_mask == 0xFF, subclass);
+ ADD(alias, "i", interface_mask == 0xFF, interface);
+ return 1;
+}
+
+/* looks like: "ccw:tNmNdtNdmN" */
+static int do_ccw_entry(const char *filename,
+ struct ccw_device_id *id, char *alias)
+{
+ id->match_flags = TO_NATIVE(id->match_flags);
+ id->cu_type = TO_NATIVE(id->cu_type);
+ id->cu_model = TO_NATIVE(id->cu_model);
+ id->dev_type = TO_NATIVE(id->dev_type);
+ id->dev_model = TO_NATIVE(id->dev_model);
+
+ strcpy(alias, "ccw:");
+ ADD(alias, "t", id->match_flags&CCW_DEVICE_ID_MATCH_CU_TYPE,
+ id->cu_type);
+ ADD(alias, "m", id->match_flags&CCW_DEVICE_ID_MATCH_CU_MODEL,
+ id->cu_model);
+ ADD(alias, "dt", id->match_flags&CCW_DEVICE_ID_MATCH_DEVICE_TYPE,
+ id->dev_type);
+ ADD(alias, "dm", id->match_flags&CCW_DEVICE_ID_MATCH_DEVICE_TYPE,
+ id->dev_model);
+ return 1;
+}
+
+/* looks like: "pnp:dD" */
+static int do_pnp_entry(const char *filename,
+ struct pnp_device_id *id, char *alias)
+{
+ sprintf(alias, "pnp:d%s", id->id);
+ return 1;
+}
+
+/* looks like: "pnp:cCdD..." */
+static int do_pnp_card_entry(const char *filename,
+ struct pnp_card_device_id *id, char *alias)
+{
+ int i;
+
+ sprintf(alias, "pnp:c%s", id->id);
+ for (i = 0; i < PNP_MAX_DEVICES; i++) {
+ if (! *id->devs[i].id)
+ break;
+ sprintf(alias + strlen(alias), "d%s", id->devs[i].id);
+ }
+ return 1;
+}
+
+/* Ignore any prefix, eg. v850 prepends _ */
+static inline int sym_is(const char *symbol, const char *name)
+{
+ const char *match;
+
+ match = strstr(symbol, name);
+ if (!match)
+ return 0;
+ /* the name must run through to the end of the symbol */
+ return match[strlen(name)] == '\0';
+}
+
+static void do_table(void *symval, unsigned long size,
+ unsigned long id_size,
+ void *function,
+ struct module *mod)
+{
+ unsigned int i;
+ char alias[500];
+ int (*do_entry)(const char *, void *entry, char *alias) = function;
+
+ if (size % id_size || size < id_size) {
+ fprintf(stderr, "*** Warning: %s ids %lu bad size "
+ "(each on %lu)\n", mod->name, size, id_size);
+ }
+ /* Leave last one: it's the terminator. */
+ size -= id_size;
+
+ for (i = 0; i < size; i += id_size) {
+ if (do_entry(mod->name, symval+i, alias)) {
+ /* Always end in a wildcard, for future extension */
+ if (alias[strlen(alias)-1] != '*')
+ strcat(alias, "*");
+ buf_printf(&mod->dev_table_buf,
+ "MODULE_ALIAS(\"%s\");\n", alias);
+ }
+ }
+}
+
+/* Create MODULE_ALIAS() statements.
+ * At this time, we cannot write the actual output C source yet,
+ * so we write into the mod->dev_table_buf buffer. */
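+/* For example, a pci_device_id that matches only vendor 0x8086 would
+ * emit something like (illustrative values):
+ *   MODULE_ALIAS("pci:v00008086d*sv*sd*bc*sc*i*");
+ */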
+void handle_moddevtable(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname)
+{
+ void *symval;
+
+ /* We're looking for a section relative symbol */
+ if (!sym->st_shndx || sym->st_shndx >= info->hdr->e_shnum)
+ return;
+
+ symval = (void *)info->hdr
+ + info->sechdrs[sym->st_shndx].sh_offset
+ + sym->st_value;
+
+ if (sym_is(symname, "__mod_pci_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pci_device_id),
+ do_pci_entry, mod);
+ else if (sym_is(symname, "__mod_usb_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct usb_device_id),
+ do_usb_entry, mod);
+ else if (sym_is(symname, "__mod_ieee1394_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct ieee1394_device_id),
+ do_ieee1394_entry, mod);
+ else if (sym_is(symname, "__mod_ccw_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct ccw_device_id),
+ do_ccw_entry, mod);
+ else if (sym_is(symname, "__mod_pnp_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pnp_device_id),
+ do_pnp_entry, mod);
+ else if (sym_is(symname, "__mod_pnp_card_device_table"))
+ do_table(symval, sym->st_size, sizeof(struct pnp_card_device_id),
+ do_pnp_card_entry, mod);
+}
+
+/* Now add our buffered information to the generated C source */
+void add_moddevtable(struct buffer *buf, struct module *mod)
+{
+ buf_printf(buf, "\n");
+ buf_write(buf, mod->dev_table_buf.p, mod->dev_table_buf.pos);
+ free(mod->dev_table_buf.p);
+}
--- /dev/null
+/* Postprocess module symbol versions
+ *
+ * Copyright 2003 Kai Germaschewski
+ * Copyright 2002-2004 Rusty Russell, IBM Corporation
+ *
+ * Based in part on module-init-tools/depmod.c,file2alias
+ *
+ * This software may be used and distributed according to the terms
+ * of the GNU General Public License, incorporated herein by reference.
+ *
+ * Usage: modpost vmlinux module1.o module2.o ...
+ */
+
+#include <ctype.h>
+#include "modpost.h"
+
+/* Are we using CONFIG_MODVERSIONS? */
+int modversions = 0;
+/* Warn about undefined symbols? (do so if we have vmlinux) */
+int have_vmlinux = 0;
+/* Is CONFIG_MODULE_SRCVERSION_ALL set? */
+static int all_versions = 0;
+
+void
+fatal(const char *fmt, ...)
+{
+ va_list arglist;
+
+ fprintf(stderr, "FATAL: ");
+
+ va_start(arglist, fmt);
+ vfprintf(stderr, fmt, arglist);
+ va_end(arglist);
+
+ exit(1);
+}
+
+void
+warn(const char *fmt, ...)
+{
+ va_list arglist;
+
+ fprintf(stderr, "WARNING: ");
+
+ va_start(arglist, fmt);
+ vfprintf(stderr, fmt, arglist);
+ va_end(arglist);
+}
+
+void *do_nofail(void *ptr, const char *file, int line, const char *expr)
+{
+ if (!ptr) {
+ fatal("Memory allocation failure %s line %d: %s.\n",
+ file, line, expr);
+ }
+ return ptr;
+}
+
+/* A list of all modules we processed */
+
+static struct module *modules;
+
+struct module *
+find_module(char *modname)
+{
+ struct module *mod;
+
+ for (mod = modules; mod; mod = mod->next)
+ if (strcmp(mod->name, modname) == 0)
+ break;
+ return mod;
+}
+
+struct module *
+new_module(char *modname)
+{
+ struct module *mod;
+ char *p, *s;
+
+ mod = NOFAIL(malloc(sizeof(*mod)));
+ memset(mod, 0, sizeof(*mod));
+ p = NOFAIL(strdup(modname));
+
+ /* strip trailing .o */
+ if ((s = strrchr(p, '.')) != NULL)
+ if (strcmp(s, ".o") == 0)
+ *s = '\0';
+
+ /* add to list */
+ mod->name = p;
+ mod->next = modules;
+ modules = mod;
+
+ return mod;
+}
+
+/* A hash of all exported symbols,
+ * struct symbol is also used for lists of unresolved symbols */
+
+#define SYMBOL_HASH_SIZE 1024
+
+struct symbol {
+ struct symbol *next;
+ struct module *module;
+ unsigned int crc;
+ int crc_valid;
+ unsigned int weak:1;
+ char name[0];
+};
+
+static struct symbol *symbolhash[SYMBOL_HASH_SIZE];
+
+/* This is based on the hash algorithm from gdbm, via tdb */
+static inline unsigned int tdb_hash(const char *name)
+{
+ unsigned value; /* Used to compute the hash value. */
+ unsigned i; /* Used to cycle through random values. */
+
+ /* Set the initial value from the key size. */
+ for (value = 0x238F13AF * strlen(name), i=0; name[i]; i++)
+ value = (value + (((unsigned char *)name)[i] << (i*5 % 24)));
+
+ return (1103515243 * value + 12345);
+}
+
+/* Allocate a new symbol for use in the hash of exported symbols or
+ * the list of unresolved symbols per module */
+
+struct symbol *
+alloc_symbol(const char *name, unsigned int weak, struct symbol *next)
+{
+ struct symbol *s = NOFAIL(malloc(sizeof(*s) + strlen(name) + 1));
+
+ memset(s, 0, sizeof(*s));
+ strcpy(s->name, name);
+ s->weak = weak;
+ s->next = next;
+ return s;
+}
+
+/* For the hash of exported symbols */
+
+void
+new_symbol(const char *name, struct module *module, unsigned int *crc)
+{
+ unsigned int hash;
+ struct symbol *new;
+
+ hash = tdb_hash(name) % SYMBOL_HASH_SIZE;
+ new = symbolhash[hash] = alloc_symbol(name, 0, symbolhash[hash]);
+ new->module = module;
+ if (crc) {
+ new->crc = *crc;
+ new->crc_valid = 1;
+ }
+}
+
+struct symbol *
+find_symbol(const char *name)
+{
+ struct symbol *s;
+
+ /* For our purposes, .foo matches foo. PPC64 needs this. */
+ if (name[0] == '.')
+ name++;
+
+ for (s = symbolhash[tdb_hash(name) % SYMBOL_HASH_SIZE]; s; s=s->next) {
+ if (strcmp(s->name, name) == 0)
+ return s;
+ }
+ return NULL;
+}
+
+/* Add an exported symbol - it may have already been added without a
+ * CRC, in this case just update the CRC */
+void
+add_exported_symbol(const char *name, struct module *module, unsigned int *crc)
+{
+ struct symbol *s = find_symbol(name);
+
+ if (!s) {
+ new_symbol(name, module, crc);
+ return;
+ }
+ if (crc) {
+ s->crc = *crc;
+ s->crc_valid = 1;
+ }
+}
+
+void *
+grab_file(const char *filename, unsigned long *size)
+{
+ struct stat st;
+ void *map;
+ int fd;
+
+ fd = open(filename, O_RDONLY);
+ if (fd < 0 || fstat(fd, &st) != 0)
+ return NULL;
+
+ *size = st.st_size;
+ map = mmap(NULL, *size, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+ close(fd);
+
+ if (map == MAP_FAILED)
+ return NULL;
+ return map;
+}
+
+/*
+ Return a copy of the next line in a mmap'ed file.
+ Leading whitespace is trimmed away.
+ Returns a pointer to a static buffer.
+*/
+char*
+get_next_line(unsigned long *pos, void *file, unsigned long size)
+{
+ static char line[4096];
+ int skip = 1;
+ size_t len = 0;
+ signed char *p = (signed char *)file + *pos;
+ char *s = line;
+
+ for (; *pos < size ; (*pos)++)
+ {
+ if (skip && isspace(*p)) {
+ p++;
+ continue;
+ }
+ skip = 0;
+ if (*p != '\n' && (*pos < size)) {
+ len++;
+ *s++ = *p++;
+ if (len > 4095)
+ break; /* Too long, stop */
+ } else {
+ /* End of string */
+ *s = '\0';
+ return line;
+ }
+ }
+ /* End of buffer */
+ return NULL;
+}
+
+void
+release_file(void *file, unsigned long size)
+{
+ munmap(file, size);
+}
+
+void
+parse_elf(struct elf_info *info, const char *filename)
+{
+ unsigned int i;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *sym;
+
+ hdr = grab_file(filename, &info->size);
+ if (!hdr) {
+ perror(filename);
+ abort();
+ }
+ info->hdr = hdr;
+ if (info->size < sizeof(*hdr))
+ goto truncated;
+
+ /* Fix endianness in ELF header */
+ hdr->e_shoff = TO_NATIVE(hdr->e_shoff);
+ hdr->e_shstrndx = TO_NATIVE(hdr->e_shstrndx);
+ hdr->e_shnum = TO_NATIVE(hdr->e_shnum);
+ hdr->e_machine = TO_NATIVE(hdr->e_machine);
+ sechdrs = (void *)hdr + hdr->e_shoff;
+ info->sechdrs = sechdrs;
+
+ /* Fix endianness in section headers */
+ for (i = 0; i < hdr->e_shnum; i++) {
+ sechdrs[i].sh_type = TO_NATIVE(sechdrs[i].sh_type);
+ sechdrs[i].sh_offset = TO_NATIVE(sechdrs[i].sh_offset);
+ sechdrs[i].sh_size = TO_NATIVE(sechdrs[i].sh_size);
+ sechdrs[i].sh_link = TO_NATIVE(sechdrs[i].sh_link);
+ sechdrs[i].sh_name = TO_NATIVE(sechdrs[i].sh_name);
+ }
+ /* Find symbol table. */
+ for (i = 1; i < hdr->e_shnum; i++) {
+ const char *secstrings
+ = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ if (sechdrs[i].sh_offset > info->size)
+ goto truncated;
+ if (strcmp(secstrings+sechdrs[i].sh_name, ".modinfo") == 0) {
+ info->modinfo = (void *)hdr + sechdrs[i].sh_offset;
+ info->modinfo_len = sechdrs[i].sh_size;
+ }
+ if (sechdrs[i].sh_type != SHT_SYMTAB)
+ continue;
+
+ info->symtab_start = (void *)hdr + sechdrs[i].sh_offset;
+ info->symtab_stop = (void *)hdr + sechdrs[i].sh_offset
+ + sechdrs[i].sh_size;
+ info->strtab = (void *)hdr +
+ sechdrs[sechdrs[i].sh_link].sh_offset;
+ }
+ if (!info->symtab_start) {
+ fprintf(stderr, "modpost: %s no symtab?\n", filename);
+ abort();
+ }
+ /* Fix endianness in symbols */
+ for (sym = info->symtab_start; sym < info->symtab_stop; sym++) {
+ sym->st_shndx = TO_NATIVE(sym->st_shndx);
+ sym->st_name = TO_NATIVE(sym->st_name);
+ sym->st_value = TO_NATIVE(sym->st_value);
+ sym->st_size = TO_NATIVE(sym->st_size);
+ }
+ return;
+
+ truncated:
+ fprintf(stderr, "modpost: %s is truncated.\n", filename);
+ abort();
+}
+
+void
+parse_elf_finish(struct elf_info *info)
+{
+ release_file(info->hdr, info->size);
+}
+
+#define CRC_PFX MODULE_SYMBOL_PREFIX "__crc_"
+#define KSYMTAB_PFX MODULE_SYMBOL_PREFIX "__ksymtab_"
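+/* EXPORT_SYMBOL(foo) gives the object a __ksymtab_foo entry; with
+ * CONFIG_MODVERSIONS the build also emits an absolute symbol __crc_foo
+ * whose value is the interface checksum, hence the SHN_ABS case below. */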
+
+void
+handle_modversions(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname)
+{
+ unsigned int crc;
+
+ switch (sym->st_shndx) {
+ case SHN_COMMON:
+ fprintf(stderr, "*** Warning: \"%s\" [%s] is COMMON symbol\n",
+ symname, mod->name);
+ break;
+ case SHN_ABS:
+ /* CRC'd symbol */
+ if (memcmp(symname, CRC_PFX, strlen(CRC_PFX)) == 0) {
+ crc = (unsigned int) sym->st_value;
+ add_exported_symbol(symname + strlen(CRC_PFX),
+ mod, &crc);
+ }
+ break;
+ case SHN_UNDEF:
+ /* undefined symbol */
+ if (ELF_ST_BIND(sym->st_info) != STB_GLOBAL &&
+ ELF_ST_BIND(sym->st_info) != STB_WEAK)
+ break;
+ /* ignore global offset table */
+ if (strcmp(symname, "_GLOBAL_OFFSET_TABLE_") == 0)
+ break;
+ /* ignore __this_module, it will be resolved shortly */
+ if (strcmp(symname, MODULE_SYMBOL_PREFIX "__this_module") == 0)
+ break;
+#ifdef STT_REGISTER
+ if (info->hdr->e_machine == EM_SPARC ||
+ info->hdr->e_machine == EM_SPARCV9) {
+ /* Ignore register directives. */
+ if (ELF_ST_TYPE(sym->st_info) == STT_REGISTER)
+ break;
+ }
+#endif
+
+ if (memcmp(symname, MODULE_SYMBOL_PREFIX,
+ strlen(MODULE_SYMBOL_PREFIX)) == 0)
+ mod->unres = alloc_symbol(symname +
+ strlen(MODULE_SYMBOL_PREFIX),
+ ELF_ST_BIND(sym->st_info) == STB_WEAK,
+ mod->unres);
+ break;
+ default:
+ /* All exported symbols */
+ if (memcmp(symname, KSYMTAB_PFX, strlen(KSYMTAB_PFX)) == 0) {
+ add_exported_symbol(symname + strlen(KSYMTAB_PFX),
+ mod, NULL);
+ }
+ if (strcmp(symname, MODULE_SYMBOL_PREFIX "init_module") == 0)
+ mod->has_init = 1;
+ if (strcmp(symname, MODULE_SYMBOL_PREFIX "cleanup_module") == 0)
+ mod->has_cleanup = 1;
+ break;
+ }
+}
+
+int
+is_vmlinux(const char *modname)
+{
+ const char *myname;
+
+ if ((myname = strrchr(modname, '/')))
+ myname++;
+ else
+ myname = modname;
+
+ return strcmp(myname, "vmlinux") == 0;
+}
+
+/* Parse tag=value strings from .modinfo section */
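+/* The section is a series of NUL-terminated "tag=value" strings, e.g.
+ * "license=GPL\0author=...\0version=1.2\0" (illustrative contents). */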
+static char *next_string(char *string, unsigned long *secsize)
+{
+ /* Skip non-zero chars */
+ while (string[0]) {
+ string++;
+ if ((*secsize)-- <= 1)
+ return NULL;
+ }
+
+ /* Skip any zero padding. */
+ while (!string[0]) {
+ string++;
+ if ((*secsize)-- <= 1)
+ return NULL;
+ }
+ return string;
+}
+
+static char *get_modinfo(void *modinfo, unsigned long modinfo_len,
+ const char *tag)
+{
+ char *p;
+ unsigned int taglen = strlen(tag);
+ unsigned long size = modinfo_len;
+
+ for (p = modinfo; p; p = next_string(p, &size)) {
+ if (strncmp(p, tag, taglen) == 0 && p[taglen] == '=')
+ return p + taglen + 1;
+ }
+ return NULL;
+}
+
+void
+read_symbols(char *modname)
+{
+ const char *symname;
+ char *version;
+ struct module *mod;
+ struct elf_info info = { };
+ Elf_Sym *sym;
+
+ parse_elf(&info, modname);
+
+ mod = new_module(modname);
+
+ /* When there's no vmlinux, don't print warnings about
+ * unresolved symbols (since there'll be too many ;) */
+ if (is_vmlinux(modname)) {
+ unsigned int fake_crc = 0;
+ have_vmlinux = 1;
+ add_exported_symbol("struct_module", mod, &fake_crc);
+ mod->skip = 1;
+ }
+
+ for (sym = info.symtab_start; sym < info.symtab_stop; sym++) {
+ symname = info.strtab + sym->st_name;
+
+ handle_modversions(mod, &info, sym, symname);
+ handle_moddevtable(mod, &info, sym, symname);
+ }
+
+ version = get_modinfo(info.modinfo, info.modinfo_len, "version");
+ if (version)
+ maybe_frob_rcs_version(modname, version, info.modinfo,
+ version - (char *)info.hdr);
+ if (version || (all_versions && !is_vmlinux(modname)))
+ get_src_version(modname, mod->srcversion,
+ sizeof(mod->srcversion)-1);
+
+ parse_elf_finish(&info);
+
+ /* Our trick to get versioning for struct_module - it's
+ * never passed as an argument to an exported function, so
+ * the automatic versioning doesn't pick it up, but it's really
+ * important anyhow */
+ if (modversions)
+ mod->unres = alloc_symbol("struct_module", 0, mod->unres);
+}
+
+#define SZ 500
+
+/* We first write the generated file into memory using the
+ * following helper, then compare to the file on disk and
+ * only update the latter if anything changed */
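+/* A sketch of the intended usage (hypothetical fragment):
+ *   struct buffer b = { };
+ *   buf_printf(&b, "#include <linux/module.h>\n");
+ *   write_if_changed(&b, "foo.mod.c");
+ */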
+
+void __attribute__((format(printf, 2, 3)))
+buf_printf(struct buffer *buf, const char *fmt, ...)
+{
+ char tmp[SZ];
+ int len;
+ va_list ap;
+
+ va_start(ap, fmt);
+ len = vsnprintf(tmp, SZ, fmt, ap);
+ if (buf->size - buf->pos < len + 1) {
+ buf->size += len + 128;
+ buf->p = realloc(buf->p, buf->size);
+ }
+ strncpy(buf->p + buf->pos, tmp, len + 1);
+ buf->pos += len;
+ va_end(ap);
+}
+
+void
+buf_write(struct buffer *buf, const char *s, int len)
+{
+ if (buf->size - buf->pos < len) {
+ buf->size += len;
+ buf->p = realloc(buf->p, buf->size);
+ }
+ strncpy(buf->p + buf->pos, s, len);
+ buf->pos += len;
+}
+
+/* Header for the generated file */
+
+void
+add_header(struct buffer *b, struct module *mod)
+{
+ buf_printf(b, "#include <linux/module.h>\n");
+ buf_printf(b, "#include <linux/vermagic.h>\n");
+ buf_printf(b, "#include <linux/compiler.h>\n");
+ buf_printf(b, "\n");
+ buf_printf(b, "MODULE_INFO(vermagic, VERMAGIC_STRING);\n");
+ buf_printf(b, "\n");
+ buf_printf(b, "#undef unix\n"); /* We have a module called "unix" */
+ buf_printf(b, "struct module __this_module\n");
+ buf_printf(b, "__attribute__((section(\".gnu.linkonce.this_module\"))) = {\n");
+ buf_printf(b, " .name = __stringify(KBUILD_MODNAME),\n");
+ if (mod->has_init)
+ buf_printf(b, " .init = init_module,\n");
+ if (mod->has_cleanup)
+ buf_printf(b, "#ifdef CONFIG_MODULE_UNLOAD\n"
+ " .exit = cleanup_module,\n"
+ "#endif\n");
+ buf_printf(b, "};\n");
+}
+
+/* Record CRCs for unresolved symbols */
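+/* With modversions enabled, add_versions() below emits a table like
+ * (illustrative CRCs):
+ *   static const struct modversion_info ____versions[]
+ *   __attribute__((section("__versions"))) = {
+ *     { 0x12345678, "struct_module" },
+ *     { 0x9abcdef0, "printk" },
+ *   };
+ */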
+
+void
+add_versions(struct buffer *b, struct module *mod)
+{
+ struct symbol *s, *exp;
+
+ for (s = mod->unres; s; s = s->next) {
+ exp = find_symbol(s->name);
+ if (!exp || exp->module == mod) {
+ if (have_vmlinux && !s->weak)
+ fprintf(stderr, "*** Warning: \"%s\" [%s.ko] "
+ "undefined!\n", s->name, mod->name);
+ continue;
+ }
+ s->module = exp->module;
+ s->crc_valid = exp->crc_valid;
+ s->crc = exp->crc;
+ }
+
+ if (!modversions)
+ return;
+
+ buf_printf(b, "\n");
+ buf_printf(b, "static const struct modversion_info ____versions[]\n");
+ buf_printf(b, "__attribute_used__\n");
+ buf_printf(b, "__attribute__((section(\"__versions\"))) = {\n");
+
+ for (s = mod->unres; s; s = s->next) {
+ if (!s->module) {
+ continue;
+ }
+ if (!s->crc_valid) {
+ fprintf(stderr, "*** Warning: \"%s\" [%s.ko] "
+ "has no CRC!\n",
+ s->name, mod->name);
+ continue;
+ }
+ buf_printf(b, "\t{ %#8x, \"%s\" },\n", s->crc, s->name);
+ }
+
+ buf_printf(b, "};\n");
+}
+
+void
+add_depends(struct buffer *b, struct module *mod, struct module *modules)
+{
+ struct symbol *s;
+ struct module *m;
+ int first = 1;
+
+ for (m = modules; m; m = m->next) {
+ m->seen = is_vmlinux(m->name);
+ }
+
+ buf_printf(b, "\n");
+ buf_printf(b, "static const char __module_depends[]\n");
+ buf_printf(b, "__attribute_used__\n");
+ buf_printf(b, "__attribute__((section(\".modinfo\"))) =\n");
+ buf_printf(b, "\"depends=");
+ for (s = mod->unres; s; s = s->next) {
+ if (!s->module)
+ continue;
+
+ if (s->module->seen)
+ continue;
+
+ s->module->seen = 1;
+ buf_printf(b, "%s%s", first ? "" : ",",
+ strrchr(s->module->name, '/') + 1);
+ first = 0;
+ }
+ buf_printf(b, "\";\n");
+}
+
+void
+add_srcversion(struct buffer *b, struct module *mod)
+{
+ if (mod->srcversion[0]) {
+ buf_printf(b, "\n");
+ buf_printf(b, "MODULE_INFO(srcversion, \"%s\");\n",
+ mod->srcversion);
+ }
+}
+
+void
+write_if_changed(struct buffer *b, const char *fname)
+{
+ char *tmp;
+ FILE *file;
+ struct stat st;
+
+ file = fopen(fname, "r");
+ if (!file)
+ goto write;
+
+ if (fstat(fileno(file), &st) < 0)
+ goto close_write;
+
+ if (st.st_size != b->pos)
+ goto close_write;
+
+ tmp = NOFAIL(malloc(b->pos));
+ if (fread(tmp, 1, b->pos, file) != b->pos)
+ goto free_write;
+
+ if (memcmp(tmp, b->p, b->pos) != 0)
+ goto free_write;
+
+ free(tmp);
+ fclose(file);
+ return;
+
+ free_write:
+ free(tmp);
+ close_write:
+ fclose(file);
+ write:
+ file = fopen(fname, "w");
+ if (!file) {
+ perror(fname);
+ exit(1);
+ }
+ if (fwrite(b->p, 1, b->pos, file) != b->pos) {
+ perror(fname);
+ exit(1);
+ }
+ fclose(file);
+}
+
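+/* Read a dump of exported symbols in Module.symvers format: one
+ * "crc<tab>symbol<tab>module" line per symbol, e.g. (illustrative crc)
+ * "0x2d688123<tab>printk<tab>vmlinux". */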
+void
+read_dump(const char *fname)
+{
+ unsigned long size, pos = 0;
+ void *file = grab_file(fname, &size);
+ char *line;
+
+ if (!file)
+ /* No symbol versions, silently ignore */
+ return;
+
+ while ((line = get_next_line(&pos, file, size))) {
+ char *symname, *modname, *d;
+ unsigned int crc;
+ struct module *mod;
+
+ if (!(symname = strchr(line, '\t')))
+ goto fail;
+ *symname++ = '\0';
+ if (!(modname = strchr(symname, '\t')))
+ goto fail;
+ *modname++ = '\0';
+ if (strchr(modname, '\t'))
+ goto fail;
+ crc = strtoul(line, &d, 16);
+ if (*symname == '\0' || *modname == '\0' || *d != '\0')
+ goto fail;
+
+ if (!(mod = find_module(modname))) {
+ if (is_vmlinux(modname)) {
+ have_vmlinux = 1;
+ }
+ mod = new_module(NOFAIL(strdup(modname)));
+ mod->skip = 1;
+ }
+ add_exported_symbol(symname, mod, &crc);
+ }
+ return;
+fail:
+ fatal("parse error in symbol dump file\n");
+}
+
+void
+write_dump(const char *fname)
+{
+ struct buffer buf = { };
+ struct symbol *symbol;
+ int n;
+
+ for (n = 0; n < SYMBOL_HASH_SIZE ; n++) {
+ symbol = symbolhash[n];
+ while (symbol) {
+ buf_printf(&buf, "0x%08x\t%s\t%s\n", symbol->crc,
+ symbol->name, symbol->module->name);
+ symbol = symbol->next;
+ }
+ }
+ write_if_changed(&buf, fname);
+}
+
+int
+main(int argc, char **argv)
+{
+ struct module *mod;
+ struct buffer buf = { };
+ char fname[SZ];
+ char *dump_read = NULL, *dump_write = NULL;
+ int opt;
+
+ while ((opt = getopt(argc, argv, "i:mo:a")) != -1) {
+ switch(opt) {
+ case 'i':
+ dump_read = optarg;
+ break;
+ case 'm':
+ modversions = 1;
+ break;
+ case 'o':
+ dump_write = optarg;
+ break;
+ case 'a':
+ all_versions = 1;
+ break;
+ default:
+ exit(1);
+ }
+ }
+
+ if (dump_read)
+ read_dump(dump_read);
+
+ while (optind < argc) {
+ read_symbols(argv[optind++]);
+ }
+
+ for (mod = modules; mod; mod = mod->next) {
+ if (mod->skip)
+ continue;
+
+ buf.pos = 0;
+
+ add_header(&buf, mod);
+ add_versions(&buf, mod);
+ add_depends(&buf, mod, modules);
+ add_moddevtable(&buf, mod);
+ add_srcversion(&buf, mod);
+
+ sprintf(fname, "%s.mod.c", mod->name);
+ write_if_changed(&buf, fname);
+ }
+
+ if (dump_write)
+ write_dump(dump_write);
+
+ return 0;
+}
+
--- /dev/null
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <string.h>
+#include <sys/types.h>
+#include <sys/stat.h>
+#include <sys/mman.h>
+#include <fcntl.h>
+#include <unistd.h>
+#include <elf.h>
+
+#include "elfconfig.h"
+
+#if KERNEL_ELFCLASS == ELFCLASS32
+
+#define Elf_Ehdr Elf32_Ehdr
+#define Elf_Shdr Elf32_Shdr
+#define Elf_Sym Elf32_Sym
+#define ELF_ST_BIND ELF32_ST_BIND
+#define ELF_ST_TYPE ELF32_ST_TYPE
+
+#else
+
+#define Elf_Ehdr Elf64_Ehdr
+#define Elf_Shdr Elf64_Shdr
+#define Elf_Sym Elf64_Sym
+#define ELF_ST_BIND ELF64_ST_BIND
+#define ELF_ST_TYPE ELF64_ST_TYPE
+
+#endif
+
+#if KERNEL_ELFDATA != HOST_ELFDATA
+
+static inline void __endian(const void *src, void *dest, unsigned int size)
+{
+ unsigned int i;
+ for (i = 0; i < size; i++)
+ ((unsigned char*)dest)[i] = ((unsigned char*)src)[size - i-1];
+}
+
+
+
+#define TO_NATIVE(x) \
+({ \
+ typeof(x) __x; \
+ __endian(&(x), &(__x), sizeof(__x)); \
+ __x; \
+})
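+/* e.g. a field stored in the kernel image as 0x11223344 arrives in host
+ * memory as 0x44332211 when the endianness differs; TO_NATIVE() swaps it
+ * back to 0x11223344 */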
+
+#else /* endianness matches */
+
+#define TO_NATIVE(x) (x)
+
+#endif
+
+#define NOFAIL(ptr) do_nofail((ptr), __FILE__, __LINE__, #ptr)
+void *do_nofail(void *ptr, const char *file, int line, const char *expr);
+
+struct buffer {
+ char *p;
+ int pos;
+ int size;
+};
+
+void __attribute__((format(printf, 2, 3)))
+buf_printf(struct buffer *buf, const char *fmt, ...);
+
+void
+buf_write(struct buffer *buf, const char *s, int len);
+
+struct module {
+ struct module *next;
+ const char *name;
+ struct symbol *unres;
+ int seen;
+ int skip;
+ int has_init;
+ int has_cleanup;
+ struct buffer dev_table_buf;
+ char srcversion[25];
+};
+
+struct elf_info {
+ unsigned long size;
+ Elf_Ehdr *hdr;
+ Elf_Shdr *sechdrs;
+ Elf_Sym *symtab_start;
+ Elf_Sym *symtab_stop;
+ const char *strtab;
+ char *modinfo;
+ unsigned int modinfo_len;
+};
+
+void handle_moddevtable(struct module *mod, struct elf_info *info,
+ Elf_Sym *sym, const char *symname);
+
+void add_moddevtable(struct buffer *buf, struct module *mod);
+
+void maybe_frob_rcs_version(const char *modfilename,
+ char *version,
+ void *modinfo,
+ unsigned long modinfo_offset);
+void get_src_version(const char *modname, char sum[], unsigned sumlen);
+
+void *grab_file(const char *filename, unsigned long *size);
+char* get_next_line(unsigned long *pos, void *file, unsigned long size);
+void release_file(void *file, unsigned long size);
--- /dev/null
+#include <netinet/in.h>
+#ifdef __sun__
+#include <inttypes.h>
+#else
+#include <stdint.h>
+#endif
+#include <ctype.h>
+#include <errno.h>
+#include <string.h>
+#include "modpost.h"
+
+/*
+ * Stolen from the Cryptographic API.
+ *
+ * MD4 Message Digest Algorithm (RFC1320).
+ *
+ * Implementation derived from Andrew Tridgell and Steve French's
+ * CIFS MD4 implementation, and the cryptoapi implementation
+ * originally based on the public domain implementation written
+ * by Colin Plumb in 1993.
+ *
+ * Copyright (c) Andrew Tridgell 1997-1998.
+ * Modified by Steve French (sfrench@us.ibm.com) 2002
+ * Copyright (c) Cryptoapi developers.
+ * Copyright (c) 2002 David S. Miller (davem@redhat.com)
+ * Copyright (c) 2002 James Morris <jmorris@intercode.com.au>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ */
+#define MD4_DIGEST_SIZE 16
+#define MD4_HMAC_BLOCK_SIZE 64
+#define MD4_BLOCK_WORDS 16
+#define MD4_HASH_WORDS 4
+
+struct md4_ctx {
+ uint32_t hash[MD4_HASH_WORDS];
+ uint32_t block[MD4_BLOCK_WORDS];
+ uint64_t byte_count;
+};
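+
+/* Typical sequence (sketch): md4_init(&ctx); md4_update(&ctx, data, len);
+ * md4_final_ascii(&ctx, out, sizeof(out)) leaves a 32-hex-digit digest
+ * string in out. */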
+
+static inline uint32_t lshift(uint32_t x, unsigned int s)
+{
+ x &= 0xFFFFFFFF;
+ return ((x << s) & 0xFFFFFFFF) | (x >> (32 - s));
+}
+
+static inline uint32_t F(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | ((~x) & z);
+}
+
+static inline uint32_t G(uint32_t x, uint32_t y, uint32_t z)
+{
+ return (x & y) | (x & z) | (y & z);
+}
+
+static inline uint32_t H(uint32_t x, uint32_t y, uint32_t z)
+{
+ return x ^ y ^ z;
+}
+
+#define ROUND1(a,b,c,d,k,s) (a = lshift(a + F(b,c,d) + k, s))
+#define ROUND2(a,b,c,d,k,s) (a = lshift(a + G(b,c,d) + k + (uint32_t)0x5A827999,s))
+#define ROUND3(a,b,c,d,k,s) (a = lshift(a + H(b,c,d) + k + (uint32_t)0x6ED9EBA1,s))
+
+/* XXX: this stuff can be optimized */
+static inline void le32_to_cpu_array(uint32_t *buf, unsigned int words)
+{
+ while (words--) {
+ *buf = ntohl(*buf);
+ buf++;
+ }
+}
+
+static inline void cpu_to_le32_array(uint32_t *buf, unsigned int words)
+{
+ while (words--) {
+ *buf = htonl(*buf);
+ buf++;
+ }
+}
+
+static void md4_transform(uint32_t *hash, uint32_t const *in)
+{
+ uint32_t a, b, c, d;
+
+ a = hash[0];
+ b = hash[1];
+ c = hash[2];
+ d = hash[3];
+
+ ROUND1(a, b, c, d, in[0], 3);
+ ROUND1(d, a, b, c, in[1], 7);
+ ROUND1(c, d, a, b, in[2], 11);
+ ROUND1(b, c, d, a, in[3], 19);
+ ROUND1(a, b, c, d, in[4], 3);
+ ROUND1(d, a, b, c, in[5], 7);
+ ROUND1(c, d, a, b, in[6], 11);
+ ROUND1(b, c, d, a, in[7], 19);
+ ROUND1(a, b, c, d, in[8], 3);
+ ROUND1(d, a, b, c, in[9], 7);
+ ROUND1(c, d, a, b, in[10], 11);
+ ROUND1(b, c, d, a, in[11], 19);
+ ROUND1(a, b, c, d, in[12], 3);
+ ROUND1(d, a, b, c, in[13], 7);
+ ROUND1(c, d, a, b, in[14], 11);
+ ROUND1(b, c, d, a, in[15], 19);
+
+ ROUND2(a, b, c, d, in[0], 3);
+ ROUND2(d, a, b, c, in[4], 5);
+ ROUND2(c, d, a, b, in[8], 9);
+ ROUND2(b, c, d, a, in[12], 13);
+ ROUND2(a, b, c, d, in[1], 3);
+ ROUND2(d, a, b, c, in[5], 5);
+ ROUND2(c, d, a, b, in[9], 9);
+ ROUND2(b, c, d, a, in[13], 13);
+ ROUND2(a, b, c, d, in[2], 3);
+ ROUND2(d, a, b, c, in[6], 5);
+ ROUND2(c, d, a, b, in[10], 9);
+ ROUND2(b, c, d, a, in[14], 13);
+ ROUND2(a, b, c, d, in[3], 3);
+ ROUND2(d, a, b, c, in[7], 5);
+ ROUND2(c, d, a, b, in[11], 9);
+ ROUND2(b, c, d, a, in[15], 13);
+
+ ROUND3(a, b, c, d, in[0], 3);
+ ROUND3(d, a, b, c, in[8], 9);
+ ROUND3(c, d, a, b, in[4], 11);
+ ROUND3(b, c, d, a, in[12], 15);
+ ROUND3(a, b, c, d, in[2], 3);
+ ROUND3(d, a, b, c, in[10], 9);
+ ROUND3(c, d, a, b, in[6], 11);
+ ROUND3(b, c, d, a, in[14], 15);
+ ROUND3(a, b, c, d, in[1], 3);
+ ROUND3(d, a, b, c, in[9], 9);
+ ROUND3(c, d, a, b, in[5], 11);
+ ROUND3(b, c, d, a, in[13], 15);
+ ROUND3(a, b, c, d, in[3], 3);
+ ROUND3(d, a, b, c, in[11], 9);
+ ROUND3(c, d, a, b, in[7], 11);
+ ROUND3(b, c, d, a, in[15], 15);
+
+ hash[0] += a;
+ hash[1] += b;
+ hash[2] += c;
+ hash[3] += d;
+}
+
+static inline void md4_transform_helper(struct md4_ctx *ctx)
+{
+ le32_to_cpu_array(ctx->block, sizeof(ctx->block) / sizeof(uint32_t));
+ md4_transform(ctx->hash, ctx->block);
+}
+
+static void md4_init(struct md4_ctx *mctx)
+{
+ mctx->hash[0] = 0x67452301;
+ mctx->hash[1] = 0xefcdab89;
+ mctx->hash[2] = 0x98badcfe;
+ mctx->hash[3] = 0x10325476;
+ mctx->byte_count = 0;
+}
+
+static void md4_update(struct md4_ctx *mctx,
+ const unsigned char *data, unsigned int len)
+{
+ const uint32_t avail = sizeof(mctx->block) - (mctx->byte_count & 0x3f);
+
+ mctx->byte_count += len;
+
+ if (avail > len) {
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, len);
+ return;
+ }
+
+ memcpy((char *)mctx->block + (sizeof(mctx->block) - avail),
+ data, avail);
+
+ md4_transform_helper(mctx);
+ data += avail;
+ len -= avail;
+
+ while (len >= sizeof(mctx->block)) {
+ memcpy(mctx->block, data, sizeof(mctx->block));
+ md4_transform_helper(mctx);
+ data += sizeof(mctx->block);
+ len -= sizeof(mctx->block);
+ }
+
+ memcpy(mctx->block, data, len);
+}
+
+static void md4_final_ascii(struct md4_ctx *mctx, char *out, unsigned int len)
+{
+ const unsigned int offset = mctx->byte_count & 0x3f;
+ char *p = (char *)mctx->block + offset;
+ int padding = 56 - (offset + 1);
+
+ *p++ = 0x80;
+ if (padding < 0) {
+ memset(p, 0x00, padding + sizeof (uint64_t));
+ md4_transform_helper(mctx);
+ p = (char *)mctx->block;
+ padding = 56;
+ }
+
+ memset(p, 0, padding);
+ mctx->block[14] = mctx->byte_count << 3;
+ mctx->block[15] = mctx->byte_count >> 29;
+ le32_to_cpu_array(mctx->block, (sizeof(mctx->block) -
+ sizeof(uint64_t)) / sizeof(uint32_t));
+ md4_transform(mctx->hash, mctx->block);
+ cpu_to_le32_array(mctx->hash, sizeof(mctx->hash) / sizeof(uint32_t));
+
+ snprintf(out, len, "%08X%08X%08X%08X",
+ mctx->hash[0], mctx->hash[1], mctx->hash[2], mctx->hash[3]);
+}
+
+static inline void add_char(unsigned char c, struct md4_ctx *md)
+{
+ md4_update(md, &c, 1);
+}
+
+static int parse_string(const char *file, unsigned long len,
+ struct md4_ctx *md)
+{
+ unsigned long i;
+
+ add_char(file[0], md);
+ for (i = 1; i < len; i++) {
+ add_char(file[i], md);
+ if (file[i] == '"' && file[i-1] != '\\')
+ break;
+ }
+ return i;
+}
+
+static int parse_comment(const char *file, unsigned long len)
+{
+ unsigned long i;
+
+ for (i = 2; i < len; i++) {
+ if (file[i-1] == '*' && file[i] == '/')
+ break;
+ }
+ return i;
+}
+
+/* FIXME: Handle .s files differently (eg. # starts comments) --RR */
+static int parse_file(const char *fname, struct md4_ctx *md)
+{
+ char *file;
+ unsigned long i, len;
+
+ file = grab_file(fname, &len);
+ if (!file)
+ return 0;
+
+ for (i = 0; i < len; i++) {
+ /* Collapse backslash-newline line continuations. */
+ if (file[i] == '\\' && (i+1 < len) && file[i+1] == '\n') {
+ i++;
+ continue;
+ }
+
+ /* Ignore whitespace */
+ if (isspace((unsigned char)file[i]))
+ continue;
+
+ /* Handle strings as whole units */
+ if (file[i] == '"') {
+ i += parse_string(file+i, len - i, md);
+ continue;
+ }
+
+ /* Comments: ignore */
+ if (file[i] == '/' && file[i+1] == '*') {
+ i += parse_comment(file+i, len - i);
+ continue;
+ }
+
+ add_char(file[i], md);
+ }
+ release_file(file, len);
+ return 1;
+}
+
+/* We have dir/file.o. Open dir/.file.o.cmd, look for deps_ line to
+ * figure out source file. */
+static int parse_source_files(const char *objfile, struct md4_ctx *md)
+{
+ char *cmd, *file, *line, *dir;
+ const char *base;
+ unsigned long flen, pos = 0;
+ int dirlen, ret = 0, check_files = 0;
+
+ cmd = NOFAIL(malloc(strlen(objfile) + sizeof("..cmd")));
+
+ base = strrchr(objfile, '/');
+ if (base) {
+ base++;
+ dirlen = base - objfile;
+ sprintf(cmd, "%.*s.%s.cmd", dirlen, objfile, base);
+ } else {
+ dirlen = 0;
+ sprintf(cmd, ".%s.cmd", objfile);
+ }
+ dir = NOFAIL(malloc(dirlen + 1));
+ strncpy(dir, objfile, dirlen);
+ dir[dirlen] = '\0';
+
+ file = grab_file(cmd, &flen);
+ if (!file) {
+ fprintf(stderr, "Warning: could not find %s for %s\n",
+ cmd, objfile);
+ goto out;
+ }
+
+ /* There will be a line like so:
+ deps_drivers/net/dummy.o := \
+ drivers/net/dummy.c \
+ $(wildcard include/config/net/fastroute.h) \
+ include/linux/config.h \
+ $(wildcard include/config/h.h) \
+ include/linux/module.h \
+
+ Sum all files in the same dir or subdirs.
+ */
+ while ((line = get_next_line(&pos, file, flen)) != NULL) {
+ char *p = line;
+ if (strncmp(line, "deps_", sizeof("deps_")-1) == 0) {
+ check_files = 1;
+ continue;
+ }
+ if (!check_files)
+ continue;
+
+ /* Continue until line does not end with '\' */
+ if (*(p + strlen(p) - 1) != '\\')
+ break;
+ /* Terminate line at first space, to get rid of final ' \' */
+ while (*p) {
+ if (isspace((unsigned char)*p)) {
+ *p = '\0';
+ break;
+ }
+ p++;
+ }
+
+ /* Check if this file is in same dir as objfile */
+ if ((strstr(line, dir)+strlen(dir)-1) == strrchr(line, '/')) {
+ if (!parse_file(line, md)) {
+ fprintf(stderr,
+ "Warning: could not open %s: %s\n",
+ line, strerror(errno));
+ goto out_file;
+ }
+
+ }
+
+ }
+
+ /* Everyone parsed OK */
+ ret = 1;
+out_file:
+ release_file(file, flen);
+out:
+ free(dir);
+ free(cmd);
+ return ret;
+}
+
+/* Calc and record src checksum. */
+void get_src_version(const char *modname, char sum[], unsigned sumlen)
+{
+ void *file;
+ unsigned long len;
+ struct md4_ctx md;
+ char *sources, *end, *fname;
+ const char *basename;
+ char filelist[strlen(getenv("MODVERDIR")) + strlen("/") +
+ strlen(modname) - strlen(".o") + strlen(".mod") + 1 ];
+
+ /* Source files for module are in .tmp_versions/modname.mod,
+ after the first line. */
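+ /* e.g. .tmp_versions/dummy.mod might contain (illustrative):
+  *   drivers/net/dummy.ko
+  *   drivers/net/dummy.o
+  * i.e. the module path, then a space-separated list of objects. */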
+ if (strrchr(modname, '/'))
+ basename = strrchr(modname, '/') + 1;
+ else
+ basename = modname;
+ sprintf(filelist, "%s/%.*s.mod", getenv("MODVERDIR"),
+ (int) strlen(basename) - 2, basename);
+
+ file = grab_file(filelist, &len);
+ if (!file) {
+ fprintf(stderr, "Warning: could not find versions for %s\n",
+ filelist);
+ return;
+ }
+
+ sources = strchr(file, '\n');
+ if (!sources) {
+ fprintf(stderr, "Warning: malformed versions file for %s\n",
+ modname);
+ goto release;
+ }
+
+ sources++;
+ end = strchr(sources, '\n');
+ if (!end) {
+ fprintf(stderr, "Warning: bad ending versions file for %s\n",
+ modname);
+ goto release;
+ }
+ *end = '\0';
+
+ md4_init(&md);
+ for (fname = strtok(sources, " "); fname; fname = strtok(NULL, " ")) {
+ if (!parse_source_files(fname, &md))
+ goto release;
+ }
+
+ md4_final_ascii(&md, sum, sumlen);
+release:
+ release_file(file, len);
+}
+
+static void write_version(const char *filename, const char *sum,
+ unsigned long offset)
+{
+ int fd;
+
+ fd = open(filename, O_RDWR);
+ if (fd < 0) {
+ fprintf(stderr, "Warning: changing sum in %s failed: %s\n",
+ filename, strerror(errno));
+ return;
+ }
+
+ if (lseek(fd, offset, SEEK_SET) == (off_t)-1) {
+ fprintf(stderr, "Warning: changing sum in %s:%lu failed: %s\n",
+ filename, offset, strerror(errno));
+ goto out;
+ }
+
+ if (write(fd, sum, strlen(sum)+1) != strlen(sum)+1) {
+ fprintf(stderr, "Warning: writing sum in %s failed: %s\n",
+ filename, strerror(errno));
+ goto out;
+ }
+out:
+ close(fd);
+}
+
+static int strip_rcs_crap(char *version)
+{
+ unsigned int len, full_len;
+
+ if (strncmp(version, "$Revision", strlen("$Revision")) != 0)
+ return 0;
+
+ /* Space for version string follows. */
+ full_len = strlen(version) + strlen(version + strlen(version) + 1) + 2;
+
+ /* Move string to start with version number: prefix will be
+ * $Revision$ or $Revision: */
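+ /* e.g. "$Revision: 1.9 $" is trimmed in place to "1.9"
+    (illustrative revision number) */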
+ len = strlen("$Revision");
+ if (version[len] == ':' || version[len] == '$')
+ len++;
+ while (isspace(version[len]))
+ len++;
+ memmove(version, version+len, full_len-len);
+ full_len -= len;
+
+ /* Preserve up to next whitespace. */
+ len = 0;
+ while (version[len] && !isspace(version[len]))
+ len++;
+ memmove(version + len, version + strlen(version),
+ full_len - strlen(version));
+ return 1;
+}
+
+/* Clean up RCS-style version numbers. */
+void maybe_frob_rcs_version(const char *modfilename,
+ char *version,
+ void *modinfo,
+ unsigned long version_offset)
+{
+ if (strip_rcs_crap(version))
+ write_version(modfilename, version, version_offset);
+}
--- /dev/null
+/* mod-extract.c: module extractor for signing
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <sys/mman.h>
+#include <sys/stat.h>
+#include <elf.h>
+#include <asm/byteorder.h>
+
+void extract_elf64(void *buffer, size_t size, Elf64_Ehdr *hdr);
+void extract_elf32(void *buffer, size_t size, Elf32_Ehdr *hdr);
+
+struct byteorder {
+ uint16_t (*get16)(const uint16_t *);
+ uint32_t (*get32)(const uint32_t *);
+ uint64_t (*get64)(const uint64_t *);
+ void (*set16)(uint16_t *, uint16_t);
+ void (*set32)(uint32_t *, uint32_t);
+ void (*set64)(uint64_t *, uint64_t);
+};
+
+uint16_t get16_le(const uint16_t *p) { return __le16_to_cpu(*p); }
+uint32_t get32_le(const uint32_t *p) { return __le32_to_cpu(*p); }
+uint64_t get64_le(const uint64_t *p) { return __le64_to_cpu(*p); }
+uint16_t get16_be(const uint16_t *p) { return __be16_to_cpu(*p); }
+uint32_t get32_be(const uint32_t *p) { return __be32_to_cpu(*p); }
+uint64_t get64_be(const uint64_t *p) { return __be64_to_cpu(*p); }
+
+void set16_le(uint16_t *p, uint16_t n) { *p = __cpu_to_le16(n); }
+void set32_le(uint32_t *p, uint32_t n) { *p = __cpu_to_le32(n); }
+void set64_le(uint64_t *p, uint64_t n) { *p = __cpu_to_le64(n); }
+void set16_be(uint16_t *p, uint16_t n) { *p = __cpu_to_be16(n); }
+void set32_be(uint32_t *p, uint32_t n) { *p = __cpu_to_be32(n); }
+void set64_be(uint64_t *p, uint64_t n) { *p = __cpu_to_be64(n); }
+
+const struct byteorder byteorder_le = {
+ get16_le, get32_le, get64_le,
+ set16_le, set32_le, set64_le
+};
+const struct byteorder byteorder_be = {
+ get16_be, get32_be, get64_be,
+ set16_be, set32_be, set64_be
+};
+const struct byteorder *order;
+
+uint16_t get16(const uint16_t *p) { return order->get16(p); }
+uint32_t get32(const uint32_t *p) { return order->get32(p); }
+uint64_t get64(const uint64_t *p) { return order->get64(p); }
+void set16(uint16_t *p, uint16_t n) { order->set16(p, n); }
+void set32(uint32_t *p, uint32_t n) { order->set32(p, n); }
+void set64(uint64_t *p, uint64_t n) { order->set64(p, n); }
+
+FILE *outfd;
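+/* csum accumulates over each section (reset per section), xcsum over the
+ * whole output file */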
+uint8_t csum, xcsum;
+
+void write_out(const void *data, size_t size)
+{
+ const uint8_t *p = data;
+ size_t loop;
+
+ for (loop = 0; loop < size; loop++) {
+ csum += p[loop];
+ xcsum += p[loop];
+ }
+
+ if (fwrite(data, 1, size, outfd) != size) {
+ perror("write");
+ exit(1);
+ }
+}
+
+#define write_out_val(VAL) write_out(&(VAL), sizeof(VAL))
+
+int is_verbose;
+
+void verbose(const char *fmt, ...) __attribute__((format(printf,1,2)));
+void verbose(const char *fmt, ...)
+{
+ va_list va;
+
+ if (is_verbose) {
+ va_start(va, fmt);
+ vprintf(fmt, va);
+ va_end(va);
+ }
+}
+
+void usage(void) __attribute__((noreturn));
+void usage(void)
+{
+ fprintf(stderr, "Usage: mod-extract [-v] <modulefile> <extractfile>\n");
+ exit(2);
+}
+
+/*****************************************************************************/
+/*
+ * parse the arguments, map the module, and extract the signable data
+ */
+int main(int argc, char **argv)
+{
+ struct stat st;
+ Elf32_Ehdr *hdr32;
+ Elf64_Ehdr *hdr64;
+ size_t len;
+ void *buffer;
+ int fd, be, b64;
+
+ while (argc > 1 && strcmp("-v", argv[1]) == 0) {
+ argv++;
+ argc--;
+ is_verbose++;
+ }
+
+ if (argc != 3)
+ usage();
+
+ /* map the module into memory */
+ fd = open(argv[1], O_RDONLY);
+ if (fd < 0) {
+ perror("open input");
+ exit(1);
+ }
+
+ if (fstat(fd, &st) < 0) {
+ perror("fstat");
+ exit(1);
+ }
+
+ len = st.st_size;
+
+ buffer = mmap(NULL, len, PROT_READ|PROT_WRITE, MAP_PRIVATE, fd, 0);
+ if (buffer == MAP_FAILED) {
+ perror("mmap");
+ exit(1);
+ }
+
+ if (close(fd) < 0) {
+ perror("close input");
+ exit(1);
+ }
+
+ /* check it's an ELF object */
+ hdr32 = buffer;
+ hdr64 = buffer;
+
+ if (hdr32->e_ident[EI_MAG0] != ELFMAG0 ||
+ hdr32->e_ident[EI_MAG1] != ELFMAG1 ||
+ hdr32->e_ident[EI_MAG2] != ELFMAG2 ||
+ hdr32->e_ident[EI_MAG3] != ELFMAG3
+ ) {
+ fprintf(stderr, "Module does not appear to be ELF\n");
+ exit(3);
+ }
+
+ /* determine endianness and word size */
+ b64 = (hdr32->e_ident[EI_CLASS] == ELFCLASS64);
+ be = (hdr32->e_ident[EI_DATA] == ELFDATA2MSB);
+ order = be ? &byteorder_be : &byteorder_le;
+
+ verbose("Module is %s-bit %s-endian\n",
+ b64 ? "64" : "32",
+ be ? "big" : "little");
+
+ /* open the output file */
+ outfd = fopen(argv[2], "w");
+ if (!outfd) {
+ perror("open output");
+ exit(1);
+ }
+
+ /* perform the extraction */
+ if (b64)
+ extract_elf64(buffer, len, hdr64);
+ else
+ extract_elf32(buffer, len, hdr32);
+
+ /* done */
+ if (fclose(outfd) == EOF) {
+ perror("close output");
+ exit(1);
+ }
+
+ return 0;
+
+} /* end main() */
+
+/*****************************************************************************/
+/*
+ * extract a RELA table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
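+/* e.g. if .text is section 3 in one build and section 5 in a rebuilt
+ * module, both get the same canonical index, since allocatable sections
+ * are sorted by name before numbering (illustrative indices). */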
+void extract_elf64_rela(const void *buffer, int secix, int targetix,
+ const Elf64_Rela *relatab, size_t nrels,
+ const Elf64_Sym *symbols, size_t nsyms,
+ const Elf64_Shdr *sections, size_t nsects, int *canonmap,
+ const char *strings, size_t nstrings,
+ const char *sh_name)
+{
+ struct {
+ uint64_t r_offset;
+ uint64_t r_addend;
+ uint64_t st_value;
+ uint64_t st_size;
+ uint32_t r_type;
+ uint16_t st_shndx;
+ uint8_t st_info;
+ uint8_t st_other;
+
+ } __attribute__((packed)) relocation;
+
+ const Elf64_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { RELA, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ Elf64_Section st_shndx;
+ Elf64_Xword r_info;
+
+ /* decode the relocation */
+ r_info = get64(&relatab[loop].r_info);
+ relocation.r_offset = relatab[loop].r_offset;
+ relocation.r_addend = relatab[loop].r_addend;
+ set32(&relocation.r_type, ELF64_R_TYPE(r_info));
+
+ if (ELF64_R_SYM(r_info) >= nsyms) {
+ fprintf(stderr, "Invalid symbol ID %lx in relocation %zu\n",
+ ELF64_R_SYM(r_info), loop);
+ exit(1);
+ }
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &symbols[ELF64_R_SYM(r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = get16(&symbol->st_shndx);
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < nsects)
+ set16(&relocation.st_shndx, canonmap[st_shndx]);
+
+ write_out_val(relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = strings + get32(&symbol->st_name);
+ write_out(name, strlen(name) + 1);
+ }
+ }
+
+ verbose("%02x %4d %s [canon]\n", csum, secix, sh_name);
+
+} /* end extract_elf64_rela() */
+
+/*****************************************************************************/
+/*
+ * extract a REL table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
+void extract_elf64_rel(const void *buffer, int secix, int targetix,
+ const Elf64_Rel *relatab, size_t nrels,
+ const Elf64_Sym *symbols, size_t nsyms,
+ const Elf64_Shdr *sections, size_t nsects, int *canonmap,
+ const char *strings, size_t nstrings,
+ const char *sh_name)
+{
+ struct {
+ uint64_t r_offset;
+ uint64_t st_value;
+ uint64_t st_size;
+ uint32_t r_type;
+ uint16_t st_shndx;
+ uint8_t st_info;
+ uint8_t st_other;
+
+ } __attribute__((packed)) relocation;
+
+ const Elf64_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { RELA, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ Elf64_Section st_shndx;
+ Elf64_Xword r_info;
+
+ /* decode the relocation */
+ r_info = get64(&relatab[loop].r_info);
+ relocation.r_offset = relatab[loop].r_offset;
+ set32(&relocation.r_type, ELF64_R_TYPE(r_info));
+
+ if (ELF64_R_SYM(r_info) >= nsyms) {
+ fprintf(stderr, "Invalid symbol ID %lx in relocation %zi\n",
+ ELF64_R_SYM(r_info), loop);
+ exit(1);
+ }
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &symbols[ELF64_R_SYM(r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = get16(&symbol->st_shndx);
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < nsects)
+ set16(&relocation.st_shndx, canonmap[st_shndx]);
+
+ write_out_val(relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = strings + get32(&symbol->st_name);
+ write_out(name, strlen(name) + 1);
+ }
+ }
+
+ verbose("%02x %4d %s [canon]\n", csum, secix, sh_name);
+
+} /* end extract_elf64_rel() */
+
+/*****************************************************************************/
+/*
+ * extract the data from a 64-bit module
+ */
+void extract_elf64(void *buffer, size_t len, Elf64_Ehdr *hdr)
+{
+ const Elf64_Sym *symbols;
+ Elf64_Shdr *sections;
+ const char *secstrings, *strings;
+ size_t nsyms, nstrings;
+ int loop, shnum, *canonlist, *canonmap, canon, changed, tmp;
+
+ sections = buffer + get64(&hdr->e_shoff);
+ secstrings = buffer + get64(&sections[get16(&hdr->e_shstrndx)].sh_offset);
+ shnum = get16(&hdr->e_shnum);
+
+ /* find the symbol table and the string table and produce a list of
+ * index numbers of sections that contribute to the kernel's module
+ * image
+ */
+ canonlist = calloc(shnum * 2, sizeof(int));
+ if (!canonlist) {
+ perror("calloc");
+ exit(1);
+ }
+ canonmap = canonlist + shnum;
+ canon = 0;
+
+ symbols = NULL;
+ strings = NULL;
+
+ for (loop = 1; loop < shnum; loop++) {
+ const char *sh_name = secstrings + get32(&sections[loop].sh_name);
+ Elf64_Word sh_type = get32(&sections[loop].sh_type);
+ Elf64_Xword sh_size = get64(&sections[loop].sh_size);
+ Elf64_Xword sh_flags = get64(&sections[loop].sh_flags);
+ Elf64_Off sh_offset = get64(&sections[loop].sh_offset);
+ void *data = buffer + sh_offset;
+
+ /* quick sanity check */
+ if (sh_type != SHT_NOBITS && len < sh_offset + sh_size) {
+ fprintf(stderr, "Section goes beyond EOF\n");
+ exit(3);
+ }
+
+ /* we only need to canonicalise allocatable sections */
+ if (sh_flags & SHF_ALLOC)
+ canonlist[canon++] = loop;
+
+ /* keep track of certain special sections */
+ switch (sh_type) {
+ case SHT_SYMTAB:
+ if (strcmp(sh_name, ".symtab") == 0) {
+ symbols = data;
+ nsyms = sh_size / sizeof(Elf64_Sym);
+ }
+ break;
+
+ case SHT_STRTAB:
+ if (strcmp(sh_name, ".strtab") == 0) {
+ strings = data;
+ nstrings = sh_size;
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ if (!symbols) {
+ fprintf(stderr, "Couldn't locate symbol table\n");
+ exit(3);
+ }
+
+ if (!strings) {
+ fprintf(stderr, "Couldn't locate strings table\n");
+ exit(3);
+ }
+
+ /* canonicalise the index numbers of the contributing section */
+ do {
+ changed = 0;
+
+ for (loop = 0; loop < canon - 1; loop++) {
+ const char *x = secstrings + get32(&sections[canonlist[loop + 0]].sh_name);
+ const char *y = secstrings + get32(&sections[canonlist[loop + 1]].sh_name);
+ if (strcmp(x, y) > 0) {
+ tmp = canonlist[loop + 0];
+ canonlist[loop + 0] = canonlist[loop + 1];
+ canonlist[loop + 1] = tmp;
+ changed = 1;
+ }
+ }
+
+ } while(changed);
+
+ for (loop = 0; loop < canon; loop++)
+ canonmap[canonlist[loop]] = loop + 1;
+
+ if (is_verbose > 1) {
+ printf("\nSection canonicalisation map:\n");
+ for (loop = 1; loop < shnum; loop++) {
+ const char *x = secstrings + get32(&sections[loop].sh_name);
+ printf("%4d %s\n", canonmap[loop], x);
+ }
+
+ printf("\nAllocated section list in canonical order:\n");
+ for (loop = 0; loop < canon; loop++) {
+ const char *x = secstrings + get32(&sections[canonlist[loop]].sh_name);
+ printf("%4d %s\n", canonlist[loop], x);
+ }
+ }
+
+ memset(canonlist, 0, sizeof(int) * shnum);
+
+ /* iterate through the section table looking for sections we want to
+ * contribute to the signature */
+ verbose("\n");
+ verbose("FILE POS CS SECT NAME\n");
+ verbose("======== == ==== ==============================\n");
+
+ for (loop = 1; loop < shnum; loop++) {
+ const char *sh_name = secstrings + get32(&sections[loop].sh_name);
+ Elf64_Word sh_type = get32(&sections[loop].sh_type);
+ Elf64_Xword sh_size = get64(&sections[loop].sh_size);
+ Elf64_Xword sh_flags = get64(&sections[loop].sh_flags);
+ Elf64_Word sh_info = get32(&sections[loop].sh_info);
+ Elf64_Off sh_offset = get64(&sections[loop].sh_offset);
+ void *data = buffer + sh_offset;
+
+ csum = 0;
+
+ /* include canonicalised relocation sections */
+ if (sh_type == SHT_REL || sh_type == SHT_RELA) {
+ if (sh_info <= 0 || sh_info >= shnum) {
+ fprintf(stderr,
+ "Invalid ELF - REL/RELA sh_info does"
+ " not refer to a valid section\n");
+ exit(3);
+ }
+
+ if (canonlist[sh_info]) {
+ Elf32_Word xsh_info;
+
+ verbose("%08lx ", ftell(outfd));
+
+ set32(&xsh_info, canonmap[sh_info]);
+
+ /* write out selected portions of the section
+ * header */
+ write_out(sh_name, strlen(sh_name));
+ write_out_val(sections[loop].sh_type);
+ write_out_val(sections[loop].sh_flags);
+ write_out_val(sections[loop].sh_size);
+ write_out_val(sections[loop].sh_addralign);
+ write_out_val(xsh_info);
+
+ if (sh_type == SHT_RELA)
+ extract_elf64_rela(buffer, loop, sh_info,
+ data, sh_size / sizeof(Elf64_Rela),
+ symbols, nsyms,
+ sections, shnum, canonmap,
+ strings, nstrings,
+ sh_name);
+ else
+ extract_elf64_rel(buffer, loop, sh_info,
+ data, sh_size / sizeof(Elf64_Rel),
+ symbols, nsyms,
+ sections, shnum, canonmap,
+ strings, nstrings,
+ sh_name);
+ }
+
+ continue;
+ }
+
+ /* include allocatable loadable sections */
+ if (sh_type != SHT_NOBITS && sh_flags & SHF_ALLOC)
+ goto include_section;
+
+ /* not this section */
+ continue;
+
+ include_section:
+ verbose("%08lx ", ftell(outfd));
+
+ /* write out selected portions of the section header */
+ write_out(sh_name, strlen(sh_name));
+ write_out_val(sections[loop].sh_type);
+ write_out_val(sections[loop].sh_flags);
+ write_out_val(sections[loop].sh_size);
+ write_out_val(sections[loop].sh_addralign);
+
+ /* write out the section data */
+ write_out(data, sh_size);
+
+ verbose("%02x %4d %s\n", csum, loop, sh_name);
+
+ /* note the section has been written */
+ canonlist[loop] = 1;
+ }
+
+ verbose("%08lx (%lu bytes csum 0x%02x)\n",
+ ftell(outfd), ftell(outfd), xcsum);
+
+} /* end extract_elf64() */
+
+/*****************************************************************************/
+/*
+ * extract a RELA table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
+void extract_elf32_rela(const void *buffer, int secix, int targetix,
+ const Elf32_Rela *relatab, size_t nrels,
+ const Elf32_Sym *symbols, size_t nsyms,
+ const Elf32_Shdr *sections, size_t nsects, int *canonmap,
+ const char *strings, size_t nstrings,
+ const char *sh_name)
+{
+ struct {
+ uint32_t r_offset;
+ uint32_t r_addend;
+ uint32_t st_value;
+ uint32_t st_size;
+ uint16_t st_shndx;
+ uint8_t r_type;
+ uint8_t st_info;
+ uint8_t st_other;
+
+ } __attribute__((packed)) relocation;
+
+ const Elf32_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { RELA, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ Elf32_Section st_shndx;
+ Elf32_Word r_info;
+
+ /* decode the relocation */
+ r_info = get32(&relatab[loop].r_info);
+ relocation.r_offset = relatab[loop].r_offset;
+ relocation.r_addend = relatab[loop].r_addend;
+ relocation.r_type = ELF32_R_TYPE(r_info);
+
+ if (ELF32_R_SYM(r_info) >= nsyms) {
+ fprintf(stderr, "Invalid symbol ID %x in relocation %zu\n",
+ ELF32_R_SYM(r_info), loop);
+ exit(1);
+ }
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &symbols[ELF32_R_SYM(r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = get16(&symbol->st_shndx);
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < nsects)
+ set16(&relocation.st_shndx, canonmap[st_shndx]);
+
+ write_out_val(relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = strings + get32(&symbol->st_name);
+ write_out(name, strlen(name) + 1);
+ }
+ }
+
+ verbose("%02x %4d %s [canon]\n", csum, secix, sh_name);
+
+} /* end extract_elf32_rela() */
+
+/*****************************************************************************/
+/*
+ * extract a REL table
+ * - need to canonicalise the entries in case section addition/removal has
+ * rearranged the symbol table and the section table
+ */
+void extract_elf32_rel(const void *buffer, int secix, int targetix,
+ const Elf32_Rel *relatab, size_t nrels,
+ const Elf32_Sym *symbols, size_t nsyms,
+ const Elf32_Shdr *sections, size_t nsects, int *canonmap,
+ const char *strings, size_t nstrings,
+ const char *sh_name)
+{
+ struct {
+ uint32_t r_offset;
+ uint32_t st_value;
+ uint32_t st_size;
+ uint16_t st_shndx;
+ uint8_t r_type;
+ uint8_t st_info;
+ uint8_t st_other;
+
+ } __attribute__((packed)) relocation;
+
+ const Elf32_Sym *symbol;
+ size_t loop;
+
+ /* contribute the relevant bits from a join of { RELA, SYMBOL, SECTION } */
+ for (loop = 0; loop < nrels; loop++) {
+ Elf32_Section st_shndx;
+ Elf32_Word r_info;
+
+ /* decode the relocation */
+ r_info = get32(&relatab[loop].r_info);
+ relocation.r_offset = relatab[loop].r_offset;
+ relocation.r_type = ELF32_R_TYPE(r_info);
+
+ if (ELF32_R_SYM(r_info) >= nsyms) {
+ fprintf(stderr, "Invalid symbol ID %x in relocation %zu\n",
+ ELF32_R_SYM(r_info), loop);
+ exit(1);
+ }
+
+ /* decode the symbol referenced by the relocation */
+ symbol = &symbols[ELF32_R_SYM(r_info)];
+ relocation.st_info = symbol->st_info;
+ relocation.st_other = symbol->st_other;
+ relocation.st_value = symbol->st_value;
+ relocation.st_size = symbol->st_size;
+ relocation.st_shndx = symbol->st_shndx;
+ st_shndx = get16(&symbol->st_shndx);
+
+ /* canonicalise the section used by the symbol */
+ if (st_shndx > SHN_UNDEF && st_shndx < nsects)
+ set16(&relocation.st_shndx, canonmap[st_shndx]);
+
+ write_out_val(relocation);
+
+ /* undefined symbols must be named if referenced */
+ if (st_shndx == SHN_UNDEF) {
+ const char *name = strings + get32(&symbol->st_name);
+ write_out(name, strlen(name) + 1);
+ }
+ }
+
+ verbose("%02x %4d %s [canon]\n", csum, secix, sh_name);
+
+} /* end extract_elf32_rel() */
+
+/*****************************************************************************/
+/*
+ * extract the data from a 32-bit module
+ */
+void extract_elf32(void *buffer, size_t len, Elf32_Ehdr *hdr)
+{
+ const Elf32_Sym *symbols;
+ Elf32_Shdr *sections;
+ const char *secstrings, *strings;
+ size_t nsyms, nstrings;
+ int loop, shnum, *canonlist, *canonmap, canon, changed, tmp;
+
+ sections = buffer + get32(&hdr->e_shoff);
+ secstrings = buffer + get32(&sections[get16(&hdr->e_shstrndx)].sh_offset);
+ shnum = get16(&hdr->e_shnum);
+
+ /* find the symbol table and the string table and produce a list of
+ * index numbers of sections that contribute to the kernel's module
+ * image
+ */
+ canonlist = calloc(shnum * 2, sizeof(int));
+ if (!canonlist) {
+ perror("calloc");
+ exit(1);
+ }
+ canonmap = canonlist + shnum;
+ canon = 0;
+
+ symbols = NULL;
+ strings = NULL;
+
+ for (loop = 1; loop < shnum; loop++) {
+ const char *sh_name = secstrings + get32(&sections[loop].sh_name);
+ Elf32_Word sh_type = get32(&sections[loop].sh_type);
+ Elf32_Xword sh_size = get32(&sections[loop].sh_size);
+ Elf32_Xword sh_flags = get32(&sections[loop].sh_flags);
+ Elf32_Off sh_offset = get32(&sections[loop].sh_offset);
+ void *data = buffer + sh_offset;
+
+ /* quick sanity check */
+ if (sh_type != SHT_NOBITS && len < sh_offset + sh_size) {
+ fprintf(stderr, "Section goes beyond EOF\n");
+ exit(3);
+ }
+
+ /* we only need to canonicalise allocatable sections */
+ if (sh_flags & SHF_ALLOC)
+ canonlist[canon++] = loop;
+
+ /* keep track of certain special sections */
+ switch (sh_type) {
+ case SHT_SYMTAB:
+ if (strcmp(sh_name, ".symtab") == 0) {
+ symbols = data;
+ nsyms = sh_size / sizeof(Elf32_Sym);
+ }
+ break;
+
+ case SHT_STRTAB:
+ if (strcmp(sh_name, ".strtab") == 0) {
+ strings = data;
+ nstrings = sh_size;
+ }
+ break;
+
+ default:
+ break;
+ }
+ }
+
+ if (!symbols) {
+ fprintf(stderr, "Couldn't locate symbol table\n");
+ exit(3);
+ }
+
+ if (!strings) {
+ fprintf(stderr, "Couldn't locate strings table\n");
+ exit(3);
+ }
+
+ /* canonicalise the index numbers of the contributing section */
+ do {
+ changed = 0;
+
+ for (loop = 0; loop < canon - 1; loop++) {
+ const char *x = secstrings + get32(&sections[canonlist[loop + 0]].sh_name);
+ const char *y = secstrings + get32(&sections[canonlist[loop + 1]].sh_name);
+ if (strcmp(x, y) > 0) {
+ tmp = canonlist[loop + 0];
+ canonlist[loop + 0] = canonlist[loop + 1];
+ canonlist[loop + 1] = tmp;
+ changed = 1;
+ }
+ }
+
+ } while(changed);
+
+ for (loop = 0; loop < canon; loop++)
+ canonmap[canonlist[loop]] = loop + 1;
+
+ if (is_verbose > 1) {
+ printf("\nSection canonicalisation map:\n");
+ for (loop = 1; loop < shnum; loop++) {
+ const char *x = secstrings + get32(&sections[loop].sh_name);
+ printf("%4d %s\n", canonmap[loop], x);
+ }
+
+ printf("\nAllocated section list in canonical order:\n");
+ for (loop = 0; loop < canon; loop++) {
+ const char *x = secstrings + get32(&sections[canonlist[loop]].sh_name);
+ printf("%4d %s\n", canonlist[loop], x);
+ }
+ }
+
+ memset(canonlist, 0, sizeof(int) * shnum);
+
+ /* iterate through the section table looking for sections we want to
+ * contribute to the signature */
+ verbose("\n");
+ verbose("FILE POS CS SECT NAME\n");
+ verbose("======== == ==== ==============================\n");
+
+ for (loop = 1; loop < shnum; loop++) {
+ const char *sh_name = secstrings + get32(&sections[loop].sh_name);
+ Elf32_Word sh_type = get32(&sections[loop].sh_type);
+ Elf32_Xword sh_size = get32(&sections[loop].sh_size);
+ Elf32_Xword sh_flags = get32(&sections[loop].sh_flags);
+ Elf32_Word sh_info = get32(&sections[loop].sh_info);
+ Elf32_Off sh_offset = get32(&sections[loop].sh_offset);
+ void *data = buffer + sh_offset;
+
+ csum = 0;
+
+ /* quick sanity check */
+ if (sh_type != SHT_NOBITS && len < sh_offset + sh_size) {
+ fprintf(stderr, "section goes beyond EOF\n");
+ exit(3);
+ }
+
+ /* include canonicalised relocation sections */
+ if (sh_type == SHT_REL || sh_type == SHT_RELA) {
+ if (sh_info <= 0 || sh_info >= shnum) {
+ fprintf(stderr,
+ "Invalid ELF - REL/RELA sh_info does"
+ " not refer to a valid section\n");
+ exit(3);
+ }
+
+ if (canonlist[sh_info]) {
+ Elf32_Word xsh_info;
+
+ verbose("%08lx ", ftell(outfd));
+
+ set32(&xsh_info, canonmap[sh_info]);
+
+ /* write out selected portions of the section header */
+ write_out(sh_name, strlen(sh_name));
+ write_out_val(sections[loop].sh_type);
+ write_out_val(sections[loop].sh_flags);
+ write_out_val(sections[loop].sh_size);
+ write_out_val(sections[loop].sh_addralign);
+ write_out_val(xsh_info);
+
+ if (sh_type == SHT_RELA)
+ extract_elf32_rela(buffer, loop, sh_info,
+ data, sh_size / sizeof(Elf32_Rela),
+ symbols, nsyms,
+ sections, shnum, canonmap,
+ strings, nstrings,
+ sh_name);
+ else
+ extract_elf32_rel(buffer, loop, sh_info,
+ data, sh_size / sizeof(Elf32_Rel),
+ symbols, nsyms,
+ sections, shnum, canonmap,
+ strings, nstrings,
+ sh_name);
+ }
+
+ continue;
+ }
+
+ /* include allocatable loadable sections */
+ if (sh_type != SHT_NOBITS && sh_flags & SHF_ALLOC)
+ goto include_section;
+
+ /* not this section */
+ continue;
+
+ include_section:
+ verbose("%08lx ", ftell(outfd));
+
+ /* write out selected portions of the section header */
+ write_out(sh_name, strlen(sh_name));
+ write_out_val(sections[loop].sh_type);
+ write_out_val(sections[loop].sh_flags);
+ write_out_val(sections[loop].sh_size);
+ write_out_val(sections[loop].sh_addralign);
+
+ /* write out the section data */
+ write_out(data, sh_size);
+
+ verbose("%02x %4d %s\n", csum, loop, sh_name);
+
+ /* note the section has been written */
+ canonlist[loop] = 1;
+ }
+
+ verbose("%08lx (%lu bytes csum 0x%02x)\n",
+ ftell(outfd), ftell(outfd), xcsum);
+
+} /* end extract_elf32() */
--- /dev/null
+#!/bin/bash
+###############################################################################
+#
+# Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+# Written by David Howells (dhowells@redhat.com)
+#
+# This program is free software; you can redistribute it and/or
+# modify it under the terms of the GNU General Public License
+# as published by the Free Software Foundation; either version
+# 2 of the License, or (at your option) any later version.
+#
+###############################################################################
+
+verbose=
+
+if [ $# -gt 1 -a "x$1" = "x-v" ]
+ then
+ verbose=-v
+ shift
+fi
+
+if [ $# = 0 ]
+ then
+ echo
+ echo "usage: $0 [-v] <module_to_sign> [<key_name>]"
+ echo
+ exit 1
+fi
+
+module=$1
+
+if [ -z "$KEYFLAGS" ]
+ then
+ KEYFLAGS="--no-default-keyring --secret-keyring ../kernel.sec --keyring ../kernel.pub"
+fi
+
+if [ $# -eq 2 ]
+ then
+ KEYFLAGS="$KEYFLAGS --default-key $2"
+fi
+
+# strip out only the sections that we care about
+scripts/modsign/mod-extract $verbose $module $module.out || exit $?
+
+# sign the sections
+gpg --no-greeting $KEYFLAGS -b $module.out || exit $?
+
+# check the signature
+#gpg --verify rxrpc.ko.out.sig rxrpc.ko.out
+
+## sha1 the sections
+#sha1sum $module.out | awk "{print \$1}" > $module.sha1
+
+# add the encrypted data to the module
+objcopy --add-section .module_sig=$module.out.sig $module $module.signed || exit $?
+objcopy --set-section-flags .module_sig=alloc $module.signed || exit $?
+rm -f $module.out*
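+
+# Example invocation (the module path and key name are hypothetical; the
+# key pair is assumed to already exist in ../kernel.sec and ../kernel.pub
+# as per the default KEYFLAGS above):
+#
+#   scripts/modsign/modsign.sh -v drivers/net/mymod.ko mykey
+#
+# which leaves the signed module in drivers/net/mymod.ko.signed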
--- /dev/null
+# Makefile for the different targets used to generate full packages of a kernel
+# It uses the generic clean infrastructure of kbuild
+
+# Ignore the following files/directories during tar operation
+TAR_IGNORE := --exclude SCCS --exclude BitKeeper --exclude .svn --exclude CVS
+
+
+# RPM target
+# ---------------------------------------------------------------------------
+# The rpm target generates two rpm files:
+# /usr/src/packages/SRPMS/kernel-2.6.7rc2-1.src.rpm
+# /usr/src/packages/RPMS/<arch>/kernel-2.6.7rc2-1.<arch>.rpm
+# The src.rpm file includes all sources for the kernel being built
+# The <arch>.rpm includes kernel configuration, modules etc.
+#
+# Process to create the rpm files
+# a) clean the kernel
+# b) generate the .spec file
+# c) build a tar ball, using a symlink so that the kernel version is
+# the first entry in the path
+# d) pack the result into a tar.gz file
+# e) generate the rpm files, based on kernel.spec
+# - use /. to avoid tar packing just the symlink
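+#
+# Typical invocations, from the top of the kernel source tree:
+#   make rpm-pkg      # source + binary RPMs built from the current tree
+#   make binrpm-pkg   # binary-only RPM from an already compiled tree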
+
+# Use rpmbuild if it is available, otherwise fall back to the older rpm
+RPM := $(shell if [ -x "/usr/bin/rpmbuild" ]; then echo rpmbuild; \
+ else echo rpm; fi)
+
+# Remove hyphens since they have special meaning in RPM filenames
+KERNELPATH := kernel-$(subst -,,$(KERNELRELEASE))
+MKSPEC := $(srctree)/scripts/package/mkspec
+PREV := set -e; cd ..;
+
+# rpm-pkg
+.PHONY: rpm-pkg rpm
+
+$(objtree)/kernel.spec: $(MKSPEC) $(srctree)/Makefile
+ $(CONFIG_SHELL) $(MKSPEC) > $@
+
+rpm-pkg rpm: $(objtree)/kernel.spec
+ $(MAKE) clean
+ $(PREV) ln -sf $(srctree) $(KERNELPATH)
+ $(PREV) tar -cz $(TAR_IGNORE) -f $(KERNELPATH).tar.gz $(KERNELPATH)/.
+ $(PREV) rm $(KERNELPATH)
+
+ set -e; \
+ $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version
+ set -e; \
+ mv -f $(objtree)/.tmp_version $(objtree)/.version
+
+ $(RPM) --target $(UTS_MACHINE) -ta ../$(KERNELPATH).tar.gz
+ rm ../$(KERNELPATH).tar.gz
+
+clean-files := $(objtree)/kernel.spec
+
+# binrpm-pkg
+.PHONY: binrpm-pkg
+$(objtree)/binkernel.spec: $(MKSPEC) $(srctree)/Makefile
+ $(CONFIG_SHELL) $(MKSPEC) prebuilt > $@
+
+binrpm-pkg: $(objtree)/binkernel.spec
+ $(MAKE)
+ set -e; \
+ $(CONFIG_SHELL) $(srctree)/scripts/mkversion > $(objtree)/.tmp_version
+ set -e; \
+ mv -f $(objtree)/.tmp_version $(objtree)/.version
+
+ $(RPM) --define "_builddir $(srctree)" --target $(UTS_MACHINE) -bb $<
+
+clean-files += $(objtree)/binkernel.spec
+
+# Deb target
+# ---------------------------------------------------------------------------
+#
+.PHONY: deb-pkg
+deb-pkg:
+ $(MAKE)
+ $(CONFIG_SHELL) $(srctree)/scripts/package/builddeb
+
+clean-dirs += $(objtree)/debian/
+
+
+# Help text displayed when executing 'make help'
+# ---------------------------------------------------------------------------
+help:
+ @echo ' rpm-pkg - Build the kernel as an RPM package'
+ @echo ' binrpm-pkg - Build an rpm package containing the compiled kernel & modules'
+ @echo ' deb-pkg - Build the kernel as a deb package'
+
--- /dev/null
+#!/bin/sh
+#
+# builddeb 1.2
+# Copyright 2003 Wichert Akkerman <wichert@wiggy.net>
+#
+# Simple script to generate a deb package for a Linux kernel. All the
+# complexity of what to do with a kernel after it is installed or removed
+# is left to other scripts and packages: they can install scripts in the
+# /etc/kernel/{pre,post}{inst,rm}.d/ directories that will be called on
+# package install and removal.
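+#
+# For example, a (purely illustrative) bootloader hook installed as
+# /etc/kernel/postinst.d/bootloader could look like:
+#
+#   #!/bin/sh
+#   version="$1"   # kernel version, passed via run-parts --arg
+#   [ -x /usr/sbin/update-my-bootloader ] && \
+#           /usr/sbin/update-my-bootloader "$version"
+#   exit 0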
+
+set -e
+
+# Some variables and settings used throughout the script
+version=$KERNELRELEASE
+tmpdir="$objtree/debian/tmp"
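+# ($KERNELRELEASE, $objtree and $KBUILD_IMAGE are assumed to be provided
+# in the environment by kbuild when this script is run via 'make deb-pkg')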
+
+# Set up the directory structure
+rm -rf "$tmpdir"
+mkdir -p "$tmpdir/DEBIAN" "$tmpdir/lib" "$tmpdir/boot"
+
+# Install the kernel image, System.map and config
+cp System.map "$tmpdir/boot/System.map-$version"
+cp .config "$tmpdir/boot/config-$version"
+cp $KBUILD_IMAGE "$tmpdir/boot/vmlinuz-$version"
+
+if grep -q '^CONFIG_MODULES=y' .config ; then
+ INSTALL_MOD_PATH="$tmpdir" make modules_install
+fi
+
+# Install the maintainer scripts
+for script in postinst postrm preinst prerm ; do
+ mkdir -p "$tmpdir/etc/kernel/$script.d"
+ cat <<EOF > "$tmpdir/DEBIAN/$script"
+#!/bin/sh
+
+set -e
+
+test -d /etc/kernel/$script.d && run-parts --arg="$version" /etc/kernel/$script.d
+exit 0
+EOF
+ chmod 755 "$tmpdir/DEBIAN/$script"
+done
+
+name="Kernel Compiler <$(id -nu)@$(hostname -f)>"
+# Generate a simple changelog template
+cat <<EOF > debian/changelog
+linux ($version) unstable; urgency=low
+
+ * A standard release
+
+ -- $name $(date -R)
+EOF
+
+# Generate a control file
+cat <<EOF > debian/control
+Source: linux
+Section: base
+Priority: optional
+Maintainer: $name
+Standards-Version: 3.6.1
+
+Package: linux-$version
+Architecture: any
+Description: Linux kernel, version $version
+ This package contains the Linux kernel, modules and corresponding other
+ files, version $version.
+EOF
+
+# Fix some ownership and permissions
+chown -R root:root "$tmpdir"
+chmod -R go-w "$tmpdir"
+
+# Perform the final magic
+dpkg-gencontrol -isp
+dpkg --build "$tmpdir" ..
+
+exit 0
+
--- /dev/null
+#!/bin/sh
+#
+# Output a simple RPM spec file that uses no fancy features requiring
+# RPM v4. This is intended to work with any RPM distro.
+#
+# The only gothic bit here is redefining install_post to avoid
+# stripping the symbols from files in the kernel which we want to keep
+#
+# Patched for non-x86 by Opencon (L) 2002 <opencon@rio.skydome.net>
+#
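+# Usage (normally invoked from scripts/package/Makefile):
+#   mkspec           > kernel.spec      # spec that builds from a tarball
+#   mkspec prebuilt  > binkernel.spec   # spec for an already built tree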
+
+# how we were called determines which rpms we build and how we build them
+if [ "$1" = "prebuilt" ]; then
+ PREBUILT=true
+else
+ PREBUILT=false
+fi
+
+# starting to output the spec
+if [ "`grep CONFIG_DRM=y .config | cut -f2 -d\=`" = "y" ]; then
+ PROVIDES=kernel-drm
+fi
+
+PROVIDES="$PROVIDES kernel-$KERNELRELEASE"
+__KERNELRELEASE=`echo $KERNELRELEASE | sed -e "s/-//g"`
+
+echo "Name: kernel"
+echo "Summary: The Linux Kernel"
+echo "Version: $__KERNELRELEASE"
+# we need to determine the NEXT version number so that uname and
+# rpm -q will agree
+echo "Release: `. $srctree/scripts/mkversion`"
+echo "License: GPL"
+echo "Group: System Environment/Kernel"
+echo "Vendor: The Linux Community"
+echo "URL: http://www.kernel.org"
+
+if ! $PREBUILT; then
+echo "Source: kernel-$__KERNELRELEASE.tar.gz"
+fi
+
+echo "BuildRoot: /var/tmp/%{name}-%{PACKAGE_VERSION}-root"
+echo "Provides: $PROVIDES"
+echo "%define __spec_install_post /usr/lib/rpm/brp-compress || :"
+echo "%define debug_package %{nil}"
+echo ""
+echo "%description"
+echo "The Linux Kernel, the operating system core itself"
+echo ""
+
+if ! $PREBUILT; then
+echo "%prep"
+echo "%setup -q"
+echo ""
+fi
+
+echo "%build"
+
+if ! $PREBUILT; then
+echo "make clean && make %{_smp_mflags}"
+echo ""
+fi
+
+echo "%install"
+echo 'mkdir -p $RPM_BUILD_ROOT/boot $RPM_BUILD_ROOT/lib $RPM_BUILD_ROOT/lib/modules'
+
+echo 'INSTALL_MOD_PATH=$RPM_BUILD_ROOT make %{_smp_mflags} modules_install'
+echo 'cp $KBUILD_IMAGE $RPM_BUILD_ROOT'"/boot/vmlinuz-$KERNELRELEASE"
+
+echo 'cp System.map $RPM_BUILD_ROOT'"/boot/System.map-$KERNELRELEASE"
+
+echo 'cp .config $RPM_BUILD_ROOT'"/boot/config-$KERNELRELEASE"
+echo ""
+echo "%clean"
+echo '#rm -rf $RPM_BUILD_ROOT'
+echo ""
+echo "%files"
+echo '%defattr (-, root, root)'
+echo "%dir /lib/modules"
+echo "/lib/modules/$KERNELRELEASE"
+echo "/boot/*"
+echo ""
--- /dev/null
+#!/usr/bin/perl -w
+#
+# reference_discarded.pl (C) Keith Owens 2001 <kaos@ocs.com.au>
+#
+# Released under GPL V2.
+#
+# List dangling references to vmlinux discarded sections.
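+#
+# Each dangling reference is reported on stdout as, e.g. (object name and
+# relocation values are illustrative):
+#   Error: drivers/foo.o .text refers to 00000024 R_386_PC32 .text.exit
+# and the exit status is the total number of such errors.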
+
+use strict;
+die($0 . " takes no arguments\n") if($#ARGV >= 0);
+
+my %object;
+my $object;
+my $line;
+my $ignore;
+my $errorcount;
+
+$| = 1;
+
+# printf("Finding objects, ");
+open(OBJDUMP_LIST, "find . -name '*.o' | xargs objdump -h |") || die "getting objdump list failed";
+while (defined($line = <OBJDUMP_LIST>)) {
+ chomp($line);
+ if ($line =~ /:\s+file format/) {
+ ($object = $line) =~ s/:.*//;
+ $object{$object}->{'module'} = 0;
+ $object{$object}->{'size'} = 0;
+ $object{$object}->{'off'} = 0;
+ }
+ if ($line =~ /^\s*\d+\s+\.modinfo\s+/) {
+ $object{$object}->{'module'} = 1;
+ }
+ if ($line =~ /^\s*\d+\s+\.comment\s+/) {
+ ($object{$object}->{'size'}, $object{$object}->{'off'}) = (split(' ', $line))[2,5];
+ }
+}
+close(OBJDUMP_LIST);
+# printf("%d objects, ", scalar keys(%object));
+$ignore = 0;
+foreach $object (keys(%object)) {
+ if ($object{$object}->{'module'}) {
+ ++$ignore;
+ delete($object{$object});
+ }
+}
+# printf("ignoring %d module(s)\n", $ignore);
+
+# Ignore conglomerate objects, they have been built from multiple objects and we
+# only care about the individual objects. If an object has more than one GCC:
+# string in the comment section then it is a conglomerate. This does not filter
+# out conglomerates that consist of exactly one object; that can't be helped.
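+#
+# (each constituent object normally contributes its own "GCC: (GNU) ..."
+# string to .comment, so two or more matches identify a conglomerate)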
+
+# printf("Finding conglomerates, ");
+$ignore = 0;
+foreach $object (keys(%object)) {
+ if (exists($object{$object}->{'off'})) {
+ my ($off, $size, $comment, $l);
+ $off = hex($object{$object}->{'off'});
+ $size = hex($object{$object}->{'size'});
+ open(OBJECT, "<$object") || die "cannot read $object";
+ seek(OBJECT, $off, 0) || die "seek to $off in $object failed";
+ $l = read(OBJECT, $comment, $size);
+ die "read $size bytes from $object .comment failed" if ($l != $size);
+ close(OBJECT);
+ if ($comment =~ /GCC\:.*GCC\:/m) {
+ ++$ignore;
+ delete($object{$object});
+ }
+ }
+}
+# printf("ignoring %d conglomerate(s)\n", $ignore);
+
+# printf("Scanning objects\n");
+$errorcount = 0;
+foreach $object (keys(%object)) {
+ my $from;
+ open(OBJDUMP, "objdump -r $object|") || die "cannot objdump -r $object";
+ while (defined($line = <OBJDUMP>)) {
+ chomp($line);
+ if ($line =~ /RELOCATION RECORDS FOR /) {
+ ($from = $line) =~ s/.*\[([^]]*).*/$1/;
+ }
+ if (($line =~ /\.text\.exit$/ ||
+ $line =~ /\.exit\.text$/ ||
+ $line =~ /\.data\.exit$/ ||
+ $line =~ /\.exit\.data$/ ||
+ $line =~ /\.exitcall\.exit$/) &&
+ ($from !~ /\.text\.exit$/ &&
+ $from !~ /\.exit\.text$/ &&
+ $from !~ /\.data\.exit$/ &&
+ $from !~ /\.opd$/ &&
+ $from !~ /\.exit\.data$/ &&
+ $from !~ /\.altinstructions$/ &&
+ $from !~ /\.debug_info$/ &&
+ $from !~ /\.debug_aranges$/ &&
+ $from !~ /\.debug_ranges$/ &&
+ $from !~ /\.debug_line$/ &&
+ $from !~ /\.debug_frame$/ &&
+ $from !~ /\.exitcall\.exit$/ &&
+ $from !~ /\.eh_frame$/ &&
+ $from !~ /\.stab$/)) {
+ printf("Error: %s %s refers to %s\n", $object, $from, $line);
+ $errorcount = $errorcount + 1;
+ }
+ }
+ close(OBJDUMP);
+}
+# printf("Done\n");
+
+exit($errorcount);
--- /dev/null
+# This is a very simple initramfs
+
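+# Each line is a directive for the gen_init_cpio tool, which builds the
+# initramfs image (field layout below is a summary of its input syntax):
+#   dir <name> <mode> <uid> <gid>
+#   nod <name> <mode> <uid> <gid> <dev_type c|b> <major> <minor>
+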
+dir /dev 0755 0 0
+nod /dev/console 0600 0 0 c 5 1
+dir /root 0700 0 0